Alphaville is free to access. Just create a free account.

    • brucethemoose@lemmy.world · 24 hours ago

      Sometimes. As a tool, not as an outsourced human, an oracle, or the transcendent companion that con artists like Altman are trying to sell.

      See how grounded this interview is, from a company whose model was trained for peanuts compared to ChatGPT, and takes even less to run:

      …In 2025, with the launch of Manus and Claude Code, we realized that coding and agentic functions are more useful. They contribute more economically and significantly improve people’s efficiency. We are no longer putting simple chat at the top of our priorities. Instead, we are exploring more on the coding side and the agent side. We observe the trend and do many experiments on it.

      https://www.chinatalk.media/p/the-zai-playbook

      They talk about how the next release will be very small/lightweight and more task-focused, and how important gaining efficiency through architecture (not scaling up) has become. They even touch on how their own models are starting to be useful utilities in their workflows, specifically not miraculous worker replacements.

    • PullPantsUnsworn@lemmy.ml · 15 hours ago

      I am a developer. While AI is being marketed like snake oil, the things it can do are astonishing. One example: it reviews code a lot better than human beings. It doesn’t just find obvious errors; it catches logical errors that no human would have caught.
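
      A made-up illustration (hypothetical, not from any real review) of the kind of subtle logic error I mean, the sort that reads fine at a glance but gets flagged in review:

      ```python
      # Hypothetical access check. Intent: staff (admins or moderators)
      # may view a resource only when it is public.

      def can_view(is_admin: bool, is_moderator: bool, is_public: bool) -> bool:
          # Bug: "and" binds tighter than "or", so this parses as
          # is_admin or (is_moderator and is_public),
          # quietly letting admins view private resources.
          return is_admin or is_moderator and is_public

      def can_view_fixed(is_admin: bool, is_moderator: bool, is_public: bool) -> bool:
          return (is_admin or is_moderator) and is_public

      print(can_view(True, False, False))        # True  (unintended)
      print(can_view_fixed(True, False, False))  # False (intended)
      ```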

      I see people forming two camps: those who think AI will solve everything, and those who think AI is useless. Neither group is right.

      • teohhanhui@lemmy.world · 42 minutes ago

        One example: it reviews code a lot better than human beings. It doesn’t just find obvious errors; it catches logical errors that no human would have caught.

        No, it does not.

        Source: an open-source contributor who’s constantly annoyed by the useless CodeRabbit AI that some open-source projects have chosen to use.

      • bthest@lemmy.world · 32 minutes ago

        And how many errors is it creating that we don’t know about? You’re using AI to review code, but then someone has to review what the AI did, because it fucks up.

        If humans code something, then humans can troubleshoot and fix it. It’s on our level. But how are you going to fix a gigantic complicated tangle of vibe code that AI makes and only AI understands? Make a better AI to fix it? Again and again? Just get the AI to code another system from scratch every month? This shit is not sustainable.

      • ThirdConsul@lemmy.ml · 9 hours ago

        It doesn’t just find obvious errors; it catches logical errors that no human would have caught.

        I’m not having the same experience.

    • jaykrown@lemmy.world · 18 hours ago

      Yes. As far as scalability goes, cheaper, more efficient models can be used in applications that require thousands of calls a day.
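
      Rough back-of-envelope, with made-up placeholder prices rather than any vendor’s real rates, just to show how per-call cost compounds at that volume:

      ```python
      # Illustrative only: the token prices below are invented placeholders.
      CALLS_PER_DAY = 5_000
      TOKENS_PER_CALL = 1_500  # assumed average of prompt + completion

      def monthly_cost(price_per_million_tokens: float) -> float:
          tokens_per_month = CALLS_PER_DAY * TOKENS_PER_CALL * 30
          return tokens_per_month / 1_000_000 * price_per_million_tokens

      print(f"hypothetical large model: ${monthly_cost(15.0):,.2f}/month")  # $3,375.00
      print(f"hypothetical small model: ${monthly_cost(0.5):,.2f}/month")   # $112.50
      ```

      At thousands of calls a day, the per-token price gap becomes a 30x difference in the monthly bill, which is why the smaller, more efficient models win out there.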