On a deeper level than small talk, of course.

  • hexthismess [he/him, comrade/them]@hexbear.net · 2 days ago

    Because the LLMs have been trained on however many curated data sets with mostly correct info. The model sees how often a phrase has been used in relation to other phrases, calculates the probability that a given continuation is the correct output, gambles on it according to a preprogrammed risk tolerance, and spits out the result. Of course the software engineers polish it up with guardrails to keep it within certain boundaries.
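    Very roughly, the "calculate the probability, then gamble on a risk tolerance" step looks something like this (a toy Python sketch, not any real model's code; the tokens and scores are made up):

    ```python
    # Toy sketch of one next-token step: score candidate tokens, turn the
    # scores into probabilities, then sample one according to a "risk
    # tolerance" (the temperature parameter).
    import math
    import random

    def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
        """Pick the next token from raw scores via temperature sampling."""
        # Lower temperature -> sharper distribution -> safer, more predictable picks.
        scaled = {tok: score / temperature for tok, score in logits.items()}
        # Softmax: convert scores into probabilities that sum to 1.
        max_s = max(scaled.values())
        exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        # Weighted random draw: the model "gambles" instead of always taking the top token.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Made-up scores for what might follow "The capital of France is"
    print(sample_next_token({" Paris": 9.1, " Lyon": 2.3, " located": 1.7}))
    ```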

    But the key thing is that the LLM doesn't understand the fundamental concepts of what you're asking it.

    I’m not a programmer, so I could be misunderstanding the overall process, but from what I’ve seen of how LLMs work and are trained, AI makes a very good attempt at what you almost wanted. I don’t know how quickly AI will progress, but for now I just see it as an extremely expensive party trick.

    • MayoPete [he/him, comrade/them]@hexbear.net · 2 days ago

      That party trick is shipping code, and it’s good enough to replace thousands of developers at Microsoft and other companies. Maybe that says something about how common production programming problems are. A lot of business code boils down to putting things into and pulling things out of databases, or moving data around via API calls and other communication methods. This tool handles that kind of work with ease.
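
      Something like this toy sketch is the shape of that work (hypothetical Python; the endpoint, table, and fields are made up for illustration):

      ```python
      # Typical "move data in and out" glue code: fetch records from an HTTP
      # API and upsert them into a local database table.
      import json
      import sqlite3
      import urllib.request

      def sync_orders(api_url: str, db_path: str) -> int:
          """Fetch orders from an API and upsert them into a local table."""
          with urllib.request.urlopen(api_url) as resp:
              orders = json.load(resp)  # expects a JSON list of {"id": ..., "total": ...}
          conn = sqlite3.connect(db_path)
          conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
          conn.executemany(
              "INSERT OR REPLACE INTO orders (id, total) VALUES (:id, :total)", orders
          )
          conn.commit()
          conn.close()
          return len(orders)
      ```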