• FriendOfDeSoto@startrek.website
    3 days ago

    I feel like the Atari 2600 is quickly becoming for so-called AI what the “how much is a gallon of milk?” gotcha question has become for politicians running for office. A rather pointless bit of news.

    As Scotty said: the right tool for the right job. An LLM is maybe not a chess engine and that’s fine too. Why would we expect these models to be Magnus effing Carlsen when they can’t reliably summarize an email and will happily recommend eating pebbles?

    • Admax@lemmy.world
      3 days ago

      Your question is probably rhetorical, but I feel the need to put it out there: it’s because they’ve been advertised as such. LLMs are not marketed as language-based AI but as something “intelligent” with “reasoning” abilities, which they inherently do not have.

      But that’s not what most people were told. Many of them believe LLMs can “think” and should be able to solve problems, such as chess…

    • jjjalljs@ttrpg.network
      3 days ago

      Because people keep acting like LLMs are a magical solution to every problem. This is an effective way to show that that’s false.