Just heard this today - a coworker used an LLM to find him some shoes that fit. Basically prompted it to find specific shoes that fit wider/narrower feet, and it just scraped reviews and told him what to get. I guess it worked perfectly for him.

I hate this sort of thing - however, this is why normies love LLMs. It’s going to be the new way every single person uses the internet. Hell, on a new Win 11 install the first thing that comes up is Copilot saying “hey, use me, I’m better than Google!!”

Frustrating.

  • visc@lemmy.world · 17 hours ago

    Yes. That is how the internet will be used, to a significant degree. Even today AI represents an extremely powerful and astonishingly human-adapted user interface.

    The internet brought a vast quantity of knowledge together and made it accessible to anyone, in theory. In practice you need arcane knowledge to get what you want. You need to wiggle the mouse just so, need to know the abstract structure of the internet, the peculiarities of search terms, … it’s eminently doable but it’s not natural or intuitive. You must be taught how to use it.

    If you put a medieval Tamil farmer in a room with a ChatGPT audio interface, they could use it and have access to all of that internet knowledge.

    I understand a lot of the backlash against AI but I don’t get hating on it because of how good of an interface it makes.

    • bridgeenjoyer@sh.itjust.works (OP) · 9 hours ago

      Unfortunately you’re right. If it weren’t completely owned by massive corps pushing techno-fascist ideology and mass surveillance, I could maybe see it as a positive thing. But it will not be used for good in the long run.

    • stabby_cicada@slrpnk.net · 8 hours ago (edited)

      Yeah, and how does that Tamil farmer fact-check their black-box audio interface when it tells them to spray Roundup on their potatoes, or warns them to buy bottled water because their Hindu-hating Muslim neighbors have poisoned their well, or any other garbage it’s been deliberately or accidentally poisoned with?

      One of the huge weaknesses of AI as a user interface is that you have to go outside the interface to verify what it tells you. If I search for information about a disease using a search engine, and I find a .edu website discussing the results of double-blind scientific studies of treatments for that disease, and a site full of anti-Semitic conspiracy theories and supplement ads telling me about THE SECRET CURE DOCTORS DON’T WANT YOU TO KNOW, I can compare the credibility of those two sources. If I ask ChatGPT for information about a disease, and it recommends a particular treatment protocol, I don’t know where it’s getting its information or how reliable it is. Even if it gives me some citations, I have to check those citations anyway, because I don’t know whether they’re reliable sources, unreliable sources, or hallucinations that don’t exist at all.

      And people who trust their LLM and don’t check its sources end up poisoning themselves when it tells them to mix bleach and vinegar to clean their bathrooms.

      If LLMs were being implemented as a new interface for gathering information - as a tool to enhance human cognition rather than supplant, monitor, and control it - I would have a lot fewer problems with them.

    • technocrit@lemmy.dbzer0.com · 8 hours ago (edited)

      Why would anybody hate on endless free CPU cycles? It’s like handing out candy. Just gotta keep that “investor” cash flowing…