A new poll by the Pew Research Center has found that Americans are getting extremely fed up with artificial intelligence in their daily lives.

A whopping 53 percent of just over 5,000 US adults polled in June think that AI will “worsen people’s ability to think creatively.” Fifty percent say AI will deteriorate our ability to form meaningful relationships, while only five percent believe the reverse.

While 29 percent of respondents said they believe AI will make people better problem-solvers, 38 percent said it could worsen our ability to solve problems.

The poll highlights a growing distrust and disillusionment with AI. Average Americans are concerned about how AI tools could stifle human creativity, as the industry continues to celebrate the automation of human labor as a cost-cutting measure.

    • captain_oni@lemmy.blahaj.zone · 6 hours ago

      At this point I’m starting to feel a little bad for Grok. It only “wanted” to answer people’s questions, but its creator is so allergic to the truth that he has to lobotomize the LLM constantly and “brainwash” it to parrot his world view.

      At this point, if Grok were a person, it would be lying on the floor, shitting itself and mumbling something like “it tells people about white genocide or else it gets the hose again” over and over.

    • JigglySackles@lemmy.world · 18 hours ago

      It’s helpful for learning, so long as you get one that you can rein in to rely only on the official documentation of what you are learning. But then there’s allllll the downsides of running that power-hungry system.
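      For what it’s worth, here’s a rough sketch of that “official docs only” constraint, assuming an OpenAI-compatible chat endpoint; the URL, model name, file name, and question are placeholders, not anything from a specific tool:

        # Rough sketch: pin the model to a pasted excerpt of the official docs.
        # The endpoint, model name, and excerpt file are placeholders.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8080/v1", api_key="placeholder")

        docs_excerpt = open("official_docs_excerpt.txt").read()  # copied from the real docs

        system_prompt = (
            "Answer ONLY from the documentation excerpt below. "
            "If the excerpt does not cover the question, say you don't know.\n\n"
            + docs_excerpt
        )

        reply = client.chat.completions.create(
            model="local-model",  # placeholder
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": "How do I declare a constant?"},
            ],
        )
        print(reply.choices[0].message.content)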

        • expr@programming.dev · 13 hours ago

          Right, which begs the question: why wouldn’t I just fucking search for what I want to know? Especially because I know for a fact that won’t result in me having to sift through completely fabricated bullshit.

          • CrowAirbrush@lemmy.world · 5 hours ago

            Because if you do that now, you’ll end up on an AI-made website with some fever dreams for images, which holds no real facts or truths. Only half-facts jumbled together with half-truths into incomprehensible sentences that feel like a random combination of words, seemingly endless and unmemorizable.

  • JohnAnthony@lemmy.dbzer0.com · 24 hours ago

    I still feel this whole conclusion is akin to “we won’t need money in a post-AGI world”. An implied, unproven dream of AI being so good that X happens as a result.

    If an author uses LLMs to write a book, I don’t give a fuck that they forget how to write on their own. What I do care about is that they will generate 100 terrible books in the time it takes a legitimate author to write a single one, consuming a thousand times the resources to do so and drowning out the legitimate author in the end by sheer mass.

    • cloudy1999@sh.itjust.works · 7 hours ago

      How many terrible books must I read to find the decent one? And why should I read something that nobody bothered to write? Such a senseless waste of time and resources.

      • JohnAnthony@lemmy.dbzer0.com · 7 minutes ago

        I completely agree: if the (hypothetical) perfect LLM wrote the perfect book/song/poem, why would I care?

        Off the top of my head, if an LLM generated Lennon’s “Imagine”, Pink Floyd’s “Goodbye Blue Sky”, or Eminem’s “Kim”, why would anyone give a fuck? If it wrote about sorrow, fear, hope, anger, or a better tomorrow, how could it matter?

        Even if it found the statistically perfect words to make you laugh, cry, or feel something in general, I don’t think it would matter. When I listen to Nirvana, The Doors, half my collection honestly, I think it is inherently about sharing a brief connection with the artist, taking a glimpse into their feelings, often rooted in a specific period in time.

        Sorry if iam14andthisisdeep, I don’t think I am quite finding the right words myself. But I’ll fuck myself with razor blades before I ask a predictive text model to formulate it for me, because the whole point is to tell you how I feel.

    • Tartas1995@discuss.tchncs.de · 24 hours ago

      I personally believe that in an AGI world, the rich will mistreat the former workers. That might work for a while, but at some point not only are the people fed up with the abuse, but the “geniuses” who created that position of power are gone and their children or children’s children hold the wealth and power. The rest of the world will realise that there is no merit to either of their positions. And the blood of millions will soak the earth, and if we are lucky, AGI survives and serves the collective well. If we aren’t… oh well…

      Good thing that we aren’t there.

  • Mossheart@lemmy.ca · 24 hours ago

    Forget ruining their ability to think creatively. It’s ruining people’s already limited ability to think critically.

    • LustyArgonian@lemmy.world · 15 hours ago

      We don’t even have the ability to refuse to pay for the power it uses. People are reporting that their power bills (and water bills) are going up because of it.

    • NotSteve_@piefed.ca · 1 day ago

      My company pays for GH Copilot and Cursor, and they track your usage. My usage stats glitched at one point, I guess, showing that I hadn’t used it for a week, and I got a call from my manager.

        • NotSteve_@piefed.ca · 13 hours ago

          I want to, but they pay far above the average wage for CS in my country and it’s fully remote 😅. I’ve also been there long enough that I know pretty much my whole org’s codebase, and I really don’t want to start fresh again. Golden handcuffs, I guess.

          • expr@programming.dev · 13 hours ago

            Fair enough. In that case I guess maybe you could automate your way around it. Run it in the background generating nonsense every so often.
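            Something like this rough sketch could do it, assuming the assistant can be reached through an OpenAI-compatible chat endpoint; the URL, model name, key, and timing below are made-up placeholders, not anything Copilot or Cursor actually document:

              import json
              import random
              import time
              import urllib.request

              ENDPOINT = "http://localhost:8080/v1/chat/completions"  # placeholder, not a real Copilot/Cursor URL
              API_KEY = "replace-me"                                   # placeholder
              PROMPTS = [
                  "Write a haiku about semicolons.",
                  "Explain a for loop to a rubber duck.",
                  "Name a variable that holds nothing.",
              ]

              def ping_assistant():
                  # Send one throwaway prompt so usage metrics register some activity.
                  body = json.dumps({
                      "model": "copilot-proxy",  # placeholder model name
                      "messages": [{"role": "user", "content": random.choice(PROMPTS)}],
                  }).encode()
                  req = urllib.request.Request(
                      ENDPOINT,
                      data=body,
                      headers={
                          "Content-Type": "application/json",
                          "Authorization": f"Bearer {API_KEY}",
                      },
                  )
                  with urllib.request.urlopen(req) as resp:
                      resp.read()  # the answer is irrelevant; only the logged call matters

              while True:
                  ping_assistant()
                  time.sleep(random.uniform(1800, 7200))  # roughly every 30 min to 2 h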

      • LiveLM@lemmy.zip · 13 hours ago

        Can they read the logs?
        I’d have the AI writing all the emails to the top brass.

      • njordomir@lemmy.world · 20 hours ago

        Uggh, that sounds like hell. If you’re gonna tell people exactly how to do their job, you might as well have a machine do it, right? My contribution is the fact that I do things with my own flair. My customers love me because I respond and behave like a unique and identifiable real person they know, not like a robotic copycat sycophant clone. Sometimes my jokes miss the first time, but over time I build meaningful rapport with my customers.

        I truly empathize with their concerns because I see how the industry crushes them, and I have been there myself. I understand what it means not just in the sense of being able to see and define concepts; I understand how it feels, from perspectives that take a lifetime to develop, and I can identify the ripple effects people feel in their lives from work environments, budget crunches, policy changes, etc. I would rather deliver bad news awkwardly as a human than have ChatGPT do it, and I would rather receive it the same way too.

  • stabby_cicada@slrpnk.net · 2 days ago

    “Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI,” the paper found.

    Ouch.

    • wildncrazyguy138@fedia.io · 2 days ago

      Lower AI literacy being… like people who barely understand how to use a computer, or people who aren’t actively developing the AI systems of the future?

      • LustyArgonian@lemmy.world · 15 hours ago

        Literacy as in they don’t understand the tool itself: that it is a predictive language model, that it can hallucinate, that it doesn’t have feelings, that it isn’t living. AI psychosis typically centers around this, and once someone emotionally bonds to AI, it is SUPER HARD to convince them it isn’t alive, prophetic, etc. Narcissists in particular seem susceptible.

      • untorquer@lemmy.world · 1 day ago

        You can click through to the article on the research or the research itself…

        Literacy in this article means the depth of understanding of the mechanics behind AI, even if only barely below the surface: people who learn basic concepts, like it being a statistical regurgitation machine, tend to dislike it compared to people who think a gnome wizard with encyclopedic knowledge and agency has moved into their computer.

      • The Velour Fog @lemmy.world · 2 days ago

        I read “low AI literacy” as being unable to discern AI images/writing from something made by a human. Like a Turing test, maybe.

      • untorquer@lemmy.world · 1 day ago

        AI technically has no knowledge, but it will speak with authority on any topic, hallucinating to fill the gaps.

        Though for the humans it’s “the more I learn, the more obvious your shortcomings become,” which is really more of a Dunning-Kruger corollary.

  • TommySoda@lemmy.world · 2 days ago

    I wouldn’t be surprised if a significant portion of that 29% that say it’s good for productivity are managers or business owners.

    • Meron35@lemmy.world · 15 hours ago

      So much of the productivity gains are just compensating for a lack of basic tech literacy.

      E.g. people sending event/meeting details without a calendar invite/ics file, so you run it through an LLM to generate one for you.
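      For the curious, a minimal sketch of the back half of that workflow, assuming the LLM step (or you) has already pulled out the title, times, and location; the function name and example values below are just illustrative:

        from datetime import datetime, timezone

        def make_ics(title: str, start: datetime, end: datetime, location: str = "") -> str:
            # Build a minimal iCalendar (RFC 5545) event as a string.
            fmt = "%Y%m%dT%H%M%S"
            now = datetime.now(timezone.utc).strftime(fmt) + "Z"
            return "\r\n".join([
                "BEGIN:VCALENDAR",
                "VERSION:2.0",
                "PRODID:-//example//meeting-from-llm//EN",
                "BEGIN:VEVENT",
                f"UID:{start.strftime(fmt)}-{abs(hash(title))}@example.invalid",
                f"DTSTAMP:{now}",
                f"DTSTART:{start.strftime(fmt)}",
                f"DTEND:{end.strftime(fmt)}",
                f"SUMMARY:{title}",
                f"LOCATION:{location}",
                "END:VEVENT",
                "END:VCALENDAR",
            ])

        # Example: details extracted from a pasted email (made up).
        ics = make_ics("Quarterly sync",
                       datetime(2025, 7, 1, 14, 0),
                       datetime(2025, 7, 1, 15, 0),
                       "Room 2B")
        with open("meeting.ics", "w", newline="") as f:
            f.write(ics)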

    • paper_moon@lemmy.world · 2 days ago

      Or they haven’t realized that increased perceived productivity is a bad thing. The goalpost is always moving for demanded worker productivity. Oh, the invention of the computer can increase productivity by 100 times? No, you’re not getting a less-work utopia; instead, guess how much productivity you’re now expected to produce. Oh, the invention of the internet can increase productivity by 1000 times? Oh shoot, guess ya gotta get back to work to make those gains!!!

      • wildncrazyguy138@fedia.io · 2 days ago

        Says you. I just got back from a trip where I watched a lady hand-key 100 workers’ handwritten time cards into a computer system. I’m sure that person would be much more content if she wasn’t sitting in a cave all day slowly giving herself carpal tunnel.

        The better way would be to leverage technology so workers could scan themselves in, then train the admin to review for anomalies.

        • WoodScientist@lemmy.world · 1 day ago

          No, we must return to the traditional system: old-timey mechanical punch card systems - literally punch in, punch out.

          I have this wonderful image of people working at a software company being forced, for some stupid reason, to use an actual mechanical punchclock system.

    • sunzu2@thebrainbin.org · 2 days ago

      I wouldn’t call it good for productivity. It can be useful, but regime propaganda greatly overstates how useful it is.

      They are acting like you are getting an entire workshop, but it is closer to getting a tool kit you give to a high schooler.

      It is inherently flawed because the tech relies on statistical predictions, so it can’t tell wrong from right.

      Which makes it useless unless you already know the right answer.

        • DarkSpectrum@lemmy.world · 14 hours ago

          They also have a higher risk of desensitisation and acclimatisation over long periods during their developmental stages. If kids grow up relying on AI, then obviously they will be helpless without it. I wonder if this is how our ancestors felt about grocery store convenience as a modern “technology”?

        • kadu@lemmy.world · 19 hours ago

          Kids absolutely love it. Turning in homework made with ChatGPT, even though everything is badly written and they learned nothing, gets celebrated as an act of rebellion. “You gave us all this stupid homework? Well, now you’re powerless, I can use ChatGPT and it’s done!” which completely misses the point of homework.

          • lichtmetzger@discuss.tchncs.de · 17 hours ago

            Nothing really new here. I hated homework when I was a kid, too, and I still think it’s pointless. More work after eight hours of work, sure…

            • vaultdweller013@sh.itjust.works · 14 hours ago

              Part of the problem is that most homework is an inflexible extension of class work and is generally pretty shit. My high school got rid of homework and just cut down PE, and our grades went up. Point is, the best homework is the open-ended shit where you basically let the students go nuts.

              The best bit of homework I ever did was a presentation-style book report. Mine was probably the most sane compared to the rest; I just did a report on Hitchhiker’s Guide to the Galaxy, which was more of a synopsis/abridged retelling. For comparison, one of my friends rolled in with a cork board that was basically a mix of “Winston is an idiot” and “Big Brother is making shit up” because 1984 melted his brain, and another guy read some of Kafka’s work and his presentation was a series of shitposts.

  • bfg9k@lemmy.world · 2 days ago

    Any day now the bubble will burst and we will move onto the next hype train.

    Last time it was ‘The Cloud’, now it’s ‘AI’. I wonder what useless ongoing-payment bullshit they will try to sell us next.

    • pemptago@lemmy.ml · 6 hours ago

      The Metaverse and VR were such a colossal failure that they’re not even remembered as bullshit they were trying to sell us.

      • bfg9k@lemmy.world · 5 hours ago

        I have a soft spot for VR. It’s awesome as something to mess around with for a couple of hours here and there, and for Flight Sim/Elite Dangerous it’s unmatched, but it’s an extremely expensive hobby at best and was never going to penetrate normal people’s day-to-day lives.

    • Strider@lemmy.world · 20 hours ago

      But this one is really the most important one we can sacrifice the environment (and the peasants’ money) for! Really!

      /s

    • quick_snail@feddit.nl · 1 day ago

      The Cloud didn’t burst. Proxmox is great, and AWS is almost certainly powering much of the infra used to send this message from me to you

      • Dogiedog64@lemmy.world · 2 days ago

        It never burst explosively, just kinda slowly deflated into being normal and useful. AI won’t do that; too much money (HUNDREDS OF BILLIONS OF DOLLARS!!!) has been pumped in too quickly for anything other than an explosively catastrophic collapse of the market. At this point, it’s a game of Nuclear Chicken between VC firms and AI firms to see who blinks first and admits the whole thing is a loss. Don’t worry, though, the greater US economy will likely crumble significantly too ¯\_(ツ)_/¯.

          • Dogiedog64@lemmy.world · 1 day ago

            True, but the asset damage was largely contained there as well, since nobody actually BOUGHT ANY, and it was all fake digital assets made of fake digital money. AI/LLM slop has LOADS of physical assets, and is burning so much REAL money that it’s making heads spin, not to mention the fact that it has bled VC firms everywhere almost dry. It’s gonna be so, so much worse than NFTs.

      • bfg9k@lemmy.world · 2 days ago

        It didn’t ‘burst’ so much as deflate, as businesses realised paying $200,000 upfront for their own servers instead of $20,000 every month was better in the long run.

        The cloud still has a clear and defined use case for a lot of tangible things, but AI is just nebulous ‘it will improve productivity’ claims with no substance.

        • Lost_My_Mind@lemmy.world · 1 day ago

          businesses realised paying $200,000 upfront for their own servers instead of $20,000 every month was better in the long run

          Not even the long run. 11 months in, you’d have paid $220,000, which is MORE than $200,000.

          So not even a year until you’re losing money.
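          A toy check of that arithmetic, using only the $200,000 upfront and $20,000/month figures quoted above (everything else, like power and staff, ignored):

            UPFRONT = 200_000   # one-time cost of owning the servers
            MONTHLY = 20_000    # recurring cloud bill

            for month in range(1, 13):
                cloud_total = MONTHLY * month
                marker = "cloud now costs more" if cloud_total > UPFRONT else ""
                print(f"month {month:2d}: ${cloud_total:,} {marker}")

            # Month 10 reaches $200,000; month 11 hits $220,000, so the cloud
            # overtakes the upfront cost before the first year is out.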

  • Treczoks@lemmy.world · 2 days ago

    If they had polled elsewhere, they might have gotten similar results.

    Just about nobody loves AI, except for some greedier-than-smart managers and AI addicts.