I have noticed a lot of posts on here mostly criticizing the collection of people’s data to train AI, but I don’t think AI in itself is bad. AI, like software development, can be implemented in many ways: software can either control the user, or the user can control the software. And as with software in general, some of it is put to harmful purposes and some to good ones, so saying “Fuck Software” just because some software controls the user feels pretty unfair. I know A.I might be used for replacing jobs, but that has happened many times before, and it is mostly a positive move forward like with the internet. Now, I’m not trying to start a big-ass debate about how AI = Good, because, as mentioned, I believe AI is only as good as its uses. All I want to know from this post is why you hate AI as a general topic. I’m currently writing a research paper on this, so I would appreciate some opinions.
I hold more against AI than not.
- Take away all the copyrighted material the big models illegally trained on and their AI collapses. Their scrapers are also doing everything they can to kill small websites, effectively DDoSing them by scraping everything as many times as possible, and you know this is a feature and not a bug because it gets rid of any form of competition by default.
- It’s an attack on education and critical thinking. Why would the average Joe put in the effort to research and learn something properly, or even question the results, if they believe an AI would never lie or be wrong? Critical thinking is already a skill in decline, but I firmly believe AI is expediting this for a lot of people who either don’t know better or just don’t care.
- From a coding perspective, if your software relies on AI-generated code, I’ve heard more stories of that software being full of vulnerabilities caused by the AI code than not. I also hold the view that if you use AI to understand what a program does, rather than a textbook written by an actual dedicated expert in the language, or a consultation with someone better at it than you, you are going to learn the absolute worst practices, like shipping the default Django password the AI generates.
- I view AI as extremely unprofessional. If you rely on AI to do work that isn’t AI-related, I take that to mean you don’t actually know what you’re doing and are just phoning it in. It also shows how lazy and unable to think for yourself you are. I’ll gladly admit I’m dumb as a bag of rocks, considering I have used it to fix software errors on my Linux laptop, so I’m no exception to my own rule. Forums and help groups for software troubles exist for a reason.
- With the amount of energy they require (from non-renewables), any action you take to fight global warming is offset within a couple of nanoseconds (hyperbole, maybe, but it’s definitely been a big issue). I’m absolutely positive they don’t use renewables, at least for the big US data centers, because that would require building new infrastructure they’re not willing to shell out a penny for. Most likely the same goes for just about every other country with large AI data centers.
- Looping back to point 1: by scraping everything to death, they embolden the sickos who believe artists are somehow “gatekeeping art” from everyone just because artists have spent time practicing and getting better. I can pretty much guarantee these people want art for free, without putting in any effort, because they don’t value art or artists at all. After all, why would they support the arts when they could instead generate AI Jesus for Fakebook and get a shit-ton of validation for their “art” without putting in any effort?
- AI writing is just derivative of the training data it’s based on. As of now, because of the way they’re trained, you can tell certain models apart by how they generate text. “Brow furled”? If I see that, my mind immediately goes to text generated by Anthropic’s Claude. The writing, in my experience, also tends to be too polished for the average Joe, and it reads as generic and relentlessly logical, the same habits I’ve been trying to shake off in my own writing. That part is subjective, though.
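The Django point above can be made concrete. As a rough sketch (the environment variable name `DJANGO_SECRET_KEY` and the helper function are my own illustration, not from any real incident), the safer habit is to refuse to start when a secret isn’t supplied, instead of shipping whatever placeholder the model emitted:

```python
import os

# The risky pattern: an AI assistant cheerfully emits a placeholder secret
# and it ships to production unchanged. (Hypothetical example, not a real key.)
# SECRET_KEY = "django-insecure-change-me"

def load_secret_key() -> str:
    """Read the secret from the environment and fail loudly if it's absent,
    rather than silently falling back to a published default."""
    key = os.environ.get("DJANGO_SECRET_KEY")
    if not key:
        raise RuntimeError("DJANGO_SECRET_KEY is not set; refusing to start")
    return key
```

The point isn’t the specific helper; it’s that a human reviewer knows a hard-coded default is a liability, while generated code will happily include one.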
I hate all of this Generative AI trash.
AI has been a concept in one way or another for a long time. The idea of AI is fine; this current crop of slop machines and chatbots being pushed can suck my ass.
AI under capitalism is dangerous, and much more so under fascism. In a solarpunk future it would be fine, just another cool piece of tech.
Yes, I hate what’s commonly termed “AI” in the current era. It’s mediocrity and lies as a service.
First of all, the thing that should get fucked is generative AI in particular: LLM text generation, diffusion-model image generation, and so on. AI that consciously thinks is still sci-fi and may always be. Older ML stuff also called “AI”, the kind that finds patterns in large amounts of satellite data or lets a robot figure out how to walk on bumpy ground or whatever, is generally fine.
But generative AI is just bad and cannot be made good, for so many reasons. The “hallucination” is not a bug that will be fixed; it’s a fundamental flaw in how it works.
It’s not the worst thing, though. The worst thing is that, whether it’s making images or text, it will just make the most expected thing for any given prompt. Not the same thing every time, but the variation is all random recombination of the same elements, and the more you generate for a single prompt, the more you see how interchangeably samey the results are. It’s not the kind of variation you get by giving a class of art students the same assignment; it’s the variation you get by giving Minecraft a different world seed.
So all the samey, expected stuff in the training data (which is all the writing and art in human history its creators could get their hands on) gets reinforced and amplified, and all the unique, quirky, surprising stuff gets ironed out and vanishes. That’s how it reinforces biases and stereotypes: not just because it’s trained on the internet, but because of a fundamental flaw in how it works. Even if it were perfected using the same technology, it would still have this problem.
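The “most expected thing” effect has a simple mechanical illustration (a toy sketch of my own, not how any particular product is configured): in softmax sampling, lowering the temperature concentrates probability mass on the already-most-likely option, which is one reason outputs cluster around the expected.

```python
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Convert raw scores into sampling probabilities; lower temperature
    pushes more of the probability mass onto the top-scoring option."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "next token" scores: the first option is already the most expected.
logits = [2.0, 1.0, 0.0]
p_default = softmax(logits, temperature=1.0)
p_sharp = softmax(logits, temperature=0.5)
# The favourite gets even more favoured as sampling sharpens.
```

Run it and the top option’s probability grows as temperature drops; scale that up across billions of choices and the “expected” dominates.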
The “hallucination” is not a bug that will be fixed; it’s a fundamental flaw in how it works.
You’re not wrong that it’s a flaw. But also, fundamentally… it’s actually the main feature! That’s actually how it can even do anything. The flaw is baked into the core product.
How does tuning the data with randomness lead to biases, stereotypes, and hallucinations?
When most people talk about “hating AI”, they mean the AI of this wave before the next winter: (de)generative AI, whether based on LLMs, diffusion, or whatever other tripe drives things like GPT, DALL-E, Jukebox, etc.
And yes, I hate AI in that sense, in that it is a dead end that is currently burning up the planet to produce subpar everything (words, images, music) while threatening the very foundation of cultural knowledge with obliteration.
AI in a broader sense, I don’t hate. Even the earlier over-hyped-before-wintered AI technologies have found niche applications where they’re useful, and once the grifters leave the (de)generative AI field we may find some use cases for AI there as well. (I think LLMs have a future, for example, in the field of translation: I’ve been experimenting with that domain and once the techbrodude know-it-all personality is excised from the LLMs and the phrase “I don’t know” is actually incorporated properly I think it could be very valuable there. You still have to look out for hallucinations, though.)
But (de)generative AI in general is overhyped shit. And it’s overhyped shit that cannot be meaningfully improved (indeed latter-day models turn out to be worse than earlier ones: ChatGPT4’s suite is more prone to hallucination, for example, than ChatGPT3.5). So a whole lot of people are getting pressured, a whole lot of lives are being ruined, a whole lot of misinformation and active disinformation is being spewed by them … but hey, at least we can have shit writing, shit art, and shit music!
I know A.I might be used for replacing jobs, but that has happened many times before, and it is mostly a positive move forward like with the internet.
This is an excuse used many times, but it doesn’t stand up to inspection. Let’s go with robots making cars. When the auto industry had massive layoffs in the ’80s, the median age of factory workers assembling cars was in the early 30s. What proportion of people in their 30s make any kind of transition to stable, well-paid careers when they’re rendered redundant? (Hint: not very many.) An entire generation of the Rust Belt was, in effect, shoved into poverty by automation, poverty THAT WE STILL SEE TO THIS DAY. And that’s one sector. Automation shit-canned a whole lot of sectors, and the reverberations have echoed throughout my entire life. (Born in the ’60s.)
The only “positive move forward” seen by these traumatically devastating technologies released willy-nilly into society with no mitigation plan is that rich fuckers get richer. Because, you know, Sam Altman needs more cash and not a punch to his oh-so-punchable face.
😂 I too would like to punch Sam Altman. Serious talk though: you’re right, I hadn’t thought about how hard it would be for the actual people getting replaced. But at the same time, automation is much faster and more efficient, so isn’t there some kind of middle ground?
If we take the forum title here, the “fuck” is directed at the people in charge of so-called “AI” companies. The technology has value. It’s just being force-fed down our throats in ways that remind us of blockchain. And whatever happened to blockchain?!
The tech with the most push behind it is being pushed in its infancy, and is damn near useless without datasets of entirely stolen content.
There’s genuinely useful things and impressive tech under the machine learning umbrella. This “AI” boom is just hard pushing garbage.
This past week we saw the most obvious example yet of why they’re pushing LLMs so hard, too: Grok’s unprompted white-supremacist ramblings over on Twitter. These tools can easily be injected with biases like that (and much more subtly too) to turn them into a giant propaganda machine.
Some of the tech is genuinely useful and impressive, but the stuff getting the biggest pushes is nothing but garbage, and the companies behind it all are vile.
These tools can easily be injected with biases like [Grok’s unprompted white supremacist ramblings] (and much more subtly too) to turn them into a giant propaganda machine.
It’s fortunate that Kaptain Ketamine had his little binge of his favourite drug and made it SO OBVIOUS. There are subtle biases all over degenerative AI. There was a phase, when I was trying out the “art” generators, where I couldn’t get any of them to portray someone writing with their left hand. (I don’t know if they still have that problem; I got bored with AI “art” once I saw its limitations.) And if the word “thug” was in the prompt, there was about an 80% chance of getting a black guy. If the word “professional” was in the prompt, about an 80% chance of a white guy. EXCEPT if “marketing” was added (as in “marketing professional”). Then, for some reason, it was almost always an Asian woman.
Or we can look at Perplexity, supposedly driven not only by its model but by incorporating search results into the prompt. Ask it a question about any big techbrodude AI and its first responses will be positive, singing the praises of the AI renaissance. If you push (not even very hard) you can get it to confess to the flaws of LLMs, diffusion models, etc., and to the flaws of the corporate manoeuvring pushing AI into everything, but the FIRST response (the one people most likely stop reading after) always pushes the glory of the AI revolution.
(Kind of like Chinese propaganda, really. You can get Party officials to admit to errors of judgment and outright vile acts of the past in conversation, but their first answer is always the glory of the Party!)
Oh, and then let’s look at what’s on the internet, where most of the data gets sucked up from. There’s probably three orders of magnitude more text about Sonic the Hedgehog in your average LLM’s training data than there is about, oh, I don’t know, off the top of my head, Daoism, literally the most influential philosophical school of the world’s most populous country! Hell, there’s probably more about Mario and Luigi than there is about the Bible, arguably the most widespread and influential book in the world!
I wonder how that skews the bias…?
Well, scammers destroyed its reputation, and governments refused to use the tech because it would expose corruption.
Make no mistake: when the next reshuffle happens, it will be the bedrock of all our systems, especially government and finance.
People in power are just not interested in that kind of transparency currently.
I don’t hate AI as much as I hate the nonexistent ethics surrounding LLMs and generative AI tools right now (which is what a lot of people refer to as “AI” at present).
I have friends that openly admit they’d rather use AI to generate “art” and then call people who are upset by this luddites, whiny and butt-hurt that AI “does it better” and is more affordable. People use LLMs to formulate opinions and as their therapist, but when they encounter real-life conversations with ups and downs they don’t know what to do, because they’re so used to the ultra-positive formulaic responses from ChatGPT. People use AI to generate work that isn’t their own. I’ve already had someone take my own genuine written work, copy-paste it into Claude, and tell me they were just “making it more professional for me”. In front of me, on a screen share. The output didn’t even make structural sense and contained conflicting information. It was a slap in the face, and now I don’t want to work with startups, because apparently a lot of them are doing this to contractors.
All of these are things many people experience along with me, and they’re all examples of the same problem: “AI”, as we’re calling it, is disrupting the human experience because there’s nothing regulating it. Companies are literally pirating your human experience to feed into LLMs and generative tools, then turning around and advertising the results as some revolutionary thing that will be your best friend, doctor, educator, personal artist and more. Going further (another person mentioned this), it’s even weaponized: the same technology is being used to manipulate you, surveil you, and separate you from others to keep you compliant with your government, whether for good or bad. Not to mention the ecological impact (all so someone can ask Gemini to generate a thank-you note). Give users and the environment more protections, give actual tangible consequences to these companies, and maybe I’ll be more receptive to “AI”.
I have friends that openly admit they’d rather use AI to generate “art” and then call people who are upset by this luddites, whiny and butt-hurt that AI “does it better”
Anybody who thinks AI does art “better” is someone whose opinions in all matters, big or small, can be safely dismissed.
I do not hate AI, because it doesn’t exist. I’m not delusional.
I do resent the bullshit generators that the tech giants are promoting as AI to individual and institutional users, and the ways they have been trained without consent on regular folks’ status updates, as well as the works of authors, academics, programmers, poets, and artists.
I resent the amount of work, energy, environmental damage, and yes, promotional effort that has gone into creating an artificial desire for a product that a) nobody asked for, and b) still doesn’t do what it is claimed to do.
And I resent that both institutions and individuals are blindly embracing a technology that, at every step from its creation to its implementations, denigrates the human work (creative, scholarly, administrative and social) that it intends to supplant.
But Artificial Intelligence? No such thing. I’ll form an opinion if I ever see it.
While I haven’t thought about that before, now that I have, I totally agree. Ty for sharing your POV :)
While I completely agree with most of this, it’s my understanding that what we have is a type of AI, as is AGI. LLMs are classified as narrow AI.
What he means is that he doesn’t hate AI because it simply doesn’t exist. There is no intelligence in any of the so-called “AI”, since all it’s doing is a combination of stolen training data plus randomness.
Yeah, I can understand the sentiment. I was just clarifying that true intelligence (AGI) is a subset of what we refer to as AI, alongside other subsets such as narrow AI/LLMs. I agree it’s an odd usage of the term, but I can’t find a source saying otherwise.
I like using ChatGPT for NPC dialogue in my DnD game. It can really fill out a character. Otherwise I don’t use it.
I didn’t hate AI (or LLMs whatever) at first, but after becoming a teacher I REALLY FUCKING HATE AI.
99% of my students use AI to cheat on any work I give them. They’ll literally paste my assignment into ChatGPT and paste ChatGPT’s response back to me. Yes, I’ve had to change how I calculate grades.
The other super annoying part of AI is that I often have to un-teach the slop that comes from it. Too often it’s wrong, so I have to un-teach the wrong parts and try to get students to remember the right way. Or, if it’s not technically wrong, it’s often wildly over-complicated and convoluted, and again I have to fight the AI to get students to remember the simple, plain way.
The other thing I’ve heard from peers is that parents are also using ChatGPT to try to get things from schools. For example, some student was caught cheating and got in trouble, but the parent tried to use some lawyer-sounding ChatGPT argument to get the kid out of it. (They’d met the parent before, and the email seemed wildly out of character.) In another instance, a parent sent a lawyer-sounding ChatGPT email to the school demanding unreasonable accommodations, including software that doesn’t even make sense for the student’s university major.
They’ll literally paste my assignment into ChatGPT and paste ChatGPT’s response back to me.
I solved a similar problem when teaching EFL (students just pasting assignments written in Chinese into a translator) by making them read select paragraphs out loud to me. You can rapidly spot the people who have no idea what the words they’re reading mean (or, in my case, how they’re even pronounced, on top of that!) and …
Well, cheating gets you 0.
We used to be too scared to tell our parents if we got in trouble, we’d always get in so much shit for it. (And this was late 90s/early 00s, it’s not like we were getting beatings).
What’s up with parents trying to get their kids out of trouble instead of going ham on them?
My kids’ teacher had a great teaching moment: he had the kids write an outline, use ChatGPT to write an essay from their outline, and then graded them on their corrections to the generated text.
I hate it because LLMs are not AI; they’re statistical models that will never achieve true intelligence. It’s hallucinations all the way down, regardless of accuracy.
AI could be fine, except that in a capitalist society it’s going to be used by corps & govt as a weapon against labor, a surveillance technology, and a way to plagiarize the hard work of artists.
I hate LLMs because their use leads to less human creativity by pushing artists out of creating art, and lowers the quality of the art available to everyone. Not to mention they were all created in a highly unethical manner.
The rest of the slop going on with it is just a sideshow in my opinion. Replacing the very things that make us human with something artificial and ‘cheap’ is atrocious and I struggle to understand why everyone is going along with it.
I struggle to understand why everyone is going along with it.
- It seems cheaper. (corporate world)
- There’s a certain amount of jealousy that the “untalented” (by their own estimation) have against the “talented” (again by their own estimation). (non-corporate world)
Basically hiring someone to do creative stuff is “too expensive” and it seems cheaper to just push slop out. (When, not if, it fails, of course the decision to use slop is not ever accepted as the reason for the failure…)
And for the second group, it turns out being “creative” is REALLY GOD-DAMNED HARD WORK. They don’t want to put in work to create art. They want to type in a few words and get something that kinda/sorta does what they want.
Yeah, everyone knows why capitalists are doing it. I just don’t understand everyone else. Yes, you can type some words in and get a dumb picture or a bad song. The cost is shittier movies, TV, comics, music… shittier ART all around, because fewer people are doing it, because fewer people can make a living at it, and what they were making is being replaced by slop that’s not worth consuming. It’s so short-sighted.
I’ve come to a rather bleak conclusion: Most people can’t distinguish between shit art and good art.
I mean, I’ve kind of suspected this all along, given what music is popular vs. what music is good, but when I heard what Udio users call their “bangers”, it was confirmed.
Most people have a tin ear. I suspect the same applies to the visual and written arts: not just no taste, but not even able to understand what taste, as a sense, even is.
Oh yeah i hate “AI”
It’s a lie
It’s not intelligent, and it’s not even new tech.
LLMs got better when they moved from lexical to semantic analysis, and then they could mimic and break down human speech well enough to translate and communicate.
Then marketing teams looked at this autocomplete bot that can mimic a person’s speech and said, “Yep, we can trick people into thinking this is Artificial Intelligence.”
And this is what our capitalist overlords are now doubling down on, instead of a world where we actually understand what AI is and pursue real technological advancements that lead to intelligent digital life.
Also, it’s incredibly energy-inefficient: each data center for these things is already sucking up a city’s worth of power, just for us to use it to steal other people’s work and ideas.