In my opinion, AI just feels like the logical next step in capitalist exploitation and the destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art on a scale that hasn’t been possible before. They then use AI to fill the internet with slop and misinformation, and actual artists are getting fired from their jobs because companies replace them with an AI that was trained on their original art. For these reasons and others, it just feels wrong to me to use AI in this manner, when this community should be about inclusion and kindness. Wouldn’t it be much cooler if we commissioned an actual artist for the banner, or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!
I appreciate you describing the LTV distinctions between the thinkers, thank you, sincerely!
I think the problem I have with AI - and it sounds like you agree at least partially - is that it positions human creative work, and human labour in general, as only a means to an end, rather than also as an end in itself.
(I think this holds true even with something like a video game texture, which I would argue is indeed part of a greater whole of creative expression and should not be so readily discounted.)
This makes AI something more along the lines of what Ursula Franklin called a ‘prescriptive technology’, as opposed to a ‘holistic technology’.
In other words, the way a technology defines how we work implies a kind of political relation: if a technology treats humans as interchangeable with machines, then what is the appropriate way to treat the workers who use it?
Is it impossible that there are technologies that are capitalist through and through?
Tools are different in different modes of production. A hammer is capital in the hands of a capitalist whose workers use it to drive nails, but it is just a tool in the hands of a yeoman who uses it to fix up their homestead. My point is that art and AI images have intrinsically different use-values, and thus AI cannot take the place of art. It can occupy roughly the same space as stock images, but it cannot replace what we appreciate art for.
Humans will never be equivalent to machines, but the products of labor and the products of machinery can be equal. However, what makes art “useful” is not something a machine can replicate: a machine is not a human and cannot offer a human expression of the human experience. A human can use AI as part of their art, but simply prompting a model and churning something out has as little artistic value as a napkin doodle by an untrained artist.
The products of artisanal labour and factory labour might indeed be equivalent in terms of the end product’s use-value, but they are not equivalent as far as the worker is concerned: the loss of autonomy, and the loss of opportunity for thought, problem-solving, learning, and growth, are part of the problem with capitalist social relations.
I’m trying to say that AI has this social relation baked in, because its entire purpose is to have the user cede autonomy to the system.
I’m sorry, but that doesn’t make any sense. AI is not intrinsically capitalist, and it isn’t about ceding autonomy. AI is trained on a bunch of inputs and spits out an output based on nudging. It isn’t intrinsically capital; it’s just a tool that can do some things and can’t do others. I think the way you view capitalism is fundamentally different from the way Marxists view capitalism, and that is the crux of the miscommunication here.
Literally the only thing AI does is cause its users to cede autonomy. Its only function is to act as a (poor) facsimile of human cognitive processing and its resulting output (edit: perhaps it is more accurate to say its function is to replace decision-making). This isn’t a hammer; it’s a political artifact, as Ali Alkhatib’s essay ‘Defining AI’ argues.
AI is, quite literally, a tool that approximates an output based on its training and prompting. It isn’t a political artifact or anything metaphysical.
AI is a process, a way of producing; it is not a tool. The assumptions baked into that process are political in nature.
I really don’t follow. Something like DeepSeek is quite literally a program trained on inputs that spits out an output depending on prompts. It isn’t inherently political, in that its relation to production depends on the social role it plays. Again, a hammer isn’t always capital; it is only when that’s the role it plays.
And that social role is, at least in part, to advance the idea that communication and cognition can be replicated by statistically analyzing an enormous amount of input text, while ignoring the human and social context and conditions in which actual communication takes place. How can that not be considered political?