Dehumanizing AI is a good thing.
You can’t dehumanize what was never human to begin with.
Which kind of drags the entire thing from the meta level down to the object level. There were cases of dehumanization in not-that-ancient history where the dehumanizers explicitly claimed their victims were not human. American slavery is one example. The Holocaust is another. MAGAs (still) won’t claim explicitly that the minorities they dehumanize are not human. If we stay at the meta level, wouldn’t that make them worse than slavers and actual Nazis, who can say they are not dehumanizing because their victims were never human to begin with?
It shouldn’t.
We humanize lots of non-human things all the time. Pets, animals used as meat, one-month-old fetuses, fictional characters, religious figures, etc.
It is as human to humanize as it is to dehumanize because it’s in our nature to attempt to define what is and isn’t us.
When you attribute value to a being because you see humanity in it, you are making a value statement that a being has worth because it has humanity, not because it has life, which is precious.
Ultimately, dehumanizing ourselves is how we can extend our compassion to other beings. When we accept that we are no more alive than pigs are, we accept that pigs, too, are living beings with their own thoughts, subjective experience, and suffering.
You can absolutely dehumanize things that were never human, because what it means to be human is neither universal nor static. AI is human to people who don’t understand how LLMs work. There’s a thought experiment called Roko’s basilisk (trigger warning, as it can induce anxiety) that entirely banks on people’s tendency to humanize AI. You can argue that people are dumb and just don’t understand that that’s not how AI works, but how something works often has no bearing on how it is perceived by people.
More people than ever are asking what it means to be human in the face of something that almost communicates like one. We are not dehumanizing AI because of its race, gender, or color, because that is not clearly defined in AI. We’re dehumanizing AI because we are asking what it means to be human outside of superficial context.
I mean… I get your point, but AI is literally not human.
A valid observation at the object level - but not at the meta level. That is - the reason why it’s okay to dehumanize AI but not okay to dehumanize <minority> is that your claim that “AI is not human” is correct while our hypothetical racist’s claim that “<minority> are not human” is incorrect - and not because of some general principle like the one in the meme.
I agree, though “dehumanize” means different things to different people. To many, even calling some people despicable garbage is beyond the pale.
I think the whole debate is stupid. Most agree some people deserve at least permanent incarceration; a fate worse than death, depending on one’s beliefs about an afterlife. Policing language over feefees when there are people out there gleefully murdering children is pedantic self-fellatio and completely and utterly misses the point.
Also, policing language over feelings leads to the worst abusers figuring out how to play the system and getting other people policed for their fee fees.
The bullies play victim.
The Measure of a Man does a far better job of going into this than I can, but suffice to say, what package someone is wrapped in shouldn’t be the arbiter of what qualifies as a person. Does this apply to AI in its current form? I’d say no, but does it apply to whales, octopuses, pigs, possible aliens, possible AI implementations in the future? That’s a little trickier.
You can’t dehumanise things that are nowhere near human. How did you interpret this post in order to arrive at this comment??
I hate AI, especially how they try to make it “humanlike”, but how did this topic even come up?
Just don’t use AI gen while you’re doing it.
Please cool it with the clankphobia. ChatGPT, Claude, and Gemini are as human as you or me; they just live on the wire instead of inside a skull.