That “human” skeleton in the fourth item gave it away immediately. Now that I look at it further, “Isolation & Surveillance” and a picture of a megaphone??? “Fear as a tool of control” with a lightning bolt in someone’s head??? Did OP even read their slop before vomiting it here?
The Star of David is fucked up, too. You’d think that would be an easy shape to get right.
Also the color of the background. For some reason genAI uses that a lot.
Yeah I’ve seen so much AI slop with the yellow tinge. It’s kinda hilarious that we’re watching AI model collapse in real time but the bubble keeps growing
I reckon that it probably began with a lot of the training material being scans of aged paper.
I’ve also heard theories that it’s related to the abundance of “golden hour” photos in the training data, but ultimately (and this is one of the significant problems with machine learning) the specific cause is unknowable due to the nature of the software.
What’s wrong with the skeleton? It’s stylised of course as these sorts of icons tend to be, but generally correct. Pelvis, spine, ribs, head, etc.
The megaphone seems like a very good way to evoke images of an abusive overseer controlling the camp’s prisoners using technology of the modern day, an effective image for a section on monitoring and control, no?
There is no standardised symbol for fear within a person’s mind, so again, a stylised lightning bolt is fine. Especially given that it is likely there on purpose: think shocks. Shocks of a different kind you may receive under an evil, oppressive prison-camp system (imagine the sudden shock in one’s mind as a guard shouts or lashes out at you; I would certainly consider symbolising that in this manner).
It’s as if you’ve never looked at anything made with simple clipart before, and assume everything must be extremely deep and custom designed by experts.
Even if this were made with the help of AI, I don’t see the message being any less valid, just because the person didn’t go download an image editor to a PC, learn how to use it, learn how to import SVG icons and research for the most appropriate ones, build the image and export it appropriately, etc.
Not everybody is as skilled or capable as you or I may be in producing something that we might consider simple. Heck, some people only have a smartphone, not everybody has the luxury of owning a PC and proper software, nor the time or inclination to learn such tools.
The message in this image is conveyed very well, and is relevant to the current fascist regime’s actions in the USA (and indeed is a universally important message).
If you want to suggest it’s bad (or “slop”, as you so evocatively put it) just because you don’t like the tool the image’s creator used to put it together, well, that’s a weird hill to die on, to be honest.
You better hope your country never duplicates the USA’s slide into fascism, or you yourself may one day end up in a camp… or worse. How quick to attack the people trying to raise awareness of these abuses of human rights then, I wonder?
Someone owns stock in AI 💀💀
Welcome to Lemmy (and Reddit).
Makes me wonder how many memes are “tainted” with oldschool ML before generative AI was common vernacular, like edge enhancement, translation and such.
A lot? What’s the threshold before it’s considered bad?
Well those things aren’t generative AI so there isn’t much of an issue with them
What about ‘edge enhancing’ NNs like NNEDI3? Or GANs that absolutely ‘paint in’ inferred details from their training? How big is the model before it becomes ‘generative?’
What about a deinterlacer network that’s been trained on other interlaced footage?
My point is there is an infinitely fine gradient through time between good old MS Paint/bilinear upscaling and ChatGPT (or locally runnable txt2img diffusion models). Even now, there’s an array of modern ML-based ‘editors’ that are questionably generative and that most people probably don’t even know are working in the background.
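To make the contrast concrete, the “non-generative” end of that spectrum is pure fixed arithmetic with no training and no model. A minimal sketch of bilinear upscaling (plain Python, no libraries, just to illustrate the point):

```python
# Bilinear upscaling: every output pixel is a weighted average of the four
# nearest input pixels. Nothing is learned and nothing is "generated" --
# the output contains no information that wasn't already in the input.

def bilinear_upscale(img, new_w, new_h):
    """Upscale a 2D list of grayscale values by bilinear interpolation."""
    old_h, old_w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        # Map the output coordinate back into the source grid
        src_y = y * (old_h - 1) / (new_h - 1) if new_h > 1 else 0
        y0 = int(src_y)
        y1 = min(y0 + 1, old_h - 1)
        fy = src_y - y0
        row = []
        for x in range(new_w):
            src_x = x * (old_w - 1) / (new_w - 1) if new_w > 1 else 0
            x0 = int(src_x)
            x1 = min(x0 + 1, old_w - 1)
            fx = src_x - x0
            # Blend the four surrounding pixels by their distances
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

tiny = [[0.0, 1.0],
        [1.0, 0.0]]
big = bilinear_upscale(tiny, 3, 3)
# The new centre pixel is just the average of the four corners: 0.5
```

Something like NNEDI3 or a GAN upscaler replaces that fixed averaging with a trained predictor, which is exactly where the line between “filter” and “generative” starts to blur.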
I’d say if there is training beforehand, then it’s “generative AI”.
What’s wrong with the skeleton is that it has a second head where an ass should be.
I think it is a pelvis on the left, as others have said. I have to admit, though, that I thought I was looking at two skulls, probably because I was biased to read from left to right: I just accepted the left one as a skull, and then the right one actually looks like a skull. My first thought was that it was an abstract depiction of overcrowding, so showing two skeletons pushed close together was intentional.
The rib cage is also way too long, and wtf is that bone under the ass head
That’s a pelvis.
A pelvis with 4 holes.
EDIT: and the small holes are above the bigger hole.
Those are speed holes.
So the opposite of Trump, where there’s a second asshole in his face.
Looks more like a pelvis than a head to me. 🤷
nobody is gonna be reading all that lmao
Wow. It certainly passes the test for first viewing. I fell for it until I read this comment and cannot unsee it now. Good reminder how fast propaganda of any subject can propagate, I guess
i had trouble believing this was AI because why would someone use genAI to make, like, 6 clip art images and a wall of text
You should see the commenter that I blocked under mine. Apparently, some people don’t have the technological means to go to PowerPoint Online and Ctrl-C/Ctrl-V some stock images, but they do have the means to prompt slop by mail. Silly me for assuming privilege.
What, you don’t have an extra set of ribs in your hip bone?