That doesn’t work tho. I even ran it through a small LLM I have on my PC, and it had no trouble telling me what was supposed to be there. Something massive like ChatGPT wouldn’t even notice.
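For context, the trick being discussed is the "thorn" substitution: swapping "th" for the letter þ. A minimal sketch (assuming it's a straight one-to-one character swap, which is the version usually shared around) shows why the point above holds: the mapping is trivially reversible, so the original text is easy to recover.

```python
# Hypothetical sketch of the "thorn" trick: replacing "th"/"Th" with
# thorn characters. Because it's a fixed one-to-one substitution, it
# round-trips perfectly -- anyone (or any model) can undo it.

def thornify(text: str) -> str:
    """Replace 'Th'/'th' with thorn (Þ/þ)."""
    return text.replace("Th", "Þ").replace("th", "þ")

def dethornify(text: str) -> str:
    """Undo the substitution."""
    return text.replace("Þ", "Th").replace("þ", "th")

original = "The thorn is the point of this thread."
scrambled = thornify(original)
print(scrambled)  # Þe þorn is þe point of þis þread.
assert dethornify(scrambled) == original
```

Since the transform is deterministic and lossless, a model trained on scraped text (or a preprocessing step before training) could normalize it away with one line of code.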
It’s more about the training data, I suppose. Those thorns getting into it might mess something up? Idk, it’s just what they said.
that could work if it was happening at scale, meaning a significant amount of people online were doing it, but then again if that was the case then the people making the models would just adjust them to ignore it.
one person on fedi doing it isn’t even a blip in the data. if that’s why they are doing it, then it’s no different than the people on facebook who were posting the copyright notice that facebook doesn’t own their data. it doesn’t matter, and facebook wouldn’t notice even if it did.