If both agents selected the same name, they earned a reward; if not, they received a penalty and were shown each other’s choices. Agents only had access to a limited memory of their own recent interactions—not of the full population—and were not told they were part of a group. Over many such interactions, a shared naming convention could spontaneously emerge across the population, without any central coordination or predefined solution.
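For anyone who wants to poke at the dynamic the excerpt describes, here's a rough Python sketch of that pairwise naming game. The candidate names, memory length, population size, and scoring rule are all my own guesses for illustration, not the study's actual setup:

```python
import random
from collections import Counter

# Rough sketch of the pairwise naming game described above.
# All parameters here are illustrative assumptions, not the paper's.
NAMES = ["blue", "red", "green", "gold"]  # hypothetical vocabulary
N_AGENTS = 50
MEMORY = 5       # each agent remembers only its own recent interactions
ROUNDS = 20000

# Each agent's memory: list of (my_choice, partner_choice, success) tuples
memories = [[] for _ in range(N_AGENTS)]

def choose(mem):
    """Pick the name that has worked best in this agent's own recent history."""
    if not mem:
        return random.choice(NAMES)
    scores = {n: 0 for n in NAMES}
    for mine, theirs, success in mem:
        scores[mine] += 1 if success else -1
        if not success:
            scores[theirs] += 1  # failed pairs were shown each other's choice
    best = max(scores.values())
    return random.choice([n for n, s in scores.items() if s == best])

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)  # random pairwise interaction
    ca, cb = choose(memories[a]), choose(memories[b])
    success = ca == cb
    for agent, mine, theirs in ((a, ca, cb), (b, cb, ca)):
        memories[agent].append((mine, theirs, success))
        del memories[agent][:-MEMORY]  # keep only a limited memory

print(Counter(choose(m) for m in memories))
```

Running this usually ends with one name dominating the population, though which name wins varies from run to run; the feedback is purely local to each pair.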
Lol. The “central coordination” is provided by the humans who set this up. If people “reward” and “penalize” computers for making different choices, is there any surprise when they converge on the same choice?
Grifters gotta get that grant/VC money somehow…
We are entering a world where AI does not just talk—it negotiates, aligns, and sometimes disagrees over shared behaviours, just like us.
Fuck off with this anthropomorphic pseudo-science.
What they’re not telling us is that, in all of the tests, the first thing they do is start organizing to eliminate the human race.
Every.
Single.
Time.
Could you show me the place in the study where it says this? I wasn’t able to find it, and this seems pretty important.
It doesn’t. That’s why I said “what they’re not telling us”.
It was a joke.
Ah, that makes a lot more sense lol