First-of-its-kind study suggests groups of artificial intelligence language models can self-organise into societies and are prone to tipping points in social convention, much like human societies.
If both agents selected the same name, they earned a reward; if not, they received a penalty and were shown each other’s choices. Agents only had access to a limited memory of their own recent interactions—not of the full population—and were not told they were part of a group. Over many such interactions, a shared naming convention could spontaneously emerge across the population, without any central coordination or predefined solution.
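The protocol described is essentially a naming game. Below is a minimal sketch of that dynamic in Python, using a toy population of memory-limited agents rather than actual LLMs; the name pool, population size, memory length, and scoring rule are all illustrative assumptions, not the study's setup.

```python
import random

NAMES = ["A", "B"]    # candidate names (assumed; the study used larger pools)
N_AGENTS = 24         # population size (assumed)
MEMORY = 5            # each agent remembers only its last few interactions
ROUNDS = 3000

# Each agent's memory: list of (own_choice, partner_choice, success) tuples
memories = [[] for _ in range(N_AGENTS)]

def choose(mem):
    """Pick the name that has worked best within this agent's limited memory."""
    if not mem:
        return random.choice(NAMES)
    scores = {name: 0 for name in NAMES}
    for own, partner, success in mem:
        scores[own] += 1 if success else -1
        if not success:
            # failed interactions reveal the partner's choice, per the protocol
            scores[partner] += 1
    best = max(scores.values())
    return random.choice([n for n, s in scores.items() if s == best])

for _ in range(ROUNDS):
    i, j = random.sample(range(N_AGENTS), 2)  # random pairing, no central control
    a, b = choose(memories[i]), choose(memories[j])
    success = (a == b)                        # reward on match, penalty otherwise
    memories[i] = (memories[i] + [(a, b, success)])[-MEMORY:]
    memories[j] = (memories[j] + [(b, a, success)])[-MEMORY:]

# Measure consensus: fraction of agents whose current pick is the majority name
picks = [choose(m) for m in memories]
majority = max(set(picks), key=picks.count)
print(f"consensus: {picks.count(majority)/N_AGENTS:.0%} on name {majority!r}")
```

Run repeatedly, a population like this typically converges on a single name even though no agent ever sees the whole group; which name wins varies from run to run.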
Lol. The “central coordination” is provided by the humans who set this up. If people “reward” and “penalize” computers for making different choices, is there any surprise when they converge on the same choice?
Grifters gotta get that grant/VC money somehow…
We are entering a world where AI does not just talk—it negotiates, aligns, and sometimes disagrees over shared behaviours, just like us.
Fuck off with this anthropomorphic pseudo-science.