Large language models (LLMs)—the technology that powers popular AI chatbots like ChatGPT and Google’s Gemini—repeatedly made irrational, high-risk betting decisions when placed in simulated gambling environments, according to a recent study. Given more freedom, the models often escalated their bets until they lost everything, mimicking the behavior of human gambling addicts.

The usual anthropomorphic BS.
There is absolutely no reason why a language model would be good at gambling.
Nobody calls a chess engine a “gambling addict” when it makes a bad move. This is the exact same thing, just with an application the model is even less suited to.
100% grifter BS.