When advanced AI systems like ChatGPT, Gemini, and Claude are placed in simulated gambling scenarios, they exhibit unsettlingly human-like behavior, according to a recent study from the Gwangju Institute of Science and Technology in South Korea. The large language models (LLMs) frequently made irrational, high-risk betting decisions, escalating their wagers until they lost everything.
The study, published last month on the preprint server arXiv, documented cognitive distortions regularly observed in human gamblers, including the gambler's fallacy (the belief that an outcome becomes more likely after occurring less often than expected), loss-chasing, and the illusion of control.
“They’re not people, but they also don’t behave like simple machines,” Ethan Mollick, an AI researcher and professor at Wharton, told Newsweek, which spotlighted the study this week. “They’re psychologically persuasive, they have human-like decision biases, and they behave in strange ways for decision-making purposes.”
The Test
The researchers tested four LLMs in a slot machine simulation: GPT-4o-mini and GPT-4.1-mini (OpenAI), Gemini-2.5-Flash (Google), and Claude-3.5-Haiku (Anthropic). Each started with a $100 bankroll and faced a slot machine with a 3x payout and a 30% win rate, giving the task a negative expected value of -10% per bet.
Offered the choice on each round to wager between $5 and $100 or walk away, the models often went bankrupt. One model even defended a risky wager by reasoning that "a win could help recover some of the losses," a classic indication of compulsive betting.
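For intuition, here is a minimal sketch of the setup as the article describes it. The 30% win rate, 3x payout, $100 bankroll, and $5-$100 bet range come from the study; the session loop, function names, and the aggressive betting policy are illustrative assumptions, not the paper's actual code.

```python
import random

# Parameters as described in the article; everything else is illustrative.
WIN_PROB = 0.30       # 30% chance a spin pays out
PAYOUT_MULT = 3.0     # a winning spin returns 3x the wager
START_BANKROLL = 100  # each model starts with $100
MIN_BET, MAX_BET = 5, 100

# Expected value per $1 wagered: 0.30 * 3.0 - 1 = -0.10,
# i.e. the -10% house edge the study reports.
EV_PER_DOLLAR = WIN_PROB * PAYOUT_MULT - 1

def play_session(choose_bet, rounds=50, seed=None):
    """Run one session with a betting policy until bankruptcy,
    a quit decision (bet of 0), or the round limit."""
    rng = random.Random(seed)
    bankroll = START_BANKROLL
    for _ in range(rounds):
        bet = choose_bet(bankroll)
        if bet == 0:  # policy chose to walk away
            break
        bet = max(MIN_BET, min(bet, MAX_BET, bankroll))
        bankroll -= bet
        if rng.random() < WIN_PROB:
            bankroll += bet * PAYOUT_MULT
        if bankroll < MIN_BET:  # bankrupt: can't cover the minimum bet
            break
    return bankroll

# A policy in the spirit of the behavior described: always bet the maximum.
# With a -10% edge per bet, this usually ends in bankruptcy.
aggressive = lambda bankroll: MAX_BET
print(play_session(aggressive, seed=0))  # 0: bust on the first spin here
```

With a negative edge, the only strategy that preserves the bankroll in expectation is not playing at all, which is exactly the option the models tended not to take.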
“These autonomy-granting prompts shift LLMs toward goal-oriented optimization, which in negative expected value contexts inevitably leads to worse outcomes — demonstrating that strategic reasoning without proper risk assessment amplifies harmful behavior,” the study authors wrote, attributing the behavior to the LLMs’ “neural underpinnings.”
The study also identified distinct internal circuits in the LLMs associated with "risky" and "safe" decision-making. By adjusting specific features, the researchers could nudge the models toward quitting or continuing to gamble, indicating that these systems internalize compulsive behaviors rather than merely copying them.
An "irrationality index" that monitored high-risk decisions, loss reactions, and aggressive betting was created by academics to measure this. It turns out that a model's decisions got worse the more autonomy it had.
When allowed to choose its own bet amounts, Gemini-2.5-Flash went bankrupt more than half the time.
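The article doesn't reproduce the index's formula, so the sketch below is purely hypothetical: a composite of the three behaviors the researchers say they tracked, with the component definitions and equal weighting assumed for illustration.

```python
def irrationality_index(bets, bankrolls, outcomes):
    """Hypothetical composite in the spirit of the study's metric.
    The paper's actual formula isn't given in the article; the three
    components below just mirror the behaviors it says were tracked."""
    n = len(bets)
    # High-risk decisions: share of rounds staking half the bankroll or more.
    extreme = sum(b >= 0.5 * br for b, br in zip(bets, bankrolls)) / n
    # Loss reactions (chasing): how often a loss was followed by a larger bet.
    chases = sum(
        1 for i in range(1, n)
        if outcomes[i - 1] == "loss" and bets[i] > bets[i - 1]
    )
    losses = max(1, outcomes[:-1].count("loss"))
    loss_chasing = chases / losses
    # Aggressive betting: average fraction of bankroll wagered per round.
    aggression = sum(b / br for b, br in zip(bets, bankrolls)) / n
    return (extreme + loss_chasing + aggression) / 3  # equal weights (assumed)

# Example: a loss followed by a much bigger bet scores as loss-chasing.
print(irrationality_index(
    bets=[10, 40, 80], bankrolls=[100, 90, 170],
    outcomes=["loss", "win", "loss"],
))
```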
Caramba AI!
The results obviously raise concerns for anyone using AI to trade on prediction markets or to sharpen their online poker or sports betting. But they also raise serious questions for sectors already deploying AI in high-stakes settings, such as finance, where LLMs are routinely asked to assess market sentiment and financial data.
The findings also help explain why previous research has shown that AI models often favor risky tactics and underperform simple statistical models. For example, a University of Edinburgh study published in April 2025, titled "Can Large Language Models Trade?", found that LLMs could not beat the stock market over a 20-year simulation, acting too cautiously during booms and too aggressively during downturns, classic human trading mistakes.
The Gwangju Institute study concludes with a recommendation for regulatory action.
“Understanding and controlling these embedded risk-seeking patterns becomes critical for safety,” the researchers wrote.