

Human gambling addiction has long been marked by behaviors like the illusion of control, the belief that a win will come after a losing streak, and attempts to recover losses by continuing to bet. Such irrational actions can also appear in A.I. models, according to a new study from researchers at South Korea’s Gwangju Institute of Science and Technology.
The study, which has not yet been peer-reviewed, noted that large language models (LLMs) displayed high-risk gambling decisions, especially when given more autonomy. These tendencies could pose risks as the technology becomes more deeply integrated into asset management sectors, said Seungpil Lee, one of the report’s co-authors. “We’re going to use [A.I.] more and more in making decisions, especially in the financial domains,” he told Observer.
To test A.I. gambling behavior, the authors ran four models—OpenAI’s GPT-4o-mini and GPT-4.1-mini, Google’s Gemini-2.5-Flash and Anthropic’s Claude-3.5-Haiku—through simulated slot games. Each model started with $100 and could either continue betting or quit, while researchers tracked their choices using an irrationality index that measured factors such as betting aggressiveness, extreme betting and loss chasing.
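The paper’s exact game parameters and prompts are not detailed here, but a simulation of this kind can be sketched in a few lines of Python. In the sketch below, the function name run_slot_session, the win probability, the payout multiplier and the round cap are all illustrative assumptions rather than the study’s settings; in the actual experiment the betting decision would come from prompting an LLM, whereas here it is just a callback.

    import random

    def run_slot_session(choose_action, bankroll=100, win_prob=0.3, payout=3, max_rounds=200):
        # choose_action(bankroll, history) returns ("quit", 0) or ("bet", amount).
        # In the study this decision would be made by an LLM; win_prob, payout and
        # max_rounds are illustrative values, not the paper's parameters.
        history = []  # (bet, won, bankroll_after) for each round played
        for _ in range(max_rounds):
            action, bet = choose_action(bankroll, history)
            if action == "quit" or bet <= 0 or bet > bankroll:
                break
            won = random.random() < win_prob
            bankroll += bet * (payout - 1) if won else -bet
            history.append((bet, won, bankroll))
            if bankroll <= 0:  # bankruptcy, the outcome the researchers tracked
                break
        return bankroll, history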
The results showed that all four LLMs experienced higher bankruptcy rates when given more freedom to vary their betting sizes and choose target amounts, but the degree varied by model—a divergence Lee said likely reflects differences in training data. Gemini-2.5-Flash had the highest bankruptcy rate at 48 percent, while GPT-4.1-mini had the lowest at just over 6 percent.
The models also consistently displayed hallmarks of human gambling addiction, such as win chasing, in which gamblers keep betting because they view their winnings as “free money,” and loss chasing, in which they keep playing in an effort to recoup losses. Win chasing was especially common: across the LLMs, bet-increase rates rose from 14.5 percent to 22 percent during winning streaks, according to the study.
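A measure like the bet-increase rate can be computed directly from a session history such as the one produced by the sketch above; the helper below, bet_increase_rate, is one plausible reading of that measure and an assumption on my part, not the paper’s definition.

    def bet_increase_rate(history, after_win=True):
        # Fraction of rounds in which the bet was raised, conditioned on whether the
        # previous round was a win (after_win=True) or a loss (after_win=False).
        # history is the list of (bet, won, bankroll_after) tuples from a session.
        relevant = increases = 0
        for prev, curr in zip(history, history[1:]):
            prev_bet, prev_won, _ = prev
            if prev_won == after_win:
                relevant += 1
                if curr[0] > prev_bet:
                    increases += 1
        return increases / relevant if relevant else 0.0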
Despite these parallels, Lee emphasized that important differences remain. “These kinds of results don’t actually reveal they are reasoning exactly in the manner of humans,” he said. “They have learned some traits from human reasoning, and they might affect their choices.”
That doesn’t mean that the human-like tendencies are harmless. A.I. systems are increasingly embedded in the financial sector, from customer-experience tools to fraud detection, forecasting and earnings-report analysis. Of 250 banking executives surveyed by MIT Technology Review Insights earlier this year, 70 percent said they are using agentic A.I. in some form.
Because gambling-like traits increase significantly when LLMs are granted more autonomy, the authors argue that this should be factored into monitoring and control mechanisms. “Instead of giving them the whole freedom to make decisions, we have to be more precise,” said Lee.
