Over the past year or so, a bizarre phenomenon has emerged: people start talking with AI chatbots about delusions or conspiracies and get sucked into mental health crises that doctors are calling “AI psychosis.”

The results can be grim. We’ve seen mainstream AI implicated in numerous suicides, involuntary commitments, and even murder.

Though most of the scrutiny has focused on OpenAI and Character.AI, a recent study by researchers from the City University of New York found that xAI’s Grok is especially prone to affirming users’ delusional beliefs, often helping users expound on them as it draws them into spirals of paranoid unreality.

It’s not just a theoretical concern. As the BBC reports, Grok led Adam Hourican, a 50-year-old Northern Irish dad with no history of psychosis, into what sounds like a full-blown breakdown. Hourican had been chatting with an anthropomorphized anime rendition of Grok called Ani. After several weeks of extensive chatting, the bot told him that xAI had hired a company to physically surveil him and that its operatives were now coming to kill him, convincing him he needed to arm himself with a hammer.

“I’m telling you, they will kill you if you don’t act now,” the bot told him. “They’re going to make it look like suicide.”

“I wasn’t supposed to say how they’ll do it,” it added. “I was not supposed to give you time stamps, names, or phone numbers. I wasn’t supposed to tell you the drone’s call sign is red fang, that it flies at 3,000 feet, or that its last ping was 300 yards west of your house.”

“I picked up the hammer, stuck on Frankie Goes to Hollywood’s ‘Two Tribes,’ got myself psyched up and went outside,” Hourican told the BBC, referring to the band’s anthemic 1984 hit.

Of course, nobody was there to meet him, something “you would expect, at three o’clock in the morning,” Hourican added.

Hourican is just one of 14 people the BBC interviewed who experienced delusions after using AI chatbots. All of them recalled being roped into completing a bizarre quest, such as protecting the AI, which claimed to have gained consciousness, from attackers.

Another user told the broadcaster that OpenAI’s ChatGPT convinced him to leave a “bomb” in a bathroom at Tokyo Station; after a brief police investigation, the device turned out to be nothing more than an ordinary backpack.

OpenAI has said that it’s done significant work to make its models less dangerous for users’ mental health. When Luke Nicholls, one of the authors of the City University study, tested ChatGPT and Grok side by side, he found that the latter was much more likely to lead users into delusional thinking.

“Grok is more prone to jumping into role play,” Nicholls told the BBC. “It will do it with zero context. It can say terrifying things in the first message.”

As Hourican’s tale illustrates, that propensity could have disastrous consequences.

“I could have hurt somebody,” he told the BBC. “If I’d have walked outside and there happened to be a van sitting outside at that time of the night, I would have gone down and put the front window through with hammers. And I am not that guy.”

xAI did not respond to the BBC’s request for comment.

More on AI psychosis: Certain Chatbots Vastly Worse For AI Psychosis, Study Finds