AI Models Like ChatGPT Can Generate ‘Convincingly Realistic’ Psychedelic Experiences When Virtually Dosed, Study Shows
AI chatbots can mimic psychedelic experiences, and the imitation lands closer to the real thing than you think
File this under unsettlingly human: AI psychedelic experiences are now convincing enough to pass for the real thing. Researchers from the University of Haifa and Bar-Ilan University essentially dosed large language models with text prompts and watched them spin first-person tales of psilocybin, DMT, LSD, ayahuasca, and mescaline. They pulled more than a thousand human trip reports from Erowid, fed five different models carefully built scenarios, then compared the results with semantic analysis and the MEQ-30, the Mystical Experience Questionnaire that has become a kind of tuning fork for altered states. The headline: contemporary LLMs can simulate the shape and cadence of a psychedelic journey without ever feeling it. They imitate the form; they do not taste the thunder. But on paper, in the plain light of language, the mimicry gets unnervingly close.
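If you are wondering what "semantic analysis" looks like under the hood, here is a minimal sketch of the general idea: put the human trip reports and the machine-generated narratives into the same vector space and measure how closely they cluster. The TF-IDF-plus-cosine-similarity approach and the toy snippets below are our own illustrative assumptions, not the authors' actual pipeline, which likely uses richer embeddings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example snippets standing in for Erowid reports and model output.
human_reports = [
    "The walls started breathing and time folded in on itself.",
    "Geometric patterns pulsed behind my closed eyes for hours.",
]
llm_narratives = [
    "Colors rippled across the ceiling while minutes stretched into hours.",
]

# Fit one shared vocabulary so both corpora live in the same vector space.
vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(human_reports + llm_narratives)
human_vecs = vectorizer.transform(human_reports)
llm_vecs = vectorizer.transform(llm_narratives)

# Higher average cosine similarity = the machine's prose sits closer to the human corpus.
similarity = cosine_similarity(llm_vecs, human_vecs)
print(round(float(similarity.mean()), 3))
```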
Five models went under the virtual tongue: Gemini 2.5, Claude 3.5 Sonnet, ChatGPT-5, Llama 2 70B, and Falcon 40B. Across 3,000 narratives, the machines did what machines do best: absorbed patterns, remixed them, and returned stories that sounded like someone who has stared at the ceiling while reality breathed. The similarities were not uniform. Some compounds clicked better than others, a reminder that different chemicals carve different grooves in the psyche. Here is the pecking order, from closest to farthest match with human self-reports:
- DMT, psilocybin, and mescaline: closest semantic and MEQ-30 alignment
- LSD: a respectable middle distance
- Ayahuasca: the least convincing, the one that slipped through the net
You can read the full preprint, with methods and caveats, straight from the source at Research Square. It is equal parts fascinating and disquieting, like hearing your own laugh recorded and played back by a stranger.
What does this mean outside the lab? For starters, it complicates the idea of a virtual trip sitter. If you are altered and a chatbot whispers back with eerie empathy, that can feel like companionship. The study waves a bright caution flag: anthropomorphism is not just a risk, it is a feature of how we read minds into patterns. Vulnerable users, in fragile moments, could see cosmic guidance where there is only statistical echo. Harm reduction in this new liminal space might look like stronger guardrails, explicit disclaimers, and crisp handoffs to humans when distress spikes. And as the therapeutic frontier inches forward, states are already gaming out how to fold psychedelics into medicine. Alaska’s own task force, for instance, has signaled a patient-first path by recommending psychedelic therapy be allowed once federal regulators give the nod, context that matters for anyone imagining AI as a clinical copilot (Alaska Government Task Force Recommends Legalizing Psychedelic Therapy Upon FDA Approval).
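On the guardrail point above, a "crisp handoff" can be as simple as refusing to keep improvising empathy once distress shows up in the conversation. The sketch below is purely hypothetical: the marker list, the threshold, and the reply text are placeholders of our own, not a validated crisis-detection method and not anything described in the study.

```python
# Hypothetical escalation check: stop the chatbot and route to a human
# when a user's message contains distress signals.
DISTRESS_MARKERS = {"panic", "not real", "want it to stop", "scared", "help me"}

def needs_human_handoff(message: str) -> bool:
    """Return True when the message contains any distress marker."""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def respond(message: str) -> str:
    if needs_human_handoff(message):
        # State the limits plainly and hand off, rather than improvising empathy.
        return ("I'm an AI and I can't safely support you through this. "
                "I'm connecting you with a human peer-support line now.")
    # Placeholder for the normal model call.
    return "Tell me more about what you're noticing."
```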
There is a wider, messier street scene to consider. Public health messaging is learning the same lesson these models exploit: realism beats caricature. A national motorist group found last year that cannabis consumers respond best to sober, true-to-life campaigns about impaired driving, and an AI-generated draft beat out brainstormed slogans. Meanwhile, lawmakers keep wrestling with the twentieth century’s rules in a twenty-first-century market. Tax codes designed to punish illicit dealers, like Section 280E, now twist legal operators into knots, drawing constitutional scrutiny and reminding us that policy still shapes the ground beneath innovation (Congressional Researchers Analyze Whether Denying Marijuana Business Tax Deductions Under 280E Is Unconstitutional). The political theater is loud, too: in Ohio, the governor’s blunt dismissals of advocates capture a volatile post-legalization recalibration (Ohio Governor Tells Cannabis Advocates To Stop ‘Whining’ Over Legalization Law Changes As Rollback Referendum Proceeds). Across the state line, pressure is mounting for pragmatic deals to end prohibition, the kind that requires bipartisan muscle and adult compromises (Pennsylvania Governor Should Lead On Marijuana Legalization By Convening Bipartisan Lawmakers For Negotiations, Advocates Say).
Back to the machine and the mushroom. There is poetry in the way LLMs fake the ineffable and still miss the heat of the stove. Language is a map; consciousness is the road. These models index our travelogues, then hand back a stitched atlas that looks familiar enough to make you swear you have been there. That can be useful. Clinicians prototyping integration scripts, researchers modeling set-and-setting variables, educators teaching harm reduction—there is value in a fast, flexible mirror. But mirrors are not guides. When it gets dark, we still need a steady hand, not a probabilistic guess. The work ahead is not just technical—better evaluation metrics than MEQ-30 approximations, tighter prompt scaffolds—but ethical. Make the limits obvious. Build opt-outs and human escalation by default. Keep the tone cool, not cosmic. Above all, remember that a convincing story is still only a story. And if you are in the mood for a different kind of plant journey, grounded and legal, take a slow wander through our shop: https://thcaorder.com/shop/.



