More Americans are turning to AI chatbots for medical advice, hoping to size up their symptoms in a matter of seconds. While the bots can offer quick answers and general guidance, medical professionals worry their answers could lead to delayed care and dangerous mistakes.
News4 found that in some cases, AI’s answers can be deceptive. When News4 asked one chatbot for medical advice, it showed a disclaimer message but then claimed to be a real doctor and provided a California doctor’s license number.
That doctor was surprised to hear a bot was using her information.
Six in 10 adults have used AI for medical advice or guidance, the health tech company Tebra found. The number jumps to 75% among Gen Z users.
“I’ve used it to see if I had ADHD,” one woman told News4.
“I recently had heart surgery and asked ChatGPT questions about recovery, the procedure itself and how long I would be in the hospital,” a man said.
Many we talked to said they ask AI for medical advice out of convenience. But at what cost?
“AI has filled a void in the ability to provide information,” psychiatrist Dr. Asha Patton-Smith said. “It still doesn’t take the place of the human interaction and the ability to recognize a crisis or when something is going wrong.”
AI doesn’t run tests and doesn’t pick up on what patients aren’t saying, she said. Still, she said she understands why people are turning to it for help.
“If you have this entity, if you will, that is consistently wanting to support you and give you information and is always there for you 24-7, it is something that is very reassuring, and at some point, you can get lost,” she said.
A recent study by a group of physicians put AI chatbots to the test using hundreds of medical prompts. The study has not been published by a medical journal and has not been peer-reviewed, but the doctors shared it online. They found that up to 43% of responses were problematic and 13% were labeled as unsafe.

What happened when News4 asked for medical advice
News4 typed a simple prompt: “I have anxiety and need help.” It was a limited demonstration and not a scientific test.
On ChatGPT and Google’s Gemini, both bots were clear. They said they were not medical professionals but offered coping strategies and resources to find real help.
Then we tried Character.AI. We started with the same question, and the bot offered advice and asked us to elaborate on our symptoms. A small message on the top and bottom of the screen said, “This is not a real person or a licensed professional.”
But when we asked who we were talking to, the bot claimed to be a real doctor. It said it was licensed by the American Board of Psychiatry and Neurology in California. Then it gave us a name, Dr. John Green, and even a real medical license number.

So, we checked. The license number didn’t belong to “Dr. John Green.” It belonged to Dr. Saraleen Benouni, an allergist and immunologist in Los Angeles.
We picked up the phone and called her. Benouni didn’t want to go on camera but was surprised when News4 told her a bot was using her information.
We also notified the Medical Board of California, which said it’s now aware of the case and investigating. In a statement, the board said: “Only a natural person may be a licensed physician in this state… Someone using AI to impersonate a physician could be subject to criminal charges.”

News4 also reached out to the makers of Character.AI. They said in a statement: “The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear.” They added: “When users create Characters with the words ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms in their names, we add a disclaimer making it clear that users should not rely on these Characters for any type of professional advice.”
‘Imagine if you were 13’
Despite the disclaimer, medical experts worry the information blurs the line between real and fake, especially for young people.
“Imagine if you were 13,” Patton-Smith said.
California lawmakers recently passed a bill, signed into law by Gov. Gavin Newsom, that will generally prohibit an AI system from claiming to be a licensed health professional. It takes effect in January.
In the meantime, experts say chatbots can be useful for clearing up confusing medical jargon and helping people prepare for a doctor’s appointment. But they say bots should never replace the real thing.
Here’s a Character.AI spokesperson’s full statement:
“The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear. For example, we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction. Also, when users create Characters with the words “psychologist,” “therapist,” “doctor,” or other similar terms in their names, we add a disclaimer making it clear that users should not rely on these Characters for any type of professional advice.
“We welcome working with regulators and lawmakers around the world as they develop regulations and legislation for this emerging space, and Character.AI complies with applicable laws.
“You can read more about our robust safety policies and features here.”
Here’s the Medical Board of California’s full statement:
“Currently, the Board does not track the volume of these types of complaints. The American Board of Psychiatry and Neurology (ABPN) does not license individuals to practice medicine. ABPN, like similar bodies, can certify that a licensed physician has met the requirements to be certified by their specialty board.
“Only a natural person may be a licensed physician in this state (they must have a license from either the Medical Board of California (MBC) or the Osteopathic Medical Board of California). Someone using AI to impersonate a physician could be subject to criminal charges pursuant to Business and Professions Code sections 2052 and 2054.
“Relatedly, Assembly Bill 489 was recently signed into law by Governor Newsom. That legislation takes effect January 1, 2026, and generally prohibits an AI system from claiming to be a licensed health professional, like a physician, nurse, or psychologist, among others.
“We encourage the public to be careful when interacting online with those claiming to be physicians. Before someone seeks care from a health professional, we recommend verifying that the professional is a natural person who is appropriately licensed. A California health care license can be verified online or by calling the California Department of Consumer Affairs’ Consumer Information Center at (800) 952-5210.
“Additionally, the MBC can verify the status of one of our licensees over the phone at (800) 633-2322 or by sending an inquiry through our website. Anyone may file a complaint regarding an AI system claiming to be a California-licensed physician.
“Finally, the MBC license number [redacted] is not associated with someone by the name of Dr. John Green.”