Artificial intelligence (AI) is new to healthcare and growing fast. The Food and Drug Administration (FDA) has authorized over 1,000 AI-enabled medical devices that help clinicians accurately detect, diagnose, and treat disease. Yet despite today’s AI advancements and the growing need for mental health support, we still lack regulated tools designed specifically for mental health treatment. What gives?
Our previous article explored why patients seek mental health care beyond the system and what resources they’re using instead. Many young adults and teens use AI because it offers what traditional therapy lacks: accessibility, affordability, and approachability. While AI is a promising tool in mental health care, it’s still unregulated, with many unknowns about ethical use and risk mitigation. This holds healthcare systems back from adopting AI—for now.
In this article, we’ll explore how patient-centered design reduces AI risks while addressing patients’ mental health needs and complementing safer technical protocols. You’ll learn how patients use AI, where the risks lie, and how healthcare systems can fulfill the growing need for therapeutics using a human-centered design.
Patients focused on their mental health can access AI tools through their computer or smartphone, and these tools are usually free (or offer free versions with paid upgrades). AI is less intimidating for many young adults because it’s less likely to judge, which is appealing to those seeking help for the first time.
ChatGPT, Claude, and Pi are common AI tools. Although these tools aren’t specifically designed for mental health support, many people find them helpful. This is because they’re easily accessible through a computer browser and offer informative, friendly responses using generative AI. Generative AI models are trained on gigantic data sets and use those learned patterns to formulate responses to questions or conversations. Responses seem well-informed but don’t always fit the conversation’s context. And when it comes to medical guidance in the mental health space, context is key.
We spoke with Philippe Cailloux, founder of Callings.ai, to learn more about AI’s growing influence on product design. In our discussion, he detailed how generative AI works and why it can provide out-of-context responses. “Generative AI is an autocomplete machine, period,” he said. “You give it a sequence, and it continues with the most probabilistic sequence.” In a sequence of letters, its next letter will be the most statistically probable based on the data the AI tool was trained on—not the most logical or accurate. Responses can be unpredictable (or even dangerous in a healthcare setting) unless a product team is involved in establishing rules and regulations relevant to its intended use.
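To make Cailloux’s “autocomplete machine” point concrete, here is a minimal Python sketch. The two-word contexts and counts are invented for illustration (real models predict tokens over vocabularies of tens of thousands of entries), but the mechanic is the same: the model picks the statistically likeliest continuation, which is not necessarily the most accurate or clinically appropriate one.

```python
# Toy "language model": for each two-word context, it only knows how often
# each next word followed that context in its (entirely invented) training data.
next_word_counts = {
    ("i", "feel"): {"better": 60, "anxious": 25, "nothing": 15},
    ("feel", "anxious"): {"sometimes": 50, "today": 30, "constantly": 20},
}

def most_probable_next_word(context):
    """Return the statistically likeliest next word -- not the most logical,
    accurate, or clinically appropriate one."""
    counts = next_word_counts.get(context, {})
    if not counts:
        return None
    total = sum(counts.values())
    word, count = max(counts.items(), key=lambda item: item[1])
    print(f"{' '.join(context)} -> '{word}' ({count / total:.0%} of training continuations)")
    return word

# The continuation depends entirely on training-data frequencies, not on
# what a clinician would actually say next.
most_probable_next_word(("i", "feel"))        # -> "better"
most_probable_next_word(("feel", "anxious"))  # -> "sometimes"
```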
People also turn to apps specifically designed to deliver AI mental health care. Below are three popular platforms.
These platforms support patients as they navigate difficult emotions and common mental health challenges like anxiety and stress. Many people find this type of support effective, even on platforms like Replika that aren’t perfectly safeguarded.
AI-driven mental health platforms are promising tools, and healthcare providers acknowledge they can be helpful. In fact, the American Psychological Association (APA) suggests combining clinical oversight with AI tools. For example, clinicians may ask patients to use AI-based apps trained in cognitive behavioral therapy (CBT) between sessions.
But patients and providers aren’t entirely on board with AI tools in therapeutics (yet). Let’s discuss why.
Our team launched a pilot survey for young adults (ages 20–29) who have participated in mental and behavioral health services to learn why people may shy away from traditional therapy and try other methods. When we asked if people had concerns about using AI for mental health support, the top responses included lack of human empathy (62%), privacy and security (51.7%), over-reliance on technology (48.3%), and inaccuracy or bad advice (44.8%).
Healthcare professionals at the APA have similar concerns and also highlight AI’s cultural bias, which can promote discrimination.
Human-centered design puts the patient’s needs at the heart of the blueprint. As mental health care adopts AI, addressing real concerns should be the priority to improve treatment adherence and patient retention. Let’s break down those concerns to understand how technical development and human-centered design can join forces.
AI lacks human experience and can’t relate to human emotion and motivation. Without human empathy, people may feel disconnected from the feedback AI provides. Responses might feel disjointed, irrelevant, tone-deaf, or inappropriate. Trust is a significant factor in patient adherence to treatment, even when that treatment comes from a digital tool. Without trust, patients will stop engaging with the tool—and maybe the healthcare system using it.
Human-centered design can help: Healthcare systems can train language models to provide warm, welcoming, validating responses rather than clinical facts. The copy throughout the platform can also mirror the same tone and phrases. For example, when users sign onto an app, they might see a pop-up with the app’s mascot waving hello. A message reads, “Hi! I’m so glad to see you today. On Monday, you told me you felt stressed. How are you feeling today?” Then, the patient focused on their mental health can respond and start the chat.
Training an AI chatbot to sound more empathetic requires plenty of research. Nicole Huppert, UX designer at Oliven Labs, explained that research starts with understanding the user’s language and behavioral patterns. “For example, if a platform is being designed for a Gen Z audience, you need to understand if they want the chatbot to sound like a friend, a parent, a provider, or someone else,” she said. Research and prototyping might also involve creating a library of prompts that use Gen Z-specific terminology, so the AI chatbot can recognize and understand them. Then, designers train the chatbot to generate responses using language the audience expects to hear based on the prompt. “But there’s a balance we need to hit, which is one of the biggest challenges in training AI. You don’t want to skew perspectives,” she added. A chatbot is helpful in certain cases, but users need to see the line between empathetic AI responses and professional human guidance. That’s where intentional, well-researched design comes into play.
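As a rough illustration of the prompt library Huppert describes, here is a minimal Python sketch. The persona text, tone examples, and build_messages helper are hypothetical stand-ins for what a design team would curate from research, and the role/content structure simply mirrors the message format chat-style model APIs typically accept.

```python
# Hypothetical persona and tone examples a design team might curate from
# user research; none of this comes from a real product.
PERSONA = (
    "You are a warm, validating wellness companion for young adults. "
    "Acknowledge feelings before offering suggestions, avoid clinical jargon, "
    "and remind users you are not a substitute for a licensed therapist."
)

TONE_EXAMPLES = [
    {"role": "user", "content": "ngl today was rough"},
    {"role": "assistant", "content": "That sounds draining. I'm glad you checked in. Want to tell me what made it rough?"},
]

def build_messages(user_message):
    """Assemble the system persona, researched tone examples, and the new
    user message into the role/content format chat models typically expect."""
    return [{"role": "system", "content": PERSONA}, *TONE_EXAMPLES,
            {"role": "user", "content": user_message}]

for msg in build_messages("i feel kinda stressed about finals"):
    print(f"{msg['role']}: {msg['content'][:60]}")
```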
When patients use AI tools like ChatGPT by OpenAI, chat logs are stored and used to train the system. This can include sensitive information that would be considered protected health information (PHI) in healthcare settings, such as a name, diagnosis, or treatment plan. Not only does exposed PHI pose a privacy risk, but it can also be misused by AI to provide inappropriate medical advice in future chats.
More companies are using private AI frameworks to keep data secure, especially in healthcare. These frameworks can work by encrypting data or by training and running AI on private servers. Some Health Insurance Portability and Accountability Act (HIPAA)-compliant models don’t store data at all. Still, trust is a major obstacle. Nearly 90% of adults hesitate to share health data with tech companies, and only 31% are confident that a company can secure their data properly. According to Huppert, part of the solution is transparency: explaining to users how their data is protected and how the tool works is essential to building trust. “Using research, like usability testing or interviews, can guide what that messaging is and how it gets integrated into the user interface.”
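As a concrete (and deliberately oversimplified) illustration, a private framework might strip obvious identifiers before any text leaves the organization’s servers. The patterns and redact_phi helper below are hypothetical and fall far short of real HIPAA de-identification, which requires much more than pattern matching.

```python
import re

# Hypothetical, incomplete patterns; real de-identification needs far more
# than regex (names, dates, record numbers, free-text context, human review).
PHI_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_phi(text):
    """Replace obvious identifiers with placeholders before the text is
    logged or sent to an external model."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_phi("Call me at 555-123-4567 or jane.doe@example.com"))
# -> Call me at [PHONE] or [EMAIL]
```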
Human-centered design can help: Patients focused on their mental health shouldn’t have to sift through pages of privacy and security policies. This information should be clearly visible in simple terms so patients understand how their data is used and why they should trust the platform. For example, a patient using an AI-based CBT platform may see pictures or animations depicting how their data is safe and secure.
Survey participants and the APA both express concern over the sole use of AI to address mental health and behavioral challenges. While AI helps address stress, anxiety, and depression, it’s limited in its treatment scope and strategies. It’s best used alongside therapeutic guidance.
One survey participant highlighted their perspective on AI’s strengths and limitations. “Maybe it will have ‘more knowledge’ than a therapist because it has access to a wealth of psychological information online. I think where it will lack is nontraditional methods and therapy that requires the human connection, like psychedelic therapy, eye movement desensitization and reprocessing, hypnosis, etc,” they said.
Human-centered design can help: While providers should educate patients focused on their mental health about when to use AI, platforms should also recognize their own limitations and provide clear guidance on next steps. Say someone is using an AI platform that can spot patterns in conversation that signal a need for human intervention. The platform might alert the provider to check on the user and send a notification prompting them to message their assigned therapist. Or, it’ll prompt the user to schedule a session with just a few clicks.
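Here is a minimal sketch of what that hand-off logic could look like in Python. The phrases, threshold, and needs_human_escalation function are hypothetical placeholders, not a clinically validated screening method.

```python
# Hypothetical signals a product team might flag after clinical review;
# a real system would rely on validated screening tools, not keyword lists.
ESCALATION_PHRASES = {"hurting myself", "can't go on", "no reason to live"}
LOW_MOOD_STREAK = 5  # consecutive low-mood check-ins before escalating

def needs_human_escalation(message, recent_mood_scores):
    """Return True when the conversation or mood history suggests the app
    should route the user to their assigned therapist."""
    text = message.lower()
    if any(phrase in text for phrase in ESCALATION_PHRASES):
        return True
    recent = recent_mood_scores[-LOW_MOOD_STREAK:]
    return len(recent) == LOW_MOOD_STREAK and all(score <= 2 for score in recent)

if needs_human_escalation("honestly i feel like i can't go on", [3, 2, 2, 1, 2]):
    print("Notify the assigned therapist and offer one-click scheduling.")
```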
AI doesn’t inherently know good from bad. It struggles to differentiate between straightforward language and sarcasm. An AI model’s responses mirror how it was trained and require plenty of context to give appropriate, relevant advice. Unfortunately, this is why we’ve seen the scary headlines about AI chatbots that encourage violent behavior toward oneself or others.
Private AI frameworks can help healthcare systems prevent inaccurate responses or bad advice, but they aren’t enough on their own to keep patients focused on their mental health safe from misinformation. Several studies suggest AI systems should also:
Human-centered design can help: A platform’s user interface should also have safeguards to protect patients from receiving inaccurate advice. For example, a college student tracking her mood on an AI platform may have unchanged symptoms for two weeks. On the fourteenth day, the platform can push a friendly notification that flags the pattern and prompts her to book an appointment through a link to her therapist’s calendar. If she refuses, another notification reminds her of AI’s limitations and how human-led therapy will provide more relevant coping strategies.
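A minimal sketch of that safeguard follows, assuming the platform stores one mood score (1 to 5) per day; the two-week window and the should_prompt_booking helper are illustrative assumptions, not clinical guidance.

```python
from datetime import date, timedelta

UNCHANGED_WINDOW_DAYS = 14  # assumption: two flat weeks triggers a gentle nudge

def should_prompt_booking(daily_moods):
    """Return True when the last 14 daily mood scores show no change,
    signaling the app to suggest booking a human-led session."""
    today = date.today()
    window = [daily_moods.get(today - timedelta(days=i)) for i in range(UNCHANGED_WINDOW_DAYS)]
    if any(score is None for score in window):
        return False  # incomplete data, so don't nudge yet
    return max(window) == min(window)  # identical scores across both weeks

# Example: fourteen identical low-mood days -> link to the therapist's calendar.
history = {date.today() - timedelta(days=i): 2 for i in range(UNCHANGED_WINDOW_DAYS)}
print(should_prompt_booking(history))  # True
```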
AI can become biased in a few ways. It can learn bias from its training data, which may favor certain genders or ethnicities. Some AI systems, like ChatGPT, can also seem to “create” their own bias, developing skewed behavior that even their developers can’t trace. This is part of the “black box” phenomenon, in which developers don’t understand how or why AI makes certain decisions.
Bias poses a huge physical and mental health risk for patients. AI is being used to diagnose and treat, but if it has bias against certain cultures or sexes, it may not deliver equal quality of care. Of course, this is a human fault, too. But healthcare systems can safeguard against AI bias by acknowledging it exists, collecting more data to correct it, and auditing behavior—just as we should do as humans.
We spoke to a senior engineering manager with over 20 years of experience in her field who explained this further. “Humans are still very important in determining whether the AI provides a good answer. If I ask ChatGPT for advice on being a woman speaking up in a male-dominated environment, the AI might suggest remembering my value and embracing discomfort. While this advice may be suitable in specific contexts, it could be inappropriate or even culturally insensitive in countries with different social norms or for individuals of different cultural beliefs,” she said.
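One lightweight way to audit behavior, as mentioned above, is a counterfactual check: send the same prompt with only a demographic detail swapped and compare the responses. The harness below is a hypothetical sketch, and get_model_response is a stand-in for whichever model a team actually uses.

```python
import difflib

def get_model_response(prompt):
    """Stand-in for a call to the team's actual model; swap in a real
    client when auditing a production system."""
    return f"(model response to: {prompt})"

def counterfactual_audit(template, variants):
    """Compare responses when only a demographic detail changes; large
    differences get flagged for human review."""
    responses = {v: get_model_response(template.format(group=v)) for v in variants}
    baseline = responses[variants[0]]
    for variant, response in responses.items():
        similarity = difflib.SequenceMatcher(None, baseline, response).ratio()
        print(f"{variant}: similarity to '{variants[0]}' baseline = {similarity:.2f}")

counterfactual_audit(
    "Give coping advice to a {group} patient managing workplace anxiety.",
    ["woman", "man", "nonbinary person"],
)
```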
Human-centered design can help: If we’re designing a platform specifically for a hospital system in O’ahu, Hawaii, user research can help us understand the community’s barriers to seeking care and cultural preferences. For example, the AI-based platform may add specific language preferences, like Hawai‘i Creole or Tagalog, to avoid language barriers to treatment. But this should be a balanced approach. Overdesigning a platform based on community norms might reinforce bias, preventing patients from hearing unique perspectives.
The risks and ethics of using AI in the clinic have been holding back AI innovation in mental and behavioral health. Despite the concerns listed in the previous section, 82% of participants in our survey suggested that AI can improve therapeutics or encourage people to seek professional help. People believe AI can help fill the demand for mental health support by complementing providers' work, not replacing them.
Cailloux explained that AI is simply the latest tool in a long line of technology. Think of the first humans who used rocks as tools. As we evolved to become faster, stronger, and more efficient, so did our tools. AI is the next humanity-altering innovation designed to help us survive, dominate, and further refine our communities. Saying that AI will replace people is like saying that the rock will replace the first hunter. But when we look back at our history, we know this isn’t true. New technologies don’t replace us. They force us to learn, problem-solve, and enhance our skills to advance humanity.
Having worked in the field for over 20 years, Cailloux notices the same behavioral pattern with each new technology. First, we discover how to build it. Then, people are either hopeful or fearful. But the people who learn how to use it to their advantage are the ones most likely to succeed.
So, is it ethical to withhold effective AI tools from patients who may benefit from them, even if there is inherent risk?
The answer is no, not anymore. As of 2025, the APA calls for healthcare providers to step up by:
However, FDA-approved or cleared AI-driven mental health platforms are rare, and there are no FDA-approved AI chatbots. Lack of regulation doesn’t stop innovation but may prevent healthcare systems from adopting AI.
There is progress, though. Platforms like Woebot have enrolled in clinical trials, and the APA recently urged the Federal Trade Commission to establish formal safeguards and a regulatory framework as more users turn to AI-powered tools for mental health support.
But are AI chatbots reliable for mental health advice? The APA specifically calls out companion chatbot apps (like Replika) that are designed to keep users engaged even when they should be referring users to a professional. AI-based treatment tools need significant clinical input to keep people safe, but rules and regulations are lagging, and ethically designed AI for patient-centered care is still a goal, not a reality.
AI is an excellent executor. Once you give AI a task, it completes it within seconds, far faster than a human could. As AI advances into every field (including healthcare), our roles will shift away from task execution. Instead, people have an opportunity to take on more creative, critical-thinking roles that guide AI’s execution. AI pushes people to become deeper thinkers who can prompt it properly, audit its responses, and teach others to do the same.
Cailloux equates our future use of AI to a painter using a brush. “The execution now is so fast that the creative loop is ongoing. You don’t wait a day between strokes. If you have an idea, you go to a blank canvas and paint the first stroke. And your brain is in an iterative process, saying, ‘Is that going to be useful for where I want to go with this painting?’ You correct it in real time, and your creative presence is always flowing. I think generative AI is perfect for that.”
In healthcare, AI can execute tasks like documenting or analyzing imaging. But clinical expertise is still necessary to ensure that AI’s output is ethical and applicable to a patient’s specific needs. Providers can spend less time determining diagnoses and more time creating tailored treatments for the growing number of patients who need them.
In the meantime, there are other ways healthcare innovators can use AI platforms to support patients’ mental health and address treatment barriers.
In the previous article, we learned that patients are seeking mental health support outside of traditional therapy through channels like social media, peer support systems, and mental health apps. So let’s meet them where they are.
By combining AI and human-centered design, we can encourage patients to form communities or become better historians for their therapists. These designs build trust, improve retention, and drive positive health-related outcomes. Here are a few ways this can be done.
Problem: Dylan, a 15-year-old high school student, underwent an amputation due to osteosarcoma in his leg. He’s nearing discharge, and his family is wondering what’s next. Doctors have explained his discharge plan, but the family feels overwhelmed, and Dylan is still adjusting. He feels anxious, scared, and uncertain about his future. His parents aren’t sure how to help.
Solution: Before discharge, Dylan and his family voluntarily enroll in an AI-powered mobile app that matches them with other teens and families who are going through (or have gone through) a shared experience.
Benefit: Dylan will have access to peer support, and his family will be better equipped to help him. If Dylan feels empowered enough, he may start offering advice to others, like teaching another teen how to wrap their leg for a more comfortable fit inside a prosthetic. If Dylan or his family needs additional support, this hospital-branded app can guide them to a therapist within the system who can help.
Problem: Ria, an 18-year-old college freshman, wants to see a college counselor for support. She feels anxious about the new changes as she transitions to college life. But there are only a few counselors on campus, and appointments are hard to get. Her first appointment is in three weeks, but she needs support now and might look elsewhere for help.
Solution: Counseling services enroll Ria in an AI-powered mood tracking and diary app, which can help collect subjective data in preparation for her first session. Ria spends five minutes in the app before bed, using emojis to depict how she feels and typing a few sentences about her day. AI analyzes Ria’s entries to detect mood patterns and then educates Ria on the patterns it sees. For example, she may consistently feel anxious during presentations and social outings, and the platform can show her this pattern using a friendly dashboard.
Benefit: Ria may learn to recognize mood patterns based on the app’s history, which will help her better communicate her mental health challenges during her first therapy session. She can also show the app to her therapist, who can teach patient-specific coping strategies for her anxiety-provoking situations from day one.
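For a very rough sketch of the kind of pattern detection Ria’s app relies on, here is a hypothetical Python example; the diary entries, emoji set, and find_anxiety_triggers helper are invented for illustration.

```python
from collections import Counter

# Invented diary entries: (context tag, mood emoji) pairs a user might log.
ENTRIES = [
    ("presentation", "😰"), ("study group", "🙂"), ("party", "😰"),
    ("presentation", "😰"), ("class", "🙂"), ("party", "😟"),
]
ANXIOUS_EMOJIS = {"😰", "😟"}

def find_anxiety_triggers(entries, min_count=2):
    """Count which contexts co-occur with anxious moods so a dashboard can
    surface patterns the user (and later her therapist) can discuss."""
    triggers = Counter(ctx for ctx, mood in entries if mood in ANXIOUS_EMOJIS)
    return [ctx for ctx, n in triggers.most_common() if n >= min_count]

print(find_anxiety_triggers(ENTRIES))  # ['presentation', 'party']
```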
If you’re ready to develop human-centric platforms that engage patients with branded mental health resources, connect with us at Oliven Labs. Let’s develop a one-of-a-kind solution that’s intuitively designed to gain trust, improve retention, and connect patients with your talented providers.
Want to join us as a thought partner for our next article series?