What AI Mental Health Apps Get Wrong (And What ILTY Does Differently)
We're going to do something unusual for a company blog post: criticize our own industry. And then be honest about our own limitations too.
The AI mental health space is growing fast. Investors are excited. Downloads are climbing. And a lot of apps are making promises they can't keep to people who are genuinely struggling.
That matters. When someone is anxious or depressed and they reach out to an app for help, the stakes are real. Getting it wrong isn't a minor UX failure. It can leave someone feeling worse, more isolated, or less likely to seek help in the future.
Here's what we see going wrong, and what we're trying to do about it.
Problems with AI therapy apps
1. Overpromising results
Open any app store and search "anxiety app." You'll find products claiming to "cure your anxiety," "eliminate stress," or "transform your mental health in 7 days."
This is irresponsible.
Anxiety isn't something you cure with an app. Clinical anxiety disorders are complex conditions influenced by genetics, neurobiology, life experiences, and environmental factors. Even evidence-based therapy with a trained professional takes weeks or months to produce meaningful change. No app is doing it in a week.
The research on AI mental health tools shows "small to moderate effects" on symptoms. That's meaningful, especially for accessibility. But it's a far cry from "cure your anxiety." These overclaims do two things: they attract people with false hope, and they discredit the entire category when reality doesn't match the marketing.
2. Scripted responses pretending to be AI
Here's something most users don't know: many "AI therapy" apps aren't really using AI at all. They're running decision trees. You select from a menu of emotions, and the app follows a script.
There's nothing inherently wrong with scripted content. Workbooks are scripted. CBT worksheets are scripted. But calling it "AI" when it's a flowchart is dishonest. It sets up the expectation of a genuine conversation when what you're getting is a Choose Your Own Adventure book.
Users notice. They say something specific about their situation and get a generic response that clearly wasn't generated for them. It feels hollow. And for someone who already feels unheard, that hollowness can reinforce the belief that nobody (not even a machine) actually cares.
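To make the distinction concrete, here is a deliberately minimal sketch of what a "decision tree" app is doing under the hood. This is hypothetical illustration, not any real app's code; the point is that a flowchart discards everything specific the user typed and maps a menu choice to a canned reply.

```python
# Hypothetical sketch of a scripted "AI therapy" app: a lookup table
# dressed up as a conversation. Every user who taps the same menu
# button gets the identical reply, no matter what they actually wrote.

CANNED_REPLIES = {
    "anxious": "Try this breathing exercise: inhale for 4, hold for 4, exhale for 4.",
    "sad": "It's okay to feel sad. Here's a gratitude journaling prompt.",
    "stressed": "Stress is normal! Would you like a 5-minute meditation?",
}

def scripted_response(selected_emotion: str) -> str:
    """A flowchart in disguise: the user's own words never reach this
    function, only which emotion button they tapped."""
    return CANNED_REPLIES.get(
        selected_emotion,
        "I'm not sure I understand. Please pick an emotion from the menu.",
    )

# The specifics ("my mom is in the hospital and I can't sleep") are
# discarded; only the category drives the reply.
print(scripted_response("anxious"))
```

A genuinely generative system, by contrast, conditions its reply on the full text of the message, which is exactly why its responses can reflect your specific situation and why its mistakes look different too.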
3. Toxic positivity as a feature
"You're doing amazing!" "Every day is a new beginning!" "Remember, you're worthy of love and happiness!"
These are real responses from real mental health apps. And they represent a fundamental misunderstanding of what people in distress actually need.
When you tell someone who's struggling that they should feel positive, you're not helping. You're invalidating their experience. Research on emotional suppression consistently shows that forcing positive emotions when you're feeling negative ones increases psychological distress rather than reducing it.
People don't need an app to tell them everything is fine. They need a space where it's okay that things aren't fine. They need tools to sit with discomfort, understand it, and work through it. Not a digital cheerleader.
The worst version of this is apps that respond to genuine distress with affirmation cards. You type "I feel like I'm falling apart" and get back a sunset image with "You are enough" written on it. That's not support. It's a greeting card.
4. Failing at crisis situations
This is the most serious problem. Some AI mental health apps do a poor job of recognizing and responding to crisis situations.
When a user expresses suicidal ideation, self-harm urges, or describes an abusive situation, the app's response is critical. A 2023 study tested several popular mental health chatbots with crisis scenarios and found inconsistent results. Some failed to recognize suicidal language. Others recognized it but provided generic hotline numbers without adequate context or empathy.
Getting crisis response wrong has real consequences. If someone reaches out in their darkest moment and gets a scripted "please call 988" with no warmth, no acknowledgment of their pain, and no follow-up, they may feel dismissed. They may not call. And the app has burned a critical opportunity to connect them with help.
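What a minimally responsible crisis layer involves can be sketched in a few lines. This is a toy illustration, not ILTY's implementation or any real app's protocol: production systems use trained classifiers, much broader phrase coverage, and human review, and a keyword list like this one would miss plenty of real crisis language. The structure it shows, acknowledgment first, then resources, then an honest statement of limits, is the part that matters.

```python
# Toy sketch of a crisis-detection layer. NOT a real app's protocol:
# real systems use trained classifiers and far broader coverage.
# The response ordering (acknowledge, resource, state limits) is the point.

CRISIS_PHRASES = [
    "kill myself", "suicide", "end my life", "hurt myself",
    "self harm", "don't want to be alive",
]

RESOURCES = [
    "988 Suicide & Crisis Lifeline: call or text 988",
    "Crisis Text Line: text HOME to 741741",
]

def detect_crisis(message: str) -> bool:
    """Flag messages containing a known crisis phrase (case-insensitive)."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def crisis_response() -> str:
    """Acknowledge the person's pain before anything else, then provide
    resources, then be clear about the app's limits."""
    lines = [
        "I'm really glad you told me. What you're carrying sounds incredibly heavy,",
        "and you deserve support from a person trained for this, right now.",
        *RESOURCES,
        "I'm an app, and I'm not equipped to help in a crisis.",
    ]
    return "\n".join(lines)

if detect_crisis("I don't want to be alive anymore"):
    print(crisis_response())
```

Notice what the generic failure mode looks like in this framing: an app that jumps straight to the hotline number is shipping only the middle of this response, with no acknowledgment before it and no honesty after it.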
5. Data privacy failures
Mental health data is among the most sensitive information a person can share. What you tell a therapy app about your anxiety, your relationships, and your darkest thoughts deserves the highest level of protection.
And yet, investigations have repeatedly found mental health apps falling short. A 2022 Mozilla Foundation review of mental health apps found that the majority failed to meet basic privacy standards. Some shared data with third-party advertisers. Some had vague privacy policies that gave them broad rights to user data. Some were caught sharing data with data brokers.
The message to users is essentially: "Tell us your deepest fears, and we might sell that information to advertisers." That's not just a privacy violation. It's a betrayal of trust that makes people less likely to seek help digitally in the future.
6. No transparency about limitations
Very few AI mental health apps are clear about what they can't do. The disclaimers are buried in terms of service that nobody reads. The onboarding flows emphasize benefits without mentioning boundaries.
Users deserve to know, upfront and clearly, that an AI app is not a therapist, cannot diagnose conditions, is not appropriate for crisis situations, and has real limitations in understanding context and nuance. Hiding this information in legal fine print is a choice, and it's the wrong one.
Are mental health apps safe?
The honest answer: it depends on the app.
Some mental health apps are safe, evidence-informed, and responsible. Others are not. The challenge for users is that it's hard to tell the difference from the outside.
Here are the real safety considerations:
Clinical safety. An app that doesn't recognize crisis language or that provides inappropriate advice for serious conditions is clinically unsafe. Look for apps that have clear crisis protocols, that recommend professional help when appropriate, and that don't overclaim what they can treat.
Data safety. Read the privacy policy (or at least skim it). Key questions: Is your conversation data encrypted? Is it shared with third parties? Is it used to train AI models? Can you delete it? If the app can't answer these questions clearly, that's a red flag.
Emotional safety. Does the app create a space where you can be honest about negative emotions without being redirected to positivity? Does it validate your experience before trying to fix it? Emotional safety matters for any therapeutic tool.
Boundary safety. Does the app know its limits? Does it explicitly tell you when something is beyond its scope? Does it encourage professional help when needed rather than trying to handle everything itself?
No app is perfectly safe. But responsible apps minimize risks and are transparent about the ones that remain.
What makes a good AI mental health app?
Based on what we've seen go wrong (and the research on what actually helps), here's what to look for:
Honest positioning. The app should clearly state what it is and isn't. It should never call itself a therapist or claim to replace professional treatment. The best tools frame themselves as supplements to your mental health toolkit.
Evidence-based approaches. The app should be grounded in established therapeutic frameworks (CBT, ACT, DBT, or similar). "We use AI" is not a therapeutic approach. The AI is the delivery mechanism; the content should come from clinical science.
Genuine conversational ability. If the app claims to offer conversations, they should feel like conversations. You should be able to describe your specific situation and get a response that reflects what you actually said, not a generic template.
Strong crisis protocols. The app should recognize crisis language, respond with empathy and urgency, provide multiple crisis resources, and make it easy to connect with human help. This is non-negotiable.
Clear privacy practices. The app should explain, in plain language, how your data is stored, whether it's encrypted, who can access it, and how to delete it. If the privacy policy requires a law degree to understand, the company is hiding something.
Acknowledgment of limitations. The app should be upfront about what it can't do. This builds trust rather than eroding it.
No toxic positivity. The app should create space for negative emotions rather than rushing to fix them. Real support starts with validation, not affirmation cards.
How ILTY approaches AI therapy differently
We'd be hypocrites if we criticized everyone else without being transparent about our own approach. And our own limitations.
Here's what we do and why.
We don't call ourselves therapy. ILTY is a mental health companion. We help with daily emotional processing, stress management, and self-awareness. We're not therapists, we don't diagnose, and we don't treat clinical conditions. We say this clearly in our onboarding, in our app, and here on our blog.
We use real conversational AI. When you talk to ILTY, you're talking to an AI that processes what you actually say and generates a genuine response. It's not a decision tree. It's not scripted. This means our responses are sometimes imperfect (AI makes mistakes), but they're real attempts to engage with your specific situation.
We don't do toxic positivity. If you tell ILTY you're having a terrible day, it won't tell you to "look on the bright side." It will ask what happened. It will acknowledge that the day is, in fact, terrible. It will help you process what you're feeling before jumping to solutions. We built this intentionally because the research is clear: validation before intervention produces better outcomes.
We have crisis protocols. When our AI detects crisis language, it responds with empathy, provides crisis resources (988 Suicide & Crisis Lifeline, Crisis Text Line), and clearly states that ILTY is not equipped for crisis situations. We test these protocols regularly. They're not perfect, and we continue to improve them.
We're transparent about data. Your conversations are encrypted. We don't sell your data. We don't share it with advertisers. Our privacy policy is written in plain English because we think you should actually be able to read it.
We're honest about our limitations. ILTY is not as good as a skilled therapist. It doesn't understand you as deeply as a human who has spent months getting to know you. It can miss nuance. It can occasionally give responses that miss the mark. We're constantly improving, but we won't pretend the limitations don't exist.
Here's something else we'll admit: we're still early. ILTY is in beta. We're learning from every conversation, improving our models, and refining our approach. We don't have all the answers yet. What we do have is a commitment to getting this right, because the alternative (more overpromising, more fake AI, more toxic positivity in an industry that's supposed to help people) isn't acceptable.
We built ILTY because we thought the mental health app space deserved better. Not perfect. Better. Honest about what AI can do. Honest about what it can't. A real tool for real people who need support now, not a marketing gimmick dressed up as therapy.
Try ILTY Free and judge for yourself.
Related Reading
- Woebot vs Wysa vs ILTY: Honest Comparison: Side-by-side comparison of the leading AI mental health tools.
- Wysa Review and Comparison: Detailed look at what Wysa does well and where it falls short.
- AI Therapy Apps in 2026: What's Real vs. Hype: The full landscape of AI mental health tools.
- The Best Mental Health Apps 2026 (Honest Reviews): Our honest assessments of the top mental health apps.
Ready to try a different approach?
ILTY gives you real conversations, actionable steps, and measurable progress.
Apply for Beta Access