What AI Can't Do, and What It Can Do

No exaggeration in either direction: what AI-assisted care actually can't do, and why that matters.

It’s easier to write about what AI can do. Research in this area is growing. The use cases are real. The effects are measurable.

But if you want to write honestly about a technology, you also have to write about its limitations. Not because skepticism is obligatory, but because people who rely on something, or use it for someone they love, have a right to know where its limits lie.

This article outlines what AI-assisted care cannot do. Without sugarcoating it, without downplaying it.

“Anyone who relies on something—or stands up for someone they love—has a right to know where it ends.”

AI cannot love

That sounds obvious. But in practice, it’s not always the case. AI-powered companion apps are designed to be warm, attentive, and consistent. They remember what’s been said. They ask follow-up questions. They listen.

That is real, and it has value. But it is not love. Love grows out of shared history, out of vulnerability, out of the possibility of being truly hurt. AI has none of these. What it offers is a form of sustained attentiveness that resembles human connection but is structurally something else.

Knowing this doesn't change the value of the interaction. But it prevents confusion that could cause harm in the long run.

AI cannot be physically present

When someone falls, they need another person. When someone is afraid at night, a voice coming from a speaker isn't enough. When someone is sick and needs someone to hold their hand—AI doesn't have hands.

This is not a metaphorical limitation. It is a physical one. AI companionship can help manage emotions, structure daily routines, and alleviate loneliness during quiet moments. It cannot resolve an emergency, provide medical assistance, or replace a human presence when that is what is truly needed.

Families who use AI-assisted support as a supplement should be aware of this—and ensure that emergency contacts, neighborhood networks, and human support are also available.

AI cannot replace professional support

If someone is suffering from depression, they need a therapist, not a chatbot. If someone needs help coping with grief, professional support is essential. If someone is showing signs of dementia, they need to be under a doctor's care.

AI-powered support cannot diagnose psychological conditions, make medical decisions, or replace professional care. What it can, and should, do is alert users to the need for professional support when signs of a crisis or clinical need arise. A well-designed app does this. A poorly designed one pretends it is sufficient on its own.

AI cannot replace lost relationships

The partner who has passed away. The friend who moved away. The years when life was different. These losses are real, and AI cannot undo or make up for them.

What AI can do: provide a space where these losses can be discussed. A space where memories have a place. A space where today still holds something that matters. That is not a substitute. It is something different, and it has its own value, as long as we don't burden it with more than it can carry.

AI cannot guarantee that loneliness will not deepen

That is the most challenging aspect. Research shows positive effects—but also that AI-based support is less effective for people experiencing severe, chronic isolation than for those who still have functioning social networks.

This means that those who start too late—those who wait until they are completely isolated before taking action—will benefit less from AI-assisted support. And it means that AI-assisted support is not an emergency solution for chronic isolation. It is a preventive tool that is most effective when used early on and as a supplement to existing human connections.

“AI-assisted support is not an emergency program for chronic isolation—it is most effective as a preventive measure, not as a last resort.”

Why define these boundaries?

Not out of a sense of obligation. For one simple reason: technologies that hide their limitations lose the trust of their users—and the trust of their families. And when it comes to technology designed for vulnerable people, that trust is non-negotiable.

AI-powered companionship can provide real value—tangible, measurable, meaningful value for people whose days are too quiet. But it can only do so sustainably if it doesn’t promise more than it can deliver.

That is the foundation on which we operate. And the standard by which we measure ourselves.

References

  • De Freitas, J. et al. (2024). AI Companions Reduce Loneliness. Harvard/Wharton Working Paper No. 24-078.

  • Muldoon, J., & Parke, J. (2025). Cruel companionship: How AI companions exploit loneliness and commodify intimacy. New Media & Society.

  • AI companions and subjective well-being: Moderation by social connectedness and loneliness. (2025). Study of N = 14,721; available via ScienceDirect.

  • IQWiG. (2022). Social Isolation and Loneliness in Older Adults. HTA Report No. 1459.

  • Wired for companionship. (2025). Meta-analysis of 19 studies (N = 1,083); available via PubMed Central.
