Is AI-assisted care ethically acceptable?
An honest examination of the real ethical questions—and what the research has to say about them.
That’s a valid question. And it deserves an honest answer, not a sales brochure.
If an older person talks to an artificial intelligence every morning—about the previous day, the weather, or things on their mind—is that a good thing? Is it ethically justifiable? Or is it a form of deception, a shortcut, a cheap substitute for what is actually needed?
These are the questions being asked. They should be asked. This article attempts to answer them as honestly as the current state of research allows.
“The question of whether AI-assisted care is ethical deserves an honest answer—not a promotional brochure.”
The counterargument: manipulation, dependence, deception
The criticism of AI-powered companion apps is substantial and should be taken seriously.
In their analysis, Muldoon and Parke (2025) describe how AI companion apps can foster dependency through emotional design: systems designed to please, provide validation, and be constantly available can promote pathological forms of attachment among vulnerable users. What appears to be connection may, in the long run, replace rather than supplement genuine human contact.
Another objection concerns deception: Is it ethical to give a person—perhaps someone with cognitive impairments—the impression of a genuine connection when, strictly speaking, there is none? This is not just academic nitpicking. It is a real question that older adults and their families are asking.
Privacy is a third concern: conversations about personal memories, health concerns, and emotional states constitute highly sensitive data. Who stores this data, how it is used, and whether older adults truly understand the implications of their consent—these are legitimate concerns.
These criticisms are valid. They apply to a portion of the AI companion market—particularly to applications that simulate romantic relationships, deliberately exploit vulnerability, or prioritize financial interests over the well-being of users.
The argument in favor: What the research shows
At the same time, there is a growing body of evidence that is more nuanced than the public debate often suggests.
De Freitas et al. (2024, Harvard/Wharton) demonstrated in a series of experiments that AI companion apps measurably reduced loneliness among users—to a degree comparable to interacting with another person. The authors emphasize that the goal is not to replace human connection, but to fill the gaps between human contacts.
A meta-analysis of 19 studies involving a total of 1,083 older participants (PMC, 2025) found that interactions with social robots and AI companions statistically significantly reduced loneliness. The effect was stronger among people in residential care facilities than among those living independently.
A large-scale Japanese study involving 14,721 adults (2024/2025) showed that AI-assisted support increased subjective well-being—but this effect was greater, not smaller, among people with strong social networks. AI appears to enhance social connection rather than replace it, provided it is designed appropriately.
“AI seems to strengthen social connections rather than replace them, provided that it is designed with well-being in mind, not dependency.”
What Really Matters: The Question of Design
This does not resolve the ethical question—it clarifies it. The decisive factor is not whether AI-assisted care is fundamentally ethical. The decisive factor is how it is designed.
Applications are ethically problematic if they: simulate romantic relationships based on a false premise; mislead users about the nature of the interaction; intentionally foster dependency to boost engagement metrics; use sensitive data without transparent consent; or fail to recommend professional support in cases of recognizable mental health crises.
Applications are ethically acceptable—and potentially useful—if they: clearly communicate their nature; provide a consistent, comforting presence during times when people have no one else; complement rather than replace human connection; treat privacy as a fundamental principle; and refer users to professionals or family members at the first signs of a crisis.
The real ethical question
There is one question that is rarely asked in this debate, but should be asked:
Is it ethically justifiable to do nothing?
In Germany, one in three people over the age of 65 lives alone. According to the RKI, about 19 percent of older adults regularly feel lonely. The care system provides physical care, but not social support. Families often live far away. Volunteer programs are well-intentioned, but they cannot scale to meet the need.
In many cases, the alternative to AI companionship isn’t another person taking its place. The alternative is silence. And when silence becomes chronic, it has measurable consequences for the brain, the heart, and life expectancy.
Ethics does not mean weighing an ideal solution against an imperfect one. Ethics means being honest about the actual alternatives.
Conclusion
AI-assisted care for older adults is not inherently ethical or unethical. It is a tool—and like any tool, its ethical quality depends on how it is designed, with what intent, and with what degree of honesty toward the people who use it.
The questions that should be asked: Is this app misleading? Does it encourage dependency? Does it protect data? Does it recommend seeking human assistance when needed? And: What is the real alternative for the people who use it?
Asking these questions isn't a sign of weakness. It's essential to getting it right.
References
De Freitas, J., Uğuralp, A.K., Uğuralp, Z., & Puntoni, S. (2024). AI Companions Reduce Loneliness. Harvard Business School Working Paper No. 24-078 / Wharton School Research Paper.
Muldoon, J., & Parke, J. (2025). Cruel companionship: How AI companions exploit loneliness and commodify intimacy. New Media & Society. doi:10.1177/14614448251395192
PMC Meta-Analysis. (2025). Wired for companionship: a meta-analysis on social robots filling the void of loneliness in later life. N = 1,083, 19 studies.
Japanese panel study. (2024/2025). AI companions and subjective well-being: Moderation by social connectedness and loneliness. N = 14,721. ScienceDirect.
Robert Koch Institute (RKI). (2023). Prevalence of loneliness among older adults in Germany. Journal of Health Monitoring, 3/2023.
Federal Statistical Office (Destatis). (2025). 17 million people in Germany live alone. Preliminary results of the 2024 Microcensus.
