Increasing pressure on the healthcare system and a lack of access to licensed professionals have prompted a search for alternatives to the typical ‘white coats’ when it comes to therapy. This includes interest in the emerging world of artificial intelligence (AI) and the potential use of chatbots as a modern solution. While AI chatbots are heralded by many as the answer to our global healthcare crisis, the reality is far more complex. As a tool to unburden professionals from administrative tasks, this technology is perfectly adequate; in the context of therapy, however, there is no replacement for qualified human medical professionals.
Therapy, particularly traditional talking therapies and behavioural therapies, requires certain intrinsic human qualities for the therapeutic experience to be effective, including empathy, ethical judgement, and the ability to comprehend complex human emotions and situations beyond binary algorithms, as discussed by Nadarzynski et al. (2019).
AI technologies, while advanced, do not possess the nuanced understanding of human psychology required to offer deep therapeutic insights or to handle sensitive emotional exchanges. This limitation is not merely technical but fundamental, rooted in the current state of AI's development, which does not allow for genuine emotional intelligence or the building of authentic therapeutic relationships. While some research demonstrates that AI chatbots can be more effective at delivering behavioural therapies than self-administered behavioural therapy (Liu et al., 2022), they are clearly no substitute for engaging with a trained therapist (Xu et al., 2021).
Over-reliance on AI for therapeutic interactions also risks depersonalising the mental healthcare process, potentially leading to patient alienation and the mismanagement of serious mental health issues. Privacy concerns loom large as well: integrating AI into healthcare involves handling sensitive data, with the potential for breaches that could undermine patient trust and confidentiality. In the traditional therapeutic setting, a one-to-one conversation behind closed doors affords a level of privacy that would be impossible with chatbots, whose very nature requires a written record of the conversation.
To further examine the legal risks of chatbots: who would be responsible if a chatbot incorrectly diagnosed a patient or failed to correctly interpret distress, as discussed by Pham, Nabizadeh, and Selek (2022)? Where a patient presents with suicidal ideation or an expressed intent to self-harm, the complexity and risk of the situation would require human intervention. Yet a person at risk may then have to wait for a professional intervention, and knowing they would be ‘reported’ may affect how honestly they disclose risk. In even more complex cases, where a person is being harmed by others, it is important to consider how a chatbot could be programmed to respond effectively and empathetically when such ethical dilemmas are difficult even for experienced mental health workers.
Furthermore, in our increasingly technological world, being contactable 24/7 gives the impression that we are connecting more with people. In reality, the connections we make are often less authentic, less meaningful, and more surface level, leading to increased feelings of isolation, loneliness, and poor mental wellbeing. In therapy, the connection of one human to another is a key aspect of the process, building what is referred to as the ‘therapeutic relationship’. Replacing this authentic connection with what people intuitively know is simply an algorithm, however realistic it may seem, discourages real connection and may worsen mental wellbeing in the long term.
However, that is not to say that AI chatbots cannot serve a role within the healthcare system. Designed to simulate conversation and provide immediate responses, chatbots have shown potential for streamlining administrative tasks in healthcare settings, such as managing appointment scheduling, handling patient intake forms, and directing patients to appropriate services based on the symptoms they describe (Xu et al., 2021). This level of interaction may help reduce wait times and alleviate some of the administrative workload.
When it comes to administrative work, many automated tasks that do not require medical or therapeutic intervention could be assigned to AI chatbots. These might include medication reminders, refill alerts, and even post-discharge follow-up with post-treatment questions. Meaningful aftercare and compliance with treatment regimes have long been associated with lower relapse and readmission rates across a range of healthcare services (Hennemann et al., 2018; Rychtarik et al., 1992; Sannibale et al., 2009). In this way, chatbots could not only alleviate administrative work but also contribute meaningfully to reducing the burden on the healthcare system and improving long-term patient outcomes.
While AI chatbots offer promising enhancements in managing administrative tasks within healthcare settings, they fall short of delivering the depth and empathy required in a therapeutic relationship. We must not let the allure of the technology tempt us into handing it work that is essentially human. Many human elements crucial to effective therapy are ones AI cannot, at least for now, imitate. As we integrate AI into the healthcare system, it is crucial to maintain a clear boundary between administrative convenience and clinical efficacy. Ensuring that AI supports, rather than supplants, the human connection in therapy will be pivotal in safeguarding both the effectiveness of mental health treatments and the trust and wellbeing of patients. This balanced approach will allow us to harness the benefits of technology without compromising the indispensable human touch that remains at the heart of therapeutic success.
The writer is head of the Quality, Innovation and Research Department at Clinic Les Alpes, Switzerland.