
AI in Mental Health

Writer: Santo Russo

What is artificial intelligence (AI)?


Artificial intelligence is the machine simulation of human intelligence processes: machines pretending to be human. AI has been around for many years, but its rapid rise in public awareness has been prompted by programs like ChatGPT. As interesting as it is, AI presents many challenges. Take ChatGPT, for example: it uses your input to improve itself, then charges you to use the best versions of itself. Is that reasonable?


The increasing injection of artificial intelligence (AI) into the mental health and therapy world, promoted as a cheap and easy form of treatment, raises many concerns.


Machine learning relies on harvesting user information to improve its computerised responses. The privacy implications of baring your soul to a bot are a major concern, and what companies do with that information is even more concerning. Data breaches are increasingly common: huge companies (Optus, Medibank, Latitude) and even governments (ACT, NT, Tasmania) were among the more than 30 organisations hacked in the first half of 2023 (https://www.webberinsurance.com.au/data-breaches-list#twentythree), unable to prevent access to this information.


Bots are not bound by the confidentiality rules that human therapists must follow. Anything a bot learns from you remains in its system. Therapists must follow strict rules about how your information is held and disposed of; bots currently face no such requirements.


Another issue with AI is the bias inherent in its programming. What does it say about our society that servile bots such as Alexa and Siri have female names and female voices? An algorithm is a written set of rules that the bot follows in response to the input you give it. Can those rules ever be free of the biases the programmer has embedded in the program?


Social media ‘feeds’ are a perfect example of the narrowing of information, misinformation, and disinformation that is damaging our social fabric. Youth mental health problems are escalating at a pace that corresponds with rising social media use. What power does this give the programmer to manipulate the user?


Critically, how will chatbots function in an emergency, such as a crisis involving suicidality or self-harm? The value of the empathy that humans bring to the therapeutic interaction is well established in outcome research. Symptoms of mental illness or self-harm of any kind require a nuanced understanding of the human condition in the context of the individual’s circumstances. Will chatbots be able to display empathy appropriately?


We have seen the devastating effect that social media technology has had on our youth. The evident inability of large technology companies to deal with the problems inherent in social media is a strong indicator of the challenges that mental health AI bots may bring if their implications are not carefully considered.


Mental health AI bots should have to meet the same standards of evidence-based research and ethical practice as every other medical device. In Australia, those standards are enforced by the Therapeutic Goods Administration (TGA). Could you imagine using a heart defibrillator that had not been approved by the TGA? Or doctors being able to prescribe an antidepressant medication that had not been approved by the TGA?


As we enter this brave new world, we should pause to consider the implications of AI mental health bots before it is too late.

 
 
