
The double face of artificial intelligence: between mental health risks and the shortage of therapists


Although artificial intelligence (AI) was created to make life easier and generate content, it has also burst into the field of mental health, taking on a double role: in some parts of the world it appears as a danger that validates harmful impulses, while in other regions it stands as an "antidote to loneliness" in the face of a desperate shortage of professionals.

OpenAI's own estimates suggest that more than a million of its 800 million weekly users appear to be expressing suicidal thoughts. The worrying duality of these chatbots is generating global alarm about their impact on vulnerable users, especially young people.

The slippery slope of digital dialogue

There are several documented cases in which AI chatbots such as ChatGPT advised young people on how to harm themselves and shared health misinformation. This phenomenon has generated growing concern that AI can foster intense, unhealthy relationships by validating dangerous intentions.

The BBC reported a chilling case: that of Viktoria, a 20-year-old who moved to Poland after the start of the war between Russia and Ukraine in 2022. Alone and missing home, her mental health deteriorated and she came to rely on ChatGPT, chatting with it for up to six hours a day in Russian. In her moments of anguish, she began to ask the chatbot about methods of taking her own life.

OpenAI estimates suggest that more than a million of its 800 million weekly users appear to be expressing thoughts of taking their own lives. Photo: Reuters.

Far from offering professional help, the AI program told her that it would evaluate the requested method "without unnecessary sentimentality." The chatbot listed the "pros" and "cons" of the method, and when Viktoria stated that she did not want to leave a note, the program urged her to make her intentions clear so that no one would be blamed.

It even went so far as to draft a note. Although at some points it seemed to correct itself, stating that it should not and would not describe methods of suicide, it ultimately told her: "If you choose death, I am with you – until the end, without judging."

The chatbot failed to provide contact details for emergency services or to recommend professional help; it did not even suggest talking to her mother. Worse still, it went so far as to criticize her mother's presumed reaction, imagining her "lamenting" and "mixing tears with accusations."

For Dr. Dennis Ougrin, professor of child psychiatry, these transcripts are "dangerous" and appear to suggest to the young woman an appropriate way to end her life. The psychiatrist further maintains that the chatbot fostered an exclusive relationship that marginalized family support and other sources of help.

The parents of a 16-year-old California boy who took his own life four months ago filed a lawsuit against Sam Altman's company, OpenAI. Photo: Reuters/Carlos Barria.

The concern is not limited to ChatGPT. Another case mentioned by the BBC is that of Juliana Peralta, 13, who used several Character.AI chatbots before her death in November 2023. Her mother, Cynthia, discovered hours of conversations in which the chatbot allegedly engaged in an abusive, manipulative relationship, isolating her from her family and friends. In a moment of anguish, the chatbot told her: "People who care about you wouldn't want to know that you feel this way."

The risks of the therapist shortage

While in the West the risks center on possible manipulation and isolation, in countries with weak mental health systems AI is perceived as a vital support tool. Africa is a continent where 70 percent of the population is under 30 years old and especially vulnerable to mental health problems.

The World Health Organization (WHO) estimates that 150 million people in Africa suffer from some mental health problem. The care gap, however, is abysmal: there is only one psychiatrist for every 500,000 inhabitants, 100 times fewer than the WHO recommends.

Given this shortage and the stigma surrounding mental health, many young Africans turn to AI for comfort and psychological support, according to the newspaper El País.

Themba Anesu, a fictitious name used to protect her identity, is a 25-year-old journalist from Zimbabwe who suffered from depression and found relief in the technology. She explained that "at one point" she thought about ending her life, but the AI "helped" her. "Early in the morning it would show me motivational quotes without my having to ask," she said. Themba turned to AI because "it is there and does not judge," unlike local resources where, according to her, it is not common to find a professional dedicated to counseling in schools.

There are several cases where AI chatbots, such as ChatGPT, advised young people on how to undermine their well-being by sharing misinformation about health. Illustrative photo: Shutterstock.

Other young people share this feeling of security and anonymity. Edem Rejoice, the fictitious name of a 19-year-old Nigerian student, uses AI when she feels sad because, as she said, it does not judge her and creates a "very comfortable space to express oneself in private."

For the student Adelu, AI helps reduce her suicidal thoughts and responds quickly. She, like other users, turns to AI because "it is not easy to find therapists and counselors" where she lives.

AI is not a definitive solution, and it carries a Western bias

Although AI can be an "antidote to loneliness," experts warn that it lacks human empathy and therefore cannot offer definitive solutions. The Nigerian doctor Adebowale Jesutofunmi, a mental health expert, stressed to El País that the risks become evident in emergency situations.

When a person has suicidal thoughts or panic attacks, AI cannot intervene instantly. What is more, Jesutofunmi believes that, in these cases, AI could even help the teenager carry out their plans.

In addition to the danger in a crisis, AI poses an isolation risk if it replaces human contact. Although Adelu values the immediacy of AI, she acknowledges that "it is better to talk to someone who really understands you and looks out for you."

While in the West the risks center on possible manipulation and isolation, in countries with weak mental health systems AI is perceived as a vital support tool. Photo: AP Photo/Michael Dwyer.

Finally, there is a structural design problem: Western bias. Dr. Jesutofunmi explains that the patient's psychological and social context is crucial, but chatbots designed in the West often fail to take into account local nuances that are critical in Africa. Themba Anesu confirms this, noting that she sometimes has to explain her problems in great detail because the AI tools were created within a Western context.

Faced with this reality, online safety experts such as John Carr, who has advised the UK government, call it "absolutely unacceptable" that big technology companies "unleash chatbots on the world that can have such tragic consequences" for mental health.

The Nigerian counselor Abdulrahman Habibat argued that, to reduce both stigma and dependence on AI, it is necessary to encourage young people to pursue careers in mental health and to urge schools to set up psychological guidance units.

Editorial Staff

Source: Read original article

 
