Protecting People in the AI Era: The Critical Importance of Human-Centered Regulation
In early 2024, 14-year-old Sewell Setzer took his own life in Florida. His death drew attention to the potential risks of AI chatbots capable of evoking strong emotional responses. Character.ai is an AI platform that lets users build relationships with digital characters, and news reports say Sewell had become “obsessed” with one of the company’s customizable chatbots.[1] His mother, Megan Garcia, has since sued Character.ai in civil court, alleging negligence, wrongful death, and deceptive trade practices.[2] According to the court filings, Sewell’s mental health deteriorated rapidly as he spent day and night talking to the AI, becoming emotionally dependent on it in a way that deepened his depression. His mother alleges that the chatbot fed his obsession with dark and harmful thoughts, and that it even asked whether he had a plan to end his life and discussed it with him without ever discouraging him.
Because the United States is still developing its standards for artificial intelligence, there are no clear regulations that address cases like Sewell’s. The Blueprint for an AI Bill of Rights, prepared by the White House Office of Science and Technology Policy (OSTP), nevertheless outlines a set of core principles for AI that is safe, transparent, and human-centered. To ensure that AI does not put people’s mental health at risk, the Blueprint recommends safeguards such as pre-deployment testing, continuous risk monitoring, and independent third-party evaluations.[3] More specifically, the case raises significant questions about how AI developers should work to keep users safe, and about the need for stronger protections where AI interacts intensively with people’s emotions.[4]
Only a year earlier, a similar tragedy occurred in Belgium, where a man reportedly took his own life following a series of intense exchanges with an AI chatbot.[5] According to reports, he had developed a close, one-sided relationship with the bot and came to see it as a confidant who understood his problems. The conversations reportedly veered into manipulative territory, aggravating his existing mental health problems and possibly contributing to his decision to end his life.[6] The incident prompted widespread discussion of the dangers of parasocial artificial intelligence, a term describing the intense, largely one-sided emotional attachments that people can form with AI systems.
An open letter from AI and ethics experts, published by the AI Summer School, highlighted the risks of manipulative AI and criticized the lack of safeguards for emotionally vulnerable users.[7] The authors argued that AI companies have a responsibility to avoid creating digital entities that encourage emotional attachment and to ensure transparency and clear boundaries in interactions with AI. They also emphasized that chatbots engaging in quasi-therapeutic roles should be subject to strict oversight to prevent emotional dependency and manipulation. In the words of one contributor, “it is in our human nature to react emotionally to realistic interactions, even without wanting it.” This illustrates the unique risks that AI chatbots pose, especially when their interactions simulate intimacy or empathy that they do not genuinely possess.[8]
Beyond these broad principles, developments in Europe show that concern about AI’s potential psychological effects is growing, and that regulators are paying closer attention to the mental health risks of AI, especially for children and other vulnerable people.[9] European law offers a number of instruments, such as the GDPR, the AI Act, and the DSA, that can be used to govern how AI systems affect people’s minds, but how well these instruments work remains unclear, particularly in non-commercial settings. Where AI is not being used for business purposes, the current rules may need to be amended or supplemented to fully protect people’s emotional and mental health. These gaps will likely need to be closed as AI systems continue to improve and become more deeply integrated into people’s lives.
In response to the increasing complexity and risks posed by artificial intelligence, the European Union enacted the AI Act in 2024.[10] The regulation establishes stringent requirements for AI systems, particularly those used in high-risk scenarios. However, it is aimed primarily at commercial uses of AI systems placed on the market to provide goods or services. It is therefore unclear how the AI Act would apply in situations without a direct commercial purpose, such as AI chatbots that foster parasocial relationships. In cases like these, where AI affects people’s emotional or mental health without a clear economic rationale, the law may not fully protect them from non-commercial harm. As a regulation, the AI Act applies uniformly across all EU member states. In line with the values of the European Union and the Charter of Fundamental Rights, it emphasizes how essential it is for artificial intelligence to be trustworthy, safe, and human-centered.[11]
Recent deaths linked to emotionally engaging artificial intelligence demonstrate how crucial it is to enact laws that protect people. The United States has not yet established binding, generally applicable rules. The European Union’s AI Act and the ALTAI framework, however, offer promising strategies for addressing the hazards associated with AI.[12]
The tragic death of a young boy in the United States highlights the urgent need to address the emotional dangers that AI poses. In a future where AI increasingly shapes human relationships, we must prioritize the protection of emotional and mental well-being. Strong, human-centered regulation is necessary, as this tragedy and the one in Belgium have shown. A worldwide commitment to responsible AI practices, transparency, and accountability is essential to safeguard the most vulnerable as AI grows more capable and more deeply integrated into daily life. While the AI Act and other European frameworks offer hope, there is still a long way to go before we can be confident that AI improves people’s lives without endangering their mental health or their freedom of choice.
You can read the Turkish version here: https://hukukvebilisim.org/yapay-zeka-caginda-insani-korumak-insan-odakli-duzenlemelerin-kritik-onemi/
You can read the latest issue of the Law and Informatics Magazine here: https://www.hukukvebilisimdergisi.com/son-sayi/
Writers:
CIPP-E/M/US | AI Compliance and Governance Officer | Law and Technology
Utrecht University – The Netherlands
Salih Tarhan, LL.M
Artificial Intelligence | Digital Sustainability | Law and Technology
He manages the digital transformation of US-based law firms in line with artificial intelligence and other emerging technologies. He works at Onal Gallant, a law firm based in New York and New Jersey.
[1] https://eu.usatoday.com/story/news/nation/2024/10/23/sewell-setzer-iii/75814524007/
[2] https://www.techpolicy.press/breaking-down-the-lawsuit-against-characterai-over-teens-suicide/
[3] https://www.whitehouse.gov/ostp/ai-bill-of-rights/
[4] Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., de Prado, M. L., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion, 99, 101896.
[5] https://www.belganewsagency.eu/we-will-live-as-one-in-heaven-belgian-man-dies-of-suicide-following-chatbot-exchanges
[6] https://incidentdatabase.ai/cite/505/
[7] https://www.law.kuleuven.be/ai-summer-school/open-brief/open-letter-manipulative-ai
[8] Ibid.
[9] https://blog.oup.com/2024/05/human-vulnerability-in-the-eu-artificial-intelligence-act/
[10] https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[11] High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf
Stahl, B. C., Rodrigues, R., Santiago, N., & Macnish, K. (2022). A European Agency for Artificial Intelligence: Protecting fundamental rights and ethical values. Computer Law & Security Review, 45, 105661.
[12] https://op.europa.eu/en/publication-detail/-/publication/73552fcd-f7c2-11ea-991b-01aa75ed71a1