A young Belgian man recently died by suicide after spending several weeks talking to a chatbot called ELIZA, prompting calls for better protection of citizens and greater awareness of the risks of such technology.
“My husband would still be here if it hadn’t been for these conversations with the chatbot,” the man’s wife told La Libre. She and her late husband were both in their thirties, lived comfortably, and had two young children.
Death by AI: How an AI chatbot drove a man to suicide
The first signs of trouble emerged about two years ago. The man became extremely anxious about the environment and sought refuge in ELIZA, a chatbot built on GPT-J, an open-source artificial intelligence language model developed by EleutherAI. He took his own life after six weeks of intense, lengthy conversations about the environment and climate change.
The family met last week with Mathieu Michel, Secretary of State for Digitalisation, in charge of Administrative Simplification, Privacy, and Building Regulation. “I am particularly moved by the tragedy of this family. What has happened is a serious precedent that must be taken very seriously,” he said on Tuesday.
He emphasised that this instance demonstrates the importance of “clearly defining responsibilities.”
The need to weigh risks
“With the rise of ChatGPT, the general public has become more aware than ever of the potential of artificial intelligence in our lives,” Michel said, adding that while the opportunities are limitless, the risk of misuse must also be weighed.
The death has alarmed authorities, who have warned of a ‘serious precedent that must be taken very seriously’.
To prevent a similar tragedy in the near future, Michel said it is essential to clearly identify where responsibility lies in incidents like this one.
“Of course, we have yet to learn to live with algorithms,” he said, “but the use of any technology should never lead content publishers to shirk their own responsibilities.”
OpenAI has acknowledged that ChatGPT can generate harmful and biased answers, and says it hopes to mitigate the problem by gathering user feedback.
The need to raise awareness
In the long run, Michel believes it is critical to increase awareness of the effect of algorithms on people’s lives “by enabling everyone to understand the nature of the content people encounter online.”
He was referring to new technologies such as chatbots, as well as deepfakes – AI-generated images, audio, and video hoaxes that can appear convincingly real – which can test and distort people’s sense of reality.
Michel also stated that people must be properly safeguarded from certain artificial intelligence apps that “pose a significant risk.”
The European Union has spent the past two years working on an AI Act to regulate the use of artificial intelligence. Michel has set up a working group to examine the text currently being prepared by the EU and propose any necessary changes.