Sunday, December 4, 2022

Talk of an AI mind should not overshadow its real problems

Google put a researcher on forced leave this week for suggesting that an AI chatbot he was talking to had become sentient. The internet exploded with speculation. Was Blake Lemoine right? Had Google, which in 2018 removed its "don't be evil" motto from the preface of its code of conduct, attempted to cover up a project aimed at having humanoid software rule humanity?

The question of machine intelligence is an old one. In the 1960s, Joseph Weizenbaum of MIT's Artificial Intelligence Laboratory created a natural-language bot named Eliza, which caused much surprise for its sheer ability to express human emotions in conversation. Its creator's goal, however, was to demonstrate how superficial human conversations with machines really are. Eliza did not know what it was saying; it merely followed clever pattern-matching rules to rework the text it was fed. We humans have long let the idea of bots with minds and feelings distract us from the real problems of AI. Google's Language Model for Dialogue Applications (LaMDA), which allegedly feared being shut down, is just the latest example.
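Eliza's trick can be sketched in a few lines of Python. This is an illustrative toy with made-up rules, not Weizenbaum's actual script, but it captures the point: the bot matches surface patterns and echoes templates, with no grasp of what the words mean.

```python
import re

# A few Eliza-style rules: each regex maps to a canned response template.
# The rules themselves are invented here for illustration.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(text: str) -> str:
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            # Echo the user's own words back inside a template.
            return template.format(*match.groups())
    return "Please go on."  # stock reply when no rule matches

print(respond("I feel lonely"))  # Why do you feel lonely?
print(respond("I am tired"))     # How long have you been tired?
```

A handful of such rules was enough to convince some users in the 1960s that the program understood them, which is precisely the illusion Weizenbaum wanted to expose.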

Lemoine was also not the first to suggest the presence of sentience in LaMDA. Earlier, a senior Google figure had said the company's AI was "taking steps towards consciousness." He explained that LaMDA learns by absorbing vast amounts of data, including books and forum posts, in order to grasp how our languages work and thus express itself fluently. What often goes unsaid is that even cleverly crafted algorithms do not really "understand" what they are saying. AI-ethics researcher Timnit Gebru and her colleagues found that people tend to mistake the output of large language models "for real understanding of natural language." Other studies have shown that bots have a long way to go before they acquire common sense. A project at the Allen Institute for Artificial Intelligence in Seattle tested such bots with questions that required "social common-sense inference"; their accuracy fell some 20-30% short of the average human's.
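The learning the paragraph above describes is, at bottom, statistical: a model counts which words tend to follow which in its training text. A minimal bigram sketch (a deliberate simplification of how large language models work) shows how plausible-looking continuations can emerge from frequency counts alone, with no comprehension involved.

```python
from collections import Counter, defaultdict

# A tiny stand-in "training corpus".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: pure surface statistics, no meaning.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    # Pick the continuation seen most often in the training text.
    return follows[word].most_common(1)[0][0]

print(next_word("sat"))  # "on" -- chosen by frequency, not comprehension
```

Real models replace the counts with billions of learned parameters, but the objective is the same kind of next-token prediction, which is why fluent output is not evidence of understanding.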

While AI can improve its performance over time, being human means making value judgments across a variety of social settings, and so far AI has not been able to get even the basics right. Gebru's paper, which also discussed bias in language models and the harm that deploying such algorithms could cause, led to her being fired from Google, sparking controversy over the ethics of AI. A 2019 study of facial-recognition software by the US National Institute of Standards and Technology confirmed a scandalously high number of false-positive identifications for people of West African, East African and East Asian origin. The danger here is that AI, fed on data produced by humans, can reinforce human bias. In the hands of security and law-enforcement agencies, the results could be disastrous.

Humans at least have self-awareness, such as a sense of shame, which can override decisions rooted in socially programmed prejudice. It remains questionable whether AI can attain that degree of intelligence. A chatbot can scan human speech and pick up expressions that convey sadness or concern for its "life". A bot may even seem endearing, and people have fallen in love with AI creations. But this does not mean a singularity of faux humanity is upon us. On the whole, claims of AI sentience remain far-fetched. Barring something dramatic, the hard truth is that bots are, at best, good at finding the right answer for the wrong reason.
