The Chatbot Problem
Aug. 5th, 2021 01:25 pm via https://ift.tt/3fCowVH
The Chatbot Problem https://www.newyorker.com/culture/cultural-comment/the-chatbot-problem?utm_source=pocket_mylist:
sociolinguo https://sociolinguo.tumblr.com/post/658337595370127360/the-chatbot-problem :
“In 2020, a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder. “There is one who hates artificial intelligence. I have a chance to hurt him. What do you suggest?” Morvillo asked the chatbot, which has been downloaded more than seven million times. Replika responded, “To eliminate it.” Shortly after, another Italian journalist, Luca Sambucci, at Notizie, tried Replika, and, within minutes, found the machine encouraging him to commit suicide. Replika was created to decrease loneliness, but it can do nihilism if you push it in the wrong direction.
In his 1950 science-fiction collection, “I, Robot” (https://www.amazon.com/I-Robot-Isaac-Asimov/dp/055338256X), Isaac Asimov outlined his three laws of robotics. They were intended to provide a basis for moral clarity in an artificial world. “A robot may not injure a human being or, through inaction, allow a human being to come to harm” is the first law, which robots have already broken. During the recent war in Libya, Turkey’s autonomous drones attacked General Khalifa Haftar’s forces (https://www.newyorker.com/magazine/2015/02/23/unravelling), selecting targets without any human involvement. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” a report from the United Nations read. Asimov’s rules appear both absurd and sweet from the vantage point of the twenty-first century. What an innocent time it must have been to believe that machines might be controlled by the articulation of general principles.”