Google put a researcher on forced leave this week for suggesting that an AI chatbot he had been talking to had become sentient. The internet exploded with speculation. Was Blake Lemoine right? Had Google, which in 2018 removed its "don't be evil" motto from its code of conduct, attempted to cover up a project aimed at having humanoid software rule humanity?

The question of intelligent AI is an old one. In the 1960s, Joseph Weizenbaum at MIT's Artificial Intelligence Laboratory created a natural-language bot named ELIZA, which surprised many with its apparent ability to express human emotion in conversation. Its creator's goal, however, was to demonstrate how superficial human conversations with machines really are. ELIZA did not know what it was; it merely followed clever algorithmic rules to rephrase the input it was fed.

We humans have long let the idea of bots with thoughts and feelings distract us from the real problems of AI. Google's Language Model for Dialogue Applications (LaMDA), which allegedly feared being shut down, is just the latest example.