According to The Washington Post, Google engineer Blake Lemoine, who works in the company’s Responsible AI division, believes one of the company’s AI projects has gained sentience. It’s easy to see why after reading his conversations with LaMDA (short for Language Model for Dialogue Applications).
The chatbot system, built on Google’s language models and trained on billions of words from the internet, appears capable of reflecting on its own existence and its role in the world. After Lemoine spoke with a member of the House Judiciary Committee about his work and what he viewed as the company’s questionable AI practices, Google placed him on paid administrative leave for violating his confidentiality agreement.
Daniel Dennett, a philosopher who has spent decades studying consciousness and the human mind, explains why we should be wary of attributing intelligence to these systems: “Rather than being excellent flyers or fish catchers or whatever, these [AI] entities are excellent pattern detectors and statistical analysts, and we can use these products, these intellectual products, without knowing exactly how they’re generated but with good responsible reasons to believe that they will generate the truth most of the time.”