Press "Enter" to skip to content

Google Places an Engineer on Leave After He Claims Its AI Is Sentient

According to The Washington Post, Google engineer Blake Lemoine, who works in the company’s Responsible AI division, believes one of the company’s AI projects has become sentient. It’s easy to see why after reading his conversations with LaMDA (short for Language Model for Dialogue Applications).

The chatbot system, built on Google’s language models and trained on billions of words from the internet, appears capable of reflecting on its own existence and its role in the world. After Lemoine spoke with a member of the House Judiciary Committee about his work and what he saw as questionable AI practices at Google, the company placed him on paid administrative leave for violating its confidentiality agreement.

Google also openly disputes Lemoine’s assertion, telling The Washington Post that “our team — including ethicists and engineers — has assessed Blake’s concerns under our AI Principles and notified him that the data does not support his claims.” “Our minds are very, very excellent at generating realities that are not necessarily accurate to a bigger set of facts being offered to us,” said Margaret Mitchell, one of Google’s former AI ethics leads.

Daniel Dennett, a philosopher who has spent decades studying consciousness and the human mind, explains why we should be wary of attributing understanding to artificial intelligence systems: “Rather than being excellent flyers or fish catchers or whatever, these [AI] entities are excellent pattern detectors and statistical analysts, and we can use these products, these intellectual products, without knowing exactly how they’re generated but with good responsible reasons to believe that they will generate the truth most of the time.”
