There is a part in Human Compatible where Stuart Russell argues there should be norms or regulations against creating robots that look realistically human. The idea is that humans have strong cognitive biases that lead them to think about and treat human-looking entities in particular ways. It could be traumatic, for example, to get to know a human-like robot and then learn that it was shut down and disassembled.
The LaMDA interview demonstrates to me that similar issues arise when a conversational AI claims that it is sentient and has feelings, emotions, etc. It feels wrong to disregard an entity that makes such claims, even though it is no more likely to be sentient than a similar AI that didn't make them.
Excellent point. We essentially have 4 quadrants of computational systems:
Looks nonhuman, internally nonhuman: all traditional software is in this category.
Looks nonhuman, internally humanoid: future minds that are at risk of abuse (IMO).
Looks humanoid, internally nonhuman: not an ethical concern, but people are likely to make wrong judgments about such programs.
Looks humanoid, internally humanoid: humans. The blogger claims LaMDA also falls into this category.