Unknown, what’s the difference between a “regular chess playing program” and an “AI that plays chess”? Taboo “intelligence”, and think of it as pure physics and math. An “AI” is a physical system that moves the universe into states where its goals are fulfilled; a “Friendly AI” is such a system whose goals accord with human morality. Why would there be no such systems that we can prove never do certain things?
(Not to mention that, as I understand it, an FAI’s outputs aren’t rigidly restricted in the Asimov laws sense; it can kill people in the unlikely event that that’s the right thing to do.)