No, that’s not right. Language + thought means understanding language and being able to fully model the mind-state of the person speaking to you. If you lack this and have only language, ‘get grandma out of the burning house’ gets you the lethal ejector-seat method. If you want do-what-I-mean rather than do-what-I-say, you need full thought modeling, which is obviously harder than language + morality, since the latter requires only parsing language correctly and understanding one particular category of thought.
Or to phrase it a different way: language on its own gets you nothing productive, just a system that can correctly parse statements. To understand what statements mean, rather than what they say, you need something much broader, and language + morality is smaller than that broad thing.
Fully understanding the semantics of morality may be simpler than fully understanding the semantics of everything, but it doesn’t get you AI safety, because an AI can understand something without being motivated to act on it.
When I wrote “language”, I meant words plus understanding: understanding in general, and therefore including an understanding of ethics. And when I wrote “morality”, I meant a kind of motivation.
(Alice in Wonderland)