OK, if you’ve read AIMA and still want to become a Dark Lord, I don’t know if I should encourage you on this path. My impression is that Mitchell’s textbook covers less material than AIMA, though I haven’t read AIMA myself.
What gives you the impression that I “want to be a Dark Lord”? I have already explained that I realize the importance of friendliness in AI. I just don’t think it is reasonable to teach the AI the intricacies of ethics before it is smart enough to grasp the concept in its entirety. You don’t read Kant to infants either. I think that implementing friendliness too soon would actually increase the chances of misunderstanding, just as children who are taught hard concepts too early often have a hard time updating their beliefs once they are actually smart enough. You would just need to give the AI a preliminary non-interference task until you find a solution to the friendliness problem. You might also need to add some contingency tasks, such as “if you find that you are not the original AI but an illegally made copy, try to report this, then shut down.”
It’s not possible to explain what you don’t know, or to answer a question you can’t state, and “intelligence” doesn’t save you from this trouble: it doesn’t open the floodgates to arbitrary helpfulness, resolving whatever difficulties you have. It just does its thing really well, and it’s up to its designers to choose the right thing as its optimization criterion. Doing the wrong thing very well, on the other hand, is in no one’s interest. This is a brittle situation, where vagueness in understanding the goal leads to arbitrary and morally desolate outcomes.
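To make that concrete, here is a toy sketch (my own illustration, not anyone’s actual proposal): a perfectly competent hill-climber pointed at a mis-specified criterion. Suppose the intended goal rewards keeping x near 5, while the proxy the designer actually wrote down rewards sheer magnitude. More optimization power only makes the divergence worse; it never repairs the criterion itself.

```python
# Toy sketch: a competent optimizer pursuing a mis-specified criterion.
# All names and numbers here are made up for illustration.

def intended_score(x):
    return -(x - 5.0) ** 2   # what we actually wanted: keep x near 5

def proxy_score(x):
    return x                 # what we told the optimizer to maximize

x = 0.0
for _ in range(1000):
    # Greedy hill-climbing on the proxy. The optimizer is not confused
    # or weak; it is succeeding at exactly the goal it was given.
    step = 0.1
    if proxy_score(x + step) > proxy_score(x):
        x += step

print(f"proxy_score = {proxy_score(x):.1f}, intended_score = {intended_score(x):.1f}")
# proxy_score = 100.0, intended_score = -9025.0
```

The optimizer does what it was told, very well; the failure is entirely in the choice of criterion, which is the designers’ job, not something “intelligence” fixes on its own.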