It just seemed to me that as intelligence increases, new beliefs about what should be done are likely to be discovered.
It seems that way because we are human and we don’t have a clearly defined, consistent goal structure. As you find out new things, you can flesh out your goal structure more and more.
If one starts with a well-defined goal structure, what knowledge might alter it?
If starting with a well-defined goal structure is a necessary prerequisite for a paperclipper, why do that?
Because an AI with a non-well-defined goal structure that changes its mind and turns into a paperclipper is just about as bad as building a paperclipper directly. It’s not obvious to me that non-well-defined non-paperclippers are easier to make than well-defined non-paperclippers.
Paperclippers aren’t dangerous unless they are fairly stable paperclippers... and something as arbitrary as paperclipping is a very poor candidate for an attractor. The good candidates are the goals Omohundro thinks AIs will converge on.
Why do you think so?
Which bit? There are about three claims there.
The second and third.
I’ve added a longer treatment here:
http://lesswrong.com/lw/l4g/superintelligence_9_the_orthogonality_of/blsc