If you are not putting effort into choosing a utility function, building this AGI seems as likely as building an FAI
You’ve made a lot of good comments in this thread, but I disagree with this. As likely?
It seems you are assuming that every possible point in AI mind space is equally likely, regardless of history, context, or programmer intent. This is like saying that, if someone writes a routine to sort numbers numerically, it’s just as likely to sort them phonetically.
It seems likely to me that this belief, that the probability distribution over AI mindspace is flat, has become popular on LessWrong, not because there is any logic to support it, but because it makes the Scary Idea even scarier.
Yes, my predictions of what will happen when you don’t put effort into choosing a utility function are inaccurate in the case where you do put effort into choosing a utility function.
This is like saying that, if someone writes a routine to sort numbers numerically, it’s just as likely to sort them phonetically.
Well, let’s suppose someone wants a routine to sort numbers numerically, but doesn’t know how to do this and tries a bunch of stuff without understanding it. Conditional on the programmer miraculously producing some sort of sorting routine, what should we expect about it? Sorting phonetically would add extra complication over sorting numerically, since the names of the numbers would have to be embedded within the program, so that seems less likely. But a routine that sorts numerically ascending is just as likely as one that sorts numerically descending, because these routines have a complexity-preserving one-to-one correspondence given by interchanging “greater than” with “less than”.
And the utility functions I claimed were equally likely before have the same kind of complexity-preserving one-to-one correspondence.
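Here is a minimal Python sketch of the point (not from the original exchange; the `DIGIT_NAMES` table and function names are just illustrative). Flipping ascending to descending only swaps the comparison direction, while a phonetic sort has to carry an extra table of number names, which is the added complexity:

```python
# Ascending and descending numeric sorts differ only by flipping the comparison,
# so neither needs more machinery than the other.

def sort_ascending(xs):
    return sorted(xs)                # compares with "less than"

def sort_descending(xs):
    return sorted(xs, reverse=True)  # same routine, comparison direction flipped

# A phonetic sort, by contrast, must embed extra information: a table mapping
# digits to their English names (hypothetical helper data, digits only for brevity).
DIGIT_NAMES = {
    0: "zero", 1: "one", 2: "two", 3: "three", 4: "four",
    5: "five", 6: "six", 7: "seven", 8: "eight", 9: "nine",
}

def sort_phonetically(xs):
    # Spell each number out digit by digit, then sort by that spelling.
    def spell(n):
        return " ".join(DIGIT_NAMES[int(d)] for d in str(abs(n)))
    return sorted(xs, key=spell)

if __name__ == "__main__":
    nums = [3, 12, 7, 21]
    print(sort_ascending(nums))     # [3, 7, 12, 21]
    print(sort_descending(nums))    # [21, 12, 7, 3]
    print(sort_phonetically(nums))  # [12, 7, 3, 21]: "one two" < "seven" < "three" < "two one"
```

The ascending/descending pair is the analogue of the correspondence claimed for utility functions: swapping the direction of the comparison (or of the preference) changes the outcome without adding any complexity.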