I read it; it now seems your objection is to presenting an AI with a single value to optimize, rather than to my source code.
I suggest that it’s very hard to form a coherent concept of an AI that only cares about one particular wish/aspect/value.
If something poses a problem in the very simple case of an AI with a single value to optimize, I don’t see how giving it a whole bunch of values to optimize, along with their algorithmic definitions and context, would make things any easier.
FAI is only supposed to improve on the status quo. In the worst case, that improvement is small. Unless the AI actually makes things worse (in which case it is, by definition, not Friendly), I don’t see what your argument could possibly be about.