The strong version as defined by Yudkowsky… is pretty obvious IMO
I didn’t expect you’d say that. In my view it’s pretty obviously false. Knowledge and skills are not value-neutral, and some goals are a lot harder to instill into an AI than others because the relevant training data will be harder to come by. Eliezer is just not taking into account data availability whatsoever, because he’s still fundamentally thinking about things in terms of GOFAI and brains in boxes in basements rather than deep learning. As Robin Hanson pointed out in the foom debate years ago, the key component of intelligence is “content.” And content is far from value-neutral.
Hmm, maybe I’m interpreting the statement to mean something weaker and more handwavy than you are. I agree with claims like “with current technology, it can be hard to make an AI pursue some goals as competently as other goals” and “if a goal is hard to specify given available training data, then it’s harder to make an AI pursue it”.
However, I think how competently an AI pursues a goal is somewhat different from whether an AI tries to pursue a goal at all. (Which is what I think the strong version of the thesis is getting at.) I was trying to get at the “hard to specify” thing with the simplicity caveat. There are also many other caveats, because goals and other concepts are quite handwavy.
Doesn’t seem important to discuss further.
I think I agree with everything you said. (Except for the psychologising about Eliezer on which I have no particular opinion.)
Could you give an example of knowledge and skills not being value neutral?
(No need to do so if you’re just talking about the value of information depending on the values one has, which is unsurprising. But it sounds like you might be making a more substantial point?)