Luke, Stuart, and anyone else trying to convince AI researchers to be more cautious, can we please stop citing the orthogonality thesis? I just don’t see the point, if no AI researcher actually denies it, or if all they have to do to blunt the force of your argument is take one step back and start talking about possibility in practice instead of in theory.
I’m not confident about any of the below, so please add cautions in the text as appropriate.
The orthogonality thesis is both stronger and weaker than we need. It suffices to point out that neither we nor Ben Goertzel know anything useful or relevant about what goals are compatible with very large amounts of optimizing power, and so we have no reason to suppose that superoptimization by itself points either towards or away from things we value. By creating an “orthogonality thesis” that we defend as part of our arguments, we make it sound like we have a separate burden of proof to meet, whereas in fact it’s the assertion that superoptimization tells us something about the goal system that needs defending.
So: evolution tends to produce large-scale cooperative systems. Kropotkin, Nowak, Wilson, and many others have argued this. Cooperative systems are favoured by game theory—which is why they currently dominate the biosphere. “Arbitrary” goal systems tend not to evolve.
I’m glad to see that you implicitly accept my point, which is that in the absence of specific arguments such as the one you advance here, we have no reason to believe any particular non-orthogonality thesis.
You’re assuming the purpose of SIAI is to convince AI researchers to be more cautious. SIAI’s behaviour seems more consistent with signaling to third parties, though, at the expense, if anything, of looking silly to AI researchers.