Squiggle maximizer (which is tagged for this post) and paperclip maximizer make significantly different points. The paperclip maximizer (as opposed to the squiggle maximizer) is centrally an illustration of the orthogonality thesis (see the greaterwrong mirror of arbital if the arbital page doesn’t load).
What the orthogonality thesis says, and what the paperclip maximizer example illustrates, is that it’s possible in principle to construct arbitrarily effective agents deserving of the moniker “superintelligence” with arbitrarily silly or worthless goals (from a human point of view). This seems clearly true, but it’s valuable to notice in order to correct intuitions that would claim otherwise. Then there’s a “practical version of the orthogonality thesis”, which shouldn’t be called “orthogonality thesis” but often enough gets confused with it. It says that, by default, the goals of AIs that get constructed in practice will tend toward arbitrary things that humans wouldn’t find agreeable, possibly something silly or simple. This is much less obviously correct, and the squiggle maximizer sketch is closer to arguing for some version of this claim.