Eliezer developed a hostility towards the outside view because people were misusing it, entirely missing the point and making absolutely ridiculous claims based on superficial similarities.
Of the arguments he mentions, Robin Hanson's tries to fit a line through too few data points, so while his argument is flawed, it's not his use of the outside view that's the real problem. The argument made by taw is mostly correct, even if he somewhat overstates his case; in particular, the success rate for the reference class of beliefs in the coming of a new world, be it good or evil (depending on exactly what you mean by "new world"), is slightly above 0%.
Most relevant is the unpacking of the reasoning underlying outside view considerations—see the bottom half of the post.
He appears to be using the narrowest possible argument for the outside view he can get away with, thus ruling out a lot of valid applications of the outside view. A strict reading would even rule out Wei Dai's application in the OP.
If my memory serves, the constant misuse of (and borderline ranting about) 'outside view' by taw in particular did far more to discourage the appeal of 'outside view' references than anything Eliezer may have said. A preface of 'outside view' does not transform an analogy into a bulletproof argument.
It's sad and true. For instance, automatically thinking of reference classes for beliefs and strategies can be useful, but I don't see it applied often enough. When it comes to something like (strategies about / popularizing interest in) predictability of the Singularity, for example, people raise objections like "you'll never be able to convince anyone that something big and potentially dangerous might happen based on extrapolations of current trends," but the outside view response "then explain global warming" actually narrows the discussion and points out features of the problem that might not have been obvious.
You can use outside view arguments, just not connotations of “outside view”.