I think mostly you are arguing against LW in general, which seems fine but not particularly helpful here or relevant to my point.
Some people were worrying over a very far-fetched scenario, being unable to assign it a low enough probability. The requirement that probabilities sum to 1 over the enormous number of similarly far-fetched, mutually exclusive scenarios would definitely have helped, compared to the current state, in which, I suspect, they sum to a number far greater than 1.
What is the “very very far fetched scenario”? If you mean the intelligence explosion scenario, I do think this is reasonably unlikely, but:
1. Eliezer thinks this scenario is very likely, and many people around here agree. So this is hardly a problem of being unwilling to assign a probability close to 0.
2. In what sense is fast takeoff one hypothesis out of a very large number of equally plausible hypotheses? It seems like a fast takeoff is a priori reasonably likely, and the main reasons you think it unlikely are that experts don't take it seriously and that it seems incongruous with other technological progress. That seems unrelated to your critique.