Yes, I agree that if a politician or government official tells you the most effective thing you can do to prevent asteroids from destroying the planet is “keep NASA at current funding levels and increase funding for nuclear weapons research” then you should be very suspicious.
I think you’re missing the point; I actually do think NASA is one of the best organizations to handle anti-asteroid missions and nukes are a vital tool since the more gradual techniques may well take more time than we have.
Your application of cynicism proves everything, and so proves nothing. Every strategy can be—rightly—pointed out to benefit some group and disadvantage some other group.
The only time this wouldn’t apply is if someone claimed a particular risk was higher than estimated but was doing absolutely nothing about it whatsoever, and so couldn’t benefit from attempts to address it. And in that case, one would be vastly more justified in discounting them, because they themselves don’t seem to actually believe it, rather than believing them because this particular use of the Outside View doesn’t penalize them.
(Or to put it another, more philosophical way: what sort of agent believes that X is a valuable problem to work on, and yet doesn’t believe that whatever approach Y he is taking is the best approach for him to be taking? One can of course believe that there are better approaches for other people - ‘if I were a mathematical genius, I could be making more progress on FAI than I can as an ordinary person whose main skills are OK writing and research’ - or for counterfactual selves with stronger willpower, but for oneself? This is analogous to Moore’s paradox, or the epistemic question: what sort of agent doesn’t believe that his current beliefs are the best ones for him to hold? “It’s raining outside, but I don’t believe it is.” So this leads to a remarkable result: for every agent that is trying to accomplish something, we can cynically say ‘how very convenient that the approach you think is best is the one you happen to be using! How awfully, awfully convenient! Not.’ And since we can say it of every agent equally, the argument is entirely useless.)
Incidentally:
“it does seem like FAI has a special attraction for armchair rationalists”
I think you badly overstate your case here. Most armchair rationalists seem to much prefer activities like… saving the world by debunking theism (again). How many issues have Skeptic or Skeptical Inquirer devoted to discussing FAI?
There’s a much more obvious reason why many LWers would find FAI interesting other than the concept being some sort of attractive death spiral for armchair rationalists in general...
My suspicion isn’t because the recommended strategy has some benefits; it’s because it has no costs. It would not be surprising if an asteroid-prevention plan used NASA and nukes. It would be surprising if it didn’t require us to do anything particularly hard. What’s suspicious about SIAI is how often their strategic goals happen to be exactly the things you might suspect the people involved would enjoy doing anyway (e.g. writing blog posts promoting their ideas) instead of difficult things at which they might conspicuously fail.
FHI, for what it’s worth, does say that the risk of simulation shutdown is underestimated, but doesn’t suggest doing anything about it.