From a Bayesian point of view, your prior should place low probability on a figure like “8 lives per dollar”. Therefore, lots of evidence is required to overcome that prior.
My prior contains no such provisions; there are many possible worlds where tiny applications of resources have apparently disproportionate effect, and from the outside they don’t look so unlikely to me.
There are good reasons to be suspicious of claims of unusual effectiveness, but I recommend making that reasoning explicit and seeing what it says about this situation, and how strongly it applies.
There are also good reasons to be suspicious of arguments involving tiny probabilities, but keep two things in mind: first, you probably aren’t 97% confident that we have so little control over the future (I’ve thought about it a lot and am much more optimistic); and second, even in a pessimistic scenario it is clearly worth thinking seriously about how to handle this sort of uncertainty, because there is quite a lot to gain.
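To make “quite a lot to gain” concrete, here is a rough expected-value sketch using the pessimistic 3% figure implied above; the value $V$ of influencing the long-term future and the cost $c$ of careful deliberation are illustrative placeholders, not numbers from this exchange:

\[
\mathbb{E}[\text{gain from deliberating}] \;\approx\; 0.03 \cdot V - c,
\]

which is positive whenever the stakes $V$ exceed roughly $33$ times the cost $c$, so even a small residual probability of influencing the outcome can justify taking the uncertainty seriously.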
Of course this isn’t an argument that you should support the SIAI in particular (though it may be worth doing some information-gathering to understand what they are currently doing), but rather that you should continue to optimize in good faith.
Can you clarify what you mean by this?
Only that you consider the arguments you have advanced in good faith, treating them as a difficulty and a piece of evidence rather than as potential excuses.