As I wrote in a comment on the survey results post, interpreting the assignment of a low probability to cryonics as some sort of disagreement or opposition is misleading:
… if … probability of global catastrophe … [is] taken into account … even though I’m almost certain that cryonics fundamentally works, I gave only something like 3% probability. Should I really be classified as “doesn’t believe in cryonics”?
Of course not. The low probability matters because it defeats the usual simplistic, non-probabilistic account of cultists as believing in dogmatic shibboleths; if Bart119 were sophisticated enough to say that 10% is still too much, then we could move the discussion to a higher plane of disagreement than simply claiming 'LW seems obsessed with cryonics' - hopefully to good arguments like '$250k is too much to pay for such a risky shot at future life' or 'organizational mortality implies <1% chance of cryopreservation over centuries, and the LW average is shockingly optimistic', etc.
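To make the conjunctive point concrete, here is a minimal sketch with purely illustrative numbers (none come from the survey or the comment quoted above), assuming the stages are roughly independent: even near-certainty that cryonics 'fundamentally works' collapses to a few percent once the other failure modes are multiplied in.

```python
# Illustrative only: hypothetical probabilities for conditions that all
# must hold for cryonics to pay off. None of these figures are from the
# survey or the quoted comment; they just show how the conjunction works.
p_works = 0.95           # cryonics fundamentally works (near-certainty)
p_no_catastrophe = 0.30  # no global catastrophe before revival
p_org_survives = 0.15    # the organization keeps you preserved for centuries
p_revived = 0.70         # revival is attempted and succeeds

p_overall = p_works * p_no_catastrophe * p_org_survives * p_revived
print(f"Overall probability: {p_overall:.1%}")  # ~3.0%
```

Under those made-up inputs, someone "almost certain" cryonics works can still coherently report an overall probability of roughly 3%, which is the pattern the low survey numbers reflect.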
To continue your existential risk analogy: this is like introducing someone to existential risks, telling them it's really important stuff, and then hearing them say 'but all those risks have never happened to us!' That person clearly hasn't grasped the basic cost-benefit claim, so you need to start at the beginning in a way you would not with someone who immediately grasps it and makes a sophisticated counter-claim like 'anthropic arguments show that existential risks have been overestimated'.