To be clear, I very much agree with being careful with technologies that have a 10% chance of causing existential catastrophe. But I don’t see how the part of the OP about conservatism connects to it. I think it’s more likely that being conservative about impact would generate probabilities much less than 10%. And if anyone says their probability is 10%, maybe it’s a case of people only having enough resolution for three kinds of probabilities, and what they really mean is “less than 50%”. Or they are already trying not to be overconfident and are explicitly widening their confidence intervals (perhaps after getting a probability from someone more confident), but they actually believe in being conservative more than they believe their stated probability. So then the question becomes why the probability is at least 10%: why is being conservative in that direction wrong in general, what are your clear arguments, and how are we supposed to weigh them against “it’s hard to make impact”?
I don’t know what you mean by “conservative about impact”. The OP distinguishes three things:
conservatism in decision-making and engineering: building in a safety buffer, erring on the side of caution.
non-conservatism in decision-making and engineering that at least doesn’t shrug at things like “10% risk of killing all humans”.
non-conservatism that does shrug at medium-probability existential risks.
It separately distinguishes these two things:
forecasting “conservatism”, in the sense of being rigorous and circumspect in your predictions.
forecasting pseudo-conservatism (‘assuming without argument that everything will be normal and familiar indefinitely’).
It sounds like you’re saying “being rigorous and circumspect in your predictions will tend to yield probabilities much less than 10%”? I don’t know why you think that, and I obviously disagree, as do 91+% of the survey respondents in https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results. See e.g. AGI Ruin for a discussion of why the risk looks super high to me.
I mean predicting modest impact for the kinds of reasons a futurist maybe should predict modest impacts (like “existential catastrophes have never happened before”, or “novel technologies always plateau”, or a whole cluster of similar heuristics that stand in opposition to “building a safety buffer”).
Not necessarily “rigorous”. I’m not saying such thinking is definitely correct. I just can’t visualize a thought process that arrives at 50% before correction, then applies a conservative adjustment because it all sounds crazy, still gets 10%, and proceeds to “then it’s fine”. So if survey respondents have higher probabilities and no complicated plan, then I don’t actually believe the opposite-of-engineering-conservatism mindset applies to them. Yes, maybe you were mostly talking about not being the decision-maker, but then what’s the point of the quote about bridges?
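To make the arithmetic behind that last point concrete, here is a minimal sketch. This is entirely my own illustrative framing, not something from either commenter: it treats a “conservative adjustment” as putting weight w on an inside-view probability and the remaining weight on an outside-view base rate of roughly zero. The function name, weights, and numbers are all hypothetical.

```python
# Purely illustrative: model "conservatism about impact" as a weighted blend
# of an inside-view estimate and an outside-view base rate.
# All numbers below are hypothetical, not anyone's actual reasoning.

def blended_probability(inside_view: float, base_rate: float, w: float) -> float:
    """Weighted mix: w on the inside view, (1 - w) on the base rate."""
    return w * inside_view + (1 - w) * base_rate

inside_view = 0.5   # the pre-adjustment 50% estimate mentioned above
base_rate = 0.0     # "existential catastrophes have never happened before"

for w in (0.5, 0.2, 0.05):
    p = blended_probability(inside_view, base_rate, w)
    print(f"weight on inside view = {w:.2f} -> p = {p:.3f}")
```

On this toy model, landing exactly at 10% after starting from 50% means giving the inside view only 20% weight; any heavier conservatism pushes the answer well below 10%, which seems to be the commenter’s point about why a stated 10% doesn’t look like the output of a conservative adjustment that stopped at “then it’s fine”.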