a surprisingly powerful demonstration soon could change things too; 1% seems low. look at how quickly views can change about things like 'it's just the flu', the current wave of updating from GPT-3 (among certain communities), etc.
my (quickly-made) snapshot: https://elicit.ought.org/builder/dmtz3sNSY
one conceptual contribution I'd put forward for consideration is whether this question may be more about emotions or social equilibria than about reaching a reasoned intellectual consensus. it's worth considering how a relatively proximate/homogeneous group of people tends to change its beliefs. for better or worse, everything from viscerally compelling demonstrations of safety problems, to social pressure, coercion, or top-down influence, to the transition from intellectual to grounded/felt risk should be part of the model of change, alongside rational, lucid, considered debate tied to deeper understanding or the truth of the matter. the demonstration doesn't actually have to be a compelling demonstration of risks to be a compelling illustration of them (imagine a really compelling VR experience, as a trivial example).
maybe the term I'd use is 'belief cascades', and I might point to the rapid shift towards office closures during early COVID as an example. the tipping point arrived sooner than some expected, not because of considered updates to beliefs about risk or the utility of closures (the evidence had been there for a while), but because of a cascade of fear and a noisy sense that not acting/thinking in alignment with the perceived consensus ('this is a real concern') would lead to social censure, etc.
in short, this might happen sooner, more suddenly, and for stranger reasons than I think the prior distribution implies.
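to make the cascade intuition concrete, here's a toy sketch (purely illustrative, not something from this thread) of a Granovetter-style threshold model in Python. the function name, parameter values, and threshold distribution are all made up for illustration; the point is just that the size of the initial jolt, not anyone's underlying evidence, separates 'nothing happens' from a sudden near-total shift.

```python
# toy Granovetter-style threshold cascade (illustrative only; names and
# numbers are made up). each person adopts the belief once the visible
# fraction of adopters exceeds their personal conformity threshold.
import random

def simulate_cascade(n=10_000, seed_fraction=0.05, rounds=15, rng_seed=1):
    rng = random.Random(rng_seed)
    # heterogeneous thresholds: a few early adopters, most people want
    # substantial social proof before they update publicly
    thresholds = [rng.betavariate(2, 5) for _ in range(n)]
    # initial shock: a small fraction adopts regardless of social proof
    adopted = [rng.random() < seed_fraction for _ in range(n)]
    history = []
    for _ in range(rounds):
        frac = sum(adopted) / n
        history.append(frac)
        # once adopted, you stay adopted; others join when the visible
        # consensus passes their threshold
        adopted = [a or (t <= frac) for a, t in zip(adopted, thresholds)]
    return history

if __name__ == "__main__":
    for shock in (0.01, 0.05):  # below vs. above the tipping point
        final = simulate_cascade(seed_fraction=shock)[-1]
        print(f"initial shock {shock:.0%} -> final adoption {final:.1%}")
```

with these made-up numbers, a 1% initial shock fizzles out at roughly 1 to 2% adoption, while a 5% shock tips essentially the whole population within about a dozen rounds.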
NB the point about a newly unveiled population of researchers in my first bin might stretch the definition of 'top AI researchers' in the question specification, but I believe it's in line with the spirit of the question.
+1 for the general idea of belief cascades. This is an important point, though I had already considered it. When I said "percolates to the general AI community over the next few years" I wasn't imagining that this would happen via reasoned intellectual discourse; I was imagining something more like compelling demonstrations (which may or may not be well-connected to the actual reasons for worry).