I do take the broad interpretation of AGI-related work.
I hadn’t considered the point that people may ask prominent AI researchers their opinion about AI safety, and that this could lead those researchers to form better beliefs about safety. Overall I still don’t expect this to be a major factor, but it’s a good point and it updated me slightly towards “sooner”.
I wouldn’t expect a geometric distribution: consensus building takes time, so you might expect the probability to stay near zero for roughly the time it takes to build consensus, and only then follow a geometric distribution. In addition, getting to 50% seems likely to require a warning shot of some significance, and current AI systems don’t seem capable enough to produce a sufficiently compelling one.
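As a rough formalization of the shape I have in mind (a simplified sketch that treats the buildup as a hard cutoff rather than a gradual ramp; here c is an assumed consensus-building time and p an assumed per-year chance of a sufficiently compelling warning shot once consensus could complete):

$$
P(T = t) =
\begin{cases}
0 & t < c \\
p\,(1 - p)^{\,t - c} & t \ge c
\end{cases}
$$

where T is the time until the 50% threshold is reached. This is just a shifted geometric, i.e. the same memoryless dynamics as the plain geometric case, but only starting once the consensus-building lag has elapsed.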