If {the reasoning for why AGI might not be near} comprises {a list of missing capabilities}, then my current guess is that the least-bad option would be to share that reasoning in private with a small number of relevant (and sufficiently trustworthy) people[1].
(More generally, my priors strongly suggest keeping any pointers to AGI-enabling capabilities private.)
E.g. the most capable alignment researchers who seem (to you) to be making bad strategic decisions due to not having considered {the reasoning for why AGI might not be near}.
I think that sharing the reasoning in private with a small number of people might somewhat help with the “Alignment people specifically making bad strategic decisions that end up having major costs” cost, but not the others, and even then it would only help a small fraction of the people working in alignment rather than the field in general.
I mostly agree.
I also think that impact is very unevenly distributed over people; the most impactful 5% of people probably account for >70% of the impact. [1]
And if so, then the difference in positive impact between {informing the top 5%} and {broadcasting to the field in general on the open Internet} is probably not very large. [2]
Possibly also worth considering: Would (e.g.) writing a public post actually reach those few key people more effectively than (e.g.) sending a handful of direct/targeted emails? [3]
Talking about AI (alignment) here, but I think something like this applies in many fields. I don’t have a good quantification of “impact” in mind, though, so this is very hand-wavy.
Each approach has its downsides. The first approach requires identifying the relevant people, and is likely more effortful. The latter approach has the downside of putting potentially world-ending information in the hands of people who would use it to end the world (a bit sooner than they otherwise would).
What is in fact the most effective way to reach whoever needs to be reached? (I don’t know.)
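To make the arithmetic behind [1] and [2] concrete, here is a minimal back-of-envelope sketch. The 70% figure is the guess above; the normalisation and the assumptions about who each option reaches are made up for illustration:

```python
# A minimal back-of-envelope sketch, using the guessed numbers above.
# Assumption (from the comment): the top ~5% of people account for ~70% of the impact.
# Everything else here (perfect reach, normalisation) is a made-up simplification.

top_share = 0.70       # assumed fraction of total positive impact from the top ~5%
rest_share = 0.30      # impact from everyone else, if a broadcast reaches them

# Positive impact captured by each option (normalised so a full public broadcast = 1.0):
targeted = top_share                  # private sharing that reaches (only) the top 5%
broadcast = top_share + rest_share    # public post that reaches everyone

print(f"targeted sharing captures {targeted / broadcast:.0%} of the broadcast's upside")
# -> 70%: most of the positive impact, before weighing the exposure downside in [2].
```

On these (made-up) numbers, broadcasting buys at most ~30% extra positive impact over targeted private sharing, which then has to be weighed against the downside described in footnote [2].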