On 3, this part of your post jumps out at me:
Of course, I’d have written a substantially different post, or none at all, if I believed the technical arguments that AGI is likely to come soon had merit to them
One possibility behind the “none at all” is the innocuous ‘disagreement leads to writing posts, agreement leads to silence’, but another possibility is ‘if I think X, I am encouraged to say it, and if I think Y, I am encouraged to stay silent.’
My sense is it’s more the latter, which makes this seem weirdly ‘bad faith’ to me. That is, suppose I know Alice doesn’t want to talk about biological x-risk in public because of the risk that terrorist groups will switch to using biological weapons, but I think Alice’s concerns are overblown and so write a post about how actually it’s very hard to use biological weapons and we shouldn’t waste money on countermeasures. Alice won’t respond with “look, it’s not hard, you just do A, B, C and then you kill thousands of people,” because this is worse for Alice than public beliefs shifting in a way that seems wrong to her.
It is not obvious what the right path is here. Obviously, we can’t let anyone hijack the group epistemology by having concerns about what can and can’t be made public knowledge, but also it seems like we shouldn’t pretend that everything can be openly discussed in a costless way, or that the costs are always worth it.
Alice has the option of finding a generally trusted arbiter, Carol, who she tells the plan to; then, Carol can tell the public how realistic the plan is.
Do we have those generally trusted arbiters? I note that it seems like many people who I think of as ‘generally trusted’ are trusted because of some ‘private information’, even if it’s just something like “I’ve talked to Carol and get the sense that she’s sensible.”
I don’t think there are fully general trusted arbiters, but it’s possible to bridge the gap with person X by finding person Y trusted by both you and X.
I think sufficiently universally trusted arbiters may be very hard to find, but Alice can also decline that option to keep the issue from gaining more public attention, if she believes more attention, or attention from certain groups, would be harmful. I can imagine cases where credible people (Carols) saying they are convinced that, e.g., “it really is easily doable” would disproportionately incentivize misuse rather than defense (depending on which groups the information reaches, which credibility signals those groups accept, etc.).