I agree with this very strongly. I regret unilaterally promoting the CFAR Handbook in various Facebook groups; I thought it was critical to minimize the number of AI safety and adjacent people using Facebook, and that spreading the CFAR Handbook was the best way to do that. I also mistakenly believed that CFAR was bad at marketing their material, rather than deliberately choosing not to in order to avoid overcomplicating things. I had no way of knowing about the long list of consequences CFAR could face from their research spreading in the wrong places, and CFAR had no way of warning me, because they had no idea who I was or what I would do in response to their request. Hopefully this won’t make it harder for CFAR to post helpful content to LessWrong in the future.
There are so many outside-the-box thinkers that the chaos factor is high; it’s like herding cats even when 99% of agents want to be cooperative. There need to be defense mechanisms that take confusion into account, so that well-intentioned unilateralists don’t get tangled up in systems meant for deliberate, consistently strategic harm-maximizers (who very clearly and unambiguously exist). The only approach I can think of is finding ways to discourage every cooperative person from acting unilaterally in the first place, but I agree with So8res that I can’t see good ways to do that.