I wonder how you react to naysayers who say things like:
How about if you secure a ban on gain-of-function research first, and then move on to much harder problems like AGI? A victory on this relatively easy case would yield a lot of valuable experience, or, alternatively, allow foolish optimists to have their dangerous optimism broken over shorter time horizons.
Why not both? Different problems, both worth solving, and not especially connected, in my model.
“Start with an easier problem with similar dynamics to build up expertise” sounds to me like a fully general argument for working on anything other than the actual thing one cares about; it’s a procrastinator’s mantra, in most cases. There are plenty of exceptions, but I don’t think this is one of them.
If I thought there could only ever be one concerted attempt at an AGI ban, and that such an attempt would be much less likely to fail if its promoters had previously secured some other kind of ban, I could sort of see the logic. But that doesn’t match my model. I’m not even convinced that banning bio gain-of-function research is easier, having not tried it, and noticing that I’m confused about the entrenched interests that must exist for it to have continued as long as it has.
I could also see this approach backfiring: the AGI ban promoters would come to be seen as the “guys who like to ban stuff,” and as people more concerned about gain-of-function than about AI catastrophe.