Yes, but I think it’s important that when someone says, “Well I think one-shotting X is impossible at any level of intelligence,” you can reply, “Maybe, but that doesn’t really help solve the not-dying problem, which is the part that I care about.”
I think the harder the theoretical doom plan is to execute, the easier it is to control, at least until alignment research catches up. That matters because obsessing over unlikely scenarios that make the problem seem harder than it is can exclude potential solutions.
No one doubts that an ASI would have an easier time executing its plans than we can imagine, but the popular claim is that it could do so in one shot.