But why would the people who are currently in charge of AI labs want to do that, when they could stay in charge and become god-kings instead?
Well, yeah. But there are reasons why they could. Suppose you’re them...
Maybe you see a “FOOM” coming soon. You’re not God-King yet, so you can’t stop it. If you try to slow it down, others, unaligned with you, will just FOOM first. The present state of research gives you two choices for your FOOM: (a) try for friendly AI, or (b) get paperclipped. You assign very low utility to being paperclipped, so you go for friendly AI. Ceteris paribus, your having this choice becomes more likely if research in general is going toward friendliness, and less likely if research in general is going toward intent alignment.
Maybe you’re afraid of what being God-King would turn you into, or you fear making some embarrassingly stupid decision that switches you to the “paperclip” track, or you think having to be God-King would be a drag, or you’re morally opposed, or all of the above. Most people will go wrong eventually if given unlimited power, but that doesn’t mean they can’t stay non-wrong long enough to voluntarily give up that power for whatever reason. I personally would see myself on this track. Unfortunately, I suspect that whatever it takes to end up in charge of a “lab” selects against it, though. And I think it’s also less likely if the prospective “God-King” is actually a group rather than an individual.
Maybe you’re forced, or not “in charge” any more, because there’s a torches-and-pitchforks-wielding mob or an enlightened democratic government or whatever. It could happen.