Yeah, AI alignment is hard. I get that. But since I’m new to the field, I’m trying to figure out what options we have in the first place, and so far I’ve come up with only three:
A: Ensure that no ASI is ever built. Can anything short of a GPU nuke accomplish this? Regulation on AI research can help us gain some valuable time, but not everyone adheres to regulation, so eventually somebody will build an ASI anyway.
B: Ensure that there is no AI apocalypse, even if a misaligned ASI is built. Is that even possible?
C: What I describe in this article—actively build an aligned ASI to act as a smart nuke that only eradicates misaligned ASI. For that purpose, the aligned ASI would need to constantly run on all online devices, or at least control 51% of the world’s total computing power. While that doesn’t necessarily mean total control, we’d already give away a lot of autonomy by just doing that.
Am I overlooking something?
To be fair, I’m new to the field too. I’m not even “in the field”: not a researcher, just someone interested in the area, an active user of AI models, doing some business-level research in ML.
The problem that I see is that none of these could realistically work soon enough:
A—no one can ensure that. This is not a technology where further progress requires special radioactive materials and machinery; here you only need computing power, thinking, and time. Any party at the table can do it. It is easier for big companies and governments, but that is not a prerequisite. Billions in cash and a supercomputer help a lot, but they are not prerequisites either.
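To make the “somebody will build it eventually” intuition concrete, here is a toy back-of-the-envelope model (every number in it is an invented illustration, not an estimate): if N independent actors each have some small per-year probability of reaching ASI, the chance that nobody ever does shrinks geometrically with actors and years.

```python
# Toy model: probability that at least one of N independent actors
# builds ASI within T years, assuming each has an independent
# per-year success probability p. All parameters are illustrative
# assumptions, not real-world estimates.

def p_at_least_one(n_actors: int, p_per_year: float, years: int) -> float:
    # Probability that *nobody* succeeds in any actor-year,
    # then take the complement.
    p_nobody = (1 - p_per_year) ** (n_actors * years)
    return 1 - p_nobody

# Even a small per-actor chance compounds across many actors and years:
print(p_at_least_one(n_actors=20, p_per_year=0.01, years=10))  # ≈ 0.87
```

This is why regulation in option A buys time rather than safety: it lowers p or N for a while, which stretches the timeline but doesn’t change the limiting behavior.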
B—I don’t see how it could be done.
C—so more like total observability of all systems, with “control” meaning “overseeing” rather than “taking control”?
Maybe it could work, but it still means we need to solve the misalignment problem before starting, so we know the system is aligned with all human values. We also need to be sure it is stable: that it won’t one day decide to move humanity into a virtual reality, as in The Matrix, to keep us safe, or manufacture a threat just to have something to do or something to test.
It would also likely need to enhance itself so it doesn’t get outpaced by other systems, while remaining stable across iterations of self-modification.
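The stability worry can also be made concrete with a toy calculation (again, the numbers are invented for illustration): if each self-modification step preserves the system’s values with probability q, then the chance the values survive n iterations is q^n, so even a tiny per-step drift compounds quickly.

```python
# Toy model of value drift under iterated self-modification.
# q_per_step = assumed probability that one self-modification
# preserves alignment; numbers are illustrative only.

def p_values_intact(q_per_step: float, n_steps: int) -> float:
    # Independent preservation at each step compounds multiplicatively.
    return q_per_step ** n_steps

# 99.9% reliability per step still decays badly over many iterations:
print(p_values_intact(0.999, 1000))  # ≈ 0.37
```

Under this (very simplified) independence assumption, per-step reliability would have to be extraordinarily close to 1 for long chains of self-improvement to stay trustworthy.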
I don’t think governments and companies will allow that, though. They will fear for security, for the safety of their information, for being spied on, and so on. This AI would need to force that control, hack into systems, and possibly face resistance from actors well-equipped to build their own AIs. Alternatively, it might only be accepted after an AI-driven catastrophe that stops short of apocalypse (a situation like in Dune).
So I’m not very optimistic about this strategy, but I don’t know of any more sensible strategy either.