My own impression is that this would be an improvement over the status quo. Main reasons:
A lot of my P(doom) comes from race dynamics.
Right now, if a leading lab ends up realizing that misalignment risks are super concerning, they can’t do much to end the race. Their main strategy would be to go to the USG.
If the USG runs the Manhattan Project (or there’s some sort of soft nationalization in which the government ends up having a much stronger role), it’s much easier for the USG to see that misalignment risks are concerning & to do something about it.
A national project would be more able to slow down and pursue various kinds of international agreements (the national project has more access to POTUS, DoD, NSC, Congress, etc.)
I expect the USG to be stricter on various security standards. It seems more likely to me that the USG would, e.g., demand strong security requirements to prevent model weights or algorithmic insights from leaking to China. One of my major concerns is that people will want to pause at GPT-X but won't feel able to because China stole access to GPT-(X-1) (or maybe even a slightly weaker version of GPT-X).
In general, I feel like USG natsec folks are less “move fast and break things” than folks in SF. While I do think some of the AGI companies have tried to be less “move fast and break things” than the average company, I think corporate race dynamics & the general cultural forces have been the dominant factors and undermined a lot of attempts at meaningful corporate governance.
(Caveat that even though I see this as a likely improvement over status quo, this doesn’t mean I think this is the best thing to be advocating for.)
(Second caveat that I haven’t thought about this particular question very much and I could definitely be wrong & see a lot of reasonable counterarguments.)
As you know, I have huge respect for USG natsec folks. But there are (at least!) two flavors of them: 1) the cautious, measure-twice-cut-once sort who have carefully managed deterrence for decades, and 2) the "fuck you, I'm doing Iran-Contra" folks. Which do you expect will end up in control of such a program? It's not immediately clear to me which would.
@davekasten I know you posed this question to us, but I’ll throw it back on you :) what’s your best-guess answer?
Or perhaps put differently: What do you think are the factors that typically influence whether the cautious folks or the non-cautious folks end up in charge? Are there any historical or recent examples of these camps fighting for power over an important operation?
Why is the built-in assumption for almost every single post on this site that alignment is impossible and we need a 100 year international ban to survive? This does not seem particularly intellectually honest to me. It is very possible no international agreement is needed. Alignment may turn out to be quite tractable.
A mere 5% chance that the plane will crash during your flight is consistent with considering this extremely concerning and doing everything in your power to avoid getting on it. "Alignment is impossible" is not necessary for great concern, and it isn't implied by great concern either.
I don’t think this line of argument is a good one. If there’s a 5% chance of x-risk and, say, a 50% chance that AGI makes the world just generally be very chaotic and high-stakes over the next few decades, then it seems very plausible that you should mostly be optimizing for making the 50% go well rather than the 5%.
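To make that trade-off concrete, here is a toy expected-value comparison. All of the numbers and the expected_value_of_effort helper below are illustrative assumptions, not anything stated in the thread; the sketch just shows that which branch deserves marginal effort depends on tractability and stakes as well as raw probability.

```python
# Illustrative only: toy expected-value comparison with made-up numbers.
# The point is that the ranking depends on tractability and stakes,
# not just on the 5% vs 50% probabilities themselves.

def expected_value_of_effort(p_scenario, tractability, stakes):
    """Rough expected payoff of directing marginal effort at a scenario:
    P(scenario) * how much effort helps there * how much is at stake."""
    return p_scenario * tractability * stakes

# Hypothetical numbers for the two branches discussed above.
ev_xrisk = expected_value_of_effort(p_scenario=0.05, tractability=0.2, stakes=100)
ev_chaos = expected_value_of_effort(p_scenario=0.50, tractability=0.5, stakes=10)

print(f"x-risk branch: {ev_xrisk:.2f}, chaotic-world branch: {ev_chaos:.2f}")
# With these assumptions the 50% branch wins (2.5 vs 1.0); change the
# stakes or tractability estimates and the ranking can flip.
```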
Still consistent with great concern. I'm pointing out that O O's point isn't locally valid: observing concern shouldn't be taken as observing a belief that alignment is impossible.
Yudkowsky has a pinned tweet that states the problem quite well: it’s not so much that alignment is necessarily infinitely difficult, but that it certainly doesn’t seem anywhere as easy as advancing capabilities, and that’s a problem when what matters is whether the first powerful AI is aligned:
Safely aligning a powerful AI will be said to be ‘difficult’ if that work takes two years longer or 50% more serial time, whichever is less, compared to the work of building a powerful AI without trying to safely align it.
Another frame: If alignment turns out to be easy, then the default trajectory seems fine (at least from an alignment POV; you might still be worried about, e.g., concentration of power).
If alignment turns out to be hard, then the policy decisions we make to affect the default trajectory matter a lot more.
This means that even if misalignment risks are relatively low, a lot of value still comes from thinking about worlds where alignment is hard (or perhaps "somewhat hard but not intractably hard").
It’s not every post, but there are still a lot of people who think that alignment is very hard.
The more common assumption is that alignment isn't trivial, because an intellectually honest assessment of the range of opinions suggests that we collectively do not yet know how hard alignment will be.