National government policy won’t have strong[5] effects (70%)
This can change rapidly, e.g., if systems suddenly get much more agentic and become more reliable decision-makers or if we see incidents with power-seeking AI systems. Unless you believe in takeoff speeds of weeks, governments will be important actors in the time just before AGI, and it will be essential to have people working in relevant positions to advise them.
To be honest, I hesitated over decreasing the likelihood on that one based on your consideration, but I still think that a 30% chance of strong effects is quite a lot, because, as you mentioned, it requires the intersection of many conditions.
In particular, you don’t mention which intervention you expect from them. If you take the intervention I used as a reference class (“Constrain labs to airgap and box their SOTA models while they train them”), do you think there are interventions as “extreme” as this, or more so, that are also likely?
What might be misleading in my statement is that it could be understood as “let’s drop national government policy”, whereas what I mean is closer to “I think that currently too many people are focused on national government policy and not enough on corporate governance, which puts us in a fairly bad position for pre-2030 timelines”.
I think you are ignoring the connection between corporate governance and national/supra-national government policy. Typically, corporations do not implement costly self-governance and risk management mechanisms just because some risk management activists have asked them nicely. They implement them if and when some powerful state requires it, as a condition for market access or for avoiding fines and jail time.
Asking nicely may work for well-funded research labs that do not need to show any profitability, and even in that special case one can doubt how long their do-not-need-to-be-profitable status will last. But asking nicely will definitely not work for your average early-stage AI startup. The current startup ecosystem encourages the creation of companies that behave irresponsibly by cutting corners. I am less confident than you are that DeepMind and OpenAI have such a major lead over these and future startups that we don’t even need to worry about them.
It is my assessment that, definitely in EA and x-risk circles, too few people are focussed on national government policy as a means to improve corporate governance among the less responsible corporations. In the case of EA, one might hope that recent events will trigger some kind of update.
The points you make are good, especially in the second paragraph. My model is that if scale is all you need, then smaller startups are indeed also worrying. I also think there could be visible events in the future that would make some of these startups very serious contenders (happy to DM about that).
Having a clear map of who works on corporate governance and who works more towards policy would be very helpful. Is there anything like a “map/post of who does what in AI governance”?
I am not aware of any good map of the governance field.
What I notice is that EA, at least the blogging part of EA, tends to have a preference for talking directly to (people in) corporations when it comes to the topic of corporate governance. As far as I can see, FLI is the AI x-risk organisation most actively involved in talking to governments. But there are also a bunch of non-EA related governance orgs and think tanks talking about AI x-risk to governments. When it comes to a broader spectrum of AI risks, not just x-risk, there are a whole bunch of civil society organisations talking to governments about it, many of them with ties to, or an intellectual outlook based on, Internet and Digital civil rights activism.