The points you make are good, especially in the second paragraph. My model is that if scale is all you need, then smaller startups are indeed also worrying. I also think there could be visible events in the future that would make some of these startups very serious contenders (happy to DM about that).
Having a clear map of who works in corporate governance and who works more towards policy would be very helpful. Is there anything like a “map/post of who does what in AI governance” or anything like that?
Thanks!
I am not aware of any good map of the governance field.
What I notice is that EA, at least the blogging part of EA, tends to prefer talking directly to (people in) corporations when it comes to corporate governance. As far as I can see, FLI is the AI x-risk organisation most actively engaged in talking to governments, but there are also a number of non-EA governance orgs and think tanks talking to governments about AI x-risk. When it comes to the broader spectrum of AI risks, not just x-risk, a whole range of civil society organisations talk to governments about them, many of them with ties to, or an intellectual outlook rooted in, internet and digital civil rights activism.