This won’t happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn’t seem to be a strong reason to expect that to stop
...why do you expect those institutions to hold up in a world dominated by AGI or other powerful AI systems? (Maybe specifically, which institutions do you mean? The main options seem like ‘governments’ and ‘other companies’.)
The US government (I’m not sure about other governments) seems to lag something like 10 years behind technological developments. (This is a rough guess based on how long I recall it taking the government to take significant action on regulating new technologies. Epistemic status: based on news articles that made it to my eyeballs… usually via Facebook.)
And that’s before they start trying to take significant actions, which usually still seem pretty confused (e.g. GDPR doesn’t really incentivize the things it needed to incentivize).
Assuming I’m roughly correct that there’s a lag, there’d be a several-year window where the institutions that normally regulate companies will be too confused and disoriented to do so. You might hope that those governments are also being empowered by advanced AI stuff, but I’m approximately as worried about that as I am about companies.
(I realize I didn’t get that specific about the details, which are complicated, but I was somewhat surprised by your entire final paragraph and I’m not sure where the disagreement lies)
Maybe specifically, which institutions do you mean?
Governments, and specifically antitrust law.
I think there are big differences between the current situation and previous technologies: a) it is higher-stakes and b) even industry seems to be somewhat pro-regulation.
Assuming I’m roughly correct that there’s a lag, there’d be a several-year window where the institutions that normally regulate companies will be too confused and disoriented to do so.
I’m trying to cash this out into a more concrete failure story. Are you imagining that a company develops AGI, starts becoming more and more powerful, and after 10 years of being confused and disoriented the government says “you’re too big, you need to be broken up” and the company says “no” and takes over the government?
Are you imagining that a company develops AGI, starts becoming more and more powerful, and after 10 years of being confused and disoriented the government says “you’re too big, you need to be broken up” and the company says “no” and takes over the government?
Sort of, but worse: I’m imagining something more like “the government already has a lot of regulatory capture going on, so the system-as-is is already fairly broken. Even given slow-ish takeoff assumptions, it seems like within 2-3 years there will be one or several companies that have gained unprecedented amounts of power. And by the time the government has even figured out an action to take, it will either have already been taken over, regulatory-captured in ways much deeper than before, or rendered irrelevant.”
Okay, I see, that makes sense and seems plausible, though I’d bet against it happening. But you’ve convinced me that I should qualify that sentence more.
I suppose another way this could happen is that the company could set up a branch in a much poorer and more easily corrupted nation. Since it’s not constrained by people, it could build up a very large amount of power in a place that’s beyond the reach of a superpower’s antitrust institutions.
You’d have to get the employees to move there, which seems like a dealbreaker currently, given how hot a commodity AI researchers are.
I suppose that’s true. Although, assuming that the company has developed intent-aligned AGI, I don’t see why the entire branch couldn’t be automated, with the exception of a couple of human figureheads. Even if the AGI isn’t good enough to do AI research, or the company doesn’t trust it to do that, there are other methods for the company to grow. For instance, it could set up fully automated mining operations and factories in the corrupted country.
Oh, right, I forgot we were considering the setting where we already have AGI systems that can be intent-aligned. This seems like a plausible story, though it only implies that there is centralization within the corrupted nation.