Maybe more specifically: which institutions do you mean?
Governments, and specifically antitrust law.
I think there are big differences between the current situation and previous technologies: a) the stakes are higher, and b) even industry seems to be somewhat pro-regulation.
Assuming I’m roughly correct that there’s a lag, there’d be a several-year window during which the institutions that normally regulate companies would be too confused and disoriented to do so.
I’m trying to cash this out into a more concrete failure story. Are you imagining that a company develops AGI, starts becoming more and more powerful, and after 10 years of being confused and disoriented, the government says “you’re too big, you need to be broken up,” and the company says “no” and takes over the government?
Sort of, but worse – I’m imagining something more like “the government already has a lot of regulatory capture going on, so the system-as-is is already fairly broken. Even given slow-ish takeoff assumptions, it seems like within 2-3 years one or several companies will have gained unprecedented amounts of power. And by the time the government has even figured out an action to take, it will either have already been taken over, regulatory-captured much more deeply than before, or rendered irrelevant.”
Okay, I see, that makes sense and seems plausible, though I’d bet against it happening. But you’ve convinced me that I should qualify that sentence more.
I suppose another way this could happen is that the company could set up a branch in a much poorer and more easily corrupted nation. Since it’s not constrained by people, it could build up a very large amount of power in a place that’s beyond the reach of a superpower’s antitrust institutions.
You’d have to get the employees to move there, which currently seems like a dealbreaker, given how hot a commodity AI researchers are.

I suppose that’s true. Although, assuming that the company has developed intent-aligned AGI, I don’t see why the entire branch couldn’t be automated, with the exception of a couple of human figureheads. Even if the AGI isn’t good enough to do AI research, or the company doesn’t trust it to do that, there are other methods for the company to grow. For instance, it could set up fully automated mining operations and factories in the corrupted country.
Oh, right, I forgot we were considering the setting where we already have AGI systems that can be intent-aligned. This seems like a plausible story, though it only implies that there is centralization within the corrupted nation.