“This argument also appears to apply to human groups such as corporations, so we need an explanation of why those are not an existential risk”
I don’t think this is necessary. It seems pretty obvious that (some) corporations could pose an existential risk if left unchecked.
Edit: And depending on your political leanings and concern over the climate, you might agree that they already are posing an existential risk.
What do you think P(doom from corporations) is? I’ve never heard much worry about current non-AI corps.
I don’t think I could give a meaningful number with any degree of confidence. I lack expertise in corporate governance, biosafety and climate forecasting. Additionally, for the condition that corporations are left “unchecked” to be satisfied, there would need to be a dramatic Western political shift, which makes speculating extremely difficult.
I will outline my intuition for why (very large, global) human corporations could pose an existential risk (conditional on the existential risk from AI being negligible and global governance being effectively absent).
1.1 In the last hundred years, we’ve seen that (some) large corporations are willing to cause harm on a massive scale if it is profitable to do so, either intentionally or through neglect. Note that these decisions are mostly “rational” if your only concern is money.
Copying some of the examples I gave in No Summer Harvest:
Exxon chose to suppress their own research on the dangers of climate change in the late 1970s and early 1980s.
Numerous companies ignored signs that leaded gasoline was dangerous, and the introduction of the product resulted in half the US adult population being exposed to lead during childhood. Here is a paper claiming that American adults born between 1966 and 1970 lost an average of 5.9 IQ points (McFarland et al., 2022, bottom of page 3).
IBM supported its German subsidiary Dehomag throughout WWII. When the Nazis carried out the 1939 census, used to identify people with Jewish ancestry, they utilized the Dehomag D11, with “IBM” etched on the front. Later, concentration camps would use Dehomag machines to manage data on prisoners, resources and labor within the camps. The numbers tattooed onto prisoners’ bodies were used to track them via these machines.
1.2 Some corporations have also demonstrated they’re willing to cut corners and take risks at the expense of human lives.
NASA neglected the warnings of engineers and almost a decade of test data demonstrating that there was a catastrophic flaw with SRB O-rings, resulting in the Challenger disaster. (You may be interested in reading Richard Feynman’s observations given in the Presidential Report.)
Meta’s engagement algorithm is alleged to have driven the spread of anti-Rohingya content in Myanmar and contributed to genocide.
3,787 people died and more than half a million were injured due to a gas leak at a pesticide plant in Bhopal, India. The corporation running the plant, Union Carbide India Limited, was majority-owned by the US-based Union Carbide Corporation (UCC). Ultimately, UCC would pay less than a dollar per person affected.
2. Without corporate governance, immoral decision making and risk-taking behaviour could be expected to increase. If the net benefit of taking an action improves because there are fewer repercussions when things go wrong, such actions should reasonably be expected to increase in frequency.
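The point above is just an expected-value argument, which can be made explicit with a toy calculation (the profit, probability and penalty figures here are entirely made up for illustration, not empirical estimates):

```python
def expected_net_benefit(profit, p_harm, penalty):
    """Expected payoff of an action that yields `profit`, but with
    probability `p_harm` causes harm that triggers `penalty`."""
    return profit - p_harm * penalty

# With effective governance, the expected penalty outweighs the profit,
# so a purely money-motivated actor declines the risky action.
checked = expected_net_benefit(profit=10.0, p_harm=0.2, penalty=100.0)    # -10.0

# Remove the repercussions and the identical action becomes "rational".
unchecked = expected_net_benefit(profit=10.0, p_harm=0.2, penalty=5.0)    # 9.0
```

Nothing about the action itself changed between the two cases; only the penalty did, which is the sense in which weaker governance should be expected to increase the frequency of harmful risk-taking.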
3. In recent decades there has been a trend (at least in the US) towards greater stock market concentration. For large corporations to pose an existential risk, this trend would need to continue until individual decisions made by a small group of corporations can affect the entire world.
I am not able to describe the exact mechanism by which unchecked corporations would pose an existential risk, just as the exact mechanism for an AI takeover is still speculation.
You would have a small group of organisations responsible for deciding the production activities of large swaths of the globe. Possible mechanisms include:
Irreparable environmental damage.
A widespread public health crisis due to non-obvious negative externalities of production.
Premature widespread deployment of biotechnology with unintended harms.
I think if you’re already sold on the idea that “corporations are risking global extinction through the development of AI”, it isn’t a giant leap to recognise that corporations could potentially threaten the world via other mechanisms.
Concretely, what does it mean to keep a corporation “in check” and do you think those mechanisms will not be available for AIs?
what does it mean to keep a corporation “in check”
I’m referring to effective corporate governance. Monitoring, anticipating and influencing decisions made by the corporation via a system of incentives and penalties, with the goal of ensuring actions taken by the corporation are not harmful to broader society.
do you think those mechanisms will not be available for AIs
Hopefully they will be, but there are reasons to think that governing a corporation that is (partially or wholly) controlled by AGIs, or that directly controls one or more AGIs, may be very difficult. I will suggest one reason this is the case, but it isn’t the only one.
Recently we’ve seen that national governments struggle to effectively tax multinational corporations. This is partially because the amount of money at stake is so great that multinational corporations are incentivized to invest heavily in teams of accountants to reduce their tax burden, or to pay money directly to politicians in the form of donations to shape the legal environment. It becomes harder to govern an entity as that entity invests more resources into finding flaws in your governance strategy.
Once you have the capability to harness general intelligence, you can invest a vast amount of intellectual “resources” into finding loopholes in governance strategies. So while many of the same mechanisms will be available for AIs, there’s reason to think they might not be as effective.
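One way to see why scale matters here is a crude independence model (the per-attempt probability and attempt counts are hypothetical, chosen only to illustrate the shape of the curve): if each independent attempt to find a loophole succeeds with small probability p, the chance that at least one attempt succeeds grows rapidly with the number of attempts.

```python
def p_find_loophole(p_per_attempt, attempts):
    """Probability that at least one of `attempts` independent tries
    finds a loophole, each succeeding with probability `p_per_attempt`."""
    return 1 - (1 - p_per_attempt) ** attempts

# A human legal/accounting team making ~50 serious attempts.
human_team = p_find_loophole(0.01, 50)      # roughly 0.39

# An actor able to spin up vastly more "intellectual resources".
agi_scale = p_find_loophole(0.01, 5000)     # effectively 1.0
```

Real loophole-hunting attempts are of course correlated rather than independent, so this overstates the effect, but the qualitative point stands: the same governance regime that reliably constrains a human-scale adversary can become near-certain to leak against one that can cheaply multiply its search effort.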