Would AGI still be an x-risk under communism?
1-bit verdict
Yes.
2-bit verdict
Absolutely, yes.
Explanation
An artificial general intelligence (AGI) is a computer program that can perform at least as well as an average human being across a wide variety of tasks. The concept is closely linked to that of a general superintelligence, which can perform better than even the best human being across a wide variety of tasks.
There are reasons to believe most, perhaps almost all, general superintelligences would end up causing human extinction. AI safety is a cross-disciplinary field spanning mathematics, economics, computer science, and philosophy that tackles the problem of how to stop such superintelligences.
AI alignment is a subfield of AI safety which studies theoretical conditions under which superintelligences aligned with human values can emerge. Another branch, which might be called AI deterrence, aims instead to make the production of unaligned superintelligences less likely in the first place.
One of the primary reasons someone might want to create a superintelligence, even while understanding the risks involved, is the vast economic value such a program could generate. It makes sense, then, from a deterrence perspective to ask how this profit motive might be curtailed before catastrophe. Why not communism?
Unfortunately, this is almost certainly a bad move. Communism has, at nearly every scale at which it has been tried, been unable to escape the rampant black markets that appear when price signals are distorted. There is no reason to suspect such black markets wouldn’t have just as strong a profit motive to create stronger and stronger AGIs. Indeed, because black markets are already illegal, this may worsen the problem: well-funded teams producing AGI outside the eyes of the broader public are likely to generate less pushback, and to be better equipped to evade deterrence-oriented legislation, than a clear-market team such as OpenAI.
1. It seems like: ‘weaker econ system’ → less human-made x-risk with a high development cost. (Natural pandemics can occur, so whether they would be difficult to make isn’t clear.) That’s not to say that overall x-risk is lower; if a meteor hits and wipes out Earth’s entire population, then not being on other worlds is also an issue.
2. There is no reason to suspect such black markets wouldn’t have just as strong a profit motive to create stronger and stronger AGIs.
This seems surprising: developing to the level of ‘we’re working on AI’ takes a while to reach.
3. I’d have guessed you’d mention ‘communism’ creating AGI. (These markets keep popping up! What should we do about them? We could allocate stuff using an AI...)
Indeed, because black markets are already illegal, this may worsen the problem: well-funded teams producing AGI outside the eyes of the broader public are likely to generate less pushback, and to be better equipped to evade deterrence-oriented legislation, than a clear-market team such as OpenAI.
There’s deterrence-oriented legislation?
1a → Broadly agree. “Weaker” is an interesting word to pick here; I’m not sure whether an anarcho-primitivist society would be considered systemically weaker or stronger than a communist one. Maybe it depends on timescale. Of course, if this were the only lever we had to move x-risk up and down, we’d be in a tough position, but I don’t think anyone takes that view seriously.
1b → Logically true, but I do see strong reason to think short-term x-risk is mostly anthropogenic. That’s why we’re all here.
2 → I do agree it would probably take a while.
3a → Depends on how coarse- or fine-grained the distribution of resources is; a simple linear optimizer would probably do the same job better for most coarser distribution schemes (a rough sketch of what I mean follows below).
3b → Kind of. I’m looking into them as a curiosity.
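To flesh out 3a: here is a minimal sketch, with entirely made-up numbers, of the sort of “simple linear optimizer” I have in mind for a coarse distribution scheme. It just hands an allocation problem to an off-the-shelf linear-programming solver (scipy’s `linprog`); nothing AGI-shaped is needed at this level of granularity.

```python
from scipy.optimize import linprog

# Hypothetical planner's problem: split 100 units of one resource across
# three sectors, each with a minimum quota, maximizing a fixed utility weight.
utility_per_unit = [3.0, 2.0, 1.5]    # assumed value of one unit in each sector
total_supply = 100.0                  # total units available to distribute
sector_minimums = [10.0, 20.0, 5.0]   # floor each sector must receive

c = [-u for u in utility_per_unit]    # linprog minimizes, so negate to maximize
A_ub = [[1.0, 1.0, 1.0]]              # total allocation cannot exceed supply
b_ub = [total_supply]
bounds = [(m, None) for m in sector_minimums]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(result.x)  # -> [75. 20. 5.]: the surplus all flows to the highest-valued sector
```

The finer-grained the scheme gets, the less a toy program like this suffices, which is where the temptation to reach for something smarter comes in.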
This could mean a few different things. What did you mean by it? (Specifically, “That’s why we’re all here.”)