A potentially important thing that I haven’t seen discussed:
People who have a lot of political power or own a lot of capital are unlikely to be adversely affected if (say) 90% of human labor becomes obsolete and is replaced by AI.
In fact, so long as property rights are enforced, and humans retain a monopoly on decisionmaking/political power, such people are not-unlikely to benefit from the economic boost that such automation would bring.
Decisions about AI policy are mostly determined by people with a lot of capital or political power. (E.g. Andreessen Horowitz, JD Vance, Trump.)
(This looks like a “the decisionmaker is not the beneficiary”-type of situation.)
Why does that matter?
- It has implications for how to model decisionmakers, how to interpret their words, and how to interact with them.[1]
- If we are in a gradual-takeoff world[2], then we should perhaps not be too surprised to see the wealthy and powerful push for AI-related policies that make them more wealthy and powerful, while a majority of humans become disempowered and starve to death (or live in destitution, or get put down with viruses or robotic armies, or whatever). (OTOH, I’m not sure if that possibility can be planned/prepared for, so maybe that’s irrelevant, actually?)
[1] For example: we maybe should not expect decisionmakers to take risks from AI seriously until they realize those risks include a high probability of “I, personally, will die”. As another example: when people like JD Vance output rhetoric like “[AI] is not going to replace human beings. It will never replace human beings”, we should perhaps not just infer that “Vance does not believe in AGI”, but instead also assign some probability to hypotheses like “Vance thinks AGI will in fact replace lots of human beings, just not him personally; and he maybe does not believe in ASI, or imagines he will be able to control ASI”.
[2] Here I’ll define “gradual takeoff” very loosely as “a world in which there is a >1 year window during which it is possible to replace >90% of human labor, before the first ASI comes into existence”.
Yes. Also unclear whether the 90% could coordinate to take any effective action, or whether any effective action would be available to them. (Might be hard to coordinate when AIs control/influence the information landscape; might be hard to rise up against e.g. robotic law enforcement or bioweapons.)
Good point! I guess one way to frame that would be as
And yeah, that seems very difficult to predict or reliably control. OTOH, if someone were to gain control of the AIs (possibly even copies of a single model?) that are running all the systems, that might make centralized control easier? </wild, probably-useless speculation>