I agree with this post, and IMO technological unemployment is the main issue in my benefit-cost accounting of AI.
One point I do disagree with you on is this:
This loss is what causes existential risk.
I disagree with that, primarily for two reasons:
1. I still expect humans to be able to control AI, and more importantly, I think it’s pretty likely that something like corrigibility/personal intent alignment happens by default.
2. I think the term “existential risk” gets overused: the original definition was a risk that eliminates humanity’s potential, and I don’t think technological unemployment actually does this, though I do think something like a global catastrophe could happen if tech unemployment is handled badly.
That’s because I expect at least a few humans to be able to create Dyson swarms using their superintelligent AIs, which would avoid existential risk in that scenario; after all, we wouldn’t call the bottleneck event of roughly 70,000 years ago, which reduced the human population to about 1,000 people, an existential risk.
That said, I agree that something like decoupling professional humans from value production, or the rich from any constraints on how they are able to shape the world, can definitely be a big problem.
There’s an implied assumption that when you lose parts of society through a bottleneck, you can always recreate them with high fidelity. It seems plausible that some bottleneck events could “limit humanity’s potential”, since future choices may rely on those lost values, and not all choices are exchangeable in time. (This has connections both to the long reflection and to the rich shaping the world in their own image.)
As an aside, the bottleneck paper you’re referring to is pretty contentious. I personally find the claimed bottleneck unlikely to be real, given that no other demographic model detects a >99% reduction in the real data, even though all of them can detect such a bottleneck in simulated data. If such an event did occur in the modern day, the effects would be profound and serious.
There’s an implied assumption that when you lose parts of society through a bottleneck, you can always recreate them with high fidelity.
I do think this assumption will become more true over the 21st century, and indeed, AI and robotics progress making the assumption more true is the main reason that, if a catastrophe happens, I expect it to happen this way.
Re the bottleneck paper, I’m not going to comment any further, since I just wanted to provide an example.