> In the Manhattan Project, there was no disagreement between the physicists, the politicians and generals, and the laborers who actually built the bomb about what they wanted the bomb to do.
In that they wanted the bomb to explode? I think the analogous level of control for AI would be unsatisfactory.
> [T]hey did so voluntarily and knowing they wouldn’t be the ones who got any say in whether and how it would be used.
I’m not sure they thought this; I think many expected that by playing along they would have influence later. Tech workers today often seem to care a lot about how products made by their companies are deployed.
> In that they wanted the bomb to explode? I think the analogous level of control for AI would be unsatisfactory.
The premise of this hypothetical is that all the technical problems are solved—if an AI lab wants to build an AI to pursue the collective CEV (coherent extrapolated volition) of humanity or whatever, they can just get it to do that. Maybe they’ll settle on something other than CEV that is a bit better or worse or just different, but my point was that I don’t expect them to choose something ridiculous like “our CEO becomes god-emperor forever” or whatever.
> I’m not sure they thought this; I think many expected that by playing along they would have influence later. Tech workers today often seem to care a lot about how products made by their companies are deployed.
Yeah, I was probably glossing over the actual history a bit too much; most of my knowledge here comes from seeing *Oppenheimer* recently. The real disanalogy is that in this scenario no AI researcher would be arguing against building and deploying ASI, whereas with the atomic bomb, many people wanted to build it so as to have it on hand, but never actually use it, or use it only as an absolute last resort. I don’t think many AI researchers in our actual reality hold that kind of view on ASI, and probably few to none would hold it in the counterfactual where the technical problems are solved.