I think you bring up some important points here. I agree with many of your concerns, such as strong controllable AI leading to a dangerous concentration of power in the hands of the most power-hungry first movers.
I think many of the alternatives are worse though, and I don’t think we can choose what path to try to steer towards until we take a clear-eyed look at the pros and cons of each direction.
What would decentralized control of strong AI look like?
Would some terrorists use it to cause harm?
Would some people order one to become an independent entity out of curiosity or as a joke? What would happen with such an entity connected to the internet and actively seeking resources and self-improvement?
Would power then fall into the hands of whichever early mover poured the most resources into recursive self-improvement? If so, we’ve got a centralized power problem again, but now the filter is ‘willingness to self-improve as fast as possible’, which seems like it would select against maintaining control over the resulting stronger AI.
A lot of tricky questions here.
I made a related post here, and would enjoy hearing your thoughts on it: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy