Great job Thane! A few months ago I wrote about ‘un-unpluggability’ which is kinda like a drier version of this.
In brief:

- Rapidity and imperceptibility are two sides of 'didn't see it coming (in time)'
- Robustness is 'the act of unplugging it is itself a challenge'
- Dependence is 'notwithstanding harms, we (some or all of us) benefit from its continued operation'
- Defence is 'the system may react (or proact) against us if we try to unplug it'
- Expansionism includes replication, propagation, and growth; it gets a special mention because it is a very common and natural means to achieve all of the above
I also think the 'who is "we"?' question is really important. One angle that isn't very fleshed out is the counterquestion: who is 'we', and how do we agree to unplug something? There's a little on this under Dependence, though much more could certainly be said.
I think more should be said about these factors. I tentatively wrote,
> there is a clear incentive for designers and developers to imbue their systems with… dependence, at least while developers are incentivised to compete over market share in deployments.
and even more tentatively,
> In light of recent developments in AI tech, I actually expect the most immediate unpluggability impacts to come from collateral, and for anti-unplug pressure to come perhaps as much from emotional dependence and misplaced concern[1] for the welfare of AI systems as from economic dependence—for this reason I believe there are large risks to allowing AI systems (dangerous or otherwise) to be perceived as pets, friends, or partners, despite the economic incentives.
[1] It is my best guess, for various reasons, that concern for the welfare of contemporary and near-future AI systems would be misplaced (certainly regarding unplugging per se), but I caveat that nobody knows.