The gist of these objections to the case for AI risk is that AI systems as we see them today are merely computer programs, and in our everyday experience computers are not dangerous.
Yeah, I think a lot of people have a hard time moving past this. But even today’s large software systems are (deliberately designed to be!) difficult to unplug.
I wrote un-unpluggability, which lists six properties that make systems un-unpluggable.
In brief:
- Rapidity and imperceptibility are two sides of 'didn't see it coming (in time)' [includes deception]
- Robustness is 'the act itself of unplugging it is a challenge' [especially redundancy]
- Dependence is 'notwithstanding harms, we (some or all of us) benefit from its continued operation'
- Defence is 'the system may react (or proact) against us if we try to unplug it'
- Expansionism includes replication, propagation, and growth, and gets a special mention, as it is a very common and natural means to achieve all of the above
I also hinted there that I think Dependence (especially 'emotional' dependence, as with 'pets, friends, or partners') is a neglected concern, and I've been meaning to write more about that.