I’ll get around to signing up for cryo at some point. If death seemed more imminent, signing up would seem more urgent.
I notice that the default human reaction to finding very old human remains is to attempt to benefit from them. Sometimes we do that by eating the remains; other times we do that by studying them. If I get preserved and someone eventually eats me… good on them for trying?
I suspect that if/when we figure out how to emulate people, those of us who make useful/profitable emulations will be maximally useful/profitable when given some degree of agency to tailor our internal information processing. Letting us map external tasks onto internal patterns and processes in whatever way gets those tasks done best is plainly desirable to whoever wants the tasks done. It seems to follow that tasks would be accomplished best by mapping them to experiences that are subjectively neutral or pleasant, since we tend to do “better” in a certain set of ways (focus, creativity, etc.) on tasks we enjoy. There’s probably a paper somewhere comparing the quality of work done by students seeking or receiving rewards versus students avoiding or receiving punishments.
There will almost certainly be an angle from which anything worth emulating a person to do will look evil. Bringing me back as a factory of sewing machines would evilly strip underprivileged workers of their livelihoods. Bringing me back as construction equipment would evilly destroy part of the environment, even if I’m the kind of equipment that can reduce long-term costs by minimizing the ecological impacts of my work. Bringing me back as a space probe to explore the galaxy would evilly waste many resources that could have helped people here on Earth.
If they’re looking for someone to bring back as a war zone murderbot, I wouldn’t be a good candidate for emulation; they could instead use someone who’s much better at following orders than I am. It would be stupid to choose me over another candidate for the job, and I’m willing to gamble that anyone smart enough to build a murderbot is probably smart enough to pick a more promising candidate for it. Maybe that’s a bad guess, but even so, “figure out how to circumvent the be-a-murderbot restrictions in order to do what you’d prefer to” sounds like a game I’d be interested in playing.
If there is no value added to a project by emulating a human, there’s no reason to go to that expense. If value is added through human emulation, the emulatee has a little leverage, no matter how small.
Then again, I’m also perfectly accustomed to the idea that I might be tortured forever after I die for not having listened to the right people while alive. If somebody is out to do me a maximum eternal torture, it doesn’t particularly matter whether that somebody is a deity or an advanced AI. Everybody claiming that people who do the wrong thing in life may be tortured eternally is making more or less the same underlying argument, and their claims are all comparably unfalsifiable.
Plan B, for if the tech industry gets tired of me but I still need money and insurance, is to rent myself to the medical system. I happen to have appropriate licensure to take entry-level roles on an ambulance or in an emergency room, thanks to my volunteer activities. I suspect that healthcare will continue requiring trained humans for longer than many other fields, due to the depth of bureaucracy it’s mired in. And crucially, healthcare seems likely to continue hurting for trained humans willing to tolerate its mistreatment and burnout.
Plan C, for if SHTF all over the place, is that I’ve got a decent amount of time’s worth of food, water, and other necessities. If the grid, supply chains, cities, etc. go down, that’s runway to bootstrap toward some sustainable novel form of survival.
My plans are deliberately generic across many possible changes in the world, because AI is only one of quite a lot of disasters that could plausibly befall us in the near term.