Hmm, I wonder if you could leave instructions, kind of like a living will except in reverse, so to speak… e.g., “only unfreeze me if you know I’ll be able to make good friends and will be happy”. Perhaps with a bit more detail explaining what “good friends” and “being happy” mean to you :-)
If I were in charge of defrosting people, I’d certainly respect their wishes to the best of my ability.
And, if your life does turn out to be miserable, you can, um, always commit suicide then… you don’t have to commit passive suicide now just in case… :-)
But it certainly is a huge leap in the dark, isn’t it? With most decisions, we have some idea of the possible outcomes and a sense of likelihoods...
I can think of three possibilities...
One, if I’m in charge of unfreezing people and I’m intelligent enough, it becomes a simple statistical analysis. I look at the totality of historical information available about the past lives of frozen people: forum posts, blog postings, emails, YouTube videos… and find out what correlates with the happiness or unhappiness of people who have already been unfrozen. Then the decision depends on what confidence level you’re looking for: do you want to be unfrozen if there’s an 80% chance that you’ll be happy? 90%? 95%? 99%? 99.9%? (I’ve sketched this idea in a bit of code below, after the third possibility.)
Two, I might not be intelligent enough, there might not be enough data available, or we might not find any useful statistical correlates. Then, if your instructions say not to unfreeze you when we don’t know, we don’t unfreeze you.
Three, I might be incompetent or mistaken, so I unfreeze you even though there isn’t any good evidence that you’re going to be happy with your new situation.
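To make the first possibility a bit more concrete, here’s a minimal sketch in Python of what that threshold rule could look like. The feature names, the toy data, and the use of scikit-learn’s logistic regression are all my own assumptions for illustration; whatever actually did this in the future would presumably be far more sophisticated.

```python
# A minimal sketch of the "unfreeze only above a confidence threshold" rule.
# Everything here is hypothetical: the features, the toy data, and the idea
# that pre-freeze records predict post-revival happiness are assumptions
# made purely for illustration.
from dataclasses import dataclass

import numpy as np
from sklearn.linear_model import LogisticRegression


@dataclass
class PreFreezeRecord:
    # Toy features one might extract from forum posts, blogs, emails, videos.
    social_ties: float    # e.g. how much ongoing friendship shows up in their writing
    adaptability: float   # e.g. how they handled past upheavals
    optimism: float       # e.g. sentiment of their posts about the future


def to_features(records):
    return np.array([[r.social_ties, r.adaptability, r.optimism] for r in records])


# Training data: records of people already unfrozen, labelled 1 if they later
# reported being happy, 0 otherwise (entirely made-up numbers).
past_records = [
    PreFreezeRecord(0.9, 0.8, 0.7), PreFreezeRecord(0.2, 0.3, 0.1),
    PreFreezeRecord(0.7, 0.6, 0.9), PreFreezeRecord(0.1, 0.4, 0.2),
]
happy_after_revival = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(to_features(past_records), happy_after_revival)


def should_unfreeze(record: PreFreezeRecord, required_confidence: float) -> bool:
    """Unfreeze only if the estimated chance of happiness meets the
    confidence level the person asked for (0.8, 0.95, 0.999, ...)."""
    p_happy = model.predict_proba(to_features([record]))[0, 1]
    return bool(p_happy >= required_confidence)


# Someone whose instructions say: "only unfreeze me if you're 95% sure I'll be happy".
candidate = PreFreezeRecord(social_ties=0.8, adaptability=0.7, optimism=0.6)
print(should_unfreeze(candidate, required_confidence=0.95))
```

The threshold is the part the frozen person controls: the stricter the confidence they ask for, the more they trade “risk of waking up miserable” against “risk of never waking up at all”.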