Yes, but the point is that people who characterize somebody as “bad” or “good” and people who decide which bodies to jettison are different people who don’t necessarily share a vocabulary, never mind a common value system.
I disagree. If you are the decision-maker and your decision algorithm includes “badness”, it's your responsibility to define (and calculate) “badness”, based on the data available to you. This is key.
It seems to me that this whole scenario is roughly analogous to the Trolley Problem, with the twist that the decision-maker has access to an unknown amount of data about the people who will live or die. In a situation of minimal information (imagine caskets identified by randomly assigned IDs, archived in a database that has long since been lost), the decision-maker must choose the survivors based only on the information stored within the body (e.g. DNA, the presence of extant uncured diseases, etc.). Given more information (such as Jiro’s caskets), the decision-maker must choose based on the combination of the information within the body and the information attached to it.
So, you must kill m people in order to preserve at least m+1 people, and you have n people from which to choose. How would you do it?
Given available data, I would try to calculate a Societal Expected Value for each person, something like a prediction of how many QALYs that person would save if reanimated, and select the m people with the lowest expected value.
Again given available data, if a tie spans positions m and m+1 in that ranking (i.e. it straddles the cutoff), break it by calculating the “badness index” based on current criminal justice practices (e.g. the sum of the average sentence lengths for all of that person’s convictions: murder > rape > petty theft, etc.).
Break any remaining ties across the cutoff by selecting randomly.
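To make that concrete, here is a rough Python sketch of the selection procedure. The field names (expected_qalys, convictions) and the sentence-length table with its numbers are illustrative assumptions on my part, not data from the scenario.

```python
import random

# Illustrative average sentence lengths (years); a real "badness index"
# would come from actual criminal-justice statistics.
AVG_SENTENCE_YEARS = {"murder": 30.0, "rape": 10.0, "petty theft": 0.5}


def badness_index(convictions):
    """Sum of average sentence lengths over all of a person's convictions."""
    return sum(AVG_SENTENCE_YEARS.get(crime, 0.0) for crime in convictions)


def select_for_jettison(people, m, rng=None):
    """Pick the m people to jettison.

    Each person is assumed to be a dict with:
      'expected_qalys' -- predicted QALYs saved if reanimated
      'convictions'    -- list of crime names (possibly empty)

    Ranking: lowest expected QALYs first; ties broken by highest
    badness index; any remaining ties broken randomly.
    """
    rng = rng or random.Random()

    def sort_key(person):
        return (
            person["expected_qalys"],               # lower -> jettisoned sooner
            -badness_index(person["convictions"]),  # higher badness first within a tie
            rng.random(),                           # random final tie-break
        )

    return sorted(people, key=sort_key)[:m]
```

Sorting everyone on one composite key has the same effect at the m/m+1 cutoff as breaking only the boundary ties explicitly, and it is easier to state.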
That’s interesting. So the value of the person is entirely in his/her usefulness to the society?
calculating the “badness index” based on current criminal justice practices
Well, the problem we are discussing assumes that you do NOT have access to much data (certainly not their rap sheet or the lack thereof) about the frozen people—their name and whether their contemporaries thought them “good” or “bad” is all you have.
In fact, the core of the issue is whether you are willing to accept moral judgements from another time and culture to the extent of making life-and-death decisions on that basis.
So the value of the person is entirely in his/her usefulness to the society?
Not entirely. But it certainly trumps a person’s “badness” in my opinion.
Well, the problem we are discussing assumes that you do NOT have access to much data (certainly not their rap sheet or the lack thereof) about the frozen people—their name and whether their contemporaries thought them “good” or “bad” is all you have.
If the civilization reawakening them can calculate an Expected Value for each person based only on their DNA (and other information contained in the body, such as irreversible injuries), and that estimate is more accurate than judgements distorted by the moral differences between the society which froze them and the society which may awaken them, then the moral judgements of the originating society are probably useless.
So, what data would you use and how?