I’m honestly not sure I trust myself on this one. Our moral intuitions tend to go pear-shaped when confronted with infinities or near-infinities, as Dust Specks/Torture amply demonstrates.
But I’ll bite anyway. Our current judicial toolkit is a pretty crude hack, at its best not much better than the death penalty, but its goals are generally held to be some mixture of deterrence, retribution, and reformation. The latter two, at least, can be served well by punishments that involve neither the death penalty, an effective death penalty such as denying life extension, nor near-infinite punishment terms.
From a reformative perspective I think we can expect any desired amount of reformation to be pretty quick; it would take far less than a transhuman lifespan to voluntarily reinvent oneself almost completely, and it’d be all the quicker if we allow involuntary methods. From a retributive perspective, I’d expect any punishment to be proportional to the mental harm caused to others by the crime, which once again is small in comparison to the criminal’s potential lifespan.
I’m not sure what the effects on deterrence would be, but I doubt they’d change the overall picture.
> From a retributive perspective I expect any punishment to be proportional to the mental harm caused to others by the crime, which once again is small in comparison to the potential lifespans here.
That depends. If the crime is murder, how do you count the harm caused by ending someone’s near-infinite life?
> I’m not sure what the effects on deterrence would be, though.
I haven’t fully worked out my theory of deterrence, but the crude first approximation, as briefly discussed here, is that the disutility to the criminal of the punishment should be greater than the utility they received from committing the crime, adjusted for things like probability of getting caught.
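To make that first approximation concrete, here is a minimal sketch of the condition it implies, assuming a criminal who maximizes expected utility; the function name and the numbers are mine, purely for illustration:

```python
# Sketch of the crude deterrence condition: the expected disutility
# of punishment must exceed the utility gained from the crime, so
# p_caught * D > crime_utility. Numbers below are made up.

def min_punishment_disutility(crime_utility, p_caught):
    """Smallest punishment disutility D that satisfies
    p_caught * D > crime_utility for a rational expected-utility
    maximizer (the bound itself, exclusive)."""
    return crime_utility / p_caught

# A crime worth 10 utilons with a 25% chance of being caught needs
# a punishment costing the criminal more than 40 utilons.
print(min_punishment_disutility(10, 0.25))  # 40.0
```

Note that the required punishment grows without bound as the probability of getting caught shrinks, which is where the "adjusted for things like probability of getting caught" clause does its work.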
> That depends. If the crime is murder, how do you count the harm caused by ending someone’s near-infinite life?
The retributive aspect of punishment doesn’t attempt to compensate directly for harm to the victim; it attempts to give the people affected by the crime fuzzies by doing bad things to the perpetrator. The victim or victims can’t be accounted for in this dimension of the problem; they’re dead and cannot receive fuzzies. Hence the qualification.
> I haven’t fully worked out my theory of deterrence, but the crude first approximation, as briefly discussed here, is that the disutility to the criminal of the punishment should be greater than the utility they received from committing the crime, adjusted for things like probability of getting caught.
That sounds reasonable, and I think it’s consistent with my model.
I don’t think that your typical prison inmate is a perfect Bayesian.
I rather think that it should, ideally, be adjusted so that overall utility is maximized (weighing prisoners’ utility equally with everyone else’s), which will differ vastly both from reality and from your model, given the above proposition.