how to prevent recurrence in any circumstance where there isn’t another way to prevent recurrence, right?
Not quite. How to minimize similar choices in future equilibrium, maybe. In many cases, how to maximize conformance and compliance with a set of norms, rather than just resolving this specific case. In real humans (not made-up rationalist cooperators), it includes how to motivate people to behave compatibly with your worldview, even though they think differently enough from you that you can’t fully model them. Or you don’t have the bandwidth to understand them well enough to convince them. Or you don’t have the resources to satisfy their needs such that they’d be willing to comply.
but I don’t see how that precludes searching for alternatives to retribution first
I don’t mean to argue against searching for (and in fact using) alternatives. I merely mean to point out that there seem to be a lot of cases in society where we haven’t found effective alternatives to punishment. It’s simply incorrect for the OP to claim that the vision of fiction is fully applicable to the real world.
ah, I see—if it turns out OP was arguing for that, then I misunderstood something. the thing I understood OP to be saying is about the algorithm for how to generate responses—that it should not be retribution-seeking, but rather solution-seeking, and it should likely have a penalty for selecting retribution, but it also likely does need to be able to select retribution to work in reality, as you say. OP’s words, my italics:
In other words, when someone is wronged, we want to search over ways to repair the harm done to them and *prevent similar harm from happening in the future*, rather than searching over ways to harm the perpetrator in return.
implication I read: prevent similar harm is allowed to include paths that harm the perpetrator, but the search is over ?worldlines? ranked by whether they prevent recurrence, rather than by whether they harm the perpetrator.
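to make that concrete, here’s a minimal sketch of the selection rule I have in mind. all names, fields, and weights are hypothetical stand-ins, my gloss rather than anything OP actually specified: candidates are scored on expected prevention and repair, with an explicit penalty on harm to the perpetrator, so retributive options remain selectable but are never sought for their own sake.

```python
from dataclasses import dataclass

@dataclass
class Response:
    """A candidate 'worldline' / course of action.
    All fields are hypothetical estimates, not OP's terms."""
    name: str
    p_recurrence: float      # estimated probability the harm recurs if chosen
    repair: float            # fraction of the victim's harm repaired (0..1)
    perpetrator_harm: float  # harm inflicted on the perpetrator (0..1)

# weight on the retribution penalty; an assumption, not from OP
RETRIBUTION_PENALTY = 0.5

def score(r: Response) -> float:
    # solution-seeking: reward prevention and repair...
    value = (1.0 - r.p_recurrence) + r.repair
    # ...penalized by harm to the perpetrator, so harm is only chosen
    # when it buys enough prevention to outweigh the penalty
    return value - RETRIBUTION_PENALTY * r.perpetrator_harm

def choose(candidates: list[Response]) -> Response:
    # search over worldlines by prevention/repair, not by harm done
    return max(candidates, key=score)

if __name__ == "__main__":
    options = [
        Response("do nothing",           p_recurrence=0.9, repair=0.0, perpetrator_harm=0.0),
        Response("mediated restitution", p_recurrence=0.3, repair=0.8, perpetrator_harm=0.1),
        Response("pure retribution",     p_recurrence=0.6, repair=0.0, perpetrator_harm=1.0),
        Response("punish + restitution", p_recurrence=0.1, repair=0.7, perpetrator_harm=0.6),
    ]
    # restitution wins here unless punishment buys a lot of prevention
    print(choose(options).name)
```

note the search never treats perpetrator_harm as a positive term; it only tolerates it when the prevention gain dominates, which is the “allowed but penalized” shape I read OP as describing.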
If SpaceX drops a rocket on an irreplaceable work of art or important landmark, there’s no amount of money that can make the affected parties whole. Not that they shouldn’t pay compensation and do their best to repair the harm done anyway.