Sounds like a possible scenario as well. Are they both just, both unjust, is one just and the other not, or does it vary?
And what period of time should we use as a standard?
The same for both scenarios (lenient punishments encouraging more severe crimes later on, and onerous punishments also encouraging more severe crimes), since both lead to the same outcome?
Or different?
When it comes to interacting with complex systems, expecting them to work according to your own preconceptions is generally a bad idea. You want policy to be driven by evidence about the effects of interventions, not just by thought experiments. You want to build a feedback system into your system to optimize its actions.
You want to produce institutions that assume they can’t know the answer to questions like this just by thinking about them, but that instead work out how to gather the evidence needed to make informed policy choices.
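To make the ‘feedback system’ point concrete, here is a minimal sketch in Python. It is purely illustrative and not drawn from the discussion above: the U-shaped reoffence response, the numbers, and the function names are invented assumptions. It only shows the shape of a loop in which a policy parameter is adjusted from observed outcomes rather than fixed in advance by the designer's preconceptions.

```python
# Hypothetical, illustrative sketch of an evidence-driven feedback loop.
# Nothing here is a claim about real criminology; the outcome model is invented.

import random

def observe_outcome(severity: float) -> float:
    """Stand-in for real-world measurement: returns an observed reoffence
    rate for a given punishment severity. In reality this would come from
    collected evidence over some time window, not from a formula."""
    # Invented U-shaped response: both very lenient and very harsh policies
    # are assumed (for illustration only) to raise the reoffence rate.
    base = 0.2 + (severity - 0.5) ** 2
    return max(0.0, min(1.0, base + random.gauss(0, 0.02)))

def update_policy(severity: float, step: float = 0.05) -> float:
    """Nudge severity in whichever direction the evidence says lowers the
    observed reoffence rate (a crude finite-difference adjustment)."""
    lower = observe_outcome(severity - step)
    higher = observe_outcome(severity + step)
    return severity - step if lower < higher else severity + step

severity = 0.9  # start from an arbitrary (possibly wrong) preconception
for _ in range(50):
    severity = min(1.0, max(0.0, update_policy(severity)))

print(f"severity settled near {severity:.2f}")
```

In a real institution the observe_outcome step would be actual data collection or experimentation rather than a formula, and the question raised below, of what time horizon future actors will measure outcomes over, is exactly the part such a loop does not decide for itself.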
Well, that’s all well and good, but all organizations, including all conceivable institutions, will eventually seek to optimize towards goals that we, present-day people, cannot completely control.
i.e. they will carry out their affairs using whatever is at hand, filtered through their own preconceptions, regardless of how perfect our initial designs are, what we want their behaviours to be, or how much we wish for them to lack preconceptions. They will seek answers to similar questions, perhaps with the same motivations, perhaps with different ones.
So if we proceed along such a path, the same problem reappears at the meta level: how do we take into account what period of time future actors will consider the right standard to use (in order to build the ‘feedback system’ for them to operate in)?