Say I apply consequentialism to a set of end states I can reliably predict, and use something else for the set I cannot. In what sense should I be a consequentialist about the second set?
In the sense that you can update on evidence until you can marginally predict end states?
I’m afraid I can’t think of an example where there’s a meta-level but no predictive capacity on that meta-level. Can you give an example?
I have no hope of being able to predict everything...there is always going to be a large set of end states I can’t predict?
Then why have ethical opinions about it at all? Again, can you please give an example of a situation where this would come up?
Lo! I have been so instructed-eth! See above.