(1) You (and possibly others you refer to) seem to use the word ‘consequentialism’ to point to something more specific, e.g. classic utilitarianism, or some other variant.
I didn’t quite have classical utilitarianism in mind. I had in mind principles like:
- Not helping somebody is equivalent to hurting the person.
- An action that doesn’t help or hurt someone doesn’t have moral value.
(2) Your described principle of indifference seems to me to be manifestly false.
I did mean after controlling for ability to have an impact.
Strikes me as a bit like saying “once we forget about all the differences, everything is the same.” Is there a valid purpose to this indifference principle?
Don’t get me wrong, I can see that quasi-general principles of equality are worth establishing and defending. But there we are usually talking about something like equality in the eyes of the state, i.e., equality of all people in the collective eyes of all people, which has a (different) sound basis.
If you actually did some kind of expected value calculation, with your utility function set to something like U(thing) = u(thing) / causal-distance(thing), you would end up double-counting “ability to have an impact”, because a 1/causal-distance sort of factor is already built into E(U|action) = sum { U(thing') P(thing' | action) }, via how much each action affects the probabilities of the different outcomes (which is basically what “ability to have an impact” is).
That’s assuming that what JonahSinick meant by “ability to have an impact” was the impact of the agent upon the thing being valued. But it sounds like you might have been talking about the effect of thing upon the agent? As if all you can value about something is any observable effect that thing can have on yourself (which is not an uncontroversial opinion)?
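A minimal toy sketch of the double-counting worry (the falloff functions and the numbers here are my own illustrative assumptions, not anything from the thread): if the probability that an action brings an outcome about already falls off with causal distance, then also dividing the utility by causal distance discounts the outcome twice, so expected value falls off like 1/distance² rather than 1/distance.

```python
# Toy illustration of double-counting "ability to have an impact".
# Both falloff functions below are made up for demonstration purposes.

def p_outcome_given_action(distance, base_effect=0.5):
    """Probability that the action brings the outcome about.
    Assumed to fall off with causal distance -- this falloff already
    encodes the agent's 'ability to have an impact'."""
    return base_effect / distance

def discounted_utility(raw_value, distance):
    """Utility that ALSO divides by causal distance, as in
    U(thing) = u(thing) / causal-distance(thing)."""
    return raw_value / distance

def expected_utility(raw_value, distance):
    """E(U | action) = U(outcome) * P(outcome | action).
    Distance now appears twice: once in U and once in P."""
    return discounted_utility(raw_value, distance) * p_outcome_given_action(distance)

for d in (1, 2, 4):
    print(d, expected_utility(10, d))
# Prints 5.0, 1.25, 0.3125: a 1/d^2 falloff. With an undiscounted
# utility (u(thing) alone), the falloff would be 1/d, with the impact
# discount applied exactly once, inside P(outcome | action).
```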
Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact; I can’t think of a way to make it clearer.
Maybe ponder this:
How could my quality of life be affected by something with no causal influence on me?
Note that I wasn’t arguing that it’s rational. See the quotation in this comment. Rather, I was describing an input into effective altruist thinking.