I think it is very important to have things that you will not do, even if they are effective at achieving your immediate goals. That is, I think you do have a philosophical position here; it's just a shallow one.
I think the crux may be that I don't agree with the claim that you ought to have rules separate from an expected utility calculation. (I'm familiar with this position from Eliezer, but it has never made sense to me.) For the "should-we-lie-about-the-singularity" example, I think that adding a justified amount of uncertainty into the utility calculation would have been enough to preclude lying; it doesn't need to be an external rule. My philosophical position is thus just boilerplate utilitarianism, and I would disagree with your first sentence if you took out the "immediate."
In this case, it just seems fairly obvious to me that signing this petition won't have unforeseen long-term consequences that outweigh the direct benefit.
And, as I said, I think responding to Callard in the way you did is useful, even if I disagree with the framework.