To me, both the original tweet and your reply seem to miss the point entirely. I didn’t sign this petition out of some philosophical position on what petitions should or shouldn’t be used for. I did it because I see something very harmful happening and think this is a way to prevent it.
I think it is very important to have things that you will not do, even if they are effective at achieving your immediate goals. That is, I think you do have a philosophical position here; it’s just a shallow one.
I disagree with the position Callard has staked out, that petitions are inconsistent with being a philosophical hero, but for reasons we could presumably converge on; hence the reply, and the continuing conversation in the comments.
I think it is very important to have things that you will not do, even if they are effective at achieving your immediate goals. That is, I think you do have a philosophical position here; it’s just a shallow one.
I think the crux may be that I don’t agree with the claim that you ought to have rules separate from an expected utility calculation. (I’m familiar with this position from Eliezer, but it’s never made sense to me.) For the “should-we-lie-about-the-singularity” example, I think that adding a justified amount of uncertainty into the utility calculation would have been enough to preclude lying; it doesn’t need to be an external rule. My philosophical position is thus just boilerplate utilitarianism, and I would disagree with your first sentence if you took out the “immediate.”
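To make that concrete, here is a toy version of the calculation I have in mind (the payoffs and probability are purely illustrative, not estimates for the actual case):

$$\mathbb{E}[U(\text{lie})] = p_{\text{undetected}} \cdot B - (1 - p_{\text{undetected}}) \cdot C$$

Even granting a generous $p_{\text{undetected}} = 0.8$, a benefit $B = 10$ if the lie works, and a credibility cost $C = 100$ if it is ever exposed, you get $0.8 \cdot 10 - 0.2 \cdot 100 = -12$. Once justified uncertainty about detection is priced in, the lie loses inside the calculation itself; no rule on top of it is needed.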
In this case, it just seems fairly obvious to me that signing this petition won’t have unforeseen long-term consequences that outweigh the direct benefit.
And, as I said, I think responding to Callard in the way you did is useful, even if I disagree with the framework.