I have no idea what might be meant by “conventionalist precommitment,” nor why you put that phrase in quotes, since I didn’t use it myself. Assuming you meant “consequentialist precommitment”, I mean a position I precommit to because I believe that precommitting to it has better consequences than not doing so.
I’m not exactly sure what you mean by your question about TDT/UDT, but in general I would agree that being known to operate under a TDT/UDT-like decision theory provides the same kinds of benefits I’m talking about here.
I have no idea what might be meant by “conventionalist precommitment,” nor why you put that phrase in quotes, since I didn’t use it myself. Assuming you meant “consequentialist precommitment”,
Thanks, fixed.
I mean a position I precommit to because I believe that precommitting to it has better consequences than not doing so.
Of course, after you make the precommitment you are no longer a strict consequentialist.
Of course, after you make the precommitment you are no longer a strict consequentialist.
Fair enough. Rather than talking about precommitments to X, I ought to have talked about assertions that I will X in the future, made in such a way that the benefits of actually Xing (in terms of my reputation and the associated credibility boosts, all deriving from the fact of my having made that assertion) and the costs of failing to X (ibid.) are sufficiently high that I will X even in situations where Xing incurs significant costs. Correction duly noted.
Boy would I like a convenient way of referring to that second thing, though.
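If it helps to pin that second thing down, here is a minimal sketch of the condition it relies on, as a toy consequentialist calculation (the numbers and variable names are mine, invented for illustration, not anything from the thread): the assertion does its work exactly when the reputational cost of breaking it exceeds the situational cost of following through.

```python
# Illustrative sketch only: toy payoffs, made-up numbers.
# Models when an assertion-backed commitment actually binds:
# the agent keeps the commitment whenever the reputational cost
# of breaking it exceeds the immediate cost of doing X.

def keeps_commitment(cost_of_doing_x: float,
                     reputational_cost_of_breaking: float) -> bool:
    """True if a consequentialist who made the assertion still does X."""
    payoff_if_kept = -cost_of_doing_x
    payoff_if_broken = -reputational_cost_of_breaking
    return payoff_if_kept >= payoff_if_broken

# With a large enough reputational stake, X gets done even when costly:
print(keeps_commitment(cost_of_doing_x=10, reputational_cost_of_breaking=50))  # True
print(keeps_commitment(cost_of_doing_x=80, reputational_cost_of_breaking=50))  # False
```

On this toy model the second case is the failure mode: if the situational cost of Xing ever swamps the reputational stake, the assertion alone no longer guarantees Xing, which is precisely the gap between such assertions and a true precommitment.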