I can represent a rigid prohibition against lying using time-relative lexicographic preferences or hyperreals, e.g. “doing an act that I now (at t1) believe has too high a probability of being a lie has infinite and overriding disutility, but I can avoid such acts, and can do so infallibly (the high-disutility act being defined so as to make this possible); after taking that into account I can then optimize for my own happiness or the welfare of others, etc.”
All well and good for t1, but then I need a new utility function for the next moment, t2, that places infinite weight on lying at t2 (edit: where the t1 utility function did not). The indexical description of the utility function hides the fact that we need a different ranking of consequences for almost every moment and situation. I can’t have a stable “Kantian utility function” that assigns a single weighting over world-histories and is consistent over time.
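A minimal sketch of that instability, assuming a toy encoding (the tuple stands in for the lexicographic or hyperreal weights, and the history format and payoff numbers are invented purely for illustration): the utility function is indexed by the evaluator’s current moment, and the t1-indexed and t2-indexed versions rank the same pair of world-histories in opposite orders.

```python
# Toy encoding of the time-relative, lexicographically overriding rule:
# utilities are (deontic, ordinary) tuples, and Python compares tuples
# lexicographically, so the deontic component always dominates.
def utility_at(t_now, history):
    """history maps each time to (is_lie, ordinary_payoff)."""
    is_lie_now, _ = history[t_now]
    deontic = -1 if is_lie_now else 0        # overriding tier: only the act at t_now
    ordinary = sum(p for _, p in history.values())
    return (deontic, ordinary)

# Two candidate world-histories over the moments t1 and t2:
truth_then_lie   = {"t1": (False, 0.0), "t2": (True, 10.0)}   # lie later for a gain
truth_throughout = {"t1": (False, 0.0), "t2": (False, 0.0)}

# The t1-indexed function ignores the lie at t2 and prefers the lying history;
# the t2-indexed function reverses that ranking.
print(utility_at("t1", truth_then_lie) > utility_at("t1", truth_throughout))   # True
print(utility_at("t2", truth_then_lie) > utility_at("t2", truth_throughout))   # False
```

The two indexed functions disagree about the same pair of world-histories, which is the sense in which no single, time-invariant ranking reproduces the rule.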
There are also some problems with defining acts and epistemic procedures such that one can be 100% certain one is not violating the deontological rules (otherwise the infinite penalties override any lesser considerations).
Also see Brown, Consequentialize This.
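The certainty requirement can be illustrated with the same toy tuple encoding, treating the overriding tier as weighted by the probability of violation (again an assumption made only for illustration): any nonzero probability of violating the rule dominates arbitrarily large ordinary stakes, so the acts and epistemic procedures have to be defined so that probability-zero violation is genuinely attainable.

```python
# Same lexicographic encoding as above, with the overriding tier weighted by
# the probability of violating the rule: any nonzero violation probability
# loses to certain compliance no matter how large the ordinary stakes are.
def expected_utility(p_violation, ordinary_ev):
    return (-p_violation, ordinary_ev)

print(expected_utility(0.0, 1.0) > expected_utility(1e-12, 1e9))   # True
```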
All well and good for t1, but then I need a new utility function for the next moment, t2, that places infinite weight on lying at t2 but not at t1.
Why at t2 must you no longer place infinite weight against lying at t1? It would seem that if you did not, in fact, lie at t1 (and you can infallibly achieve this), then leaving the infinite disutility for lying at t1 in place makes no practical difference. Sure, if you ever tell a single lie all subsequent behavior becomes arbitrary, but that possibility has been assumed away.
Provided you have infinite confidence in the impossibility of time travel or timeless-decision-theory-style entanglement of past events with your choices now, that’s right. It’s not as problematic as placing infinite weight on lying at t2 when it’s still t1 (which would license lying now to avoid future lying, contra deontology).
Provided you have infinite confidence in the impossibility of time travel or timeless-decision-theory-style entanglement of past events with your choices now, that’s right.
This would seem to be a problem only if both of the following hold:
1. The “can do so infallibly” assumption is interpreted weakly, such that the infallibility is only assumed to hold at Time.now.
2. The meaning of “I” in “I lie” (that is, the construction of identity) is such that objects at (Time.now + x) and (Time.now - x) that are made of similar matter to “me”, but that do not implement my decision algorithm or one that my decision algorithm endorses or would seek to create, are still called “me”. The crude illustration being “Omega arbitrarily appears and hacks me, rewriting my utility function such that the end product wants to do stuff that I currently assign negative infinity utility to.”
Without a model of “I” that includes (milder, more subtle) versions of that kind of modification as also being instances of “I”, there is no problem with leaving the negative utility for “I lie” at !Time.now in place. Apart from, you know, the fact that you have an agent with an idiotic absolute deontological injunction in place. But that was the specification we were implementing.
Implementing mechanisms for determining what constitutes the “self” actor in any absolute-deontological injunction is ridiculously complicated and introduces all sorts of problems and potential pitfalls. However, I don’t think they are complexities and pitfalls that I am arbitrarily introducing for my convenience. They seem to be actual problems intrinsic to the practical task of implementing a deontological agent in a physical universe. In fact, they are largely problems intrinsic to precisely defining a deontological rule at all.
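A toy sketch of that identity point, assuming identity is keyed to the decision algorithm a stage implements rather than to similar matter (the class and identifier names here are invented for illustration): an Omega-hacked future stage running a different algorithm never triggers the “I lie” clause, so leaving the prohibition in place at !Time.now costs nothing.

```python
from dataclasses import dataclass

@dataclass
class AgentStage:
    """A time-slice of the agent: when it exists, which decision algorithm
    it actually runs, and whether it lies at that time (toy model)."""
    time: float
    algorithm_id: str
    lies: bool

MY_ALGORITHM = "my_current_decision_algorithm"
# Algorithms I endorse or would seek to create also count as "me".
ENDORSED = {MY_ALGORITHM}

def is_me(stage: AgentStage) -> bool:
    # Identity keyed to the decision algorithm rather than to similar matter.
    return stage.algorithm_id in ENDORSED

def violates_injunction(history) -> bool:
    # "I lie" is true of a history iff some stage that counts as me lies.
    return any(is_me(s) and s.lies for s in history)

# A future stage hacked by Omega into a liar runs a different algorithm,
# so it is not "me" on this construal and the injunction never fires on it.
history = [AgentStage(time=1.0, algorithm_id=MY_ALGORITHM, lies=False),
           AgentStage(time=2.0, algorithm_id="omega_hacked_algorithm", lies=True)]
print(violates_injunction(history))   # False
```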
It’s not as problematic as placing infinite weight on lying at t2 when it’s still t1
I would grant both of the factors you mention (confidence of less than 1 in the impossibility of time travel, and acausal influences) as valid reasons to accept that all the problems of Time.greater_than.now infinities are, qualitatively, also problems for Time.less_than.now infinities. So I would make the same claim: they can work correctly if you either have an encompassing “infallible” or a correctly defined “I”. (By ‘correct’ I mean “the same as what you would intend it to mean if you said the rule aloud”.)
(which would license lying now to avoid future lying, contra deontology).
I assume we are imagining a different utility function here. I am imagining a utility function defined over universe-histories that returns negative infinity when “I lie” is true at any point in time, and normal stuff (like lots of utility for delicious cookies, volcano lairs and catgirls) for all the rest. Of all the undesirable outcomes that would come from executing such a utility function, lying now to avoid future lying would not be one of them. No lying behavior will ever be returned as the expected-utility-maximising action of that function, except in cases where (according to your model) it is logically impossible to execute a behavior that has zero chance of resulting in a lie. In that case behavior is, and should be, totally arbitrary.
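A sketch of that universe-history utility function, with the history encoding and payoff numbers invented for illustration: negative infinity whenever “I lie” is true anywhere in the history, ordinary payoff otherwise. Any action with a nonzero chance of producing a lie then has expected utility of negative infinity, so a lying action can at best tie with the other options rather than beat them; the all-tie case is exactly the one where behavior goes arbitrary.

```python
import math

def utility(history):
    # Negative infinity for any history in which "I lie" is ever true;
    # otherwise the ordinary payoff (cookies, volcano lairs, catgirls, ...).
    return -math.inf if history["contains_lie"] else history["payoff"]

def expected_utility(action):
    # action: list of (probability, history) pairs; skip zero-probability outcomes.
    return sum(p * utility(h) for p, h in action if p > 0)

# "Lie now to prevent a future lie": guaranteed to contain a lie, so -inf.
lie_now = [(1.0, {"contains_lie": True, "payoff": 100.0})]

# Honest option with a mere risk of a future lie: still -inf, because any
# nonzero chance of a lie drags the whole expectation down to -inf.
risky_honest = [(0.9, {"contains_lie": False, "payoff": 10.0}),
                (0.1, {"contains_lie": True, "payoff": 10.0})]

# Honest option with zero chance of a lie: finite expected utility.
safe_honest = [(1.0, {"contains_lie": False, "payoff": 1.0})]

for name, a in [("lie_now", lie_now), ("risky_honest", risky_honest),
                ("safe_honest", safe_honest)]:
    print(name, expected_utility(a))
# lie_now      -inf
# risky_honest -inf
# safe_honest  1.0
# A lying action can at best tie (at -inf) with every other option, which only
# happens when no available action has zero chance of producing a lie; in that
# case the choice between them is arbitrary.
```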
Thanks for this. Your point substantially undermines the importance of Peterson’s result.