Distorting time won’t prevent reversals of preference just because it makes some plotted curves match.
If your discounting factor is f(t-now) for some function f, then f needs to be translation invariant (modulo positive affine scaling), on pain of preference reversals. The requirement of translation invariance is directly due to the fact that f gets translated by the varying values of “now”. For two possible events x1 and x2, the agent compares U(x1)*f(t1-now) vs U(x2)*f(t2-now), where U is the non-discounted utility function, and if the result of that comparison depends on the value of “now”, then the agent’s preference between the same two events will reverse as time passes.
However, if your discounting factor is f(t) simpliciter, then f isn’t translated and thus doesn’t need to be translation invariant. No single event is ever valued according to multiple different outputs of f. The agent will derive the same preference between any two events regardless of when it computes the decision.
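The reversal is easy to exhibit concretely. A minimal sketch, where the hyperbolic form f(d) = 1/(1+d), the exponential form f(d) = 0.9^d, and the specific utilities and event times are all assumptions chosen for illustration:

```python
def prefers_x1(f, u1, t1, u2, t2, now):
    """True iff an agent at time `now` prefers event x1 to x2,
    comparing U(x1)*f(t1-now) vs U(x2)*f(t2-now)."""
    return u1 * f(t1 - now) > u2 * f(t2 - now)

# Hyperbolic discounting is NOT translation invariant:
# f(d + c) is not a fixed positive multiple of f(d).
hyperbolic = lambda d: 1 / (1 + d)

# Exponential discounting IS translation invariant:
# f(d + c) = 0.9**c * f(d), a positive scaling independent of d.
exponential = lambda d: 0.9 ** d

# A smaller-sooner reward (U=10 at t=10) vs a larger-later one (U=15 at t=12).
for now in (0, 9):
    h = "x1" if prefers_x1(hyperbolic, 10, 10, 15, 12, now) else "x2"
    e = "x1" if prefers_x1(exponential, 10, 10, 15, 12, now) else "x2"
    print(f"now={now}: hyperbolic prefers {h}, exponential prefers {e}")
```

Under the hyperbolic f the agent prefers the larger-later x2 when both events are distant (now=0) but switches to the smaller-sooner x1 as it draws near (now=9); under the exponential f the preference is stable. An f(t)-simpliciter discounter is trivially stable for any f, since “now” never enters the comparison U(x1)*f(t1) vs U(x2)*f(t2) at all.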