It mainly looks like simplification, rather than any definite fallacy. The real world is complex, and it is necessary to simplify it for purposes of deriving a conclusion. A 0% discount rate is much easier to model than any nonzero rate, especially in an informal verbal discussion. Picking any nonzero discount rate means choosing a more complex model, and having to justify the particular rate chosen, so choosing 0% is defensible on that basis. The question is mainly whether the simplifications are known (or should be known) to seriously change the conclusion.
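As a minimal sketch of why the choice of rate matters, here is a hedged Python illustration of simple exponential discounting; the amount, horizon, and rates are purely illustrative assumptions, not figures from the discussion.

```python
# Minimal sketch: how a discount rate changes the present value of a
# future benefit. All numbers are illustrative assumptions.

def present_value(amount, years, rate):
    """Value today of `amount` received `years` from now,
    discounted exponentially at `rate` per year."""
    return amount / (1 + rate) ** years

benefit, horizon = 1_000_000, 100  # a benefit of 1,000,000 arriving in 100 years

# At 0%, the future benefit simply counts at face value -- no modelling needed.
print(present_value(benefit, horizon, 0.00))  # 1000000.0

# Any nonzero rate forces you to pick and defend a number, and the
# conclusion can swing by orders of magnitude depending on that choice.
print(present_value(benefit, horizon, 0.01))  # ~370,000
print(present_value(benefit, horizon, 0.05))  # ~7,600
```

The 0% line is the "simpler model": it removes a free parameter entirely, at the cost of being exactly wrong whenever the true rate is not zero.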
If either you or Joe knows that the outcome of the simpler model will be meaningfully wrong as a result, then it becomes indefensible. Here "meaningfully wrong" means not just different from reality (no model will be perfect) but wrong enough that something meaningful, such as a decision, a more general belief, or the quality of an outcome, depends upon that difference.
There is a humorous term, "spherical cow", used for physics problems in which one might do something comparable to modelling a cow as a uniform sphere in order to derive some property without accounting for the complexity of an actual cow's shape. In many cases, this will still give a good approximation with radically simpler math! In others, it can yield absurd results.
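To see how far the spherical cow can get you, here is a hedged back-of-the-envelope sketch; the mass and density are assumed round numbers, not measured values.

```python
# "Spherical cow" estimate: surface area of a cow modelled as a
# uniform sphere of roughly water density. Assumed round numbers only.
import math

mass_kg = 600.0    # assumed typical cow mass
density = 1000.0   # kg/m^3, roughly the density of water

volume = mass_kg / density                       # m^3
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)  # sphere of equal volume
area = 4 * math.pi * radius ** 2                  # surface area of that sphere

print(f"radius ~ {radius:.2f} m, surface area ~ {area:.1f} m^2")
# radius ~ 0.52 m, surface area ~ 3.4 m^2 -- the right order of
# magnitude, despite ignoring the cow's shape entirely.
```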
You could consider oversimplifying to be a type of fallacy in the case where the simplification discards information that makes a critical difference to the conclusion. There are quite a lot of named fallacies of oversimplification, but discarding a conclusion merely because the model isn't perfect is also a fallacy!