The first few sections are best read as empirical claims about what's evolutionarily useful for humans (though I agree the language is sloppy and doesn't make this clear). Later sections distinguish between what we consciously want and what our brains have been optimised to achieve, and venture some suggestions for what we should do given that conflict. (The post also suggests that it might be ok to give an over-optimistic ETA, but it doesn't really argue for this, and it's not an important point.)
Your suggested alternative loss function seems like a plausible description of your conscious desires, which may well differ from what evolution optimised us for.