The OP’s point was that the “correct” actions were wrong according to our current understanding of rationality, and his conclusion was that our current understanding of rationality might be wrong.
I wrote that our current understanding of rationality is not the last word and that we should therefore take account of model uncertainty.
If that was the extent of what you wrote I would not have commented. In this case I replied to this:
Pascal’s mugging, the Lifespan Dilemma, blackmailing and the wrath of Löb’s theorem are just a few examples of how an agent built according to our current understanding of rationality could fail.
Giving those as examples implies you are saying something more than “our current understanding of rationality is not the last word”. Rejecting the position that argument supports is not nitpicking on definitions!
The OP is wrong. If our current understanding is that something is the wrong thing to do, then our current understanding of rationality doesn’t do it.
And that conclusion may be right, despite the argument being wrong.
Oh boy...I just got what you are doing here. Nitpicking on a definition. Okay...of course rationality is winning and winning is doing what’s right according to your utility-function. What I meant is obviously that our methods are not perfect at guiding us and satisfying our utility-functions.
Not even remotely.