I wrote that our current understanding of rationality is not the last word and that we should therefore take account of model uncertainty.
If that were the extent of what you wrote I would not have commented. In this case I replied to this:
Pascal’s mugging, the Lifespan Dilemma, blackmailing and the wrath of Löb’s theorem are just a few examples of how an agent built according to our current understanding of rationality could fail.
Giving those as examples implies you are saying something more than “our current understanding of rationality is not the last word”. Rejecting the position that argument supports is not nitpicking about definitions!