Rolf: It seems to me that you are trying to assert that it is normative for agents to behave in a certain manner because the agents you are addressing are presumably non-normative.
On a semantic level, I agree; I actually avoided using the word “normative” in my comment because, earlier, you had correctly criticized my use of the word on my blog.
I try to consistently consider myself part of an ensemble of flawed humans. (It’s not easy, and I often fail.) To be more rigorous, I would want to condition my reasoning on the fact that I’m one of the flawed humans who attempts to adjust for the fact that he himself is a flawed human. (But I don’t think that, in practice, this particular conditioning would change my conclusions.)
To “bootstrap” my philosophy, I do have to presume that, much of the time, I have some ability to use logic in such a way that, on average, more than 50% of my 1-bit (yes/no) beliefs are likely to be correct. But since I grant that same ability to the rest of the ensemble of flawed humans, that presumption doesn’t affect the analysis.
I don’t have a citation to an existing paper that rigorously spells out how you would do this (maybe no such paper exists, for all I know), but my intuition is that such an analysis is not, at a fundamental level, self-contradictory.
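To make the intuition concrete, here is a minimal Python sketch of the kind of ensemble analysis I have in mind. Everything in it is an assumption invented for illustration: the ensemble size, the per-agent accuracy range (uniform between 0.55 and 0.75), and the modeling choice that the “tries to self-correct” trait is independent of accuracy.

```python
import random

random.seed(0)  # reproducibility; all numbers below are illustrative

N_AGENTS = 2_000   # hypothetical ensemble of flawed humans
N_BELIEFS = 200    # independent 1-bit (yes/no) beliefs per agent

def make_agent():
    """A flawed agent: per-belief accuracy above chance, below certainty."""
    accuracy = random.uniform(0.55, 0.75)    # assumed range
    self_correcting = random.random() < 0.5  # tries to adjust for being flawed?
    return accuracy, self_correcting

agents = [make_agent() for _ in range(N_AGENTS)]

def fraction_correct(accuracy):
    """Simulate one agent's 1-bit beliefs; return the fraction that are correct."""
    return sum(random.random() < accuracy for _ in range(N_BELIEFS)) / N_BELIEFS

# Unconditional estimate: average over the whole ensemble.
overall = sum(fraction_correct(acc) for acc, _ in agents) / N_AGENTS

# Conditioned estimate: restrict to agents who, like me, try to adjust
# for being flawed. The trait is modeled here as independent of accuracy,
# which is itself an assumption; the point is only that the conditioning
# is coherent, not that it is a no-op in every possible model.
subset = [acc for acc, sc in agents if sc]
conditioned = sum(fraction_correct(acc) for acc in subset) / len(subset)

print(f"ensemble-wide fraction of correct 1-bit beliefs: {overall:.3f}")
print(f"conditioned on trying to self-correct:           {conditioned:.3f}")
```

Under these assumptions, both printed figures land near 0.65, comfortably above 0.5, and the conditioning barely moves the estimate, which is consistent with my claim that, in practice, this particular conditioning wouldn’t change my conclusions.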