This is weird. I have always thought that the rational thing to do would be something like doing your very best for the prosperity of the society you live in, abiding by every norm and law you can, etc. I regarded the categorical imperative as an obvious result of rational and selfish decision making.
So I was wrong, huh?
The most charitable thing that categorical imperatives can be called is arational. The most accurate thing they can be called is unintelligible. The statement “You should do X” is meaningless without an “if you want to accomplish Y,” because otherwise it can’t answer the question, “Why?” More importantly, there is no way to determine which of two contradictory CIs should be followed.
No moral rule can be derived via any rational decision-making process alone. Morality requires arational axioms or values. The litany of things you “should” have done if you were individually rational does not actually follow. “Rational” gets used to mean “strictly selfish utility maximizer” a bit more often than it should be, which is never. There may be people for whom it would indeed be individually irrational not to do those things, but as we all have different values, that does not mean it is for all of us.
-I’m using categorical imperative as distinct from hypothetical imperative—“Don’t lie” vs. “Don’t lie if you want people to trust you.” There can be some confusion over what people mean by CI, from what I’ve seen written on this site.
Categorical imperatives that result in persistence will accumulate.
Why should any lifeform preserve its own existence? There’s no reason. But those that do eventually dominate existence. Those that do not, are not.
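To make the selection argument concrete, here is a minimal sketch in Python. The numbers (survival rates, growth factor) are purely hypothetical assumptions for illustration, not anything claimed in the comment; the point is only that a lineage whose rule keeps it alive ends up dominating, regardless of whether anyone can justify the rule.

```python
# Minimal sketch, with assumed numbers: a lineage that follows a
# "preserve your own existence" rule survives at a higher rate than an
# indifferent lineage, and therefore comes to dominate the population.

populations = {"self-preserving": 100.0, "indifferent": 100.0}
survival = {"self-preserving": 0.99, "indifferent": 0.90}  # assumed per-generation survival rates
growth = 1.05  # assumed common reproduction factor

for generation in range(200):
    for kind in populations:
        populations[kind] *= survival[kind] * growth

total = sum(populations.values())
for kind, size in populations.items():
    print(f"{kind}: {size / total:.2%} of the population")
# The self-preserving lineage ends up as essentially the whole population.
```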
Nah, that’s what they want you to think. (Which seems to be more or less literally how norms apply in reference to altruism.)
I thought I addressed this issue in the paragraph starting “But, I’m an altruist.” Is there something about my argument that you find unclear or unsatisfactory?
Argue this point in more detail; it isn’t obvious.
It’s not obvious, yeah. My failure of communication on the original post. My point, as I intended it, was that I mistook my intuitive feeling (“a rationalist should follow the categorical imperative because it feels sensible”) for an obvious fact. My reasoning was based on a simplistic model of the PD in which punishing non-normative behavior, and trusting and abiding by the norms otherwise, works. So I was basically asking for clarification in the guise of a statement :)
I think my earlier response to you (now deleted) misunderstood your comment. I’m still not sure I understand you now, but I’ll give it another shot.
All of the things I listed are commonly accepted within the relevant fields as individually rational. It boils down to the idea that it is individually rational to defect in a one-shot PD where you’ll never see the other player again and the result will never be made public. Yes, we have lots of mechanisms to improve group rationality, like laws, institutions, social norms, etc., but all of that just shows how hard group rationality is.
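To make the dominance claim concrete, here is a minimal sketch with hypothetical payoff numbers (any payoffs ordered temptation > reward > punishment > sucker would do; nothing here is from the original comment). Whatever the other player chooses, defecting yields a strictly higher payoff in the one-shot game.

```python
# One-shot Prisoner's Dilemma with assumed payoffs (T=5 > R=3 > P=1 > S=0).
# payoffs[my_move][their_move] = my payoff.
payoffs = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

for their_move in ("cooperate", "defect"):
    best_reply = max(payoffs, key=lambda my_move: payoffs[my_move][their_move])
    print(f"If the other player plays {their_move}, my best reply is {best_reply}")
# Prints "defect" both times: defection strictly dominates in the one-shot game,
# which is why sustaining the cooperative outcome requires external mechanisms
# like the laws, institutions, and norms mentioned above.
```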
Here’s another example that might help make my point. How much “CPU time” does an average person’s brain spend playing status games instead of doing something socially productive? That is hardly rational on a group level, but we have little hope of reducing it by any significant amount.
One box!
Eliezer’s solution to Newcomb’s problem doesn’t apply to human cooperation.