Your claim that a risk-averse agent cannot be rational is trivially true because it is purely circular.
You’ve defined a risk-averse agent as someone who does not maximize their expected utilons. The meaning of “rational” around these parts is “maximizes expected utilons.” Taking a circuitous route to this point does not make it any less trivial.
I’ll break down that point in case it’s non-obvious. Utilons do not exist in the real world—there is no method of measuring utilons. Rather, they are a theoretical construct you are employing. You’ve defined a rational agent as one who maximizes the amount of utilons he acquires. You’ve specified a function for how he calculates these, but the specifics of that function are immaterial. You’ve then shown that someone who does not maximize these utilons is not a rational utilon maximizer.
Risk aversion with respect to paper clips or dollars is an empirical claim about the world. Risk aversion with respect to utilons is a claim about preference with respect to a theoretical construct that is defined by those preferences. It is not meaningful to discuss it, because the answer follows logically from the definition you have chosen.
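To make that contrast concrete, here is a minimal worked example; the concave utility function u(x) = sqrt(x) is my illustration, not anything specified in the post:

```latex
% A minimal sketch, assuming a concave utility over dollars,
% u(x) = sqrt(x) (my choice of example, not the post's).
% Gamble A: $100 for certain.  Gamble B: 50/50 between $0 and $200.
\[
  \mathbb{E}[u(A)] = \sqrt{100} = 10,
  \qquad
  \mathbb{E}[u(B)] = \tfrac{1}{2}\sqrt{0} + \tfrac{1}{2}\sqrt{200} \approx 7.07 .
\]
% The agent takes the sure $100: risk-averse in dollars, yet a
% textbook expected-utilon maximizer. That is the empirical kind of
% risk aversion, and it is perfectly compatible with rationality.
```

Nothing analogous can even be set up “in utilons,” since utilons are defined as the scale the agent maximizes in expectation.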
I’ll break down that point in case it’s non-obvious. Utilons do not exist in the real world—there is no method of measuring utilons.
(There is no method in the context of this discussion, but figuring out how to “measure utilons” (with respect to humans) is part of the FAI problem. If an agent doesn’t maximize the utility suggested by that agent’s construction (in the same sense in which human preference can hopefully be defined based on humans), that would count as a failure of that agent’s rationality.)
Risk aversion with respect to paper clips or dollars is an empirical claim about the world. Risk aversion with respect to utilons is a claim about preference with respect to a theoretical construct that is defined by those preferences. It is not meaningful to discuss it, because the answer follows logically from the definition you have chosen.
And yet this was still disputed. Perhaps the point being made is less obvious to some others than it is to you. The same applies to many posts.
Perhaps the point being made is less obvious to some others than it is to you. The same applies to many posts.
This is like a dismissive… compliment? I’m not sure how to feel!
Seriously, though, it doesn’t undermine my point. This article ultimately gets to the same basic conclusion, but does it in a very roundabout way. By the definition of “utilons,” converting outcomes into utilons eliminates risk aversion. This extensive discussion ultimately makes the point that it’s irrational to be risk-averse with respect to utilons, but it doesn’t really hit the bigger point that utilon risk aversion is fundamentally nonsensical. The fact that people don’t realize that there’s circular reasoning going on is all the more reason to point out that it is happening.
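In symbols (a sketch in my own notation, not the article’s): relabeling outcomes in utilons removes risk aversion by construction.

```latex
% If lotteries over dollars x are ranked by E[u(x)], define y := u(x),
% the outcome measured in utilons. The same ranking, in utilon units, is
\[
  \mathbb{E}[u(x)] \;=\; \mathbb{E}[y],
\]
% so every lottery is valued at exactly the expectation of its utilons:
% risk-neutral in y by definition, whatever curvature u had over x.
```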
I disagree with your connotations. While the point is obvious and even follows logically from the premises, it is not ‘circular’ in any meaningful sense. People are still getting confused on the issue, so explaining it is fine.
I don’t mean obvious in the “Why didn’t I think of that?” sense. I mean obvious in the trivial sense. When I say that it is circular, I don’t mean simply that the conclusion follows logically from the premises. That is the ultimate virtue of an argument. What I mean is that the conclusion is one of the premises. The definition of a rational person is one who maximizes their expected utility. Therefore, someone who is risk-averse with respect to utility is irrational; our definition of “rational” guarantees that this is so.
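Schematically, with the premise and the “conclusion” labeled (my notation, not anything from the post):

```latex
% Definition (premise):    rational  <=>  maximizes E[u]
% Risk-averse in utilons:  maximizes E[f(u)] for strictly concave f,
%                          i.e. does not maximize E[u]
% "Conclusion":            such an agent is not rational
\[
  \underbrace{\text{rational(agent)} \;\Leftrightarrow\; \text{agent maximizes } \mathbb{E}[u]}_{\text{definition}}
  \;\;\Longrightarrow\;\;
  \underbrace{\text{agent does not maximize } \mathbb{E}[u] \;\Rightarrow\; \neg\,\text{rational(agent)}}_{\text{``conclusion''}}
\]
% The "conclusion" is the contrapositive of one direction of the
% definition; no empirical content enters at any step.
```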
I certainly see why the overall issue leads to confusion and why people don’t see the problem instantly—the language is complex, and the concept of “utilons” folds a lot of concepts into itself, so it’s easy to lose track of what it really means. I don’t think this post really appreciates this issue, and that seems to me to be the deepest problem with this discussion. It reads like an analysis of a substantive problem, rather than the unpacking of a circular argument, and I think the latter is the better description of what is actually going on.
In other words, the article makes it easy to walk away without realizing that a rational person cannot be risk-averse toward utility simply because that would contradict what we mean by “rational person.” That seems like the key issue here to me.
I don’t mean obvious in the “Why didn’t I think of that?” sense. I mean obvious in the trivial sense. When I say that it is circular, I don’t mean simply that the conclusion follows logically from the premises.
And, for the sake of clarity, I have expressed disagreement with this position.
For what it’s worth I don’t necessarily agree with the post in full—I just don’t apply this particular rejection.