I have a slate of questions that I often ask people to try to better understand them. Recently I realized that one of these questions may not be as open-ended as I’d thought, in the sense that it may actually have a proper answer according to Bayesian rationality, though I remain uncertain about this. The question is actually quite simple, so I offer it to the Less Wrong community to see what kind of answers people can come up with, as well as what the majority of Less Wrongers think. If you’d rather, you can private message me your answer.
The question is:
Truth or Happiness? If you had to choose between one or the other, which would you pick?
I don’t think this question is sufficiently well-defined to have a true answer. What does it mean to have/lack truth, what does it mean to have/lack happiness, and what are the extremes of both of these?
If I have all the happiness and none of the truth, do I get run over by a car that I didn’t believe in?
If I have all the truth but no happiness, do I just wish I would get run over? Is there anything to stop me from using the truth to make myself happy again? Failing that, is there anything that could motivate me to sit down for an hour with Eliezer and teach him the secrets of FAI before I kill myself? This option at least seems like it has more loopholes.
I admit this version of the question leaves substantial ambiguity that makes it harder to calculate an exact answer. I could have constructed a more well-defined version, but this is the version that I have been asking people already, and I’m curious how Less Wrongers would handle the ambiguity as well.
In the context of the question, it can perhaps be better defined as:
If you were in a situation where you had to choose between Truth (guaranteed additional information) and Happiness (guaranteed increased utility), and all that you know about this choice is the evidence that the two are somehow mutually exclusive, which option would you take?
It’s interesting that you interpreted the question as all or none of the Truth/Happiness, rather than how I assumed most people would interpret it: a situation where you are given additional Truth/Happiness. The extremes are an interesting thought experiment in and of themselves. All the Truth would imply perfect information, while all the Happiness would imply maximum utility. It may not be possible for these two things to be completely mutually exclusive, so this form of the question may well just be illogical.
Defining happiness as “guaranteed increased utility” is questionable, because it doesn’t handle situations of blissful ignorance: we can’t seem to agree whether being blissfully ignorant about something one does not want is a loss of utility at all, and if it does count as a loss of utility, then utility would not equate to happiness, because you can’t be happy or sad about something you don’t know about.
For simplicity’s sake, we could assume a hedonistic view that blissful ignorance about something one does not want is not a loss of utility, defining utility as positive conscious experiences minus negative conscious experiences. But I admit that not everyone will agree with this view of utility.
Also, Aristotle would probably argue that you can have Eudaimonic happiness or sadness about something you don’t know about, but Eudaimonia is a bit of a strange concept.
Regardless, given that there is uncertainty about the claims made by the questioner, how would you answer?
Consider this rephrasing of the question:
If you were in a situation where someone (possibly Omega… okay let’s assume Omega) claimed that you could choose between two options: Truth or Happiness, which option would you choose?
Note that there is significant uncertainty involved in this question, and that this is a feature of the question, rather than a bug. Given that you aren’t sure what “Truth” or “Happiness” means in this situation, you may have to elaborate and consider all the possibilities for what Omega could mean (perhaps even assigning them probabilities...). Given this quandary, is it still possible to come up with a “correct” rational answer?
If it’s not, what additional information from Omega would be required to make the question sufficiently well-defined to answer?
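As a rough illustration of what assigning probabilities to Omega’s possible meanings could look like, here is a minimal sketch of an expected-utility comparison over a few candidate readings of the offer. Every hypothesis, probability, and payoff in it is a placeholder invented purely for illustration, not a claim about the right priors or utilities.

```python
# A toy expected-utility comparison over guesses about what Omega's offer
# could mean. All hypotheses, probabilities, and payoffs are placeholders
# invented for illustration; they are not claims about the "right" numbers.

# Each entry: reading of the offer -> (probability assigned to that reading,
#                                      utility of choosing Truth under it,
#                                      utility of choosing Happiness under it)
hypotheses = {
    "marginal: some extra information vs. some extra happiness": (0.6, 5, 10),
    "extreme: perfect information vs. maximum utility":          (0.3, 20, 100),
    "trap: 'Truth' permanently costs you your happiness":        (0.1, -50, 10),
}

def expected_utility(option: int) -> float:
    """Probability-weighted payoff of an option (0 = Truth, 1 = Happiness)."""
    return sum(p * payoffs[option] for p, *payoffs in hypotheses.values())

eu_truth = expected_utility(0)
eu_happiness = expected_utility(1)

print(f"E[U | Truth]     = {eu_truth:.1f}")
print(f"E[U | Happiness] = {eu_happiness:.1f}")
print("Choose:", "Truth" if eu_truth > eu_happiness else "Happiness")
```

Of course, the whole difficulty is that the probabilities and payoffs themselves depend on how well-defined Omega makes the offer, which is exactly the missing information.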
Great question! I’m glad you brought it up!
Personally, it’s a bit of an ugh field for me. It’s something I’m confused about, and I really wish I had a good answer to it.
To me, this gets at a more general question: “what should your terminal values be?” It is my understanding that rationality can help you to achieve terminal values, but not to select them. I’ve thought about it a lot and have tried to think of a reason why one terminal value is “better” or “more rational” than another… but I’ve pretty much failed. I keep arriving at the conclusion that “what should your terminal values be?” is a Wrong Question, which becomes pretty obvious once it’s dissolved.
But at the same time… it’s such an important question that the slightest bit of uncertainty really bothers me. Think of it in terms of expected value—a huge magnitude multiplied by a small probability can still be huge. If I misunderstood something and I’m pursuing the wrong terminal goal(s)… well that’d be bad (how bad depends on how different my current goals are from “the real goals”).
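To put purely illustrative numbers on that: if there were even a 1% chance that my terminal goals are badly wrong, and the stake were a million utilons, the expected cost would still be

$$0.01 \times 1{,}000{,}000 = 10{,}000 \text{ utilons},$$

which is far too large to shrug off.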
I’d love to hear others’ takes on this. It appears that people live their lives as if things other than their own Happiness matter, like Altruism and Truth; i.e., people pursue terminal values other than their own happiness. Is this true? I’d really be interested in seeing a LW survey on terminal goals.
Truth is a tool. If it can’t be used to fulfill my goal of happiness, what good is it? That being said, if you just meant my happiness, then I’d take truth and use it to increase net happiness.
Hey it’s a good question. I’d pick Happiness.
When I was much younger I might have said Truth. I was a student of physics once and loved to repeat the quote that the end of man is knowledge. But since then I have been happy, and I have been unhappy, and the difference between the two is just too large.