I admit this version of the question leaves substantial ambiguity that makes it harder to calculate an exact answer. I could have constructed a more well-defined version, but this is the version that I have been asking people already, and I’m curious how Less Wrongers would handle the ambiguity as well.
In the context of the question, it can perhaps be better defined as:
If you were in a situation where you had to choose between Truth (guaranteed additional information) or Happiness (guaranteed increased utility), and all you knew about this choice was the evidence that the two are somehow mutually exclusive, which option would you take?
It’s interesting that you interpreted the question as offering all of the Truth or all of the Happiness and none of the other, rather than what I assumed most people would take it to mean: a situation where you are given some additional Truth or some additional Happiness. The extremes are an interesting thought experiment in and of themselves: all the Truth would imply perfect information, while all the Happiness would imply maximum utility. It may not be possible for those two things to be completely mutually exclusive, so this form of the question may well just be incoherent.
Defining Happiness as “guaranteed increased utility” is also questionable, because it doesn’t handle blissful ignorance well. We can’t seem to agree on whether being blissfully ignorant of something one does not want counts as a loss of utility at all. And if it does count as a loss of utility, then utility would not equate to happiness, since you can’t be happy or sad about something you don’t know about.
For simplicity’s sake, we could assume a hedonistic view that blissful ignorance about something one does not want is not a loss of utility, defining utility as positive conscious experiences minus negative conscious experiences. But I admit that not everyone will agree with this view of utility.
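As a toy illustration of that hedonistic definition, and nothing more, here is a minimal Python sketch where utility is literally the sum of positive conscious experiences minus the sum of negative ones. The function name and the numbers are made up for the example.

```python
# Toy sketch of the hedonistic definition of utility assumed above.
# Experiences the agent never consciously has simply don't appear in the
# list, so blissful ignorance costs nothing under this definition.

def hedonic_utility(experiences):
    """Utility = (sum of positive conscious experiences) - (sum of negative ones)."""
    positives = sum(e for e in experiences if e > 0)
    negatives = sum(-e for e in experiences if e < 0)
    return positives - negatives

# Example: two pleasant experiences and one unpleasant one the agent notices.
print(hedonic_utility([5, 3, -2]))  # prints 6
```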
Also, Aristotle would probably argue that you can have Eudaimonic happiness or sadness about something you don’t know about, but Eudaimonia is a bit of a strange concept.
Regardless, given that there is uncertainty about the claims made by the questioner, how would you answer?
Consider this rephrasing of the question:
If you were in a situation where someone (possibly Omega… okay let’s assume Omega) claimed that you could choose between two options: Truth or Happiness, which option would you choose?
Note that there is significant uncertainty involved in this question, and that this is a feature of the question rather than a bug. Given that you aren’t sure what “Truth” or “Happiness” means in this situation, you may have to consider all the possibilities for what Omega could mean (perhaps even assigning them probabilities...). Given this quandary, is it still possible to come up with a “correct” rational answer?
If it’s not, what additional information from Omega would be required to make the question sufficiently well-defined to answer?
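For what it’s worth, here is a rough sketch (in Python) of what “assigning them probabilities” could look like in practice. Every interpretation, probability, and utility number below is a made-up assumption for the sake of illustration, not something the original question specifies.

```python
# Rough sketch: enumerate guesses about what Omega's offer could mean,
# assign each a subjective probability, estimate the utility of choosing
# Truth or Happiness under each guess, and compare expected utilities.
# All of these numbers are arbitrary placeholders.

interpretations = [
    # (probability, utility if you choose Truth, utility if you choose Happiness)
    (0.4, 10.0, 5.0),   # "Truth" is genuinely useful information
    (0.4,  2.0, 8.0),   # "Truth" is an unpleasant fact you can't act on
    (0.2, -5.0, 3.0),   # "Truth" actively harms you; "Happiness" is modest
]

eu_truth = sum(p * u_t for p, u_t, _ in interpretations)
eu_happiness = sum(p * u_h for p, _, u_h in interpretations)

print(f"E[U | Truth]     = {eu_truth:.1f}")      # 3.8 with these numbers
print(f"E[U | Happiness] = {eu_happiness:.1f}")  # 5.8 with these numbers
print("Pick Truth" if eu_truth > eu_happiness else "Pick Happiness")
```

Of course, the whole difficulty is that the probabilities and utilities are exactly the things Omega hasn’t told you; the sketch just makes explicit where the missing information would have to go.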