What I mean by “sincerely” is just that I’m not lying when I assert it.
And, yes, this presumes that X isn’t changing F.
I wasn’t trying to be sneaky; my intention was simply to confirm that you believe F(Wa+X)>F(Wa) implies F(O(Wa+X))<F(O(Wa)), and that I hadn’t misunderstood something.
And, further, to confirm that you believe that if F(W) gives the utility of a world-state for some evaluator, then F(O(W)) gives the degree to which that world-state makes that evaluator happy. Or, said more concisely: that H(O(W)) == F(O(W)) for a given observer.
Hm.
So, I agree broadly that F(Wa+X)>F(Wa) implies F(O(Wa+X))<F(O(Wa)). (Although a caveat: it’s certainly possible to come up with combinations of F() and O() for which it isn’t true, so this is more of an evidentiary implication than a logical one. But I think that’s beside our purpose here.)
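For concreteness, here is the kind of counterexample I have in mind: a toy sketch in Python, with F, O, Wa, and X all invented purely for illustration. If observation is perfect (O is just the identity), the implication simply fails:

```python
# Toy counterexample (hypothetical F and O, chosen only to show the
# implication isn't logically forced): let observation be perfect.

def O(w):
    # The observer sees the world-state exactly as it is.
    return w

def F(w):
    # Utility: count the things present in the world-state.
    return len(w)

Wa = {"sunshine"}        # some base world-state
WaX = Wa | {"X"}         # the same world-state with X added, i.e. Wa+X

assert F(WaX) > F(Wa)                 # F(Wa+X) > F(Wa) holds...
assert not (F(O(WaX)) < F(O(Wa)))     # ...but F(O(Wa+X)) < F(O(Wa)) does not,
                                      # since O changes nothing here.
```

So whatever force the implication has comes from substantive assumptions about how O loses or distorts information, not from the formalism itself. But, again, I don't think that matters for our purposes.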
H(O(W)) = F(O(W)), though, seems entirely unjustified to me. I mean, it might be true, sure, just as it might be true that F(O(W)) is necessarily equal to various other things. But I see no reason to believe it; it feels to me like an assertion pulled out of thin air.
Of course, I can’t really have any counterevidence, the way the claim is structured.
I mean, I’ve certainly had the experience of changing my mind about whether X makes the world better, even though observing X continues to make me equally happy—that is, the experience of having F(Wa+X) - F(Wa) change while H(O(Wa+X)) - H(O(Wa)) stays the same—which suggests to me that F() and H() are different functions… but you would presumably just say that I’m mistaken about one or both of those things. Which is certainly possible: I am far from incorrigible about what makes me happy, and I don’t entirely understand what I believe makes the world better.
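To make that concrete, here is a toy sketch (every function below is made up for illustration): “changing my mind about whether X makes the world better” is modeled as swapping F_before for F_after, while what I observe (O) and how happy observing it makes me (H) stay fixed.

```python
# Toy sketch: changing one's mind changes F, but not O or H.

def O(w):
    # Awareness: here, the observer sees everything.
    return w

def H(obs):
    # Happiness from an observation: seeing X delights me, before and after.
    return 1 if "X" in obs else 0

def F_before(w):
    # Utility, back when I thought X improved the world.
    return 1 if "X" in w else 0

def F_after(w):
    # Utility, after I changed my mind about X.
    return -1 if "X" in w else 0

Wa = frozenset()            # base world-state
WaX = frozenset({"X"})      # the same world-state with X added, i.e. Wa+X

# F(Wa+X) - F(Wa) changes when I change my mind...
print(F_before(WaX) - F_before(Wa))   #  1
print(F_after(WaX) - F_after(Wa))     # -1
# ...while H(O(Wa+X)) - H(O(Wa)) stays the same throughout:
print(H(O(WaX)) - H(O(Wa)))           #  1
```

In a setup like this, the F-difference flips sign while the H-difference doesn’t move, which is all I need for F() and H() to be distinct functions.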
I think I have to leave it there. You are asserting an identity that seems unjustified to me, and I have no compelling reason to believe that it’s true, but also no definitive grounds for declaring it false.
I believe you to be sincere when you say
“I’ve certainly had the experience of changing my mind about whether X makes the world better, even though observing X continues to make me equally happy—that is, the experience of having F(Wa+X) - F(Wa) change while H(O(Wa+X)) - H(O(Wa)) stays the same”
but I can’t imagine experiencing that. If the utility of an outcome goes down, it seems my happiness from observing that outcome must necessarily go down as well. This discrepancy causes me to believe there is a low-level difference between what you consider happiness and what I consider happiness, but I can’t explain mine any further than I already have.
I don’t know how else to say it, but I don’t feel I’m actually making that assertion. I’m just saying: “By my understanding of hedony=H(x), awareness=O(x), and utility=F(x), I don’t see any possible situation where H(W) =/= F(O(W)). If they’re indistinguishable, wouldn’t it make sense to say they’re the same thing?”
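Or, as a minimal sketch of what I mean (toy functions, purely illustrative): if hedony just is the utility of whatever one is aware of, then H and F(O(...)) can never come apart, because one is defined as the other.

```python
# Toy sketch of the identity I'm asserting (hypothetical functions throughout):
# hedony is *defined* as the utility of what the observer is aware of.

def O(w):
    # Awareness: whatever part of the world-state reaches the observer.
    return w

def F(x):
    # Utility of a state (or of an observation of one).
    return len(x)

def H(w):
    # Hedony: on this understanding, nothing over and above F applied to O(w).
    return F(O(w))

for w in [set(), {"X"}, {"X", "Y"}]:
    assert H(w) == F(O(w))   # indistinguishable by construction
```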
I agree that if two things are indistinguishable in principle, it makes sense to use the same label for both.
It is not nearly as clear to me that “what makes me happy” and “what makes the world better” are indistinguishable sets as it seems to be to you, so I am not as comfortable using the same label for both sets as you seem to be.
You may be right that we don’t use “happiness” to refer to the same things. I’m not really sure how to explore that further; what I use “happiness” to refer to is an experiential state I don’t know how to convey more precisely without in effect simply listing synonyms. (And we’re getting perilously close to “what if what I call ‘red’ is what you call ‘green’?” territory, here.)
Without a much more precise way of describing patterns of neuron-fire, I don’t think either of us can describe happiness more than we have so far. Having discussed the reactions in depth, though, I think we can reasonably conclude that, whatever they are, they’re not the same, which answers at least part of my initial question.
Thanks!