Dopamine levels and endorphin levels are not utility functions. At best they are “hedons”, and even that’s not indisputably clear—there’s more to happiness than that.
A utility function is itself not something physical. It is one (often mathematically convenient) way of summarizing an agent’s preferences in making decisions. These preferences are of course physical. Note, for instance, that everything observable about the agent’s choices is completely invariant under arbitrary positive affine transformations of the utility function. Even assuming our preferences can be described by a utility function (i.e. that they are consistent, which we know they are not), it is clear that imposing an upper bound on that function would no longer yield the same decisions as the unbounded function.
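A minimal sketch of those two claims, with made-up numbers: choices that maximize expected utility are unchanged by a positive affine rescaling of the utility function, but imposing an upper bound on it can reverse them.

```python
# Toy lotteries, purely illustrative: each is a list of (probability, utility) pairs.
lotteries = {
    "sure_thing": [(1.0, 10.0)],
    "gamble":     [(0.5, 0.0), (0.5, 30.0)],
}

def expected_utility(lottery, transform=lambda u: u):
    return sum(p * transform(u) for p, u in lottery)

def best_choice(transform=lambda u: u):
    return max(lotteries, key=lambda name: expected_utility(lotteries[name], transform))

print(best_choice())                        # 'gamble' (expected utility 15 vs. 10)
print(best_choice(lambda u: 2 * u + 5))     # still 'gamble': positive affine rescaling changes nothing
print(best_choice(lambda u: min(u, 12.0)))  # 'sure_thing': the cap reverses the choice
```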
Dopamine levels and endorphin levels are not utility functions. At best they are “hedons”, and even that’s not indisputably clear—there’s more to happiness than that.
Well, the brain represents utility somehow, as part of its operation. It rather obviously compares expected utilities of future states.
I didn’t say dopamine levels and endorphin levels were utility functions. The idea is that they are part of the brain’s representation of expected utility—and utility.
Well, the brain represents utility somehow, as part of its operation. It rather obviously compares expected utilities of future states.
No. You’ve entirely missed my point. The brain makes decisions. Saying it does so via representing things as utilities is a radical and unsupported assumption. It can be useful to model people as making decisions according to a utility function, as this can compress our description of their behavior, often with only small distortions. But it’s still just a model. Unboundedness in our model of a decision maker has nothing to do with unboundedness in the decision maker we are modeling. This is a basic map/territory confusion (or perhaps an advanced one: our map of their map of the territory is not the same as their map of the territory).
Not exactly an assumption. We can see—more-or-less—how the fundamental reward systems in the brain work. They use neurotransmitter concentrations and firing frequencies to represent desire and aversion—and pleasure and pain. These are the physical representation of utility, the brain’s equivalent of money. Neurotransmitter concentrations and neuron firing frequencies don’t shoot off to infinity. They saturate—resulting in pleasure and pain saturation points.
I see little indication that the brain is in the business of assigning absolute utilities at all. Things like scope insensitivity seem to suggest that it only assigns relative utilities, comparing to a context-dependent default.
They are feedback signals, certainly. Every system with any degree of intelligence must have those. But “feedback signal”, “utility”, and “equivalent of money” are not synonyms. To say a system’s feedback signals are equivalent to money is to make certain substantive claims about its design (e.g. some, but not most, AI programs have been designed with those properties). To say they are utility measurements is to make certain other substantive claims about its design. Neither of those claims is true of the human brain in general.
You argued that human utility is bounded because dopamine is bounded, and dopamine is part of how utility is represented. Yes? The obvious objection to your argument is that the representation could in principle take one of many different forms, some of which allow us to represent something unbounded by means of something bounded. If that were the case, then the boundedness of dopamine would not imply the boundedness of utility.
If you want an example of how this representation might be done, here’s one: if you prefer state A to state B, this is (hypothetically) represented by the fact that if you move from state B to state A your dopamine level is raised temporarily—and after some interval, it drops again to a default level. So, every time you move from a less preferred state to a more preferred state, i.e. from lower utility to higher utility, your dopamine level is raised temporarily and then drops back. The opposite happens if you move from higher utility to lower utility.
Though I have offered this as a hypothetical, from the little bit that I’ve read in the so-called “happiness” literature, something like this seems to be what actually goes on. If you receive good fortune, you get especially happy for a bit, and then you go back to a default level of happiness. And conversely, if you suffer some misfortune, you become unhappy for a bit, and then you go back to a default level of happiness.
Unfortunately, a lot of people seem to draw what I think is a perverse lesson from this phenomenon, which is that good and bad fortune do not really matter, because no matter what happens to us, in the end we find ourselves at the default level of happiness. In my view, utility should not be confused with happiness. If a man becomes rich and, in the end, finds himself no happier than before, I don’t think that that is a valid argument against getting rich. Rather, temporary increases and decreases in happiness are how our brains mark permanent increases and decreases in utility. That the happiness returns to default does not mean that utility returns to default.
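For concreteness, here is one toy way the hypothetical encoding described above could work. It is purely illustrative (my own construction, assuming nothing about actual neurochemistry): the underlying level is free to grow without bound, while the transient signal that marks each change stays within fixed limits and decays back to a default.

```python
import math

class BoundedChangeSignal:
    """Toy sketch: a bounded, transient signal that marks changes in an unbounded level."""

    def __init__(self, decay=0.5):
        self.level = 0.0    # the underlying "utility"; free to grow without bound
        self.signal = 0.0   # the transient "happiness" signal; always stays in (-1, 1)
        self.decay = decay

    def move_to(self, new_level):
        # Spike in proportion to the change, squashed into (-1, 1).
        self.signal = math.tanh(new_level - self.level)
        self.level = new_level

    def tick(self):
        # Between events the signal relaxes back toward its default of 0.
        self.signal *= self.decay

tracker = BoundedChangeSignal()
for level in [1.0, 5.0, 1000.0, 999.0]:  # the level keeps climbing; the signal never leaves (-1, 1)
    tracker.move_to(level)
    print(round(tracker.signal, 3))
    tracker.tick()
    tracker.tick()
```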
You argued that human utility is bounded because dopamine is bounded, and dopamine is part of how utility is represented. Yes?
No. What I actually said was:
The idea is that they [Dopamine levels and endorphin levels] are part of the brain’s representation of expected utility—and utility.
I do think an unbounded human-equivalent utility function is not supported by any evidence. I reckon Hutter’s [0,1] utility would be able to simulate humans just fine on digital hardware.
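As a sketch of what a [0, 1]-valued utility might look like: any strictly monotone map into [0, 1] preserves the ranking of individual outcomes, though, as noted elsewhere in this thread, it need not preserve choices between gambles. The logistic squashing used here is an illustrative choice of mine, not anything taken from Hutter.

```python
import math

def squash(u):
    """Map an unbounded utility value into (0, 1) while preserving order (illustrative choice)."""
    return 1.0 / (1.0 + math.exp(-u))

outcomes = [-100.0, -1.0, 0.0, 1.0, 100.0]   # made-up unbounded utilities
squashed = [squash(u) for u in outcomes]
assert squashed == sorted(squashed)          # the ranking of outcomes is unchanged
print([round(v, 4) for v in squashed])       # [0.0, 0.2689, 0.5, 0.7311, 1.0]
```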
I didn’t say that you equated utility with dopamine. [edit: I was replying to an earlier draft of your comment. As of now you’ve changed the comment to delete the claim that I had said that you equated utility with dopamine, though you retained an unexplained “no”.] I said that you said that dopamine is part of how utility is represented. Your quote appears to confirm my statement. You quote yourself saying “[Dopamine levels and endorphin levels] are part of the brain’s representation of expected utility—and utility.” Among other things, this says that dopamine is part of the brain’s representation of utility. Which is virtually word for word what I said you said, the main difference being that instead of saying “the brain’s representation of utility”, I said, “how utility is represented”. I don’t see any real difference here—just slightly different wording.
Moreover, the key statement that I am basing my interpretation on is not that, but this:
I don’t think the human brain’s equivalent to a utility function is unbounded. Dopamine levels and endorphin levels are limited—and it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation. So, this is all very hypothetical.
Here you are arguing that the human brain’s equivalent to a utility function is bounded, and your apparent argument for this is that dopamine and endorphin levels are limited.
I argued that the limitation of dopamine and endorphin levels does not imply that the human brain’s equivalent to a utility function is bounded. You have not addressed my argument, only claimed—incorrectly, it would appear—that I had misstated your argument.
I note that your characterisation of my argument is a very, very poor model of all the times I talked about the finite nature of the human brain on this thread.
You are seriously referring me to your entire oeuvre as a supposed explanation of what you meant in the specific comment that I was replying to?
I was pointing out that there was more to the arguments I have given than what you said. The statement you used to characterise my position was a false syllogism—but it doesn’t represent my thinking on the topic very well.