You argued that human utility is bounded because dopamine is bounded, and dopamine is part of how utility is represented. Yes?
No. What I actually said was:
The idea is that they [Dopamine levels and endorphin levels] are part of the brain’s representation of expected utility—and utility.
I do think an unbounded human-equivalent utility function is not supported by any evidence. I reckon Hutter’s [0,1] utility would be able to simulate humans just fine on digital hardware.
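A minimal Python sketch (mine, not from the thread) of the point about a bounded [0, 1] utility: any finite set of utility values can be squashed monotonically into (0, 1), so the bounded version still picks out the same best option. The outcome names and numbers below are invented for illustration; comparisons between gambles would need more care, since only affine rescalings preserve expected-utility orderings.

```python
# Illustrative sketch: a monotone squash of arbitrary utility values into (0, 1).
# The outcomes and numbers are invented; the point is only that a bounded range
# can encode the same ordering, so the same option comes out on top.

def squash(u: float) -> float:
    """Monotone map from the whole real line into (0, 1)."""
    return 0.5 + u / (2 * (1 + abs(u)))

outcomes = {"coffee": 3.0, "tea": 1.0, "nothing": -2.0}      # unbounded-style utilities
bounded = {name: squash(u) for name, u in outcomes.items()}  # same ordering, now in (0, 1)

assert max(outcomes, key=outcomes.get) == max(bounded, key=bounded.get)
print(bounded)  # {'coffee': 0.875, 'tea': 0.75, 'nothing': 0.1666...}
```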
I didn’t say that you equated utility with dopamine. [edit: I was replying to an earlier draft of your comment. As of now you’ve changed the comment to delete the claim that I had said you equated utility with dopamine, though you retained an unexplained “no”.] I said that you said dopamine is part of how utility is represented, and your quote appears to confirm that. You quote yourself saying “[Dopamine levels and endorphin levels] are part of the brain’s representation of expected utility—and utility.” Among other things, this says that dopamine is part of the brain’s representation of utility, which is virtually word for word what I said you said; the main difference is that instead of “the brain’s representation of utility” I said “how utility is represented”. I don’t see any real difference there, just slightly different wording.
Moreover, the key statement that I am basing my interpretation on is not that, but this:
I don’t think the human brain’s equivalent to a utility function is unbounded. Dopamine levels and endorphin levels are limited—and it seems tremendously unlikely that the brain deals with infinities in its usual mode of operation. So, this is all very hypothetical.
Here you are arguing that the human brain’s equivalent to a utility function is bounded, and your apparent argument for this is that dopamine and endorphin levels are limited.
I argued that the limitation of dopamine and endorphin levels does not imply that the human brain’s equivalent to a utility function is bounded. You have not addressed my argument, only claimed—incorrectly, it would appear—that I had misstated your argument.
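To make the reasoning here concrete, a small sketch (mine, with an invented cap called SIGNAL_MAX): a physically limited signal can still encode arbitrarily large values through a nonlinear, invertible code, so a bound on the carrier does not by itself bound the quantity being represented. Finite precision limits what can be distinguished in practice, but that is a separate constraint from the cap.

```python
# Illustrative sketch: a signal capped at SIGNAL_MAX (an invented number) can
# still carry arbitrarily large values, because the encoding is nonlinear and
# invertible. A bounded carrier therefore does not imply a bounded quantity.
import math

SIGNAL_MAX = 100.0  # hypothetical cap, e.g. a maximum firing rate

def encode(value: float) -> float:
    """Map [0, inf) into [0, SIGNAL_MAX): the carrier never exceeds the cap."""
    return SIGNAL_MAX * (1.0 - math.exp(-value / 1000.0))

def decode(signal: float) -> float:
    """Invert the encoding (exactly, up to floating-point error)."""
    return -1000.0 * math.log(1.0 - signal / SIGNAL_MAX)

for value in (1.0, 50.0, 5000.0):
    signal = encode(value)
    assert signal < SIGNAL_MAX                    # the carrier stays bounded
    assert abs(decode(signal) - value) < 1e-6     # the value is still recoverable
```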
I note that your characterisation of my argument is a very poor model of all the times I have talked about the finite nature of the human brain on this thread.
You are seriously referring me to your entire oeuvre as a supposed explanation of what you meant in the specific comment that I was replying to?
I was pointing out that there was more to the arguments I have given than what you said. The statement you used to characterise my position was a false syllogism, and in any case it doesn’t represent my thinking on the topic very well.