It would be convenient if we could show that all O-maximizers have some characteristic behavior pattern, as we do with reward maximizers in Appendix B. We cannot do this, though, because the set of O-maximizers coincides with the set of all agents; any agent can be written in O-maximizer form.

To prove this, consider an agent A whose behavior is specified by y_k = A(yx_{<k}). Trivially, we can construct an O-maximizer whose utility is 1 if each y_n in its interaction history is equal to A(yx_{<n}), and 0 otherwise. This O-maximizer will maximize its utility by behaving as A does at every time n. In this way, any agent can be rewritten as an O-maximizer.
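To make the construction concrete, here is a minimal Python sketch. The interfaces (`make_indicator_utility`, `o_maximizer_action`, the toy agent `A`) are illustrative assumptions, not anything from the paper. It builds the indicator utility described above and checks that maximizing it reproduces an arbitrary agent's behavior; the full O-maximizer would take an expectation over an environment model, but because this utility depends only on the agent's own past actions, a one-step greedy choice is enough for the demonstration.

```python
def make_indicator_utility(agent):
    """Return a utility over interaction histories that is 1 iff every
    action y_n in the history equals agent(yx_{<n}), and 0 otherwise."""
    def utility(history):  # history: list of (action, observation) pairs
        for n, (action_n, _) in enumerate(history):
            if action_n != agent(history[:n]):
                return 0
        return 1
    return utility

def o_maximizer_action(utility, history, actions):
    """Choose the action that maximizes the utility of the extended history.
    The next observation is not yet known; a placeholder is fine because the
    indicator utility never inspects the final observation."""
    return max(actions, key=lambda y: utility(history + [(y, None)]))

# Toy demonstration: agent A echoes its most recent observation (default 0).
def A(history):
    return history[-1][1] if history else 0

U = make_indicator_utility(A)
history = []
for observation in [0, 1, 1, 0]:
    y = o_maximizer_action(U, history, actions=[0, 1])
    assert y == A(history)            # the O-maximizer acts exactly as A would
    history.append((y, observation))  # the environment supplies the observation
```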
Thanks for your thoughtful comment.
I agree that it’s unclear that it makes sense to talk about humans having utility functions; my use of the term was more a manner of speaking than anything else.
It sounds like you’re going with something like Counterargument #5, with the Dunbar number determining the point at which your concern for others caps off, augmented by some desire to “be a good citizen n’stuff”.

Something similar may be true of me, but I’m not sure. I know that I derive a lot of satisfaction from feeling like I’m making the world a better place, and I’m uncomfortable with the idea that I don’t care about people whom I don’t know (in light of my abstract belief in the space- and time-independence of moral value); but maybe the intensity of the relevant feelings is diminished enough, once the magnitude of the uncertainty becomes huge, that other interests predominate.

I feel that if I could prove that course X maximizes expected utility, my interest in pursuing it would increase dramatically (independently of how small the probabilities are, and of the possibility of doing more harm than good), whereas having a distinct sense that I’ll probably change my mind about whether pursuing course X was a good idea significantly decreases my interest in pursuing it. I find it difficult to determine whether this reflects my “utility function” or whether there’s a logical argument from utilitarianism against pursuing courses that one will probably regret (e.g. probable burnout and disillusionment repelling potentially utilitarian bystanders).
Great Adam Smith quotation; I’ve seen it before, but it’s good to have a reference.
Obligatory Overcoming Bias link: Bostrom and Ord’s parliamentary model for normative uncertainty/mixed motivations.
They do have them—in this sense: