Because valuing others’ subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.
If one posits that by working together we can achieve a utopia where each individual’s values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others’ values, would it not follow that it’s in everyone’s best interests for everyone to build and follow such models?
The free-loader problem is an obvious downside of the above simplification, but that and other issues don’t seem to be part of the present discussion.
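As a toy sketch of that trade-off, take the standard prisoner’s-dilemma payoffs (the numbers below are purely illustrative, nothing more than the usual textbook values):

    # One-shot prisoner's dilemma: each player either cooperates (acts as if
    # it valued the other's payoff) or defects (acts purely selfishly).
    # Standard ordering: temptation > reward > punishment > sucker.
    T, R, P, S = 5, 3, 1, 0

    def payoff(me, other):
        """My payoff given my move and the other player's move ('C' or 'D')."""
        if me == "C":
            return R if other == "C" else S
        return T if other == "C" else P

    print(payoff("C", "C"))  # 3 -- everyone models the other's values: all get R
    print(payoff("D", "D"))  # 1 -- no one does: all get P
    print(payoff("D", "C"))  # 5 -- a lone defector among cooperators gets T

Mutual model-building beats mutual selfishness for everyone, which is the sense in which it is a winning strategy; the last line is exactly the free-loader temptation mentioned above.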
Because valuing others’ subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.
That doesn’t make them beholden (obligated). They can opt not to play that game. They can opt not to value winning.
If one posits that by working together we can achieve a utopia where each individual’s values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others’ values, would it not follow that it’s in everyone’s best interests for everyone to build and follow such models?
Only if they satisfy individuals better than behaving selfishly would. A utopia that is better on average or in total need not be better for everyone individually.
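A minimal numeric illustration of that gap, with invented utilities for three agents:

    status_quo = [5, 5, 5]    # hypothetical utilities under the status quo
    utopia     = [10, 10, 2]  # hypothetical utilities under the proposed utopia

    print(sum(utopia) / 3 > sum(status_quo) / 3)            # True: better on average
    print(all(u >= s for u, s in zip(utopia, status_quo)))  # False: the third agent is worse off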
Could you taboo “beholden” in that first sentence? I’m not sure the “feeling of moral duty born from guilt” I associate with the word “obligated” is quite what you have in mind.
They can opt not to play that game. They can opt not to value winning.
Within context, you cannot opt not to value winning. If you wanted to “not win”, and the preferred course of action was to “not win”, this merely means that you had a hidden utility function that assigned greater utility to a lower apparent payoff within the game.
In other words, you didn’t truly value what you thought you valued, but some other thing instead, and you have in fact won at your objective of not winning that sub-game. The decision to play the game or not is itself a separate, higher-tier game, which you won by deciding to not-win the lower-tier game.
A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.
(sorry if I’m arguing a bit by definition with the utopia thing, but my premise was that the utopia brings each individual agent’s utility to its maximum possible value if there exists a maximum for that agent’s function)
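To make the “hidden function” reading concrete, here is a sketch with made-up numbers: the agent’s revealed choice still maximizes some utility function, just not the in-game one.

    # Apparent (in-game) utilities.
    game_utility = {"win": 1.0, "lose": 0.0}

    # Higher-tier preferences, e.g. for not playing competitively at all.
    meta_utility = {"win": 0.0, "lose": 2.0}

    def total_utility(outcome):
        return game_utility[outcome] + meta_utility[outcome]

    # Choosing to "lose" the sub-game is still a maximizing move overall:
    # the agent wins the higher-tier game of opting out.
    print(max(["win", "lose"], key=total_utility))  # lose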
I wouldn’t let my values be changed if doing so would thwart my current values. I think you’re contending that the utopia would satisfy my current values better than the status quo would, though.
In that case, I would only resist the utopia if I had a deontic prohibition against changing my values (I don’t have very strong ones, but I think they’re in here somewhere, for some things). You would call this a hidden utility function, but I don’t think that adequately models the idea that humans are satisficers and not perfect utilitarians. Deontology is sometimes a way of identifying satisficing conditions for human behavior; in that sense I think it can be a much stronger argument.
Even supposing that we were perfect utilitarians, if I placed more value on maintaining my current values than I do on anything else, I would still reject modifying myself and moving towards your utopia.
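A sketch of that satisficer-plus-deontology picture, with invented options and numbers: the prohibition acts as a hard filter, so no utility score can buy the utopia back in.

    # (label, utility, requires modifying my current values?)
    options = [
        ("utopia",     9, True),
        ("status quo", 5, False),
    ]

    def deontic_satisficer(options, threshold=4):
        """Take the first option that clears the threshold AND respects the
        prohibition on value modification; don't maximize beyond that."""
        for label, utility, changes_values in options:
            if utility >= threshold and not changes_values:
                return label
        return None

    # The utopia scores higher and is considered first, but the constraint
    # rejects it outright; the satisficer settles for the status quo.
    print(deontic_satisficer(options))  # status quo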
Within context, you cannot opt not to value winning. If you wanted to “not win”, and the preferred course of action was to “not win”, this merely means that you had a hidden utility function that assigned greater utility to a lower apparent payoff within the game.
Games emerge where people have things other people value. If someone doesn’t value those sorts of things, they are not going to play the game.
A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.
I don’t see where higher-tier functions come in.
You are assuming that a utopia will maximise everyone’s values individually AND that values diverge.
That’s a tall order.
Do you think the utopia is feasible?
Naw. But even if it were, if I placed a high enough value on maintaining my current values, I wouldn’t modify.