Phil, you’re right that there’s a difference between giving people their mutually unsatisfiable values and giving them the feeling that they’ve been satisfied. But there’s a mechanism missing from this picture:
Even if I wouldn’t want to try running an AI that holds conversations with humans worldwide to convert them to more mutually satisfiable value systems, and even though I don’t want a machine wire-heading everybody into a state of illusory high status, I certainly trust humans to convince other humans to adopt mutually satisfiable values. In fact, I do it all the time. I consider it one of the most proselytism-worthy ideas ever.
So I see your post as describing a very important initiative we should all be taking, as people: convince others to find happiness in positive-sum games :)
(If I were an AI, or even just an I, perhaps you would hence define me as “unFriendly”. If so, okay then. I’m still going to go around convincing people to be better at happiness, rational-human-style.)
“So I see your post as describing a very important initiative we should all be taking, as people: convince others to find happiness in positive-sum games”
It’s an error to assume that human brains are actually wired for zero- or negative-sum games in the first place, as opposed to merely having adaptations that tend to produce such situations. Humans aren’t true maximizers; they’re satisficers. E.g., people don’t seek the best possible mate; they seek the best mate they think they can get.
(Ironically, the greater mobility and choices in our current era often lead to decreased happiness, as our perceptions of what we ought to be able to “get” have increased.)
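To make the maximizer/satisficer distinction concrete, here’s a minimal Python sketch (entirely my own illustration; the scoring function and aspiration threshold are made-up stand-ins): a true maximizer has to examine every option, while a satisficer stops at the first option that clears its aspiration level. Raising that aspiration level, per the mobility point above, makes coming away empty-handed more likely.

```python
import random

def maximize(options, score):
    """A true maximizer: examines every option, takes the best."""
    return max(options, key=score)

def satisfice(options, score, aspiration):
    """A satisficer: takes the first option that clears its aspiration level."""
    for option in options:
        if score(option) >= aspiration:
            return option
    return None  # nothing was "good enough"

# Toy example: 20 options scored 0..1 (say, perceived mate quality).
random.seed(0)
options = [random.random() for _ in range(20)]

print(maximize(options, score=lambda x: x))   # best of all 20, after checking each
print(satisfice(options, lambda x: x, 0.7))   # first option scoring >= 0.7
print(satisfice(options, lambda x: x, 0.99))  # raised aspiration: likely None
```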
Anyway, ISTM that any sort of monomaniacal maximizing behavior (e.g., OCD or paranoia) is indicative of an unhealthy brain. Simple game theory suggests that weighting one value so much more heavily than all the others is unlikely to be an evolutionarily stable strategy.
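For the game-theory intuition, the classic hawk-dove game is a handy toy model (my own illustration, with made-up payoff numbers, not anything from Phil’s post): a strategy that values winning the contested resource far above avoiding injury plays “always escalate”, and once injury costs exceed the resource’s value, a population of such monomaniacs is invadable by more moderate players, so the extreme strategy isn’t evolutionarily stable.

```python
# Hawk-dove payoffs (standard game-theory toy; V and C are made up here):
# V = value of the contested resource, C = cost of injury.
V, C = 2.0, 10.0

def payoff(me, other):
    if me == "hawk" and other == "hawk":
        return (V - C) / 2          # fight: win half the time, risk injury
    if me == "hawk" and other == "dove":
        return V                    # dove retreats, hawk takes everything
    if me == "dove" and other == "hawk":
        return 0.0                  # retreat empty-handed
    return V / 2                    # two doves share peacefully

# In an all-hawk population, a rare dove mutant does strictly better,
# so "always escalate" (one value dominating all others) is not an ESS:
print(payoff("hawk", "hawk"))  # -4.0: hawks fare badly against each other
print(payoff("dove", "hawk"))  #  0.0: a lone dove outperforms them
```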