Thank you for saying this. I have tried several times to explain something like it in a post, but I don’t think I have the writing skill to convey effectively how deeply distressed I am about these scenarios. It’s essential to my ability to enjoy life that I be useful, have political capital, be able to effect meaningful change throughout the world, compete in status games with others, participate in an economy of other people like me, and have natural and productive relationships with unartificial people. I don’t understand at all how I’m supposed to be excited by the “good OpenAI ending” where every facet of human skill and interaction gets slowly commoditized; that ending seems strictly worse to me in a lot of ways than just dying suddenly in an exploding ball of fire.
be useful, have political capital, be able to effect meaningful change throughout the world, compete in status games with others, participate in an economy of other people like me
How large a part of this is zero-sum games, where the part that makes you happy is that you are winning? Would the person who is losing feel the same? What is the good ending for them?
WRT status games: I enjoy playing such games more when everybody agrees to the terms of the game, starts on relatively even footing, and there are resets throughout. “Having more prestige” is great, but it’s more important that I get to interact with other people in a meaningful way like that at all. The respect and prestige people usually associate with winning status games are also not inherently zero-sum: it’s possible to respect people even when they lose.
WRT political capital: Maybe it would be clearer if I said that I want to live in a world where humans have agency, where there’s a History that feels like it’s being shaped by actual people rather than by Brownian motion, and where the path to power is not always to subjugate your entire life and psychology to a moral maze. While most people won’t outright endorse things like Prigozhin’s coup, because they realize it might end up doing a lot more harm than good, they are obviously viscerally excited by the possibility that outsiders can win through gutsy action, and get depressed when they realize that’s untrue. Contrast this with the default scenario of “some coalition of politicians and AGI lab heads and lobbyists decide how everything is going to be forever”.
WRT everything else: Those things aren’t zero-sum at all. My laptop is useful and so am I. A laborer in Egypt is still participating in the economy.
Thank you! I agree. Things called “zero-sum” often become something else when we also consider their impact on third parties, i.e. when we model them as three-player games (Player 1, Player 2, World). It may be that the actions of Player 1 negate the actions of Player 2 from their relative perspectives (if we are in a running competition and I start running faster, I get an advantage, but if you also start running faster, my advantage is lost), but both work in the same direction from the perspective of the World (if both of us run faster, the competition is more interesting to watch for the audience).
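To make this concrete, here is a minimal sketch in Python of that three-player framing, with made-up payoffs (the strategy names and numbers are invented purely for illustration): between the two runners every outcome sums to zero, but once the audience is counted as a third player, mutual effort is positive-sum.

```python
# Toy three-player payoff table for the running-competition example.
# payoffs[(a_strategy, b_strategy)] = (runner_a, runner_b, world)
# These numbers are illustrative assumptions, not taken from the thread.
payoffs = {
    ("normal", "normal"): (0, 0, 0),
    ("faster", "normal"): (1, -1, 1),   # A's gain is exactly B's loss...
    ("normal", "faster"): (-1, 1, 1),   # ...and vice versa
    ("faster", "faster"): (0, 0, 2),    # advantages cancel, spectacle improves
}

for strategies, (a, b, world) in payoffs.items():
    # The runners' payoffs always sum to zero (zero-sum between them),
    # while the total including the World grows with effort.
    print(strategies, "runners:", a + b, "world:", world, "total:", a + b + world)
```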
In some status games the effect on the third party is mostly “resources are wasted”. (I try to buy a larger gold chain, you try to buy a larger gold chain, resources are wasted on mining gold and making chains.)
But if we compete at producing value for the third party, whether it is making jokes, or signaling wealth by sending money to charity, the effect on the third party is the value produced. Such games are good! If we could make producing value for the third party the only status game in town, the world would probably be a much nicer place.
That said, the concept of “useful” seems intrinsically related to “scarcity”. The laborer in Egypt does something that wouldn’t get done otherwise (or it would get done anyway, but at the cost of something else not done). If we get to see a positive singularity, some kind of future where all important things get done by a Friendly AI, then only the unimportant things are left for us. For example, the AI will provide healthcare, and we will play computer games. (I wanted to say “…and we will make art or tell jokes”, but actually the AI will be able to do that much better; unless we choose to ignore that, and make sure that no human in our group is cheating by asking the AI for help.)
The possibility of a coup is, of course, a two-sided coin. If things can get surprisingly better, they can also get surprisingly worse. If all possibilities are open, then so is the possibility of Hell. Someone will try to find a way to sacrifice everything to Moloch in return for becoming the ruler of that Hell. So other people will have to spend a lot of time trying to prevent that, and we get a sword of Damocles hanging over our heads.
The possibility of a coup is, of course, a two-sided coin. If things can get surprisingly better, they can also get surprisingly worse.
I have long wanted a society where there is a “constitutional monarchy” position that is high status and a magnet for interesting political skirmishes but doesn’t have much control over public policy, and alongside that a “head of government” who is a boring accountant type and by law doesn’t get invited to any of the interesting parties or fly around in a fancy jet.
If you died and went to a Heaven run by a genuinely benevolent and omnipotent god, would it be impossible for you to enjoy yourself in it?

It would be possible. “Fun Theory” describes one such environment the benevolent god could create.
How distressed would you be if the “good ending” were opt-in and existed somewhere far away from you? I’ve explored the future and have found one version that I think would satisfy your desire, but I’m asking to get your perspective. Does it matter whether there are super-intelligent AIs if they leave our existing civilization alone, create a new one out on the fringes (the Arctic, Antarctica, or just out in space), and invite any humans to come along and join them without coercion? If you need more details, they’re available at the Opt-In Revolution, in narrative form.
It’s essential to my ability to enjoy life
This assumes that we’ll never have the technology to change our brain’s wiring to our liking? If we live in the post-scarcity utopia, why won’t you be able to just go change who you are as a person so that you’ll fully enjoy the new world?
https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence
But you yourself also wrote, a couple of years ago:
if aligned AGI gets here I will just tell it to reconfigure my brain not to feel bored, instead of trying to reconfigure the entire universe in an attempt to make monkey brain compatible with it. I sorta consider that preference a lucky fact about myself, which will allow me to experience significantly more positive and exotic emotions throughout the far future, if it goes well, than the people who insist they must only feel satisfied after literally eating hamburgers or reading jokes they haven’t read before.
And indeed, when talking specifically about the Fun Theory sequence itself, you said:
I think Eliezer just straight up tends not to acknowledge that people sometimes genuinely care about their internal experiences, independent of the outside world, terminally. Certainly, there are people who care about things that are not that, but Eliezer often writes as if people can’t care about the qualia—that they must value video games or science instead of the pleasure derived from video games or science.
Do you no longer endorse this?