I skimmed some of your posts, and I think we agree that rules are arbitrary (and thus axioms rather than something which can be derived objectively) and that rules are fundamentally relative (which renders “objective truth” nonsense; we don’t notice this because we’re so used to the context we’re in that we deem it to be reality).
Preferences are axioms: arbitrary starting points. We merely have similar preferences because we share a similar human nature. Things like “good”, “bad”, and even “evil” and “suffering” are entirely human concepts. You can formalize them and describe them in logical symbols so that they appear to lie outside the scope of humanity, but those symbols are still constructed (created, not discovered), and anything (except maybe contradictions) can be constructed, so nothing is proven (or even said!) about reality.
I don’t agree entirely with everything in your sequence; I think it still appears a little naive. It’s true that we don’t know what we want, but I think the truth is much worse than that. I will explain my own view here, but another user came up with a similar idea here: The point of a game is not to win, and you shouldn’t even pretend that it is
What we like is the feeling of progress towards goals. We like fixing problems, just as we like playing games. Every time a problem is fixed, we need a new problem to focus on. And a game is no fun if it’s too easy, so what we really want is for reality to resist our attempts to win: not so much that we fail, but not so little that winning feels trivial.
In other words, we’re not building AI to help people; we’re doing it because it’s a difficult, exciting, and rewarding game. If preventing human suffering were easy, we wouldn’t value it very much, since value comes from scarcity. Outsourcing humanity to robots misses the entire point of life, and to the degree that robots are “better” and less flawed than us, they’re less human.
It doesn’t matter even if we manage to create utopia, for doing so stops it from being a utopia. It doesn’t matter how good we make reality; the entire point lies in the tension between reality as it is and reality as we want it to be. This tension is what gives our tools their value in the first place. I believe that human well-being requires everything we’re currently destroying, and while you could bioengineer humans to be happy all the time, the result would be that humans (as we know them now) cease to exist, which would be just as meaningless as building the experience machine.
Buddhists are nihilistic in the sense that they seek to escape life. I think that building an AI is nihilistic in the sense that you ruin life by solving it. Both approaches miss the point entirely. It’s like using cheat codes in a video game. Life is not happy or meaningful if you get rid of suffering; even rules and their enforcement conflict with life. (For similar reasons that solved games cease to be games: two tic-tac-toe experts playing against each other will not feel like they’re playing a game.)
Sorry for the lengthy reply—I tried to keep it brief. And I don’t blame you if you consider all of this to be the rambling of a madman (maybe it is). But if you read The Fun Theory Sequence, you might find that the ideal human life looks a lot like what we already have, and that we’re ruining life from a psychological standpoint (e.g. through a reduction of agency) through technological “improvement”.
The ideal human life may be close to what you have, but the vast majority of humanity is and has been living in ways they’d really prefer not to. And I’d prefer not to get old and suffer and die before I want to. We will need new challenges if we create utopia, but the point of fun theory is that it’s fairly easy to create fun challenges.
I also prefer things to be different, but this is how it’s supposed to be. If we play a game against each other, I will actually prefer it if you try to prevent my victory rather than let me win. Doesn’t this reveal that we prefer the game itself over the victory? It’s the same with life.
Of course, I’d like it if my boss said “You don’t have to work any more, and we’re going to pay you $1,000 a day anyway”. But that’s only because it would let me play better games than the rat race. Whatever I do instead, I will need something pushing back against me in order to enjoy life.
I’m willing to bet that suicide is more common today than it was in the stone age, despite the common belief that life is much better now. I don’t think stone-age people required nearly as much encouragement to survive as we do. I think we have an attitude problem today.
By the way, if you were both human and god at the same time, would you be able to prevent yourself from cheating? Given utopia-creating AI, would you actually struggle with these challenges and see a value in them? You could cheat at any time, and so could anyone competing with you. You will also have to live with the belief that it’s just a game you made up and therefore not “real”.
Ever played a good game without challenges and problems to solve? Ever read good fiction without adversity and villains? Our lives are stories and games, and when the problem is solved, the book and the game are over; there’s nothing worth writing about anymore. Victory is dangerous. The worst thing that can happen to a society is that somebody wins the game, i.e. gets absolute power over everyone else. Monopoly shows how the game effectively ends there. Dictatorships, tyranny, AI takeover, corruption, monopolies of power—they’re all terrible because they’re states in which there is one winner and the rest are losers. The game has to continue for as long as possible; both victory and defeat are death states in a sense.
Even my studies are a kind of game, and the difficulty of the topics posted on this website is the resistance. Discovery is fun. If we make an AI which can think better than us, this hobby of ours loses its value; the game becomes meaningless. The people trying to “save” us from life have already removed half of it. Human agency is mostly gone, mystery is mostly gone, and there are far too many rules. Many people agree that the game is starting to suck, but they think technology is the solution when it’s actually the cause. Modern struggles are much less meaningful, so they’re harder to enjoy.