Yeah. People need to be needed, but if FAI can satisfy all other needs, then it fails to satisfy that one. Maybe FAI will uplift people and disappear, or do something more creative.
People need to be needed, but that doesn’t mean they need to be needed for something in particular. It’s a flexible emotion. Just keep someone of matching neediness around for mutual needing purposes.
And when all is fixed, I’ll say: “It needn’t be possible to lose you, that it be true I’d miss you if I did.”
From an old blog post I wrote:
Imagine that, after your death, you were cryogenically frozen and eventually resurrected in a benevolent utopia ruled by a godlike artificial intelligence.
Naturally, you desire to read up on what has happened after your death. It turns out that you do not have to read anything: you merely desire to know something and the knowledge will be integrated as if it had been learnt in the most ideal and unbiased manner. If certain cognitive improvements are necessary to understand certain facts, your computational architecture will be expanded appropriately.
You now perfectly understand everything that has happened and everything that has been learnt during and after the technological singularity, which took place after your death. You understand the nature of reality, consciousness, and general intelligence.
Concepts such as creativity or fun are now perfectly understood mechanical procedures that you can easily implement and maximize, if desired. If you wanted to do mathematics, you could trivially integrate the resources of a specialized Matrioshka brain into your consciousness and implement and run an ideal mathematician.
But you also learnt that everything you could do has already been done, and that you could just integrate that knowledge as well, if you like. All that is left to be discovered is highly abstract mathematics that requires the resources of whole galaxy clusters.
So you consider exploring the galaxy instead. But you become instantly aware that the galaxy is unlike the way it has been depicted in old science fiction novels. It is just a wasteland, devoid of any life. There are billions of barren planets, differing from each other only in the most uninteresting ways.
But surely, you wonder, there must be fantastic virtual environments to explore. And what about sex? Yes, sex! But you realize that you already thoroughly understand what it is that makes exploration and sex fun. You know how to implement the ideal adventure in which you save people of maximal sexual attractiveness. And you also know that you could trivially integrate the memory of such an adventure, or simulate it a billion times in a few nanoseconds, and that the same is true for all possible permutations that are less desirable.
You realize that the universe has understood itself.
The movie has been watched.
The game has been won.
The end.
Yes, if you skip to the end, you’ll be at the end. So don’t. Unless you want to. In which case, do.
Did you have a point?
How long are you going to postpone the end? After the Singularity, you have the option of reading a book just as you do now, or of integrating it instantly, as if you had read it in the best possible way.
Now your answer to this seems to be that you can also read it very slowly, or with a very low IQ, so that it will take you a really long time. I am not the kind of person who would enjoy artificially slowing down an amusement, such as learning category theory, if I could also learn it quickly.
Here it is.
And you obviously argue that the ‘best possible way’ is somehow suboptimal (or you wouldn’t be hating on it so much), without seeing the contradiction here?
Hating??? It is an interesting topic, that’s all. The topic I am interested in is how various technologies could influence how humans value their existence.
Here are some examples of what I value and how hypothetical ultra-advanced technology would influence these values:
Mathematics. Right now, mathematics is really useful and interesting. You can also impress other people if your math skills are good.
Now if I could just ask the friendly AI to make me much smarter and install a math module, then I’d see very little value in doing it the hard way.
Gaming. Gaming is a lot of fun, especially competition. Now if everyone can just ask the friendly AI to make them play a certain game in an optimal way, well, that would be boring. And if the friendly AI can create the perfect game for me, then I don’t see much sense in exploring games that are less fun.
Reading books. I can’t see any good reason to read a book slowly if I could just ask the friendly AI to upload it directly into my brain. Although I can imagine that it would reply, “Wait, it will be more fun reading it like you did before the Singularity”, to which I’d reply, “Possibly, but that feels really stupid. And besides, you could just run a billion emulations of me reading all books like I would have done before the Singularity. So we are done with that.”
Sex. Yes, it’s fun every time. But hey, why not just ask the friendly AI to simulate a copy of me having sex until the heat death of the universe? Then I have more time for something else...
Comedy. I expect there to be a formula that captures everything that makes something funny for me. It seems pretty dull to ask the friendly AI to tell me a joke instead of asking it to make me understand that formula.
If people choose to not have fun because fun feels “really stupid”, then I’d say these are the problems of super-stupidities, not superintelligences.
I’m sure there will be future technologies that make some people self-destructive, but we have known that since the invention of alcohol and opium and heroin.
What I object to is your treating these particular failure modes of thinking as if they were inevitable.
I find scenarios of future ultimate boredom extremely shortsighted, much like a five-year-old realizing he won’t enjoy snakes-and-ladders anymore when he’s grown up and concluding from this that adults’ lives must be super-dull.
Certainly some of the fun stuff believed fun at our current level of intelligence or ability will not be considered fun at a higher level of intelligence or ability. So bloody what? Do adults need to either enjoy snakes-and-ladders or live lives of boredom?
Consider that there is an optimal way for you to enjoy existence. Then there exists a program whose computation will make an emulation of you experience an optimal existence. I will call this program ArisKatsaris-CEV.
Now consider another program whose computation would cause an emulation of you to understand ArisKatsaris-CEV to such an extent that it would become as predictable and interesting as a game of Tic-tac-toe. I will call this program ArisKatsaris-SELF.
The options I see are to make sure that ArisKatsaris-CEV never turns into ArisKatsaris-SELF, or to maximize ArisKatsaris-CEV. The latter possibility would be similar to paperclip maximizing, or wireheading, from the subjective viewpoint of ArisKatsaris-SELF, as it would turn the universe into something boring. The former option seems to set fundamental limits on how far you can go in understanding yourself.
The gist of the problem is that at a certain point you become bored of yourself. And avoiding that point implies stagnation.
You’re mixing up different things:
(A) a program which will produce an optimal existence for me
(B) the actual optimal existence for me.
You’re saying that if (A) is so fully understood that I feel no excitement studying it, then (B) will likewise be unexciting.
This doesn’t follow. Tiny fully understood programs produce hugely varied and unanticipated outputs.
If someone fully understands (and is bored by) the laws of quantum mechanics, it doesn’t follow that they are bored by art or architecture or economics, even though everything in the universe (including art or architecture or economics) is eventually an application (many, many layers removed) of particle physics.
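(As a concrete illustration of the “tiny programs” point, here is a minimal sketch in Python, not taken from the original exchange: Rule 110, an elementary cellular automaton whose entire update rule fits in one line, yet whose output is rich enough that the rule is known to be Turing-complete.)

```python
# Rule 110: each cell looks at its left neighbour, itself, and its right
# neighbour, treats that triple as a 3-bit number, and reads its next state
# off the corresponding bit of the constant 110. That is the whole program.
RULE = 110
WIDTH, STEPS = 64, 32

def step(cells):
    """Apply the Rule 110 update to every cell (wrapping at the edges)."""
    return [
        (RULE >> ((cells[(i - 1) % WIDTH] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]

cells = [0] * WIDTH
cells[WIDTH // 2] = 1          # start from a single live cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Knowing the rule completely still does not let you foresee the patterns without actually running it, which is the sense in which a fully understood program can remain unanticipated.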
Another point that doesn’t follow is your seeming assumption that “predictable” and “well-understood” are the same as “boring”. Not all feelings of beauty and appreciation stem from surprise or ignorance.
Then I wasn’t clear enough, because that’s not what I tried to say. I tried to say that from the subjective perspective of a program that completely understands a human being and its complex values, the satisfaction of these complex values will be no more interesting than wireheading.
You can’t predict art from quantum mechanics, but you also can’t predictably self-improve if your own program is unpredictable to you. Given that you accept planned self-improvement, I claim that the amount of introspection required to do so makes your formerly complex values appear simple.
I never claimed that. The point is that a lot of what humans value now will be gone or strongly diminished.
I think you should stop using words like “emulation” and “computation” when they’re not actually needed.
Okay, then my answer is that I place value on things and people and concepts, but I don’t think I place terminal value on whether said things/people/concepts are simple or complex, so again I don’t think I’d care whether I would be considered simple or complex by someone else, or even by myself.
Consider calling it something else. That isn’t CEV.
Do you think that’s likely? My prejudice is that the universe (including the range of possible inventions and art) is much larger than any mind within it, but I’m not sure how to prove either option.
The problem arises if you perfectly understand the process of art generation. There are cellular automata that generate novel music. How much do you value running such an automaton and watching it output music? To me it seems that the value of novelty is diminished by comprehension of the procedures generating it.
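(For concreteness, a minimal sketch of the kind of cellular-automaton “composer” meant here; the rule number, scale, and mapping are arbitrary choices made for illustration, not a reference to any particular system.)

```python
# Sketch of a transparent music generator: run Rule 30, another one-line
# cellular-automaton rule, and read each row off as a degree of a pentatonic
# scale. Every step of the procedure is fully specified and fully understood.
RULE = 30
WIDTH, NOTES = 16, 32
SCALE = ["C4", "D4", "E4", "G4", "A4", "C5", "D5", "E5"]  # C major pentatonic degrees

def step(cells):
    """Apply the Rule 30 update to every cell (wrapping at the edges)."""
    return [
        (RULE >> ((cells[(i - 1) % WIDTH] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]

cells = [0] * WIDTH
cells[WIDTH // 2] = 1          # start from a single live cell
melody = []
for _ in range(NOTES):
    # Interpret the number of live cells in the current row as a scale degree.
    melody.append(SCALE[sum(cells) % len(SCALE)])
    cells = step(cells)

print(" ".join(melody))
```

The question above is then exactly this: once you can read the generator as easily as its output, how much value is left in listening to the output at all?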