And you obviously argue that the ‘best possible way’ is somehow suboptimal (or you wouldn’t be hating on it so much), without seeing the contradiction here?
Hating??? It is an interesting topic, that’s all. The topic I am interested in is how various technologies could influence how humans value their existence.
Here are some examples of what I value and how hypothetical ultra-advanced technology would influence these values:
Mathematics. Right now, mathematics is really useful and interesting. You can also impress other people if your math skills are good.
Now if I could just ask the friendly AI to make me much smarter and install a math module, then I’d see very little value in doing it the hard way.
Gaming. Gaming is a lot of fun, especially competition. Now if everyone could just ask the friendly AI to make them play a certain game optimally, that would be boring. And if the friendly AI can create the perfect game for me, then I don’t see much sense in exploring games that are less fun.
Reading books. I can’t see any good reason to read a book slowly if I could just ask the friendly AI to upload it directly into my brain. Although I can imagine that it would reply, “Wait, it will be more fun reading it like you did before the Singularity”, to which I’d reply “Possibly, but that feels really stupid. And besides, you could just run a billion emulations of me reading all books like I would have done before the Singularity. So we are done with that.”
Sex. Yes, it’s fun every time. But hey, why not just ask the friendly AI to simulate a copy of me having sex until the heat death of the universe? Then I’d have more time for something else...
Comedy. I expect there to be a formula that captures everything that makes something funny for me. It seems pretty dull to ask the friendly AI to tell me a joke instead of asking it to make me understand that formula.
to which I’d reply “Possibly, but that feels really stupid.”
If people choose not to have fun because fun feels “really stupid”, then I’d say these are the problems of super-stupidities, not superintelligences.
I’m sure there will be future technologies that make some people self-destructive, but we’ve known that since the invention of alcohol, opium, and heroin.
What I object to is your treating these particular failure modes of thinking as if they were inevitable.