Also, they would seek to personally become an immortal super-intelligence, since many truths simply can’t be learned by an unenhanced human, and certainly not within a human lifetime.
(Which is why the Yudkowsky-Armstrong Fun-Theoretic Utopia leaves me cold. Would any curious person not choose to become superintelligent and “have direct philosophical conversations with the Machines” if the only alternative is to essentially play the post-Singularity equivalent of the World of Warcraft?)
On a grand scale, my hunger for truths is probably as limited and easy to satisfy as my hunger for cheeseburgers. I do feel that in a post-Singularity world I’d want to enhance my intelligence, but the underlying motivation seems to be status-seeking, a desire to be significant.
Something I learned viscerally while I was recovering from brain damage is that intelligence is fun. I suspect I’d want to enhance my intelligence in much the same way that I’d want to spend more time around puppies.
Context matters, I suspect. I don’t think that having a 140 IQ would be all that fun if everyone one interacted with on a daily basis were in the 90-100 range.
(Edited to depersonalize the pronoun.)
I have an IQ in the 140-ish range. (At least, that’s what the professionally administered test I had when I was a child said. Online IQ tests tell me I’ve lost 20 IQ points in the intervening years. Make of that what you will.)
I would estimate I regularly converse “in real life” with someone of above-average IQ a few times a year. This is just a guess, of course. One indicator of the accuracy of this assessment (granting that education as a proxy for intelligence isn’t perfect) is that no one in my circle of friends or family that I regularly communicate with has ever gone to, or graduated from, anything beyond high school.
You’re right, it’s not fun.
The internet alleviates this issue to a large degree.
Unless you really enjoy winning or achieving social success and dominance (and don’t have an accompanying social dysfunction to go with your IQ).
I suspect that in the country of the stupid the moderately intelligent individual would be less successful and dominant than he might hope.
I would bet against you—with the aforementioned caveats that said individual is not socially handicapped and is ambitious.
I also wouldn’t call 90-100 IQ ‘stupid’ or 140 IQ ‘moderately intelligent’.
That’s certainly true. “If you’re routinely the smartest guy in the room, find a different room.”
And yeah, in a “post-Singularity” world that contained a lot of different ranges of intelligence I would probably tune my intelligence to whatever range I was interacting with regularly, which might involve variable intelligence levels, or even maintaining several different disjoint chains of experience.
And I’m perfectly prepared to believe that past a certain point the negative tradeoffs of marginal increases in intelligence outweigh the benefits.
But at least up to that threshold, I would likely choose to socialize with other people who tuned themselves up to that level. It’s admittedly an aesthetic preference, but it’s mine.
I have very good reasons to think that my hunger for cheeseburgers is limited and easy to satisfy (e.g., ample evidence from past consumption/satiation of various foods including specifically cheeseburgers). On the other hand, there seems good reason to suspect that if my appetite for truths is limited, the satiation level comes well after what can be achieved at human intelligence level and within a human lifetime (e.g., there are plenty of questions I want answers to that seem very hard, and every question that gets answered seems to generate more interesting and even harder questions).
(It’s an interesting question whether all my questions could be answered within 1 second after the Singularity occurs, or if it would require more than the resources in our entire light cone, or something in between, but the answer to that doesn’t affect my point that a curious person would seek to become superintelligent.)
If Omega offered to enhance your intelligence and/or answer all your questions, but for your private benefit only (i.e., you couldn’t tell anyone else or otherwise use your improved intelligence/knowledge to affect the world), would you not be much interested?
Are you at all curious about what the 3^^^3rd digit of Pi is?
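(For a sense of scale, and assuming the standard Knuth up-arrow reading of that notation, the index in question is
\[
3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3) \;=\; 3\uparrow\uparrow 3^{27} \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987,
\]
that is, a power tower of 3s roughly 7.6 trillion levels tall; whatever that digit of Pi is, it is not something anyone will be checking by brute force.)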
Nice. The opposite of the premise of a lot of fantasy worlds!
I would be interested, but I wouldn’t take it unless I got a solid technical explanation of “affect the world” that allowed me to do at least as much as I am doing now.
No, I wouldn’t be much interested; I’d even pay to refuse the offer, because I don’t want the frustration of being unable to tell anyone.
You aren’t willing to just console yourself with all the hookers, cars, drugs, holidays, and general opulence you’ve been able to buy with the money earned from your ‘personal benefit only’ intelligence? Or are we to take it that we can’t even use the intelligence to benefit ourselves materially and can only use it to sit in a chair and think to ourselves?
I think that counts as “using your improved intelligence to affect the world”. If it’s allowed, then sure, sign me up.
And if even personal use is not allowed then I rapidly become indifferent between the choices (and to the question itself).
Affecting your mind is still a discernible effect...
Yes, you can reduce Wei’s counterfactual to nonsense if you try to pick it apart too far. Yet somehow I think that misses his point.
Worst-case (and probable) scenario, you get trapped inside your head and forced to watch your body act like an idiot. If you could engage in transactions, you could make lots of money and then selectively do business with people you like.
This part isn’t the case. You can’t game “can’t use powers for personal gain” laws of magic—the universe always catches you. The reversed case would be analogous.
I was under the impression that in Yudkowsky’s image of utopia, one gains intelligence slowly over centuries, such that one has all the fun that can be had at one level but not the one above, then goes up a level, then has that level’s fun, etc.