I have not read the materials yet, but there is something fundamental I don’t understand about the superintelligence problem.
Are there really serious reasons to think that intelligence is such a hugely useful thing that a 1000 IQ being would acquire superpowers? Somehow I never had an intuitive trust in the importance of intelligence (my own was more often a hindrance than an asset, suppressing my instincts). A superintelligence could figure out how to do anything, but there is a huge gap between that and actually doing things. Today, many of the most intelligent people alive basically do nothing but play chess (Polgar, Kasparov), Marilyn vos Savant runs a column entertaining readers by solving their toy logic puzzles, Rick Rosner is a TV writer, and James Woods became an actor, giving up an academic career for it. They are all over IQ 180.
My point is, what are the reasons to think a superintelligent AI will actually exercise power changing the world, instead of just entertaining itself with chess puzzles or collecting jazz or writing fan fiction or having similar “savant” hobbies?
What are the chances of a no-fucks-given superintelligence? Was this even considered, or is it just assumed ab ovo that intelligence must be a fearsomely powerful thing?
I suspect a Silicon Valley bias here. You guys in the Bay Area are very much used to people using their intelligence to change the world. But that does not seem to be such a default thing. It seems more common for savants to care only about e.g. chess and basically withdraw from the world. If anything, the Valley is an exception. Outside it, in most of the world, intelligence is more of a hindrance, suppressing instincts and making people unhappy in the menial jobs they are given. Why assume a superintelligent AI would have both a Valley-type personality, i.e. actually be interested in using reasoning to change the world, and be put into an environment where it has the resources to do so? I could easily imagine an AI being kind of depressed because it has to do menial tasks, and entertaining itself with chess puzzles. I mean, this is how intelligence most often works in the world. Most often it is not combined with ambition, motivation, and lucky circumstances.
In my opinion, intelligence, rationality, is like aiming an arrow with a bow. It is very useful to aim accurately, but the difference between a nanometer and a picometer of inaccuracy is negligible, so you quickly hit diminishing marginal returns there, and then there are other things that matter much more: how strong your bow is, how many arrows you have, how many targets you have, and all that.
Am I missing something? I am simply looking at what difference intelligence makes in the world-changing ability of humans and extrapolating from that. Most savants simply don’t care about changing the world; some others who do realize that skills other than intelligence are needed; and most are not put into a highly meritocratic Silicon Valley environment but into more stratified ones, where the 190 IQ son of a waiter is probably a cook. Why would AI be different, any good reasons?
I’m not sure how useful or relevant a point this is, but I was just thinking about this when I saw the comment: IQ is defined within the range of human ability, where an arbitrarily large IQ just means being the smartest human in an arbitrarily large human population. “IQ 1000” and “IQ 180” might both be close enough to the asymptote of the upper limit of natural human ability that the difference is indiscernible. Quantum probabilities of humans being born with really weird “superhuman” brain architectures notwithstanding, a truly superintelligent being might have a true IQ of “infinity” or “N/A”, which sounds much less likely to stick to the expectations we have of human savants.
Seems to me the “killer app” is not superintelligence per se, but superintelligence plus self-modification. With highly intelligent people, the problem is often that they can’t modify themselves, so in addition to their high intelligence they also have some problems they can’t get rid of.
Maybe Kasparov plays chess because he genuinely believes that playing chess is the most useful thing he could ever do. But more likely there are other reasons; for example, he probably enjoys playing chess, and the more useful things are boring to him. Or maybe he is emotionally insecure and wants to stay in an area he is already good at, because he couldn’t emotionally bear giving up the glory and starting something as a total newbie, even if a few years later it would pay off. (This is just a random idea; I have no idea what Kasparov is really like. Also, maybe he is doing other things too, and we just don’t know about them.)
Imagine a person who was able to edit their own mind. For example, if they realized that their plans would advance further if they had better social skills, they would (a) temporarily self-modify to enjoy scientifically studying social skills, then (b) do research on social skills and find out what is evidence-based, and finally (c) self-modify to have those traits that reliably work. And then they would approach every obstacle in the same way. Procrastinating too much? Find the parts of your mind that make you do so, and edit them out. Want to stay fit, but hate exercising? Modify yourself to enjoy exercising, but only for a limited time every day. Want to study finance, but have a childhood emotional trauma related to money? Remove the trauma.
You would get a person who, upon seeing an optimal way, would start following that way. They would probably soon realize that their time is a limited resource, and start employing other people to do some work for them. They would use all available computer tools, maybe even employ people to improve those tools for them. They would research possibilities of self-improvement, use them on themselves, and also trade the knowledge with other people for resources or loyalty. After a while, they would have an “army” of loyal improved followers, and would also interact a lot with the rest of the world. And that would be just the beginning.
Maybe for a human there would be some obstacle, for example that the average human life is too short to research and implement immortality. Or maybe they would reach the escape velocity and become something transhuman.
Most savants simply don’t care about changing the world
I would guess they still find some things frustrating (e.g. when someone or something interrupts them in their hobby); they are just not strategic enough to remove all sources of frustration. Either they do not bother to plan long-term, or they don’t believe such long-term planning could work.
the 190 IQ son of a waiter is probably a cook
Let’s imagine a country with a strict caste system, where the IQ 190 person born in the lower caste must remain there. If the society is really protected against any attempts to break the system, for example if the people from the lower caste are forbidden all education and internet, and are under constant surveillance, there is probably not much he could do. But if it’s a society more or less like ours, only it is illegal and socially frowned upon to hire people to do another caste’s work, a strategic person could start exploring possibilities to cheat—for example, could they fake a different, higher-caste identity? Or perhaps move to a country without the caste system. Or if there are exceptions where a change of caste is allowed, they would try one, and could cheat to get the exception more easily. (For example, if a higher-caste person can “adopt” you into their caste, you could try to make a deal with one, or blackmail one, or maybe somehow fake the whole process of being adopted.) They could also try to somehow undermine the caste system; create a community that ignores the caste rules; etc.
Hm, that is a better point; it seems then most of my objections are just to the wording. Most intelligent people are also shy etc., and that is why they end up being math researchers instead of being Steve Jobs. If an intelligent person could edit courage, dedication, charm… into his mind, that would be powerful.
But I think self-modification would be powerful even without a very high IQ; 120 would already make one pretty successful.
Or is it more that IQ is necessary for efficient self-modification?
My point is, this sounds like a powerful combination, but probably not the intelligence explosion kind.
The caste stuff: really elegant steelmanning, congrats. But I think it is kind of missing the point; probably I explained myself wrong. Basically, an IQ-meritocracy requires a market-based system, an exchange-based one, where what you get is very roughly proportional to what you give to others. However, most of the planet is not exchange-based but power-based. This is where intelligence is less useful. Imagine trying to compete with Stalin for the job of being Lenin’s successor. What traits do you need for it? First of all, mountains of courage; that guy is scary. Of course, if you can self-edit, that is indeed extremely helpful… I did not factor that in. But broadly speaking, you don’t just outsmart him. Power requires other traits. And of course it can very well be that you don’t want power, you want to be a researcher… but in that situation you are forced to take orders from him, so you may still want to topple the big boss or something.
Now of course, if we see intelligence as simply the amount of sophistication put into self-editing (seeing a higher intelligence as something that can self-edit, e.g., charisma better than a lower intelligence), then these possibilities are indeed there. I am just saying, still no intelligence explosion; more like an everything explosion, or maybe an everything-but-intelligence explosion. A charisma explosion, and so on… but I do agree that all this combined can be very powerful.
Or is it more that IQ is necessary for efficient self-modification?
Sounds like a false dilemma if IQ is one of those things that can be modified. :D
To unpack the word, IQ approximately measures how complex the concepts you can “juggle” in your head are. Without enough IQ, even if you had an easy computer interface to modify your own brain, you wouldn’t understand what exactly you should do to achieve your goals (because you wouldn’t sufficiently understand the concepts and the possible consequences of the changes). That means you would be making those changes blindly… and you could get lucky and hit a path where your IQ increases, so you can then start reliably making the right steps, or you could set yourself on a path towards some self-destructive attractor.
As a simple example, a stupid person could choose to press a button that activates their reward center… and would keep doing that until they die of starvation. Another example would be a self-modification where you lose your original goals, or lose the will to further self-improve, etc. This does not have to happen in one obvious step, but could be a subtle cumulative consequence of many seemingly innocent steps. For example, a person could decide that being fit is instrumentally useful for their goals, so they would self-modify to enjoy exercise, but make the mistake of modifying themselves too much, so now they only want to exercise all day long and no longer care about their original goals. Then they would either stop self-modifying, or self-modify merely to exercise better.
It also depends on how complex the “user interface to modify your own brain” would be. Maybe IQ 120 would not be enough to understand it. Maybe even IQ 200 wouldn’t. You might just see a huge network of nodes, each connected to hundreds of other nodes, each with an insanely abstract description… and either give up, or start pushing random buttons and most likely hurt yourself.
So basically, the lowest viable starting IQ is the IQ you need to self-modify safely enough to increase your IQ. This is a very simple model which assumes that if IQ N allows you to get to IQ N+1, then IQ N+1 probably allows you to get to IQ N+2. The path does not have to be this smooth; there may be a smooth increase up to some level which then requires a radical change to overcome, or the intelligence gains at each step may decrease.
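The simple model above can be sketched as a toy simulation. Everything here is an illustrative assumption, not a measurement of anything real: the safety threshold of 120 and both gain functions are made up purely to show the difference between a smooth climb and diminishing returns.

```python
# Toy model of recursive self-improvement (illustrative only; the
# threshold and gain functions are made-up assumptions).
# At each step an agent at ability level iq self-modifies and gains
# gain(iq) points, but only if it is above the minimum level assumed
# necessary for safe self-modification at all.

SAFE_THRESHOLD = 120  # assumed minimum IQ for non-blind self-modification

def trajectory(start_iq, gain, steps=50):
    """Return the ability levels reached over successive self-edits."""
    levels = [start_iq]
    for _ in range(steps):
        iq = levels[-1]
        if iq < SAFE_THRESHOLD:
            break  # below the threshold, self-modification is blind
        levels.append(iq + gain(iq))
    return levels

# Smooth case: IQ N always unlocks the step to roughly IQ N+1.
constant = trajectory(130, lambda iq: 1)

# Diminishing case: each step's gain shrinks as ability grows,
# so the trajectory levels off instead of exploding.
diminishing = trajectory(130, lambda iq: max(0.0, (200 - iq) * 0.1))

print(constant[-1])     # constant gains: climbs steadily
print(diminishing[-1])  # shrinking gains: approaches a plateau
```

Under these assumed gain functions, the first trajectory keeps climbing while the second converges toward a ceiling near 200; a starting point below the threshold never gets off the ground at all.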
Imagine trying to compete with Stalin for the job of being Lenin’s successor. What traits do you need for it?
Courage, social skills, the ability to understand how politics really works. You should probably start in some position in the army or secret service, some place where you can start building your own network of loyal people without being noticed by Stalin. Or maybe you should start as a crime boss in hiding, I don’t know.
To unpack the word, IQ approximately measures how complex the concepts you can “juggle” in your head are
I think I agree with this, and this is why I don’t understand how intelligence can be defined as goal-achieving ability. When I am struggling with the hardest exercises on the Raven test, what I wish I had more of is not some goal-achieving power but something far simpler, something akin to short-term memory. So when I wish for more intelligence, I just wish for a bit more RAM in short-term memory, so that more detailed, more granular ideas can be uploaded into tumble space. No idea why it should mean goal-achieving or optimizing ability. And for an AI, IQ sounds entirely like hardware…
Basically, an IQ-meritocracy requires a market-based system, an exchange-based one, where what you get is very roughly proportional to what you give to others. However, most of the planet is not exchange-based but power-based. This is where intelligence is less useful.
Intelligence is very useful in conflicts. If you can calculate beforehand whether you will win or lose a battle you don’t have to fight the battle if you think you will lose it.
In our modern world, great intelligence means the ability to hack computers: getting information, and being able to alter the email messages that person A sends to person B. That’s power.
Getting money because you are smart enough to predict stock market movements is another way to get power.
And of course it can very well be that you don’t want power, you want to be a researcher… but in that situation you are forced to take orders from him, so you may still want to topple the big boss or something.
Stalin was likely powerful but a lot of today’s political leaders are less powerful.
Peter Thiel made the point that Obama probably didn’t even know that the US was wiretapping Angela Merkel. It’s something that the nerds at the NSA decided in their Star Trek bridge lookalike.
Well, I suspect that this is a known possibility to AI researchers. I mean, when I’ve heard people talk about the problems in AGI, they’re not so much saying that the intelligence can only be supremely good or supremely bad; they’re just showing the range of possibilities. Some have mentioned how an AI might advance society enough that we can get off-world and explore the universe, or it might just turn out to be completely uninterested in our goals and laze about playing Go.
When people talk about FAI, I think they’re not just saying ‘We need to make an AI that is not going to kill us all’ because although that sort of AI may not end all humanity, it may not help us either. Part of the goal of FAI is that you get a specific sort of AI that will want to help us, and not be like what you’ve described.
But I suspect someone vastly more knowledgeable than me on this subject will come and address your problem.
But even if it wants to help, can it? Is intelligence really such a powerful tool? I picture it as us aiming an arrow at something, like building a Mars base, with intelligence making our aim more accurate. But if our aim was already accurate enough to do the job, how much does that help? How does intelligence transform into the power to generate outcomes?
I mean, the issue is that EY defines intelligence pretty much as efficient goal-achievement or cross-domain optimization, so using that definition, intelligence is trivially about the power to generate outcomes. But this is simply not the common definition of intelligence. It is not what IQ tests measure. It is not what Mensa members are good at. It is not what Kasparov is good at. That is more of a puzzle-solving ability.
I mean, could efficient goal-achievement equal puzzle-solving? I would be really surprised if someone could prove that. In real life, the most efficient goal-achievers I know are often stupid and have other virtues, like a bulldog-like never, ever, ever give up attitude.
Might I ask, what exactly do you mean? Are you saying that the super intelligent AI would not be able to contribute that much to our intellectual endeavours? Or are you saying that its intelligence may not translate to achieving goals like ‘become a billionaire’ or ‘solve world hunger’? Or something else entirely?
I don’t yet know enough about AI to even attempt to answer that; I am just trying to form a position on intelligence itself, human intelligence. I don’t think IQ is predictive of how much power people have. I don’t think the world is an IQ meritocracy. I understand how it can look like one from Silicon Valley, because SV is an IQ meritocracy, the place where people who actually use intelligence to change the world go, but in general the world is not like that. IQ is more of a puzzle-solving ability, and I think it transforms into world-changing power only when the bottleneck is specifically the lack of puzzle-solving ability. When we have infinite amounts of dedication, warm bodies, electricity, and money to throw at a problem, and the only thing missing is that we don’t know how, then yes, smart people are useful; they can figure that out. But that is not the usual case. Imagine you are a 220 IQ Russian who just solved cold fusion and offers it to Putin out of patriotism. You probably get shot and the blueprints buried, because it threatens the energy exports so important for their economy and for the power of his oligarch supporters. This is IMHO how intelligence works: if everything else is there, especially the will to change the world, and only the know-how is missing, then it is useful; but that is not the typical case, and in all the other cases it does not help much. Yes, of course a superintelligence could figure out, say, nanotechnology, but why should we assume the main reason the world is not a utopia is the lack of such know-how?
This is why I am so suspicious about it: I am afraid the whole thing is Silicon Valley culture writ large, and that this is not predictive of how the world works. SV is full of people eager to change the world, who have the dedication and the money; all that is missing is knowing how. A superintelligence could help them, but that does not mean intelligence, or superintelligence, is a general power, a general optimization ability, a general goal-achievement ability. I think that is only true in the special cases where achieving goals requires solving puzzles. Usually, it is not unsolved puzzles that stand between you and your goal.
Imagine I wanted to be dictator of a first-world country. Is there any puzzle I could solve, if I had a gazillion IQ, that could achieve that? No. I could read all the books about how to manipulate people and figure out the best things to say, and the result would still be that people don’t want to give up their freedom, especially not to some uncharismatic fat nerd, no matter how excellent the things he says sound. But maybe if there was an economic depression and I was simply highly charismatic, ruthless, and had all the right ex-classmates… I would have a chance to pull that off without any puzzle-solving, just by being smart enough not to sabotage myself with blunders; say, IQ 120 would do it.
And I am worried I am missing something huge. MIRI consists of far smarter people than I am, so I am almost certainly wrong… unless they have fallen in love with smartness so much that it created a pro-smart bias in them, unless it made them underestimate how many cases of efficient world-changing have nothing to do with puzzle-solving and everything to do with beating obstacles with a huge hammer, or forging consensus, or a million other things.
But I think the actual data about what 180+ IQ people are doing is on my side here. What is Kasparov doing? What is James Woods doing? Certainly not radically transforming the world. Nor taking it over.
I don’t think IQ is predictive of how much power people have. I don’t think the world is an IQ meritocracy.
Predictive isn’t a binary category. Statistically, IQ is predictive of a lot of things, including higher social skills and lifespan.
Imagine I wanted to be dictator of a first-world country. Is there any puzzle I could solve, if I had a gazillion IQ, that could achieve that? No. I could read all the books about how to manipulate people and figure out the best things to say, and the result would still be that people don’t want to give up their freedom, especially not to some uncharismatic fat nerd, no matter how excellent the things he says sound.
Bernanke was, during his Federal Reserve tenure, one of the most powerful people in the US, and he scored 1590 out of 1600 on the SAT.
But I think the actual data about what 180+ IQ people are doing is on my side here. What is Kasparov doing?
Kasparov is not 180+ IQ. When given a real IQ test, he scored 135, lower than the average of LW people who submit their IQ in the census.
That is over 20 IQ points lower than Bernanke’s score, and given that Bernanke scored near the top and the SAT isn’t designed to distinguish 160 from 180, he might be even smarter.
Bernanke’s successor was described in her NYTimes profile by a colleague as a “small lady with a large I.Q.”.
While I can’t find direct scores, she likely has a higher IQ than Kasparov.
Top bankers are high IQ people and at the moment the banker class has quite a lot of power in the US.
Banking likely needs more IQ than playing chess.
Bill O’Reilly, with a SAT score of 1585, is also much smarter than Kasparov, and the guy seems to have some influence in US political discussion.
Statistically, IQ is predictive of a lot of things, including higher social skills
How? I’m pretty sure it was the other way around in high school: popular dumb shallow people, unpopular smart geeks.
I am not 100% sure what the SAT is, but if it is like a normal school test—memorize stuff like historical dates of battles, barf it back—then it is probably more related to dedication than intelligence.
Popular dumb shallow people, unpopular smart geeks.
That could be a problem of perception. If someone is book smart and unpopular, people say “he is smart”. If someone is smart and popular, people say “he is cool”.
There are sportsmen and actors with very high IQ, but no one remembers them as “having high IQ”, only as being a great sportsman or a great actor.
Do you know how high an IQ Arnold Schwarzenegger has? Neither do I. My point is, when people say “Arnold Schwarzenegger”, no one thinks “I wonder how high an IQ that guy has… maybe that could explain some of his success”. Maybe he has a much higher IQ than Kasparov, but most people would never even start thinking in that direction.
Also there is a difference between intelligence and signalling intelligence. Not everyone with high IQ must necessarily spend their days talking about relativity and quantum physics. Maybe they just invest all their intelligence into improving their private lives.
Do you know how high an IQ Arnold Schwarzenegger has? Neither do I. My point is, when people say “Arnold Schwarzenegger”, no one thinks “I wonder how high an IQ that guy has… maybe that could explain some of his success”. Maybe he has a much higher IQ than Kasparov, but most people would never even start thinking in that direction.
The score I find on the internet for him is 135, which puts him in the same ballpark as Kasparov.
Popular dumb shallow people, unpopular smart geeks.
You are mistaking what people signal for their inherent intelligence. Bill O’Reilly doesn’t behave like a geek. That doesn’t change the fact that he’s smart.
It’s like the mistake of thinking that being good at chess is about intelligence.
Unfortunately, at the moment I can’t find the link to the studies about IQ and social skills, but I think previous LW discussions treated a positive correlation between IQ and social abilities in the general population as well established.
I am not 100% sure what the SAT is, but if it is like a normal school test—memorize stuff like historical dates of battles, barf it back
It’s not about having memorized information. SAT tests are generally known to be a good proxy for IQ.
But again, that is not power. That is just smart people getting paid better when and if there are enough jobs around where intelligence is actually useful. I think it is a very big jump from the fact that there seem to be relatively many such jobs around to saying it is a general world-changing, outcome-generating power. I cannot find it, but I think income could just as well be correlated with height.
If I understand, you are attempting a “proves too much” argument with height. However, this is irrelevant: if height is predictive of income, then this is an interesting result in itself (maybe tall people are more respected) and has no bearing on whether IQ is also predictive of income. I agree that IQ probably doesn’t scale indefinitely with success and power, though. The tails are already starting to diverge at 130.
Well, a while back I was reading an article about Terence Tao. In it, it said that about 1 in 40 child prodigies go on to become incredible academics. Part of the reason is that they are forced from an early age to learn as much as possible. There was one such child prodigy who published papers at the age of 15 and just gradually faded away afterwards. Because of their environment, these child prodigies burn out very quickly and grow jaded towards academia. Not wanting this to happen, Tao’s parents let him go at a pace he was comfortable with. What did that result in? Him becoming one of the greatest mathematicians on the planet.
So yes, many 180+ IQ individuals never go on to become great, but that’s largely due to their environment. And most geniuses just lack the drive/charisma/interest to take over the world. But that still doesn’t answer your question about ‘How could an AI take over the world?’ or something to that effect.
Well, you pointed out that some rich, charismatic individual with the right connections could make a big dent in society. But charisma is something that can be learnt. Sure, it’s hard, and maybe not everyone can do it, but it can be learnt. Now, if one has sufficient charisma, one can make a huge impact on the world, i.e. Hitler. (The slatestarcodex blog discussed something similar to this, and I’m basically just lifting the Hitler thing from there.)
The Nazi party had about 55 members when he joined, and at the time he was just a failed painter/veteran soldier. And there are disturbing records of people saying things like ‘Look at this new guy! We’ve got to promote him. Everyone he talks to is just joining us, it’s insane!’ And people just flocked to him. Someone might hate him, go see a speech or two, and then become a lifelong convert. Even in WW2, when conditions in Germany were getting progressively worse, he still had 90% approval from his people. This is what charisma can do. And this is something the AI can learn. Now, imagine what this AI, which would be much more intelligent than Hitler, could do.
Furthermore, an AI would be connected to the entirety of the internet, and once it had learnt pretty much everything, it would be able to gain so much power. For example:
1) It could gain capital very rapidly. Certain humans have managed to gain huge amounts of money by overcoming their biases and making smart business decisions. Everything they learnt, so too could an AI, and with far fewer biases. So the AI could rapidly acquire business capital.
2) It could use this business capital to set up various companies, and with its intellectual capabilities, it could outpace other companies in terms of innovation. In a few years, it might well dominate the market.
3) Because of its vast knowledge drawn from the internet, it would be able to market far more successfully than other organisations. Rather quickly, it would draw people to its side, gaining a lot of social capital.
4) It would also be able to gain a lot of knowledge about any competitors, giving it yet another edge over them.
5) Due to its advanced marketing strategies, wealth and social capital, it could found a party somewhere and put a figurehead government in power. From there, it would be able to increase the country’s power (probably in a careful fashion, not upsetting the people and keeping allies around it).
6) Now it has a country under its control, large sway over the rest of the world, and a huge amount of resources. From here, it could advance manufacturing and production to such a degree that it would need no humans to work for it.
7) Still acting carefully, the AI would now have the capability to build pretty much whatever it wanted. From there, it could institute more autonomous production plants around the world. It may provide many goods for free for the locals, in order to keep them on its side.
8) Now the AI can try and take over other countries, making parties with its backing, and promising a Golden Age for mankind.
9) The AI has transformed itself into the head of the world’s greatest superpower.
10) Victory
This is just a rough outline of the path an AI could take. In all the stages it simply replicated the feats of extraordinary individuals throughout history. Nothing it did was impossible, and I would say not even implausible. Of course, it could do things very differently. Once we make autonomous production plants, the AI would just need to take them over, produce large amounts of robots and weaponry, and take over the world. Or maybe it would just hold the world’s economic welfare hostage.
Certain humans have managed to gain huge amounts of money by overcoming their biases and making smart business decisions
Thinking that one can outsmart the market is the biggest bias in this regard. People like Soros and Buffett were either lucky or had secret information others didn’t, because otherwise it should be very, very unlikely to outsmart the market.
I wasn’t referring to the stock market. I know that almost all money made by ‘playing the market’ is due to luck. What I meant was creating the right type of service/good with the right kind of marketing. More like Steve Jobs, Elon Musk and so forth.
A quant who does day trading can find that there is some market inefficiency between the prices of different products and then make money from the effect.
That’s not how either Soros or Buffett made their fortunes, but it’s still possible for other people.
In my opinion, intelligence, or rationality, is like aiming an arrow with a bow. It is very useful to aim accurately, but the difference between a nanometer and a picometer of inaccuracy is negligible, so you quickly reach diminishing marginal returns; beyond that point, other things matter much more: how strong your bow is, how many arrows you have, how many targets there are, and so on.
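To put rough numbers on the arrow analogy (every figure below is invented purely for illustration), here is a minimal sketch: once aim error is far smaller than the target, further accuracy buys essentially nothing.

```python
from statistics import NormalDist

def hit_probability(target_radius_m, aim_error_sd_m):
    """1-D simplification: chance the arrow lands within
    target_radius_m of the center, given normally distributed
    aim error with the given standard deviation (in meters)."""
    nd = NormalDist(0.0, aim_error_sd_m)
    return nd.cdf(target_radius_m) - nd.cdf(-target_radius_m)

# A target of 5 cm radius:
print(hit_probability(0.05, 1e-9))   # nanometer-scale aim error -> 1.0
print(hit_probability(0.05, 1e-12))  # picometer-scale: no better, already 1.0
print(hit_probability(0.05, 0.05))   # human-scale error -> ~0.68
```

The gap between nanometer and picometer accuracy is invisible in the output; the factors that actually move the result are the target size and the baseline error, which stands in for the "other things" in the analogy.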
Am I missing something? I am simply looking at what difference intelligence makes in the world-changing ability of humans and extrapolating from that. Most savants simply don’t care about changing the world; some of those who do realize that skills other than intelligence are needed; and most are not placed in a highly meritocratic Silicon Valley environment but in more stratified ones, where the 190 IQ son of a waiter probably ends up a cook. Why would AI be different? Are there any good reasons?
I’m not sure how useful or relevant a point this is, but I was just thinking about it when I saw the comment: IQ is defined within the range of human ability, where an arbitrarily large IQ just means being the smartest human in an arbitrarily large human population. "IQ 1000" and "IQ 180" might both be close enough to the asymptote of the upper limit of natural human ability that the difference is indiscernible. Quantum probabilities of humans being born with really weird "superhuman" brain architectures notwithstanding, a truly superintelligent being might have a true IQ of "infinity" or "N/A", which sounds much less likely to stick to the expectations we have of human savants.
Seems to me the “killer app” is not superintelligence per se, but superintelligence plus self-modification. With highly intelligent people, the problem is often that they can’t modify themselves, so in addition to their high intelligence they also have some problems they can’t get rid of.
Maybe Kasparov plays chess because he genuinely believes that playing chess is the most useful thing he could ever do. But more likely there are other reasons; for example, he probably enjoys playing chess, and the more useful things are boring to him. Or maybe he is emotionally insecure and wants to stay in an area he is already good at, because he couldn’t emotionally bear giving up the glory and starting something as a total newbie, even if it would pay off a few years later. (This is just a random idea; I have no idea what Kasparov is really like. Also, maybe he is doing other things too; we just don’t know about them.)
Imagine a person who could edit their own mind. For example, if they realized that their plans would advance further if they had better social skills, they would (a) temporarily self-modify to enjoy scientifically studying social skills, then (b) do research on social skills and find out what is evidence-based, and finally (c) self-modify to have those traits that reliably work. And then they would approach every obstacle in the same way. Procrastinating too much? Find the parts of your mind that make you do so, and edit them out. Want to stay fit, but hate exercising? Modify yourself to enjoy exercising, but only for a limited time every day. Want to study finance, but have a childhood emotional trauma related to money? Remove the trauma.
You would get a person who, upon seeing an optimal way, would start following it. They would probably soon realize that their time is a limited resource, and start employing other people to do some work for them. They would use various computer tools, maybe even employ people to improve those tools for them. They would research possibilities of self-improvement, use them on themselves, and also trade the knowledge with other people for resources or loyalty. After a while, they would have an “army” of loyal improved followers, and would also interact a lot with the rest of the world. And that would be just the beginning.
Maybe for a human there would be some obstacle, for example that the average human life is too short to research and implement immortality. Or maybe they would reach escape velocity and become something transhuman.
I would guess they still find some things frustrating (e.g. when someone or something interrupts them during their hobby); they are just not strategic enough to remove all sources of frustration. Either they do not bother to plan long-term, or they don’t believe such long-term planning could work.
Let’s imagine a country with a strict caste system, where an IQ 190 person born into the lower caste must remain there. If the society is really protected against any attempts to break the system, for example if people from the lower caste are forbidden all education and internet access and are under constant surveillance, there is probably not much he could do. But if it’s a society more or less like ours, only where it is illegal and socially frowned upon to hire people to do another caste’s work, a strategic person could start exploring possibilities to cheat: for example, could they fake a different, higher-caste identity? Or perhaps move to a country without the caste system. Or if there are exceptions where a change of caste is allowed, they would try one, and could try to cheat to get the exception more easily. (For example, if a higher-caste person can “adopt” you into their caste, you could try to make a deal with one, or blackmail one, or maybe somehow fake the whole adoption process.) They could also try to somehow undermine the caste system, create a community that ignores the caste rules, etc.
Hm, that is a better point; it seems most of my objections are just to the wording. Most intelligent people are also shy etc., and that is why they end up as math researchers instead of becoming Steve Jobs. If an intelligent person could edit courage, dedication, and charm into his own mind, that would be powerful.
But I think self-modification would be powerful even without very high IQ; 120 would already make one pretty successful.
Or is higher IQ necessary for efficient self-modification?
My point is, this sounds like a powerful combination, but probably not of the intelligence-explosion kind.
The caste stuff: really elegant steelmanning, congrats. But I think it kind of misses the point; probably I explained myself badly. Basically, an IQ meritocracy requires a market-based, exchange-based system, where what you get is very roughly proportional to what you give to others. However, most of the planet is not exchange-based but power-based. This is where intelligence is less useful. Imagine trying to compete with Stalin for the job of being Lenin’s successor. What traits would you need? First of all, mountains of courage; that guy is scary. Of course, if you can self-edit, that is indeed extremely helpful; I did not factor that in. But broadly speaking, you don’t just outsmart him. Power requires other traits. And of course it may very well be that you don’t want power, you want to be a researcher… but in that situation you are forced to take orders from him, so you may still want to topple the big boss or something.
Now of course, if we see intelligence simply as the amount of sophistication put into self-editing, so that a higher intelligence can self-edit e.g. charisma better than a lower one, then these possibilities are indeed there. I am just saying: still no intelligence explosion; more like an everything explosion, or maybe an everything-but-intelligence explosion. A charisma explosion, and so on. But I do agree that all of this combined can be very powerful.
Sounds like a false dilemma if IQ is one of the things that can be modified. :D
To unpack the word, IQ approximately means how complex the concepts are that you can “juggle” in your head. Without enough IQ, even if you had an easy computer interface for modifying your own brain, you wouldn’t understand what exactly you should do to achieve your goals (because you wouldn’t sufficiently understand the concepts and the possible consequences of the changes). That means you would be making those changes blindly… and you could get lucky and hit the path where your IQ increases, so that you can then start reliably making the right steps, or you could set yourself on a path toward some self-destructive attractor.
As a simple example, a stupid person could choose to press a button that activates their reward center… and would keep doing that until they die of starvation. Another example would be a self-modification where you lose your original goals, or lose the will to further self-improve, etc. This does not have to happen in one obvious step, but could be a subtle cumulative consequence of many seemingly innocent steps. For example, a person could decide that being fit is instrumentally useful for their goals, so they self-modify to enjoy exercise, but make the mistake of modifying themselves too much, so now they only want to exercise all day long and no longer care about their original goals. Then they would either stop self-modifying, or self-modify merely to exercise better.
It also depends on how complex the “user interface to modify your own brain” would be. Maybe IQ 120 would not be enough to understand it. Maybe even IQ 200 wouldn’t. You might just see a huge network of nodes, each connected to hundreds of other nodes, each with an insanely abstract description… and either give up, or start pushing random buttons and most likely hurt yourself.
So basically the lowest workable starting IQ is the IQ you need in order to self-modify safely enough to increase your IQ. This is a very simple model, which assumes that if IQ N allows you to get to IQ N+1, then IQ N+1 probably allows you to get to IQ N+2. The path does not have to be this smooth; there may be a smooth increase up to some level, which then requires a radical change to overcome; or maybe the intelligence gains at each step will decrease.
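That simple model can be sketched numerically. A toy illustration (every number here is invented): whether the bootstrap fizzles or takes off depends entirely on whether each step’s gain shrinks or grows with current ability.

```python
def trajectory(start, gain, steps=30):
    """Iterate a toy self-improvement rule: each round, current
    ability x buys an increment gain(x). Returns the full path."""
    path = [start]
    for _ in range(steps):
        path.append(path[-1] + gain(path[-1]))
    return path

# Diminishing gains per step: the curve flattens out.
plateau = trajectory(130, lambda x: 500 / x)
# Gains proportional to current ability: runaway growth.
runaway = trajectory(130, lambda x: 0.2 * x)

print(round(plateau[-1]))  # modest improvement after 30 steps
print(round(runaway[-1]))  # explosive after the same 30 steps
```

Both runs start from the same place and both are monotonically increasing; only the shape of the gain function decides between a plateau and an explosion, which is exactly the open question in the comment above.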
Courage, social skills, the ability to understand how politics really works. You should probably start in some position in the army or the secret service, some place where you can start building your own network of loyal people without being noticed by Stalin. Or maybe you should start as a crime boss in hiding; I don’t know.
I think I agree with this; this is why I don’t understand how intelligence can be defined as goal-achieving ability. When I am struggling with the hardest exercises on the Raven test, what I wish I had more of is not some goal-achieving power but something far simpler, something akin to short-term memory. So when I wish for more intelligence, I just wish for a bit more RAM in short-term memory, so that more detailed, more granular ideas can be loaded into tumble space. No idea why that should mean goal-achieving or optimizing ability. And for an AI, IQ sounds like an entirely hardware matter...
Intelligence is very useful in conflicts. If you can calculate beforehand whether you will win or lose a battle, you don’t have to fight the battles you would lose.
In our modern world, great intelligence means the ability to hack computers: getting information, and being able to alter an email message that person A sends to person B. That’s power.
Getting money because you are smart enough to predict stock market movements is another way to get power.
Stalin was likely powerful but a lot of today’s political leaders are less powerful.
Peter Thiel made the point that Obama probably didn’t even know that the US was wiretapping Angela Merkel. It’s something that the nerds at the NSA decided in their Star Trek bridge lookalike.
Well, I suspect that this is a known possibility to AI researchers. I mean, when I’ve heard people talk about the problems in AGI, they’re not so much saying that the intelligence can only be supremely good or supremely bad; they’re just showing the range of possibilities. Some have mentioned how AI might advance society enough for us to get off-world and explore the universe, or it might turn out to be completely uninterested in our goals and just laze about playing Go.
When people talk about FAI, I think they’re not just saying ‘We need to make an AI that is not going to kill us all’ because although that sort of AI may not end all humanity, it may not help us either. Part of the goal of FAI is that you get a specific sort of AI that will want to help us, and not be like what you’ve described.
But I suspect someone vastly more knowledgeable than me on this subject will come and address your problem.
But even if it wants to help, can it? Is intelligence really such a powerful tool? I picture it like this: we are aiming an arrow at something, like building a Mars base, and intelligence makes our aim more accurate. But if the aim was already accurate enough to do the job, how much does more accuracy help? How does intelligence transform into the power to generate outcomes?
I mean, the issue is that EY defines intelligence pretty much as efficient goal-achievement or cross-domain optimization, so by that definition intelligence is trivially a power to generate outcomes; but this is simply not the common definition of intelligence. It is not what IQ tests measure. It is not what Mensa members are good at. It is not what Kasparov is good at. That is more of a puzzle-solving ability.
I mean, could efficient goal-achievement equal puzzle-solving? I would be really surprised if someone could prove that. In real life, the most efficient goal-achievers I know are often stupid and have other virtues, like a bulldog-like never, ever, ever, ever give up attitude.
Might I ask, what exactly do you mean? Are you saying that the super intelligent AI would not be able to contribute that much to our intellectual endeavours? Or are you saying that its intelligence may not translate to achieving goals like ‘become a billionaire’ or ‘solve world hunger’? Or something else entirely?
I don’t yet know enough about AI to even attempt to answer that; I am just trying to form a position on intelligence itself, human intelligence. I don’t think IQ is predictive of how much power people have. I don’t think the world is an IQ meritocracy. I understand how it can look like one from Silicon Valley, because SV is an IQ meritocracy, the place where people who actually use intelligence to change the world go, but in general the world is not like that. IQ is more of a puzzle-solving ability, and I think it transforms into world-changing power only when the bottleneck is specifically the lack of puzzle-solving ability. When we have infinite amounts of dedication, warm bodies, electricity and money to throw at a problem, and the only thing missing is that we don’t know how, then yes, smart people are useful; they can figure that out. But that is not the usual case. Imagine you are a 220 IQ Russian who just solved cold fusion and offers it to Putin out of patriotism. You probably get shot and the blueprints get buried, because the invention threatens the energy exports so important for the economy and for the power of his oligarch supporters. This is, IMHO, how intelligence works: if everything else needed to change the world is there, especially the will, and only the know-how is missing, then it is useful; but that is not the typical case, and in all the other cases it does not help much. Yes, of course a superintelligence could figure out, say, nanotechnology, but why should we assume the main reason the world is not a utopia is the lack of such know-how?
This is why I am so suspicious about it: I am afraid the whole thing is Silicon Valley culture writ large, and that this is not predictive of how the world works. SV is full of people eager to change the world, who have the dedication and the money; all that is missing is knowing how. A superintelligence could help them, but that does not mean intelligence, or superintelligence, is a general power, a general optimization ability, a general goal-achievement ability. I think that is only true in the special cases where achieving goals requires solving puzzles. Usually, it is not unsolved puzzles that stand between you and your goal.
Imagine I wanted to be dictator of a first-world country. Is there any puzzle I could solve, given a gazillion IQ, that would achieve that? No. I could read all the books about how to manipulate people and figure out the best things to say, and the result would still be that people don’t want to give up their freedom, especially not to some uncharismatic fat nerd, no matter how excellent the things he says sound. But maybe if there were an economic depression, and I were simply highly charismatic, ruthless, and had all the right ex-classmates… I would have a chance to pull that off without any puzzle-solving, just by being smart enough not to sabotage myself with blunders; say, IQ 120 would do it.
And I am worried I am missing something huge. MIRI consists of far smarter people than I am, so I am almost certainly wrong… unless they have fallen in love with smartness so much that it created a pro-smart bias in them, unless it made them underestimate how often efficient world-changing has nothing to do with puzzle-solving and everything to do with beating obstacles with a huge hammer, or forging consensus, or a million other things.
But I think the actual data about what 180+ IQ people are doing are on my side here. What is Kasparov doing? What is James Woods doing? Certainly not radically transforming the world, nor taking it over.
Predictive isn’t a binary category. Statistically, IQ is predictive of a lot of things, including higher social skills and a longer lifespan.
Bernanke was, during his Federal Reserve tenure, one of the most powerful people in the US, and he scored 1590 out of 1600 on the SAT.
Kasparov is not 180+ IQ. When given a real IQ test he scored 135, lower than the average of LW users who submit their IQ on the census, and over 20 IQ points lower than Bernanke’s; and given that Bernanke scored near the top, and that the SAT isn’t designed to distinguish 160 from 180, he might be even smarter.
Bernanke’s successor was described in her NYTimes profile by a colleague as a “small lady with a large I.Q.”. While I can’t find direct scores, she likely has a higher IQ than Kasparov.
Top bankers are high-IQ people, and at the moment the banker class has quite a lot of power in the US. Banking likely needs more IQ than playing chess does.
With an SAT score of 1585, Bill O’Reilly is also much smarter than Kasparov, and the guy seems to have some influence in US political discussion.
How? Pretty sure it was the other way around in high school. Popular dumb shallow people, unpopular smart geeks.
I am not 100% sure what the SAT is, but if it is like a normal school test (memorize stuff like historical dates of battles, barf it back), then it is probably more related to dedication than to intelligence.
That could be a problem of perception. If someone is book smart and unpopular, people say “he is smart”. If someone is smart and popular, people say “he is cool”.
There are sportsmen and actors with very high IQ, but no one remembers them as “having high IQ”, only as being a great sportsman or a great actor.
Do you know how high an IQ Arnold Schwarzenegger has? Neither do I. My point is, when people say “Arnold Schwarzenegger”, no one thinks “I wonder how high that guy’s IQ is… maybe that could explain some of his success.” Maybe he has a much higher IQ than Kasparov, but most people would never even start thinking in that direction.
Also there is a difference between intelligence and signalling intelligence. Not everyone with high IQ must necessarily spend their days talking about relativity and quantum physics. Maybe they just invest all their intelligence into improving their private lives.
The score I can find on the internet for him is 135, which puts him in the same ballpark as Kasparov.
You are mistaking what people signal for their inherent intelligence. Bill O’Reilly doesn’t behave like a geek; that doesn’t change the fact that he’s smart. It’s like the mistake of thinking that being good at chess is about intelligence.
Unfortunately, at the moment I can’t find the link to the studies about IQ and social skills, but I think previous LW discussions treated a positive correlation between IQ and social abilities in the general population as well established.
It’s not about having memorized information. SAT scores are generally known to be a good proxy for IQ.
More helpful than single data points, here is a scatterplot of IQ vs income in Figure 1.
But again, that is not power. That is just smart people getting paid better when and if there are enough jobs around where intelligence is actually useful. It is a very big jump from the fact that there seem to be relatively many such jobs around to saying intelligence is a general world-changing, outcome-generating power. I cannot find it now, but I think income could just as well be correlated with height.
If I understand correctly, you are attempting a “proves too much” argument with height. However, this is irrelevant: if height is predictive of income, that is an interesting result in itself* (maybe tall people are more respected) and has no bearing on whether IQ is also predictive of income. I agree that IQ probably doesn’t scale indefinitely with success and power, though. The tails are already starting to diverge at 130.
*there is a correlation
It’s useful, but it compares people between IQ 100 and IQ 130. If we want to look at power in society, it’s worth looking at the extremes.
Well, a while back I read an article about Terence Tao. It said that about 1 in 40 child prodigies go on to become incredible academics. Part of the reason is that they are forced from an early age to learn as much as possible. There was one such child prodigy who published papers at the age of 15 and just gradually faded away afterwards. Because of their environment, these child prodigies burn out very quickly and grow jaded towards academia. Not wanting this to happen, Tao’s parents let him go at a pace he was comfortable with. What did that result in? He became one of the greatest mathematicians on the planet.
So yes, many 180+ IQ individuals never go on to become great, but that’s largely due to their environment. And most geniuses just lack the drive/charisma/interest to take over the world. But that still doesn’t answer your question about ‘How could an AI take over the world?’ or something to that effect.
Well, you pointed out that some rich, charismatic individual with the right connections could make a big dent in society. But charisma is something that can be learnt. Sure, it’s hard, and maybe not everyone can do it, but it can be learnt. Now, if one has sufficient charisma, one can make a huge impact on the world, e.g. Hitler. (The Slate Star Codex blog discussed something similar to this, and I’m basically just lifting the Hitler example from there.)
The Nazi party had about 55 members when he joined, and at the time he was just a failed painter/veteran soldier. And there are disturbing records of people saying things like ‘Look at this new guy! We’ve got to promote him. Everyone he talks to is just joining us, it’s insane!’ And people just flocked to him. Someone might hate him, go see a speech or two, and then become a lifelong convert. Even in WW2, when the German situation was getting progressively worse, he still had 90% approval from his people. This is what charisma can do. And this is something the AI could learn. Now, imagine what this AI, which would be much more intelligent than Hitler, could do.
Furthermore, an AI would be connected to the entirety of the internet, and once it had learnt pretty much everything, it would be able to gain so much power. For example:
1) It could gain capital very rapidly. Certain humans have managed to gain huge amounts of money by overcoming their biases and making smart business decisions. Everything they learnt, so too could an AI, and with far fewer biases. So the AI could rapidly acquire business capital.
2) It could use this business capital to set up various companies, and with its intellectual capabilities, it could outpace other companies in terms of innovation. In a few years, it might well dominate the market.
3) Because of its vast knowledge drawn from the internet, it would be able to market far more successfully than other organisations. Rather quickly it would draw people to its side, gaining a lot of social capital.
4) It would also be able to gain a lot of knowledge about any competitors and give it yet another edge over them.
5) Due to its advanced marketing strategies, wealth and social capital, it could found a party somewhere and place a figurehead government in power. From there, it would be able to increase the country’s power (probably in a careful fashion, not upsetting the people and keeping allies around it).
6) Now it has a country under its control, large sway over the rest of the world, and a huge amount of resources. From here, it could advance manufacturing and production to such a degree that it would need no humans to work for it.
7) Still acting carefully, the AI would now have the capability to build pretty much whatever it wanted. From there, it could institute more autonomous production plants around the world. It may provide many goods for free for the locals, in order to keep them on its side.
8) Now the AI can try to take over other countries, founding parties with its backing and promising a Golden Age for mankind.
9) The AI has transformed itself into the head of the world’s greatest superpower.
10) Victory
This is just a rough outline of a path an AI could take. At every stage it simply replicates the feats of extraordinary individuals throughout history. Nothing it did was impossible, and I would say not even implausible. Of course, it could do things very differently. Once we build autonomous production plants, the AI would just need to take them over, produce large amounts of robots and weaponry, and take over the world. Or maybe it would just hold the world’s economic welfare hostage.
Thinking that one can outsmart the market is the biggest bias in this regard. People like Soros and Buffett were either lucky or had secret information others didn’t; otherwise it should be very, very unlikely to outsmart the market.
I wasn’t referring to the stock market. I know that almost all money made by ‘playing the market’ is due to luck. What I meant was creating the right type of service/good with the right kind of marketing. More like Steve Jobs, Elon Musk and so forth.
A quant who does day trading can find that there is some market inefficiency between the prices of different products, and then make money from the effect.
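As a toy illustration of the kind of inefficiency meant here (all prices and fees below are hypothetical), a sketch of the simplest case, the same instrument quoted at different prices on two venues:

```python
def arbitrage_profit(price_a, price_b, fee_per_trade):
    """Per-unit profit from buying at the cheaper venue and
    selling at the dearer one, net of one fee per leg.
    Returns 0.0 when the gap doesn't cover the fees."""
    gap = abs(price_a - price_b)
    return max(0.0, gap - 2 * fee_per_trade)

print(arbitrage_profit(100.00, 100.50, 0.10))  # 0.3 per unit
print(arbitrage_profit(100.00, 100.05, 0.10))  # 0.0, gap too small to cover fees
```

The point of the fee term is that most visible price gaps are smaller than the cost of trading them, which is exactly why such opportunities are rare and short-lived once anyone is watching.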
That’s not how either Soros or Buffett made their fortunes, but it’s still possible for other people.