To answer your second question: No, there aren’t any historical examples I am thinking of. Do you find many historical examples of existential risks?

Edit: Global nuclear warfare and biological weapons would be the best candidates I can think of.

Could you answer my first question, too? Which are the intelligent, well-intentioned, and relatively rational humans you are thinking of? Scientists developing nanotech, biotech, and AI? Policy-makers? Who? How would an example disaster scenario unfold in your view?
Are you saying that the very development of nanotech, biotech, and AI would create an elevated level of existential risk? If so, I would agree. A common counter-argument I’ve heard is that whether we like it or not, someone is going to make progress in at least one of those areas, and that we should try to be the first movers rather than leave that role to someone less scrupulous.
In terms of safety, using AI as an example:
World with no AI > World where relatively scrupulous people develop an AI > World where unscrupulous people develop an AI
Think about how the world would look if the Soviet Union or Nazi Germany had developed nukes before the US.
Global nuclear warfare and biological weapons would be the best candidates I can think of.
Intelligence did allow the development of nukes. Yet given that we already have them, greater global intelligence would probably decrease the risk of their being used.
Let’s assume, for the sake of argument, that the mere development of future nanotech, biotech, and AI doesn’t go horribly wrong and create an existential disaster. If so, then the existential risk will lie in how these technologies are used.
I will suggest that there is a certain threshold of intelligence greater than ours at which everyone is smart enough not to pull globally harmful stunts with nuclear weapons, biotech, nanotech, and AI, and/or smart enough to create safeguards so that small numbers of intelligent crazy people can’t do so either. The trick will be getting to that level of intelligence without mishap.
I was reading the Wikipedia Cuban Missile Crisis article, and it does seem that intelligence helped avert catastrophe. There are multiple points where things could have gone wrong but didn’t, due to people being smart enough not to do something rash. I suggest that even greater intelligence might ensure that situations like this never develop in the first place, or are resolved safely when they do.
Here are some interesting parts:
That morning, a U-2 piloted by USAF Major Rudolf Anderson, departed its forward operating location at McCoy AFB, Florida, and at approximately 12:00 p.m. Eastern Standard Time, was shot down by an S-75 Dvina (NATO designation SA-2 Guideline) SAM launched from an emplacement in Cuba. The stress in negotiations between the USSR and the U.S. intensified, and only later was it learned that the decision to fire was made locally by an undetermined Soviet commander on his own authority.
If this guy had been smarter, maybe this mistake would never have been made.
We had to send a U-2 over to gain reconnaissance information on whether the Soviet missiles were becoming operational. We believed that if the U-2 was shot down that—the Cubans didn’t have capabilities to shoot it down, the Soviets did—we believed if it was shot down, it would be shot down by a Soviet surface-to-air-missile unit, and that it would represent a decision by the Soviets to escalate the conflict. And therefore, before we sent the U-2 out, we agreed that if it was shot down we wouldn’t meet, we’d simply attack. It was shot down on Friday [...]. Fortunately, we changed our mind, we thought “Well, it might have been an accident, we won’t attack.” Later we learned that Khrushchev had reasoned just as we did: we send over the U-2, if it was shot down, he reasoned we would believe it was an intentional escalation. And therefore, he issued orders to Pliyev, the Soviet commander in Cuba, to instruct all of his batteries not to shoot down the U-2.
Luckily, Khrushchev and McNamara were smart enough not to escalate. Their intelligence protected against the risk caused by the stupid Soviet commander.
Arguably the most dangerous moment in the crisis was unrecognized until the Cuban Missile Crisis Havana conference in October 2002, attended by many of the veterans of the crisis, at which it was learned that on October 26, 1962 the USS Beale had tracked and dropped practice depth charges on the B-39, a Soviet Foxtrot-class submarine which was armed with a nuclear torpedo. Running out of air, the Soviet submarine was surrounded by American warships and desperately needed to surface. An argument broke out among three officers on the B-39, including submarine captain Valentin Savitsky, political officer Ivan Semonovich Maslennikov, and chief of staff of the submarine flotilla, Commander Vasiliy Arkhipov. An exhausted Savitsky became furious and ordered that the nuclear torpedo on board be made combat ready. Accounts differ about whether Commander Arkhipov convinced Savitsky not to make the attack, or whether Savitsky himself finally concluded that the only reasonable choice left open to him was to come to the surface.[29]
At the Cuban Missile Crisis Havana conference, Robert McNamara admitted that nuclear war had come much closer than people had thought. Thomas Blanton, director of the National Security Archive, said that “a guy called Vasili Arkhipov saved the world.”
Basically, a stupid dude on the sub wanted to fire the nuclear torpedo, but a smart dude stopped him.
Yes, existential risk ultimately came from the intelligent developers of nuclear weapons. Yet once the cat was out of the bag, existential risks came from people being stupid, and those risks were counteracted by people being smart. I would expect that more intelligence would be even more helpful in potential disaster situations like this.
The real risk seems to be from weapons developed by smart people falling into the hands of stupid people. Yet if even the stupidest people were smart enough not to play around with mutually assured destruction, then the world would be a safer place.
What relationship does the kind of ‘smartness’ possessed by the individuals in question have with IQ?

I don’t think there are good reasons for thinking they’re one and the same.

I agree with Annoyance here. My guess is that a higher IQ may help the individuals in the situations Hughristik describes, but this is not the type of evidence we should consider very convincing. In this example, I would guess that differences in the individuals’ desire and ability to think through the consequences of their actions are far more important than differences in their IQ. This may be explained by the incentives facing each individual.
In this example, I would guess that differences in the individuals’ desire and ability to think through the consequences of their actions are far more important than differences in their IQ.
This may be true, but “ability to think through the consequences of actions” is probably not independent of general intelligence. People with higher g are better at thinking through everything. This is what the research I linked to (and much that I didn’t link to) shows.
This graph from one of the articles shows that people with higher IQ are less likely to be unemployed, have illegitimate children, live in poverty, or be incarcerated. These life outcomes seem potentially related to considering consequences and planning for the long-term. If intelligence is related to positive individual life outcomes, then it would be unsurprising if it is also related to positive group or world outcomes.
In the case of avoiding the use of nuclear weapons, probably only a certain threshold of intelligence is necessary. Yet from the historical example of the Cuban Missile Crisis, the thinking involved wasn’t always trivial:
We had to send a U-2 over to gain reconnaissance information on whether the Soviet missiles were becoming operational. We believed that if the U-2 was shot down that—the Cubans didn’t have capabilities to shoot it down, the Soviets did—we believed if it was shot down, it would be shot down by a Soviet surface-to-air-missile unit, and that it would represent a decision by the Soviets to escalate the conflict. And therefore, before we sent the U-2 out, we agreed that if it was shot down we wouldn’t meet, we’d simply attack. It was shot down on Friday [...]. Fortunately, we changed our mind, we thought “Well, it might have been an accident, we won’t attack.” Later we learned that Khrushchev had reasoned just as we did: we send over the U-2, if it was shot down, he reasoned we would believe it was an intentional escalation. And therefore, he issued orders to Pliyev, the Soviet commander in Cuba, to instruct all of his batteries not to shoot down the U-2.
Both sides were constantly guessing the reasoning of the other.
In short, we do have reasons to suspect a relationship between intelligence and restraint with existentially risky technologies. People with higher intelligence don’t merely have greater “book smarts,” they have better cognitive performance in general and better life and career outcomes on an individual level, which may also extrapolate to a group/world level. Will more research be necessary to make us confident in this notion? Of course, but our current knowledge of intelligence should establish it as probable.
Furthermore, people with higher intelligence probably have a better ability to guess the moves of other people with existentially risky technologies and navigate Prisoners’ Dilemmas of mutually assured destruction, as we see in the historical example of the Cuban Missile Crisis. We don’t have rigorous scientific evidence for this point yet, though I don’t think it’s a stretch, and hopefully we will never have a large sample size of existential crises.
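For concreteness, the kind of standoff described above can be sketched as a tiny two-player game. This is only an illustration of the reasoning, not anything from the discussion itself: the moves, payoffs, and helper function below are made up, and with these numbers the game is closer to Chicken than to a textbook Prisoner's Dilemma. The feature that matters here is the same, though: each side's best move depends on correctly guessing what the other side will do.

```python
# Illustrative payoff matrix for a one-shot "hold back vs. escalate" standoff.
# Payoffs are (side A, side B); all numbers are invented purely for illustration.
payoffs = {
    ("hold", "hold"):         (0, 0),          # status quo preserved
    ("hold", "escalate"):     (-10, 5),        # the escalating side gains an edge
    ("escalate", "hold"):     (5, -10),
    ("escalate", "escalate"): (-1000, -1000),  # mutual destruction
}

def best_response(opponent_move, player):
    """Return the move that maximizes this player's payoff, assuming the
    opponent's move is known (or correctly guessed)."""
    def payoff(move):
        profile = (move, opponent_move) if player == 0 else (opponent_move, move)
        return payoffs[profile][player]
    return max(["hold", "escalate"], key=payoff)

# If side A correctly anticipates escalation, escalating back costs far more
# than it gains, so the best response is restraint; if side A expects the other
# side to hold back, the temptation to escalate appears. This is why each side
# spent so much effort modeling the other's reasoning.
print(best_response("escalate", player=0))  # -> hold
print(best_response("hold", player=0))      # -> escalate
```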
I’m not sure we have serious disagreements on this. Research on intelligence enhancement sounds like a good idea, for many reasons. I’m just choosing to emphasize that there are probably other, much more effective approaches to reducing existential risks, and it’s by no means impossible that intelligence enhancement could increase existential risks.
What about the inherent incentive that motivates people even in the absence of strong external factors?

I’m not sure I understand you. Are you referring to the distinction between intrinsic and extrinsic motivation?

More like a distinction between different types of intrinsic factors.

I still have no idea what you’re talking about and how it relates to my comment.

When I said “smartness,” I was thinking of general intelligence, the g-factor. As it happens, g does have a high correlation with IQ (0.8 as I recall, though I can’t find the source right now). g is a highly general factor related to better performance in many areas including career and general life tasks, not just in academic settings (see p. 342 for a summary of research), so we should hypothesize that nuclear missile restraint is related to g also.
As it happens, g does have a high correlation with IQ
Someone who knows the details of this is welcome to correct me if I’m wrong, but as I understand it g is a hypothetical construct derived via factor analysis on the components of IQ tests, so it will necessarily have a high correlation with those tests (provided the results of the components are themselves correlated).
Correct. g is the degree to which performances on various subtypes of IQ tests are statistically correlated—the degree that performance on one predicts performance on another.
It’s a very crude concept, and one that has not been reliably identified as being detectable without use of IQ tests, although several neurophysiologic properties have been suggested as indicating g.
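As a rough illustration of the factor-analysis point a few comments up, here is a toy simulation. Everything in it is an assumption made for illustration (the simulated "subtests", the noise levels, and the use of a first principal component as a crude stand-in for a proper factor model); it only shows that a general factor extracted from positively correlated scores will, almost by construction, track the composite of those scores.

```python
# Toy simulation: a "g-like" factor pulled out of correlated subtest scores
# correlates very highly with the plain sum of those scores. All parameters
# here are illustrative assumptions, not estimates from real IQ data.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 5000, 6

# Assume each subtest score = shared ability + independent noise.
shared_ability = rng.normal(size=(n_people, 1))
scores = 0.7 * shared_ability + 0.5 * rng.normal(size=(n_people, n_subtests))

# Standardize, then take the first principal component as a crude factor.
standardized = (scores - scores.mean(axis=0)) / scores.std(axis=0)
_, _, vt = np.linalg.svd(standardized, full_matrices=False)
g_estimate = standardized @ vt[0]

# Compare with a simple "full-scale" score: the sum of the subtests.
full_scale = standardized.sum(axis=1)
print(abs(np.corrcoef(g_estimate, full_scale)[0, 1]))  # close to 1 in this setup
```

In this toy setup the printed correlation comes out near 1, which is the sense in which a high g-IQ correlation is partly built into how g is defined, as noted above.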