Rant mode on:
Whenever Hawking blurts something out, the mass media spread it around straight away. He is probably right about black holes, but when it comes to global risks his statements are not only false but, one could say, harmful.
So today he has said that in the millennia to come we will face the threat of artificial viruses and nuclear war. This statement puts all the problems at about the same distance as the nearest black hole.
In fact, both nuclear war and artificial viruses are realistic right now and could be used within our lifetime with a probability in the tens of percent.
Feel the difference between the chance that an artificial flu virus exterminates 90% of the population within 5 years (the rest being finished off by other viruses) and speculation about dangers over thousands of years.
The first is mobilizing, while the second invites enjoyable relaxation.
He said: ‘The chances that a catastrophe will occur on Earth this year are rather low. However, they grow with time, so it will undoubtedly happen within the next one thousand or ten thousand years.’
The scientist believes that the catastrophe will be the result of human activity: people could be destroyed by a nuclear disaster or by the spread of an artificial virus. However, according to the physicist, mankind can still save itself; to that end, colonization of other planets is needed. Reportedly, Stephen Hawking earlier stated that artificial intelligence would be able to surpass human intelligence in as little as 100 years.
Also, the statement that migration to other planets automatically means salvation is false. What catastrophe can we escape if we have a colony on Mars? It will die off without supplies. If a world war started, nuclear missiles would reach it as well. In the case of a slow global pandemic, people would bring it there, just as they spread the AIDS virus now or used to carry plague on ships in the past. If a hostile AI appeared, it would instantly reach Mars via communication channels. Even gray goo can fly from one planet to another. And if the Earth were hit by a 20-km asteroid, the amount of debris thrown into space would be so great that it would reach Mars and fall there as a meteorite shower.
I understand that simple solutions are alluring, and a Mars colony is a romantic thing, but its usefulness would be negative. Even if we learned to build starships travelling at speeds close to that of light, they would primarily become a perfect kinetic weapon: the collision of such a starship with a planet would mean the death of the planet’s biosphere.
Finally, some words about AI. Why namely 100 years? When talking about risks, we have to consider the lower time limit rather than the median. And the lower limit of the estimated time to create dangerous AI is 5 to 15 years, not 100. http://www.sciencealert.com/stephen-hawking-says-a-planetary-disaster-on-earth-is-a-near-certainty
Rant mode off
I think you’re reading things into what he said that he never intended to put there.
His central claim is certainly not a reassuring, relaxing one: disaster is “a near certainty”. He says it’ll be at least a hundred years before we have “self-sustaining colonies in space” (note the words “self-sustaining”; he is not talking about a colony on Mars that “will die off without supplies”) and that this means “we have to be very careful in this period”.
Yes, indeed, the timescale on which he said disaster is a near certainty is “the next thousand or ten thousand years”. I suggest that this simply indicates that he’s a cautious scientific sort of chap and doesn’t like calling something a “near certainty” if it’s merely very probable. Let’s suppose you’re right about nuclear war and artificial viruses, and let’s say there’s a 10% chance that one of those causes a planetary-scale disaster within 50 years. (That feels way too pessimistic to me, for what it’s worth.) Then the time until there’s a 99% chance of such disaster—which, for me, is actually not enough to justify the words “near certainty”—is 50 log(0.01)/log(0.9) years … or about 2000 years. Well done, Prof. Hawking!
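A quick check of that arithmetic, as a minimal Python sketch (the 10%-per-50-years figure and the assumption of a constant, independent risk per period are the hypotheticals from the paragraph above, not anything Hawking stated):

```python
import math

# Hypothetical: a 10% chance of a planetary-scale disaster in any given
# 50-year period, independent across periods.
p_per_period = 0.10
period_years = 50

# Years until the cumulative chance of at least one disaster reaches 99%:
# solve (1 - p_per_period) ** (t / period_years) = 0.01 for t.
t = period_years * math.log(0.01) / math.log(1 - p_per_period)
print(f"about {t:.0f} years")  # about 2185 years, i.e. roughly two thousand
```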
Indeed, the statement that “migration to other planets automatically means salvation” is false. But that goes beyond what he actually said. A nuclear war or genetically engineered flu virus that wiped out most of the population on earth probably wouldn’t also wipe out the population of a colony on, say, Mars. (You say “nuclear missiles would reach [Mars] as well”, but why? Existing nuclear missiles certainly aren’t close to having that capability, and there’s a huge difference from the defensive point of view between “missiles have been launched and will hit us in a few minutes if not intercepted” and “missiles have been launched and will hit us in a few months if not intercepted”.)
You ask “Why namely 100 years?” but, at least in the article you link to, Hawking is not quoted as saying that there are no AI risks on timescales shorter than that. Maybe he’s said something similar elsewhere?
I really don’t think many people are going to read that article and come away feeling more relaxed about humanity’s prospects for survival.
I am commenting on the impression he conveys to the public: that the risks are remote and that space colonies will save us. He may privately have other thoughts on the topic, but that does not matter. We could reconstruct his line of reasoning, but most people will not do it. Even if he thinks the risk is 1 per cent in the next 50 years, that compounds to about 87 per cent over 10,000 years. I think that attempts to reconstruct someone’s line of reasoning with the goal of getting a more comforting result are biased.
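For reference, the compounding behind that 87 per cent (a minimal sketch; the 1%-per-50-years figure is the hypothetical from the sentence above, not a quoted estimate):

```python
# Hypothetical: a 1% chance of global catastrophe per 50-year period,
# assumed constant and independent, compounded over 10,000 years.
p_total = 1 - (1 - 0.01) ** (10_000 / 50)
print(f"{p_total:.0%}")  # ~87%
```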
For example, someone might say: “I want to kill children.” But we know a priori that he is a clever and kind man, so maybe he started from the idea that he wants to make the world a better place, and maybe he just wanted to say that he would like to prevent overpopulation. But I prefer to take claims at face value, no matter who says them and no matter what he may have thought but did not say.
Self-sustaining Mars colonies would be able to create nukes on their own. If a war started on Earth and there were several colonies on Mars built by different countries, they could start a war between each other too. In that case the travel time for nukes would still be minutes, from one point on Mars to another. The history of WW2 shows that war between metropoles often spilled over into war in the colonies (North Africa).
Hawking’s statement about AI within 100 years has been quoted in another article about the same lectures.
That’s not my idea of “near certainty”.
That is not my goal, and I have no idea why you suggest it is.
“But I prefer to take claims at face value.”
It doesn’t appear to me that you are doing this in Hawking’s case; rather, you are reading all sorts of implications into his words that they don’t logically entail and don’t seem to me to imply in weaker senses either.
“they could start a war between each other too”
Sure, they could. But I don’t see any particular reason to assume that they would.
We don’t know what Hawking meant by “near certainty”: 90 per cent or 99.999 per cent. Depending on which, we would come to different conclusions about what probability it implies for the next 100 years. Most readers will not do this type of calculation anyway. They will learn that global risk is something that could happen on a 1,000 to 10,000 year time frame, and they will discount it as unimportant.
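To illustrate how much the interpretation matters, here is a minimal sketch (the assumption of a constant, independent per-century risk is added for illustration and is not something Hawking specified):

```python
# If "near certainty" over 10,000 years means a total probability P,
# the implied risk per century under a constant, independent rate is
# p_century = 1 - (1 - P) ** (100 / 10_000).
for P in (0.90, 0.99999):
    p_century = 1 - (1 - P) ** (100 / 10_000)
    print(f"P = {P}: about {p_century:.1%} per century")
# P = 0.9     -> about 2.3% per century
# P = 0.99999 -> about 10.9% per century
```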
Your goal seems to be to prove that Hawking thinks that global risks are real in the near-term future, while he said exactly the opposite.
A lot of media outlets have started to report Hawking’s claims in the following words: “Professor Stephen Hawking has warned that a disaster on Earth within the next thousand or ten thousand years is a ‘near certainty’.” http://www.telegraph.co.uk/news/science/science-news/12107623/Prof-Stephen-Hawking-disaster-on-planet-Earth-is-a-near-certainty.html While the media may not be exact in repeating his claims and the wording is rather ambiguous, he has not clarified them publicly as far as I know.
About Mars: if the colonies are built by nation states, for example an American colony and a Chinese colony, then a war between the US and China will with high probability result in a war between their colonies, because if one side chooses to completely destroy the other side and its second-strike capability, it has to destroy all of the other side’s remote military bases, which may have nukes.
He did not “say exactly the opposite”. He said: it’ll be at least 100 years before we have much chance of mitigating species-level disasters by putting part of our species somewhere other than earth, so “we have to be very careful”.
My goal is to point out that you are misrepresenting what Hawking said.
If these are genuinely self-supporting colonies on another planet, I think it will not be long—a few generations at most—before they stop thinking of themselves as mere offshoots of whatever nation back on earth originally produced them. Their relations with other colonies on Mars (or wherever) will be more important to them than their relations with anyone back on earth. And I do not think they will be keen to engage in mutually assured destruction merely because their alleged masters back on earth tell them to.
(And if they are not genuinely self-supporting colonies, then they are not what Hawking was talking about.)
My criticism concentrates on two levels: his wording and his model of x-risks and their prevention. His wording is ambiguous when he speaks about thousands or tens of thousands of years; we do not have that long.
But I also think that his claims that we have 100 years (with only a small probability of extinction) and that space colonies are our best chance are both false.
Firstly, because we need strong AI and nanotech to create a really self-sustaining colony; self-replicating robots are the best way to build colonies. So we need to deal with the risks of AI and nanotech before we create such colonies, and I think that strong AI will be created in less than 100 years. The same may be said about most other risks: we could create a new flu virus even now, without any new technologies. A global catastrophe is almost certain in the next 100 years if we do not implement protective measures here on Earth.
Space colonies will not be safe from UFAI or from nanobots. Large spacecraft may be used as kinetic weapons against planets, so space exploration could create new risks. Space colonies will also not be safe from internal conflicts, as a large colony will be able to create nukes and viruses and use them against another planet, against another colony on the same planet, or in internal terrorism. Only starships travelling at near light speed may be useful as an escape mechanism, as they could help spread civilization through the Galaxy and create many independent nodes.
Our best options for preventing x-risks are international control of dangerous technologies and, later, friendly AI; we need to start on these now, while space colonies have only remote and marginal utility.
“But I also think that his claims that we have 100 years (with only a small probability of extinction)…”
His claim is that we have 100 years in which we have to be extra careful to prevent x-risk.
“The same may be said about most other risks: we could create a new flu virus even now, without any new technologies.”
With today’s technology you could create a problematic new virus. On the other hand, that would hardly mean extinction. Wearing masks 24/7 to filter the air isn’t fun, but it’s a possible step when we are afraid of airborne viruses.
“Our best options for preventing x-risks are international control of dangerous technologies and, later, friendly AI; we need to start on these now, while space colonies have only remote and marginal utility.”
It’s not like Hawking doesn’t call for AGI control.
“So today he has said that in the millennia to come we will face the threat of artificial viruses and nuclear war. This statement puts all the problems at about the same distance as the nearest black hole.”
Do you really think that it’s Hawking’s position that at the moment there’s no threat of nuclear war?
I don’t think that he thinks so. I am commenting on the impression he conveys to the public: that the risks are remote and space colonies will save us. He may privately have other thoughts on the topic, but that does not matter.
I don’t think it makes sense to assign full responsibility for a message to a person who is not the author of the article.
That said, I don’t think that saying:
Although the chance of a disaster on planet Earth in a given year may be quite low, it adds up over time, becoming a near certainty in the next thousand or ten thousand years.
makes any reader update towards believing that the chances of nuclear war or genetically engineered viruses are lower than they previously expected. Talking with mainstream media inherently requires simplifying your message. Focusing the message on the compounding of risk over time doesn’t seem wrong to me.
If he wrote an article about his understanding of the x-risk timeframe and the timeframe for prevention measures, with all the fidelity he uses to describe black holes, we could concentrate on that.
But for now it may be wise to say that the media have misinterpreted his words and that he (probably) meant exactly the opposite: that we must invest in x-risk prevention now. The media publications are the only thing we can argue with. Also, I think he should take more responsibility when talking to the media, because he is a guru and everything he says may be taken uncritically.
Even the article says we have to be extra careful with x-risk prevention in the next 100 years because we don’t have a self-sustaining Mars base. I think you are misreading the article when you say it argues against investing in x-risk prevention now.