We don’t know what Hawking meant by “near certainty” (90 per cent or 99.999 per cent), and depending on which it is we may come to different conclusions about what probability it implies for the next 100 years. Most readers will not do this kind of calculation anyway. They will take away that global risks are something that could happen on a 1,000–10,000 year time frame, and will discount them as unimportant.
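As a rough illustration of why the reading matters, here is a back-of-the-envelope sketch. It assumes a constant, independent hazard rate per century, which is my simplification and not anything Hawking stated:

```python
# Back-of-the-envelope: if catastrophe is "near certain" within some horizon,
# what does that imply for any single 100-year window, assuming a constant,
# independent per-century hazard rate? (My assumption, not Hawking's.)

def per_century_probability(p_total: float, horizon_years: float) -> float:
    """Probability of catastrophe in one 100-year window, given a total
    probability p_total over horizon_years."""
    survival_total = 1.0 - p_total                  # chance of surviving the whole horizon
    centuries = horizon_years / 100.0
    survival_per_century = survival_total ** (1.0 / centuries)
    return 1.0 - survival_per_century

for p_total in (0.90, 0.99999):                     # two readings of "near certainty"
    for horizon in (1_000, 10_000):                 # the two quoted time frames
        p100 = per_century_probability(p_total, horizon)
        print(f"{p_total:.5f} over {horizon:>6} years -> {p100:.1%} per century")
```

Under that assumption, “near certainty” over 10,000 years implies roughly 2% per century if it means 90 per cent but about 11% per century if it means 99.999 per cent; over 1,000 years the figures are roughly 21% and 68% respectively.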
Your goal seems to be to prove that Hawking thinks global risks are real in the near-term future, while he said exactly the opposite.
About Mars: if colonies are built by nation states (say there are two colonies, an American one and a Chinese one), then a war between the US and China will with high probability result in a war between their colonies, because if one side chooses to completely destroy the other side and its second-strike capability, it has to destroy all of its remote military bases, which may have nukes.
He did not “say exactly the opposite”. He said: it’ll be at least 100 years before we have much chance of mitigating species-level disasters by putting part of our species somewhere other than earth, so “we have to be very careful”.
My goal is to point out that you are misrepresenting what Hawking said.
If colonies are built by nation states
If these are genuinely self-supporting colonies on another planet, I think it will not be long—a few generations at most—before they stop thinking of themselves as mere offshoots of whatever nation back on earth originally produced them. Their relations with other colonies on Mars (or wherever) will be more important to them than their relations with anyone back on earth. And I do not think they will be keen to engage in mutually assured destruction merely because their alleged masters back on earth tell them to.
(And if they are not genuinely self-supporting colonies, then they are not what Hawking was talking about.)
My criticism concentrates on two levels: his wording, and his model of x-risks and their prevention.
His wording is ambiguous when he speaks about tens of thousands of years; we don’t have that long.
But I also think that his claims that we have 100 years (with a small probability of extinction) and that space colonies are our best chance are both false.
Firstly, because we need strong AI and nanotech to create a truly self-sustaining colony; self-replicating robots are the best way to build colonies. So we need to deal with the risks of AI and nanotech before we can create such colonies, and I think strong AI will be created in less than 100 years. The same may be said about most other risks: we could create a new flu virus even now, without any new technologies. Global catastrophe is almost certain in the next 100 years if we don’t implement protective measures here on Earth.
Space colonies will not be safe from UFAI or from nanobots. Large spacecraft may be used as kinetic weapons against planets, so space exploration could create new risks. Space colonies will also not be safe from internal conflicts, as a large colony will be able to create nukes and viruses and use them against another planet, another colony on the same planet, or its own population in a case of internal terrorism. Only starships travelling at near light speed may be useful as an escape mechanism, as they could help spread civilization through the Galaxy and create many independent nodes.
Our best options for preventing x-risks are international control systems for dangerous tech and, later, friendly AI, and we need to do this now; space colonies have only remote and marginal utility.
But I also think that his claims that we have 100 years (with a small probability of extinction)
His claim is that we have 100 years in which we have to be extra careful to prevent x-risk.
The same may be said about most other risks: we could create a new flu virus even now, without any new technologies.
With today’s technology you could create a problematic new virus. On the other hand, that would hardly mean extinction. Wearing masks 24/7 to filter the air isn’t fun, but it’s a possible step when we are afraid of airborne viruses.
Our best options for preventing x-risks are international control systems for dangerous tech and, later, friendly AI, and we need to do this now; space colonies have only remote and marginal utility.
It’s not like Hawking doesn’t call for AGI control.
A lot of media have started to report Hawking’s claims in the following words: “Professor Stephen Hawking has warned that a disaster on Earth within the next thousand or ten thousand years is a ‘near certainty’.” http://www.telegraph.co.uk/news/science/science-news/12107623/Prof-Stephen-Hawking-disaster-on-planet-Earth-is-a-near-certainty.html While the media may not be exact in repeating his claims, and his wording is rather ambiguous, he has not clarified it publicly as far as I know.