Because the asteroid threat is real and has caused mass extinction events before, probably more than once. AI takeoff may or may not be a real threat, and likely isn’t even possible. There is a qualitative difference between these two.
Also: MIRI has a financial incentive to lie about and/or exaggerate the threat; Tyson does not. Someone might think the AI threat is just a scam MIRI folks use to pump impressionable youngsters for cash.
Timescales, in a nutshell. What is the chance of an extinction-level event from asteroids while we still have all our eggs in one basket (on Earth), compared to, e.g., threats from AI, bioengineering, etc.?
Extinction-level asteroid impacts every few tens of millions of years work out to a very low probability per century, especially considering that, e.g., the impact that caused the demise of the dinosaurs wouldn’t even have been a true x-risk for humans.
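To make that back-of-the-envelope comparison concrete, here is a rough sketch; the ~50-million-year impact interval and the discounted AI figure below are illustrative assumptions I’m plugging in, not established estimates:

```python
# Rough per-century x-risk comparison (illustrative assumptions only).

# Assumption: one extinction-level asteroid impact every ~50 million years,
# i.e. "every few tens of millions of years".
impact_interval_years = 50_000_000
impacts_per_year = 1 / impact_interval_years

# For rare events, P(at least one impact in 100 years) is approximately
# rate * 100.
asteroid_per_century = impacts_per_year * 100  # ~2e-6

# Assumption: a heavily discounted guess at AI x-risk this century,
# e.g. 1% chance takeoff is possible times 1% chance it goes badly.
ai_per_century = 0.01 * 0.01  # 1e-4, purely hypothetical numbers

print(f"asteroid x-risk per century ~ {asteroid_per_century:.1e}")
print(f"discounted AI x-risk per century ~ {ai_per_century:.1e}")
print("AI estimate larger?", ai_per_century > asteroid_per_century)
```

The point is only about orders of magnitude; swap in whatever numbers you find plausible and see whether the ranking changes.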
I’d agree that the error bars on the estimated asteroid x-risk probabilities are smaller than the ones on the estimated x-risk from e.g. AI, but even a small chance of the AI x-risk would beat out the minuscule one from asteroids, don’t you think?
Sorry, you asked “why one might.” I gave two reasons: (a) actual direct evidence of the threat in one case vs. its absence in the other, and (b) incentives to lie. There are certainly arguments in favor of taking the AI takeoff threat seriously, but that was not your question :). I think you are trying to have an argument with me that I did not come here to have.
In case I was not clear: regardless of the actual state of probabilities on the ground, the difference between asteroids and takeoff AI is PR. Think of it from a typical person’s point of view. Tyson is a respected physicist with no direct financial stake in how threats are evaluated, taking seriously a known, existing threat that has already reshaped our biosphere more than once. EY is some sort of internet cult leader? Whose claim to fame is a fan fic? And who relies on people taking his pet threat seriously for his livelihood? And it’s not clear the threat is even real?

Who do you think people will believe?
I think I replied before reading your edit, sorry about that.
I’d say that Tyson does have incentives for popularizing a threat that’s right up his alley as an astrophysicist, though maybe not to the same degree as MIRIans. However, assuming the latter may be uncharitable, since people joined MIRI before they had that incentive. If the financial incentive had played a crucial part, they wouldn’t have dedicated their professional lives to AI as an x-risk in the first place.
As for “(AI takeoff) likely isn’t possible”: even if you throw that into your probability calculation, it may (in my opinion, will) still beat out a “certain threat, but with a very low probability.”

Thanks for your thoughts, upvotes all around :)
“However, assuming the latter may be uncharitable, since people joined MIRI before they had that incentive.”
I don’t think appeals to charity are valid here. Let’s imagine some known, obvious cult, like Scientology. Hubbard said: “You don’t get rich writing science fiction. If you want to get rich, you start a religion.” So he declared what he was doing right away; the folks who joined, including perhaps even Mr. Miscavige himself,* may well have had good intentions. Perhaps they wanted to “Clear the planet” or whatever. But so what? Once Miscavige got into the situation with the appropriate incentives, he happily went crooked.
Regardless of why people joined MIRI, they have incentives to be crooked now.
*: Apparently Miscavige was born into Scientology. “You reap what you sow.”
To be clear: I am not accusing them of being crooked. They seem like earnest people. I am merely explaining why they have a perception problem in a way that Tyson does not. Tyson is a well-known personality who makes money partly from his research gigs and partly from speaking engagements. He has an honorary doctorate list half a page long. I am sure existential threats are one of his topics, but he will happily survive without asteroids.