Niels Bohr supposedly said “Prediction is difficult, especially about the future”. Even if he was mistaken about quantum mechanics, he was right about that.
Every generation seems to think it’s special and will encounter new circumstances that turn old advice on its head. Jesus is coming back. We’ll all die in a nuclear war. Space aliens are coming. A supernova cascade will sterilize Earth. The planets will align and destroy the Earth. Nanotech will turn us all into grey goo. Global warming will kill us all.
It’s always something. Now it’s AGI. Maybe it’ll kill us. Maybe it’ll usher in utopia, or transform us into gods via a singularity.
Maybe. But based on the record to date, it’s not the way to bet.
Whatever you think the world is going to be like in 20 years, you’ll find it easier to deal with if you’re not living hand-to-mouth. If you find it difficult to save money, it’s very tempting to find an excuse to not even try. Don’t deceive yourself.
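For a sense of scale, here's a minimal sketch with made-up numbers (the monthly amount, return, and horizon are assumptions for illustration, not figures from this post) of what two decades of steady saving adds up to:

```python
# Rough sketch: future value of a fixed monthly contribution,
# compounded monthly. All inputs are illustrative assumptions.

def future_value(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of a fixed monthly contribution with monthly compounding."""
    r = annual_rate / 12      # monthly rate
    n = years * 12            # number of contributions
    return monthly * ((1 + r) ** n - 1) / r

if __name__ == "__main__":
    # e.g. $500/month at a 5% nominal annual return over 20 years
    print(f"${future_value(500, 0.05, 20):,.0f}")  # roughly $205,000
```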
"… however it may deserve respect for its usefulness and antiquity, [predicting the end of the world] has not been found agreeable to experience."—Edward Gibbon, ‘Decline and Fall of the Roman Empire’
Added: I do think Bohr was wrong and Everett (MWI) was right.
So think of it this way—you can only experience worlds in which you survive. Even if Yudkowsky is correct and in 99% of all worlds AGI has killed us all by 20 years from now, you will experience only the 1% of worlds in which that doesn’t happen.
And in many of those worlds, you’ll be wanting something to live on in your retirement.
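To spell out the conditioning behind that (a rough sketch using the 99%/1% figures above): once you condition on being around to experience the outcome at all, the doom branches drop out of the expectation, and the decision weight falls entirely on the surviving worlds.

$$
\mathbb{E}\big[U \mid \text{you experience the outcome}\big]
= \sum_{w} P(w \mid \text{survive})\, U(w)
= \sum_{w \,\in\, \text{surviving worlds}} \frac{P(w)}{0.01}\, U(w)
$$

Every extinct branch gets weight zero, so whether saving pays off is judged entirely within that remaining 1%.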
> Every generation seems to think it’s special and will encounter new circumstances that turn old advice on its head.
I have a question: how did you come to know this, especially as a repeatable pattern? I’d really like to know, because this sounds like one of the more interesting arguments against AI being impactful at all.
I don’t think he’s trying to say AI won’t be impactful—obviously it will—just that trying to predict it isn’t an activity one ought to apply any surety to. Soothsaying isn’t a thing. There’s ALWAYS been an existential threat right around the corner: gods, devils, dynamite, machine guns, nukes, AGW (that one, though, might still end up being the one that does us in if the political winds don’t change soon), and now AI. We think that AI might go foom, but there might be some limit we just won’t know about till we hit it, and we have various estimations, all conflicting, on how bad, or good, it might be for us. Attempting to fix those odds in firm conviction, however, is not science, it’s belief.
I’ve thought about this additional axiom, and it seems to bend reality too much, leading to possible [unpleasant outcomes](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes): for example, one where a person survives but is tortured indefinitely.
Also, it’s unclear how this axiom could preserve the ratios of probabilities between quantum states.