In general, whenever Reason makes you feel paralyzed, remember that Reason has many things to say. Thousands of people in history have been convinced by trains of thought of the form ‘X is unavoidable, everything is about X, you are screwed’. Many pairs of those trains of thought contradict each other. This pattern is all over the history of philosophy, religion, & politics.
Future hazards deserve more research funding, yes, but remember that the future is not certain.
“Thousands of people in history have been convinced by trains of thought of the form ‘X is unavoidable, everything is about X, you are screwed’.”
Care to give a few examples? Because I’d venture to say that, except for religious and other superstitious beliefs, and for crazy lunatics like fascists and communists, they were mostly right.
“the future is not certain”
Depends on what you mean by that. If you mean it’s not extremely likely (90%-plus) that we will develop some truly dangerous form of AI this century, one that will pose immense control challenges, then I’d say you’re deeply misguided, given the smoke signals that have been coming up since 2017.
I mean, it’s like worrying about nuclear war. Is it certain that we’ll ever get a big nuclear war? No. Is it extremely likely if things stay the same and enough time passes (10, 50, 100, 200, 300 years)? Hell yes. Just look at the current situation...
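To make the “enough time passes” point concrete, here is a minimal sketch of how a small, constant per-year risk compounds into near-certainty over long horizons. The 1% annual probability is a purely illustrative assumption, not a figure from this thread.

```python
# Illustrative only: probability of at least one event over a horizon,
# assuming an independent, constant per-year probability p.
def cumulative_risk(p_per_year: float, years: int) -> float:
    return 1 - (1 - p_per_year) ** years

# With a hypothetical 1% annual chance of a big nuclear war:
for horizon in (10, 50, 100, 200, 300):
    print(f"{horizon} years: {cumulative_risk(0.01, horizon):.0%}")
# 10 years: 10%, 50 years: 39%, 100 years: 63%, 200 years: 87%, 300 years: 95%
```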
That said, I don’t care much about nuclear war, because it is also extremely likely to come with a warning: you could run to the countryside, and even if things go bad, like starving to death or dying of radiation poisoning, you can always put an end to your own suffering. With AI you might not be so lucky. You might end up in an unbreakable dictatorship à la “With Folded Hands”.
How can you not feel paralyzed when you see chaos pointed at your head and at the heads of other humans, coming in as little as 5 or 10 years, and you see absolutely no solution, much less anything you can do about it yourself?
We can’t even build a provably safe plane, so how are we going to build a provably safe TAI with the work of a few hundred people over 5-30 years, while most of the world remains completely ignorant of the problem?
The world would have to wake up, and I don’t think it will.
Really, the only ways we will not build dangerous and uncontrollable AI are if we destroy ourselves by some other means first (or even just with narrow AI), or if the miracle happens that someone cracks advanced nanotechnology/magic through narrow AI and becomes a benevolent and omnipotent world dictator. There’s no other way we won’t end up doing it.