I’ve actually had similar thoughts myself about why developing AI sooner wouldn’t be that good. In most places, technology isn’t the barrier to human flourishing; governance is.
Prevention of the creation of other potentially dangerous superintelligences
For x-risk prevention, we would have to assume that the risk from quickly creating AI is lower than all other x-risks combined, and that comparison is highly uncertain on both sides. For example, I think biorisks are underestimated in the long run.
But to address many x-risks we probably don’t need full-blown superintelligence, just a good global control system, something that combines ubiquitous surveillance with image recognition.
“But to address many x-risks we probably don’t need full-blown superintelligence, just a good global control system, something that combines ubiquitous surveillance with image recognition”—unlikely to happen in the foreseeable future.
Not everywhere, but China is surprisingly close to it. However, the most difficult question is how to put such a system in every corner of the Earth without starting a world war. Oops, I forgot about Facebook.
Where governance is the barrier to human flourishing, doesn’t that mean that using AI to improve governance is useful? A transhuman mind might well be able to figure out not only better policies but how to get those policies enacted (persuasion, force, mind control, incentives, something else we haven’t thought of yet). After all, if we’re worried about a potentially unfriendly mind with the power to defeat the human race, the flip side is that if it’s friendly, it can defeat harmful parts of the human race, like poorly-run governments.
Most of the work AI could do for life extension could be done by narrow AIs, such as the data-crunching needed to model genetic networks or to control medical nanobots. The quick ascent of a self-improving (and benevolent) AI may be the last chance of survival for an old person who will not live long enough to see these narrow-AI services, but then again, such a person could make a safer bet on cryonics.
Safer for the universe maybe, but perhaps not for the old person themselves. Cryonics is highly speculative: it *should* work, given that if your information is preserved it should be possible to reconstruct you, and cooling a system enough should reduce thermal noise and reactivity enough to preserve that information… but we just don’t know. From the perspective of someone near death, counting on cryonics might be as risky as a quick AI, or more so.