Solving other x-risks will not save us from uFAI. Solving FAI will save us from other x-risks.
Good point. I will have to think about it further. Just a few thoughts:
Safe nanotechnology (unsafe nanotechnology being an existential risk) would also save us from various existential risks, arguably fewer than a fully-fledged friendly AI would. But assume that the disutility of both failure scenarios is about the same.
An evil AI (as opposed to an unfriendly AI) is as unlikely as a friendly AI. Both risks would probably simply wipe us out without causing extra disutility. If you take into account the extermination of alien life, you might get a higher amount of disutility. But I believe that can be outweighed by the negative effects of unsafe nanotechnology that doesn’t manage to wipe out humanity but rather causes various dystopian scenarios. Such scenarios are more likely than evil AI because nanotechnology is a tool used by humans, who can be deliberately unfriendly.
So let’s say that solving friendly AI has 10x the utility of ensuring safe nanotechnology because it can save us from more existential risks than the use of advanced nanotechnology could.
But one order of magnitude more utility could easily be outweighed by an underestimation of the complexity of friendly AI, which is why I asked whether the difficulty of solving friendly AI might outweigh its utility and therefore justify disregarding friendly AI for now. If that is the case, it might be better to focus on another existential risk, one that might wipe us out in all possible worlds where unfriendly AI either comes later or doesn’t pose a risk at all.
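To make that tradeoff concrete, here is a minimal toy calculation. Every probability in it is a hypothetical placeholder chosen purely for illustration; only the 10x utility ratio comes from the assumption above.

```python
# Toy expected-value comparison. All probabilities are hypothetical
# placeholders; only the 10x utility ratio comes from the assumption above.
U_SAFE_NANO = 1.0      # utility of ensuring safe nanotechnology (baseline)
U_FRIENDLY_AI = 10.0   # assumed 10x utility of solving friendly AI

# Hypothetical chances of actually solving each problem before it is too late.
P_SOLVE_NANO_IN_TIME = 0.20
P_SOLVE_FAI_IN_TIME = 0.01   # friendly AI assumed much harder to solve in time

ev_nano = P_SOLVE_NANO_IN_TIME * U_SAFE_NANO   # 0.20
ev_fai = P_SOLVE_FAI_IN_TIME * U_FRIENDLY_AI   # 0.10

print(f"EV(work on safe nanotech): {ev_nano:.2f}")
print(f"EV(work on friendly AI):   {ev_fai:.2f}")
# If friendly AI is more than 10x less likely to be solved in time, the
# order-of-magnitude utility advantage no longer decides the question.
```

The point is only that the comparison is driven as much by the tractability estimates as by the utility ratio.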
An evil AI (as opposed to an unfriendly AI) is as unlikely as a friendly AI.
Surely only if you completely ignore effects from sociology and psychology!
But one order of magnitude more utility could easily be outweighed by an underestimation of the complexity of friendly AI, which is why I asked whether the difficulty of solving friendly AI might outweigh its utility and therefore justify disregarding friendly AI for now.
Machine intelligence may be distant or close. Nobody knows for sure, although there are some estimates. “Close” seems to carry non-negligible probability mass for many observers, so we would be justified in paying a lot more attention than most people currently are.
“AI vs nanotechnology” is rather a false dichotomy. Convergence means that machine intelligence and nanotechnology will spiral in together; synergy means that each facilitates the production of the other.
If you were to develop safe nanotechnology before unfriendly AI, then you should be able to suppress the further development of AGI. With advanced nanotechnology you could spy on and sabotage any research that could lead to existential-risk scenarios.
You could also use nanotechnology to advance whole brain emulation (WBE) and use that to develop friendly AI.
Convergence means that machine intelligence and nanotechnology will spiral in together; synergy means that each facilitates the production of the other.
Even in the possible worlds where uncontrollable recursive self-improvement is possible (which I doubt anyone would claim is a certainty, so there are possible outcomes where no amount of nanotechnology results in unfriendly AI), one of the two will come first. If nanotechnology comes first, then we won’t have to worry about unfriendly AI anymore, because we will all be dead.
The question is not only about the utility associated with various existential risks and their probability, but also about the probability of mitigating each risk. It doesn’t matter whether friendly AI could do more good than nanotechnology if nanotechnology comes first or if friendly AI cannot be solved in time.
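Extending the toy sketch from earlier (again with purely hypothetical numbers), the ordering of the two technologies can be folded in as an extra factor: on this argument, work on friendly AI only pays off in the worlds where unsafe nanotechnology does not arrive first.

```python
# Extends the toy model above; every probability is still hypothetical.
U_SAFE_NANO = 1.0
U_FRIENDLY_AI = 10.0

P_NANO_FIRST = 0.5            # chance dangerous nanotech arrives before AGI
P_SOLVE_NANO_IN_TIME = 0.20
P_SOLVE_FAI_IN_TIME = 0.05

# Per the argument above: if unsafe nanotechnology comes first, friendly AI
# never gets the chance to matter, so its payoff is discounted accordingly.
ev_nano = P_SOLVE_NANO_IN_TIME * U_SAFE_NANO                       # 0.20
ev_fai = (1 - P_NANO_FIRST) * P_SOLVE_FAI_IN_TIME * U_FRIENDLY_AI  # 0.25

print(ev_nano, ev_fai)  # the answer flips easily with small changes to inputs
```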
Probably slightly. Most likely we will get machine intelligence before nanotech and good robots. To build an e-brain you just need a nanotech NAND gate. It is easier to build a brain than an ecosystem. Some lament the difficulties of software engineering, but their concerns seem rather overrated. Yes, software lags behind hardware, but not by a huge amount.
If nanotechnology comes first, then we won’t have to worry about unfriendly AI anymore, because we will all be dead.
That seems rather pessimistic to me.
Note that nanotechnology is just an example.
The “convergence” I mentioned also includes robots and biotechnology. That should cover any other examples you might have been thinking of.