An evil AI (as opposed to an unfriendly AI) is as unlikely as a friendly AI.
Surely only if you completely ignore effects from sociology and psychology!
But one order of magnitude more utility could easily be outweighed by an underestimate of the complexity of friendly AI. That is why I asked whether the difficulty of solving friendly AI might outweigh its utility and therefore justify disregarding friendly AI for now.
Machine intelligence may be distant or close. Nobody knows for sure, although there are some estimates. “Close” seems to carry non-negligible probability mass for many observers, so humans would be justified in paying much more attention than many of them currently do.
“AI vs nanotechnology” is rather a false dichotomy. Convergence means that machine intelligence and nanotechnology will spiral in together. Synergy means that each facilitates the production of the other.
If you were to develop safe nanotechnology before unfriendly AI then you should be able to suppress the further development of AGI. With advanced nanotechnology you could spy on and sabotage any research that could lead to existential risk scenarios.
You could also use nanotechnology to advance whole brain emulation (WBE) and use that to develop friendly AI.
> Convergence means that machine intelligence and nanotechnology will spiral in together. Synergy means that each facilitates the production of the other.
Even in the possible worlds where uncontrollable recursive self-improvement is possible (which I doubt anyone would claim is a certainty, so there are possible outcomes in which no amount of nanotechnology results in unfriendly AI), one of the two will still come first. If nanotechnology is going to come first then we won’t have to worry about unfriendly AI anymore, because we will all be dead.
The question is not only about the utility associated with the various existential risks and their probability, but also about the probability of mitigating each risk. It doesn’t matter if friendly AI can do more good than nanotechnology if nanotechnology comes first or if friendly AI is unsolvable in time.
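One rough way to formalize that point (my own sketch, not something stated in the thread): the expected value of working on a given risk is approximately the product of three factors,

$$ \mathbb{E}[\text{value}] \;\approx\; P(\text{risk occurs}) \cdot U(\text{averting it}) \cdot P(\text{mitigation succeeds in time} \mid \text{effort}), $$

so even if the utility term for friendly AI is an order of magnitude larger, a near-zero tractability term (friendly AI unsolvable in time, or nanotechnology arriving first) can make its overall product smaller than that of a less valuable but more tractable intervention.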
Probably slightly. Most likely we will get machine intelligence before nanotech and good robots. To build an e-brain you just need a nanotech NAND gate; it is easier to build a brain than an ecosystem. Some lament the difficulties of software engineering, but their concerns seem rather overrated. Yes, software lags behind hardware, but not by a huge amount.
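To unpack the NAND-gate claim: NAND is functionally complete, meaning every Boolean function, and therefore any combinational circuit, can be composed from NAND gates alone. Here is a minimal sketch in Python (the gate constructions are the standard textbook ones; nothing in it is specific to nanotechnology):

```python
def nand(a: bool, b: bool) -> bool:
    """The single primitive gate; everything below is built from it."""
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor_(a: bool, b: bool) -> bool:
    # The classic four-NAND construction of XOR.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Sanity check: the derived gates reproduce their truth tables.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
```

Since functional completeness covers any digital logic, the claim reduces to having one reliable nanoscale gate plus the (non-trivial) wiring and software.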
> If nanotechnology is going to come first then we won’t have to worry about unfriendly AI anymore, because we will all be dead.
That seems rather pessimistic to me.
Note that nanotechnology is just an example.
The “convergence” I mentioned also includes robots and biotechnology. That should cover any other examples you might have been thinking of.