I don’t think that’s a foregone conclusion. After all, there seem to be many proposals for getting around this problem of individuals competing with each other. For example, there’s Eliezer’s idea of using humanity’s coherent extrapolated volition to guide the AI. I also don’t think it’s to anyone’s advantage to have a hostile AI, so no one is likely to bring about explicitly hostile AI on purpose, and anyone sufficiently intelligent to program a working AI will probably recognize the dangers it poses.
Yes, humans will fight among each other, and there is a temptation for seed AI programmers to abuse the resulting AI to destroy their rivals. But I don’t agree that AIs will always be hostile to their programmers’ enemies. Under some of the proposals researchers have made, it doesn’t seem like individuals could abuse the AI to compete with other humans at all. A large potential for abuse doesn’t mean there is no potential for a good outcome.