As long as other humans exist in competition with other humans, there is no way to keep AI as safe AI.
Agreed, but in need of qualifiers. There might be a way. I’d say “probably no way.” As in, “no guaranteed-reliable method, but possibly a workable one.”
As long as competitive humans exist, boxes and rules are futile.
I agree fairly strongly with this statement.
The only way to stop hostile AI is to have no AI. Otherwise, expect hostile AI.
This can be interpreted in two ways. I agree with the first sentence if it is reworded as “The only way to stop hostile AI, in the absence of nearly-as-intelligent but separate-minded competitors, is to have no AI.” Otherwise, I think markets indicate fairly well how hostile an AI is likely to be, thanks to governments and the corporate charter. Governments are already-in-existence malevolent AGI. However, they are also very incompetent AGI in comparison to the theoretical maximum of malevolent competence without empathic hesitation, internal disagreement, and confusion. (I think we can expect more “unity of purpose” from AGI than we can from government. Interestingly, I think this makes sociopathic or “long-term hostile” AI less likely.)
“Expect hostile AI” could either mean “I think hostile AI is likely in this case” or “I think in this case, we should expect hostile AI because one should always expect the worst—as a philosophical matter.”
There really isn’t a logical way around this reality.
Nature often deals in “less likely” and “more likely,” as well as intermediate outcomes. Hopefully you’ve seen Stephen Omohundro’s webinars on hostile universal motivators, basic AI drives, and autonomous systems, as well as Peter Voss’s excellent ideas on the subject. I think that evolutionary approaches will trend toward neutral benevolence, and that even after extremely shocking intermediary experiences they will trend toward benevolence, especially given enough interaction with benevolent entities. I believe that intelligence trends toward increased interaction with its environment.
Without competitive humans, you could box the AI, give it ONLY preventative primary goals (primarily: 1. don’t lie; 2. always ask before creating a new goal), and feed it limited-time secondary goals that expire upon inevitable completion. There can never be a strong AI that has continuous goals that aren’t solely designed to keep the AI safe.
I think this is just as likely to create malevolent AGI (with limited “G”), possibly more likely. After all, if humans are in competition with each other in anything that operates like the current sociopath-driven “mixed economy,” sociopaths will be controlling the boxed AIs. Our only hope is that other sociopaths aren’t in their same “professional sociopath” network, and that’s a slim hope, indeed.
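To make the quoted goal structure concrete, here is a minimal toy sketch in Python. Every name in it (BoxedGoalManager, SecondaryGoal, request_new_goal, active_goals) is invented for illustration; it only shows the shape of the scheme being debated, standing preventative primary goals plus operator-approved secondary goals that lapse on their own, and is not offered as an actual containment mechanism.

```python
# Toy sketch (purely hypothetical): a goal manager that keeps two standing
# preventative constraints and only accepts secondary goals that expire.
from dataclasses import dataclass
from time import time


@dataclass
class SecondaryGoal:
    description: str
    expires_at: float  # absolute timestamp; the goal is dropped after this


class BoxedGoalManager:
    # Standing preventative primary goals; never removed or overridden.
    PRIMARY_CONSTRAINTS = ("do not lie", "ask before creating a new goal")

    def __init__(self):
        self.secondary_goals: list[SecondaryGoal] = []

    def request_new_goal(self, description: str, ttl_seconds: float,
                         operator_approval: bool) -> bool:
        """Add a time-limited secondary goal, but only with explicit
        operator approval (mirroring 'always ask before creating a new goal')."""
        if not operator_approval:
            return False
        self.secondary_goals.append(
            SecondaryGoal(description, expires_at=time() + ttl_seconds))
        return True

    def active_goals(self) -> list[str]:
        """Drop expired secondary goals; the primary constraints always remain."""
        now = time()
        self.secondary_goals = [g for g in self.secondary_goals
                                if g.expires_at > now]
        return list(self.PRIMARY_CONSTRAINTS) + [
            g.description for g in self.secondary_goals]


# Usage: a goal is admitted only with approval, and it lapses on its own.
manager = BoxedGoalManager()
manager.request_new_goal("summarize today's lab results", ttl_seconds=3600,
                         operator_approval=True)
print(manager.active_goals())
```

Note that the sketch also makes my objection visible: whoever supplies operator_approval decides what the boxed AI pursues, so the scheme is only as benevolent as its operators.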