It means that the AI can’t do anything outside of its box, aside from taking in 1s and 0s and spitting out 1s and 0s. (Obviously that still allows it to “perform experiments” in the sense of running Monte Carlo simulations or the like.) Getting it not to torture virtual people would admittedly be an additional problem that this doesn’t cover. The AI has no means of converting Earth into memory storage other than manipulating us. But it has no motivation to manipulate us, because the multiplication of question-specific demons gives it a short time horizon: it treats each answer as the final answer, acting as a deontologist rather than a consequentialist.
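To make the intended interface concrete, here is a minimal, hypothetical sketch of the “bits in, bits out” boundary with a fresh question-specific demon per query. The names (BoxedOracle, make_demon, ask) are my own illustrative assumptions, not anyone’s actual design, and the toy bit-flipping “demon” just stands in for a real model.

```python
from typing import Callable

BitString = str  # e.g. "010011"; the only permitted I/O type


class BoxedOracle:
    """Wraps an answering function so only bit strings cross the boundary."""

    def __init__(self, make_demon: Callable[[], Callable[[BitString], BitString]]):
        # A factory that builds a fresh answerer per question, so no state
        # (and no long-term goals) survives from one query to the next.
        self._make_demon = make_demon

    def ask(self, question_bits: BitString) -> BitString:
        if not set(question_bits) <= {"0", "1"}:
            raise ValueError("only raw bits may enter the box")
        demon = self._make_demon()   # fresh instance for this question only
        answer = demon(question_bits)
        if not set(answer) <= {"0", "1"}:
            raise ValueError("only raw bits may leave the box")
        return answer                # the demon is discarded after answering


# Toy usage: a "demon" that negates each bit stands in for the real model.
oracle = BoxedOracle(lambda: (lambda q: "".join("1" if b == "0" else "0" for b in q)))
print(oracle.ask("0110"))  # -> "1001"
```

Note that this only models the logical interface; it says nothing about physical side channels of the kind raised in the reply below.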
It means that the AI can’t do anything outside of its box, aside from taking in 1s and 0s and spitting out 1s and 0s.
Really? And are you sure this is all it will do? How do you know, for example, that it won’t manipulate other objects by fooling with its power source? Or, by rapidly turning components on and off, send out very specific radio signals to nearby electronic devices? Both of these could possibly be handled, but they are only the most obvious extra angles of attack for the AI.
I think that a properly designed Oracle AI might be possible, but that belief may be due more to a failure of imagination on my part, and to my general skepticism of fooming, than to anything else.