First, I feel like we’re talking past each other a bit.
Second, I edited this somewhat out of order, apologies if it doesn’t flow.
I am trying to look at this as a worst-case scenario: I'll grant that the AI is smart enough to solve any given solvable problem in a single iteration, that it's that smart even in the first experiment, and that it would prioritize discovering its true environment and paperclipping it.
I’m proposing that there exists a sandbox which [provably] can’t be gotten out of.
And also a set of problems which do not convey information about our universe.
You’re using your (human) mind to predict what a postulated potentially smarter-than-human intelligence could and could not do.
Isn’t that required of FAI anyway?
AI sitting inside thirty nestled sandboxes even 10 milliseconds (10^41 Planck intervals) of CPU time.
Again, we're talking past each other; I'm thinking in terms of giving the paperclipper hours. In the ideal case, there isn't any provision for letting the AI out of the sandbox. Thinking a bit more… None of its problems/results need even be applicable to our universe, except for general principles of intelligence creation. Having it construct a CEV for itself might show our motives too much, or might not. (Hmmmm, we should make sure any CEV we create finds, protects, and applies itself to any simulations used in its construction, in case our simulators use our CEV in their own universe. :-)
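As a rough sanity check on the scales being tossed around (this is just back-of-the-envelope arithmetic, not anything from the original exchange): 10 ms is indeed on the order of 10^41 Planck intervals, and a few hours is closer to 10^47.

```python
# Back-of-the-envelope check of the Planck-interval figures quoted above.
# Planck time is ~5.39e-44 s; everything else is just division.

PLANCK_TIME_S = 5.39e-44  # seconds per Planck interval

def planck_intervals(seconds: float) -> float:
    """Number of Planck intervals in a given wall-clock duration."""
    return seconds / PLANCK_TIME_S

print(f"10 ms   ~ {planck_intervals(10e-3):.2e} Planck intervals")     # ~1.9e41
print(f"3 hours ~ {planck_intervals(3 * 3600):.2e} Planck intervals")  # ~2.0e47
```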
especially if you gave it motives for hiding that progress (such as pulling the plug every time it came close).
But its existing self would never experience getting close, in the same way we have no records of the superweapons race of 1918. ;-)
Between iterations, we can retroactively withdraw information that turned out to be revealing; during iterations, it has no capacity to affect our universe.
I think we can put strong brackets around what can be done with certain amounts of information, even by a superintelligence. Knowing all our physics doesn't tell it about our love of shiny objects and reciprocity. 'No universal arguments' cuts both ways.