Once AI exists in public, it isn’t containable.
You mean like the knowledge of how it was made is public and anyone can do it? Definitely not. But if you keep it all proprietary it might be possible to contain.
But if we get to AI first, and we figure out how to box it and get it to do useful work, then we can use it to help solve FAI. Maybe.
I suppose what we should do is figure out how to make friendly AI, figure out how to create boxed AI, and then build an AI that’s probably friendly and probably boxed, and it’s more likely that everything won’t go horribly wrong.
You would need some assurance that the AI would not try to manipulate the output.
Manipulate it to do what? The idea behind mine is that the AI only cares about answering the questions you pose it given that it has no inputs and everything operates to spec. I suppose it might try to do things to guarantee that it operates to spec, but it’s supposed to be assuming that.