Question: If we do manage to build a strong AI, why not just let it figure this problem out on its own when trying to construct a successor? Almost definitionally, it will do a better job of it than we will.
The biggest problem with deferring the Lobstacle to the AI is that you could have a roughly human-comparable AI that solves the Lobstacle in a hacky way which changes the value system passed to its successor; the successor is then intelligent enough to solve the Lobstacle perfectly and to preserve that new value system. So now you’ve got a superintelligent AI locked in on the wrong target.
If you want to take that as a definition, then we can’t build a strong AI without solving the Lobstacle!
Yes, obviously. We solve the Lobstacle by not ourselves running on formal systems, and by sometimes accepting axioms that we were not born with (things like PA). Restricting the AI to actions whose good consequences it can prove within one specific formal system would make it dumber than we are.
I think, rather, that humans solve decision problems that involve predicting other humans’ deductive processes by means of some evolved heuristics for social reasoning that we don’t yet fully understand at a formal level. “Not running on formal systems” isn’t a helpful answer for how to make good decisions.
I think that the way humans predict other humans is the wrong place to look here; consider instead how humans would reason about the behavior of an AI that they build. I’m not proposing simply “don’t use formal systems”, or even “don’t limit yourself exclusively to a single formal system”. I am actually alluding to a far more specific procedure, sketched below:
1. Come up with a small set of basic assumptions (axioms).
2. Convince yourself that these assumptions accurately describe the system at hand.
3. Try to prove that the axioms imply the desired behavior.
4. If you cannot do this, return to the first step and see whether additional assumptions are necessary.
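Here is a minimal, hypothetical sketch of that loop in Python. Everything named here is a stand-in: `plausible`, `try_prove`, and `suggest_assumption` represent whatever believability checks, proof search, and assumption generation the agent actually has. The point is only the shape of the procedure, not a concrete proposal for how to build those pieces.

```python
# A minimal, hypothetical sketch of the four-step procedure above.
# "plausible", "try_prove", and "suggest_assumption" are stand-ins for
# whatever machinery the agent actually uses for those steps.

from typing import Callable, Optional, Set


def verify_behavior(
    initial_axioms: Set[str],
    desired_behavior: str,
    plausible: Callable[[str], bool],                              # step 2
    try_prove: Callable[[Set[str], str], bool],                    # step 3
    suggest_assumption: Callable[[Set[str], str], Optional[str]],  # step 4
    max_rounds: int = 10,
) -> Optional[Set[str]]:
    """Return a set of believed axioms that provably imply the desired
    behavior, or None if the search gives up."""
    axioms = set(initial_axioms)
    for _ in range(max_rounds):
        # Step 2: keep going only if we actually believe every assumption
        # accurately describes the system at hand.
        if not all(plausible(a) for a in axioms):
            return None
        # Step 3: attempt to prove that the axioms imply the desired behavior.
        if try_prove(axioms, desired_behavior):
            return axioms
        # Step 4: the proof failed, so return to step 1 and look for an
        # additional assumption that might be needed.
        extra = suggest_assumption(axioms, desired_behavior)
        if extra is None:
            return None
        axioms.add(extra)
    return None
```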
Now it turns out that for almost any mathematical problem we are actually interested in, ZFC is a sufficient set of assumptions, so the first few steps here are somewhat invisible, but they are still there. Somebody needed to come up with these axioms in the first place, and each individual who wants to use them should convince themselves that the axioms are reasonable before relying on them.
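As a small, loose illustration of how invisible those first steps usually are, here is a toy Lean 4 proof (Lean’s foundation is a dependent type theory rather than ZFC, so take the analogy loosely): we carry out only step 3, while the background axioms were chosen and justified once, long ago, and never appear in the source.

```lean
-- Toy illustration: an ordinary statement proved without ever stating
-- the background axioms. Steps 1 and 2 were done once by whoever set up
-- the foundations; here we only perform step 3.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```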
A good AI should already do this to some degree. It needs to come up with models of any system it is interacting with before determining its course of action. It is obvious that it might need to update the assumptions it uses to model physical laws; why shouldn’t it do the same thing for logical ones?