There seems to be a complexity limit to what humans can build. A full AGI is likely to be somewhere beyond that limit.
The usual solution to that problem (see EY's fooming scenario) is to make the process recursive: let a mediocre AI improve itself, and as it gets better, it can improve itself more rapidly. Exponential growth can go fast and far.
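To make the compounding point concrete, here is a minimal toy sketch of that dynamic. It assumes, purely for illustration, that each generation's improvement is proportional to its current capability; the function name, rate, and numbers are all hypothetical, not anyone's actual model of takeoff:

```python
def simulate_recursive_improvement(initial_capability: float,
                                   improvement_rate: float,
                                   generations: int) -> list[float]:
    """Toy model: each generation improves itself in proportion to what
    it already is, c_{n+1} = c_n * (1 + r), i.e. exponential growth."""
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        capability *= 1 + improvement_rate
        history.append(capability)
    return history

# A mediocre start (1.0) with a modest 10% gain per generation still
# passes 1000x the starting point within roughly 75 generations.
trajectory = simulate_recursive_improvement(1.0, 0.10, 80)
print(f"after 80 generations: {trajectory[-1]:.0f}x the starting capability")
```

Even with an unimpressive per-step gain, the curve runs away quickly, which is the whole point of the recursive trick.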
This, of course, gives rise to another problem: you have no idea what the end product is going to look like. If you're looking at the gazillionth iteration, your compiler flags were probably lost around the thousandth iteration, and your chained monitor system had mutated into a cute puppy around the millionth...
Probabilistic safety systems are indeed more tractable, but that’s not the question. The question is whether they are good enough.