I think one way to sum up part of what Eliezer is talking about regarding AGI going FOOM is as follows:
If you think of intelligence as optimization, and we assume you can build an AGI with optimization power at or near human level (anything below would be too weak to affect anything; a human could do a better job), then the following argument shows that AGI goes FOOM.
We already have proof that human-level optimization power can produce near-human-level artificial intelligence (that is the premise), so simply point the AGI at an interesting optimization problem, namely itself, and recurse. As long as the number of additional improvements enabled per improvement made to the AGI is greater than one, FOOM will occur.
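A toy model makes that threshold concrete. The sketch below is my own illustrative assumption, not anything from the argument itself: treat "improvements per improvement" as a constant multiplier `k`, so the running total of improvements is a geometric series that explodes when `k > 1` and levels off when `k < 1`.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each improvement to the AGI enables a constant k further
# improvements, so the totals form a geometric series.

def improvements_after(k: float, rounds: int) -> float:
    """Total improvements accumulated after `rounds` rounds of recursion,
    starting from a single seed improvement."""
    total = 0.0
    pending = 1.0  # the first improvement the AGI makes to itself
    for _ in range(rounds):
        total += pending
        pending *= k  # each improvement enables k further improvements
    return total

# k > 1: the total grows without bound (FOOM).
print(improvements_after(1.5, 20))
# k < 1: the total converges (here toward 2) and the process fizzles out.
print(improvements_after(0.5, 20))
```

Of course, a constant `k` is a strong simplification; the point is only that the qualitative outcome flips at `k = 1`.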
Why wouldn’t you point your AGI (using whatever techniques you have available) at itself? I can’t think of any reasonable objection that wouldn’t also preclude building the AGI in the first place.
Of course, this means we need human-level artificial general intelligence; the AI has to be general to have anywhere near human-level optimization power. I won’t bother going over what happens when you have AI that is better than humans at some tasks but not all of them; simply look around you right now.
Nor should the process get stuck at human level, since human-level intelligence is nowhere near as good as intelligence can get.