That’s a good argument. What you’re basically saying is that the design of the human brain occupies a sort of hill in design space that is very hard to climb out of. Now, if the utility function is “Survive as a hunter-gatherer in sub-Saharan Africa,” that is a very reasonable (heck, a very likely) possibility. But evolution hasn’t optimized us for things like designing algorithms. If you change the utility function to “Design a superintelligence,” the landscape changes: hills start to look like valleys, and so on. What I’m saying is that there’s no reason to think we’re even at a local optimum for “design a superintelligence.”
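To make the landscape point concrete, here is a toy sketch (the functions, parameter, and numbers are invented purely for illustration, not a model of anything real): a design that greedy local search cannot improve under one objective can still be nowhere near an optimum under a different objective.

```python
import numpy as np

# Toy 1-D "design space": x is a hypothetical brain parameter.
# Both fitness functions below are made up for illustration only.
def hunter_gatherer_fitness(x):
    return np.exp(-((x - 0.3) ** 2) / 0.02)   # peaks at x = 0.3

def superintelligence_design_ability(x):
    return np.exp(-((x - 0.9) ** 2) / 0.02)   # peaks at x = 0.9

def hill_climb(f, x, step=0.01, iters=1000):
    """Greedy local search: move to a neighboring x only if it improves f."""
    for _ in range(iters):
        for candidate in (x + step, x - step):
            if f(candidate) > f(x):
                x = candidate
                break
        else:
            return x  # no improving neighbor: x is a local optimum of f
    return x

# Climbing under the survival objective stops at x ~ 0.3 ...
x_evolved = hill_climb(hunter_gatherer_fitness, x=0.0)
print(f"local optimum under survival fitness: x = {x_evolved:.2f}")

# ... but that same point is not a local optimum of the new objective:
# restarting the climb under it keeps moving, toward x ~ 0.9.
x_new = hill_climb(superintelligence_design_ability, x_evolved)
print(f"climbing again under the new objective: x = {x_new:.2f}")
```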
Sure. But let’s say we adjust ourselves so we reach that local maximum (say, hypothetically, we use genetic engineering to push ourselves to the point where the average human is 10% smarter than Albert Einstein, and it turns out that’s about as smart as you can get with our brain architecture without developing serious problems). There’s still no guarantee that even that would be good enough to develop a real GAI; we can’t really say how difficult that is until we do it.