Sure. But let’s say we adjust ourselves so we reach that local maximum (say, hypothetically, we use genetic engineering to push ourselves to the point where the average human is 10% smarter than Albert Einstein, and it turns out that’s about as smart as you can get with our brain architecture without developing serious problems). There’s still no guarantee that even that would be enough to develop a real GAI; we can’t really say how difficult that is until we do it.