For example, if I want to find the maxima of a function, it doesn’t matter whether I use conjugate gradient or Newton’s method or interpolation methods or whatever; they will tend to find the same maxima, assuming they are looking at the same function.
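A minimal sketch of that convergence claim, on a toy one-dimensional function (assumptions: plain gradient ascent with a fixed step and textbook Newton steps, standing in for the fancier methods mentioned):

```python
# Two different optimizers, same function, same maximum.
# Maximize f(x) = -(x - 3)**2, whose unique maximum is at x = 3.

def grad(x):
    """f'(x) = -2 * (x - 3)"""
    return -2.0 * (x - 3.0)

def hess(x):
    """f''(x) = -2 (constant)"""
    return -2.0

# Method 1: gradient ascent, small fixed steps along the gradient.
x_ga = 0.0
for _ in range(1000):
    x_ga += 0.1 * grad(x_ga)

# Method 2: Newton's method, x <- x - f'(x) / f''(x).
x_newton = 0.0
for _ in range(10):
    x_newton -= grad(x_newton) / hess(x_newton)

# Both land at (essentially) the same point, x = 3.
print(x_ga, x_newton)
```

Different update rules, different step counts, same answer, because they are both climbing the same landscape.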
Trying to channel my internal Eliezer:
It is painfully obvious that we are not the pinnacle of efficient intelligence. If evolution were to run more optimisation on us, we would become more efficient… and lose the important parts that matter to us but are of no consequence to evolution. So yes, we end up being the same alien thing as AI.
The thing that makes us us is a bug. So you have to hope gradient descent makes exactly the same mistake evolution did, but there are a lot of possible mistakes.
To push back on this, I’m not sure that humanness is a “bug,” as you say. While we likely aren’t a pinnacle of intelligence in a fundamental sense, I do think that as humans have continued to advance, first through natural selection and now through… whatever it is we do now with culture and education and science, the parts of humanness that we care about have tended to increase in us, not go away. So perhaps an AI optimized far beyond us, but starting in the same general neighborhood of the function space, would become not just superintelligent but superhuman, in the sense that it would embody the things we care about better than we do!