The following exchange is also relevant:
Raiden:
Robin, or anyone who agrees with Robin:
What evidence can you imagine would convince you that AGI would go FOOM?
jprwg:
While I find Robin’s model more convincing than Eliezer’s, I’m still pretty uncertain.
That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:
1. A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.
2. Long-run data showing AI systems gradually increasing in capability without any increase in complexity. The AGZ example here might be part of an overall trend in that direction, but as a single data point it really doesn’t say much.
RobinHanson:
This seems to me a reasonable statement of the kind of evidence that would be most relevant.