(1) I agree with the grandparent.
(2) Yes, of course. But I feel that there’s enough evidence to assign very low probability to AGI not being inventable if humanity survives, but not enough evidence to assign very low probability to it being very hard and taking very long; eyeballing, it might well take thousands of years of no AGI before I’d even consider AGI-is-impossible seriously (assuming that no other evidence crops up for why AGI is impossible, besides humanity having no clue how to do it; conditional on AGI being impossible, I would expect such evidence to crop up earlier). Eliezer might put less weight on the tail of the time-to-AGI distribution and might correspondingly need a shorter stretch of no AGI before considering impossible AGI seriously.
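A toy Bayesian sketch of the tail-weight point (all numbers and distribution choices are mine, purely illustrative, not anyone’s actual estimates): conditional on AGI being possible, model time-to-AGI as lognormal, and see how the observation “no AGI after t years” moves a small prior on impossible-AGI. The heavier the tail, the longer the same observation takes to move the posterior.

```python
# Toy illustration: how fast "no AGI by year t" pushes up P(AGI is impossible)
# depends on how heavy the tail of the time-to-AGI distribution is,
# conditional on AGI being possible at all. Numbers are made up.
import math

def lognormal_survival(t, median_years, sigma):
    """P(time-to-AGI > t | AGI possible) under a lognormal distribution."""
    mu = math.log(median_years)
    return 0.5 * math.erfc((math.log(t) - mu) / (sigma * math.sqrt(2)))

def p_impossible_given_no_agi(t, prior_impossible, median_years, sigma):
    """Bayesian update: P(impossible | no AGI after t years)."""
    p_no_agi_if_possible = lognormal_survival(t, median_years, sigma)
    # If AGI is impossible, "no AGI yet" is observed with certainty.
    numerator = prior_impossible
    denominator = prior_impossible + (1 - prior_impossible) * p_no_agi_if_possible
    return numerator / denominator

prior = 0.01  # toy prior on "AGI is impossible"
for sigma, label in [(1.0, "lighter tail"), (2.5, "heavier tail")]:
    for years in [100, 1000, 10000]:
        p = p_impossible_given_no_agi(years, prior, median_years=50, sigma=sigma)
        print(f"{label}, no AGI after {years:>5} yrs: P(impossible) ≈ {p:.3f}")
```

With the heavier-tailed distribution, even ten thousand years without AGI leaves the posterior on impossible-AGI well below certainty, whereas the lighter-tailed version is nearly convinced by then; a lighter tail is the analogue of the position I ascribe to Eliezer above.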
If we have had von Neumann-level AGI for a while and still have no idea how to make a more efficient AGI, my update towards “superintelligence is impossible” would be very much quicker than the update towards “AGI is impossible” in the above scenario, I think. [ETA: Of course I still expect you could run it faster than a biological human, but I can conceive of a scenario where it’s within a few orders of magnitude of a von Neumann WBE, the remaining difference coming from the emulation overhead and from inefficiencies in the human brain that the AGI doesn’t have but that don’t lead to super-large improvements.]