My difficulty imagining a genuinely realistic mechanism of impossibility is such that I want to see the details of how it doesn’t happen before I update. I could make up dumb stories but they would be the wrong explanation if it actually happened, because I don’t think those dumb stories are actually plausible.
(1) I agree with the grandparent.
(2) Yes, of course. But I feel that there’s enough evidence to assign very low probability to AGI not being inventable if humanity survives, but not enough evidence to assign very low probability to it being very hard and taking very long; eyeballing, it might well take thousands of years of no AGI before I’d even consider AGI-is-impossible seriously (assuming that no other evidence crops up for why AGI is impossible, besides humanity having no clue how to do it; conditioning on impossible AGI, I would expect such evidence to crop up earlier). Eliezer might put less weight on the tail of the time-to-AGI distribution and may correspondingly need a shorter time before considering impossible AGI seriously.
If we have had von Neumann-level AGI for a while and still have no idea how to make a more efficient AGI, my update towards “superintelligence is impossible” would be much quicker than the update towards “AGI is impossible” in the above scenario, I think. [ETA: Of course I still expect you could run it faster than a biological human, but I can conceive of a scenario where it’s within a few orders of magnitude of a von Neumann WBE, the remaining difference coming from the emulation overhead and from inefficiencies in the human brain that the AGI doesn’t have but that don’t lead to super-large improvements.]
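To make the shape of that update concrete, here is a minimal Bayesian sketch (a toy model, not anyone’s actual numbers): a prior on “AGI is impossible”, a lognormal time-to-AGI distribution conditional on AGI being possible, and the posterior after observing “no AGI after t years”. Every parameter below (the 5% prior, the lognormal medians and spreads) is an illustrative assumption; the only point is that the weight in the tail of the time-to-AGI distribution governs how quickly the non-arrival of AGI pushes the posterior toward impossibility.

```python
import math

def lognormal_survival(t, mu, sigma):
    """P(time-to-AGI > t years) under a lognormal(mu, sigma) distribution."""
    z = (math.log(t) - mu) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def p_impossible_given_no_agi(t, prior_impossible, mu, sigma):
    """Posterior P(AGI impossible | no AGI after t years).

    P(no AGI by t | impossible) = 1; P(no AGI by t | possible) = survival(t).
    """
    likelihood_possible = lognormal_survival(t, mu, sigma)
    return prior_impossible / (
        prior_impossible + (1.0 - prior_impossible) * likelihood_possible
    )

# Illustrative numbers only: a 5% prior on impossibility, and a time-to-AGI
# distribution (conditional on possibility) with median ~50 years from now.
prior_impossible = 0.05
thick_tail = dict(mu=math.log(50), sigma=2.0)   # lots of weight on "takes millennia"
thin_tail  = dict(mu=math.log(50), sigma=0.7)   # little weight on "takes millennia"

for years in (50, 200, 1000, 5000):
    p_thick = p_impossible_given_no_agi(years, prior_impossible, **thick_tail)
    p_thin = p_impossible_given_no_agi(years, prior_impossible, **thin_tail)
    print(f"after {years:>4} yrs of no AGI: P(impossible) = "
          f"{p_thick:.2f} (thick tail), {p_thin:.2f} (thin tail)")
```

In this toy model, the thick-tailed distribution leaves the posterior under 0.5 even after a millennium without AGI, while the thin-tailed one is near certainty by then; that is the sense in which putting less weight on the tail means a correspondingly shorter time before taking impossibility seriously.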
...yes? This seems like a quite reasonable epistemic state.
Is there any timeline where, if it hasn’t happened by that point, you’d start doubting whether it will occur?
While I acknowledge that this sort of counterintuitive anti-inductivist position has precedent on this site, I suspect you mean “hasn’t happened”.
Yes, fixed, thank you.
See my reply to diegocaleiro.