Machine superintelligence appears to be a uniquely foreseeable and impactful source of stable trajectory change.
If you think (as I do) that a machine superintelligence is largely inevitable (Bostrom forthcoming), then it seems our effects on the far future must almost entirely pass through our effects on the development of machine superintelligence.
Someone once told me they thought that giving to the Against Malaria Foundation is, via a variety of ripple effects, more likely to positively affect the development of machine superintelligence than direct work on AI risk strategy and Friendly AI math. I must say I find this implausible, but I’ll also admit that humanity’s current understanding of ripple effects in general, and our understanding of how MIRI/FHI-style research in particular will affect the world, leaves much to be desired.
So I’m glad that GiveWell, MIRI, FHI, Nick Beckstead, and others are investing resources to figure out how these things work.
So do you think that while we can’t be very confident about when AI will be created, we can still be quite confident that it will be created?
...yes? This seems like a quite reasonable epistemic state.
Is there any timeline where, if it hasn’t happened by that point, you’d start doubting whether it will occur?
While I acknowledge that this sort of counterintuitive anti-inductivist position has precedent on this site, I suspect you mean “hasn’t happened”.
Yes, fixed, thank you.
My difficulty imagining a genuinely realistic mechanism of impossibility is such that I want to see the details of how it doesn’t happen before I update. I could make up dumb stories but they would be the wrong explanation if it actually happened, because I don’t think those dumb stories are actually plausible.
(1) I agree with the grandparent.
(2) Yes, of course. But I feel that there’s enough evidence to assign very low probability to AGI not being inventable if humanity survives, but not enough evidence to assign very low probability to it being very hard and taking very long; eyeballing, it might well take thousands of years of no AGI before I’d even consider AGI-is-impossible seriously (assuming that no other evidence crops up for why AGI is impossible, besides humanity having no clue how to do it; conditioning on impossible AGI, I would expect such evidence to crop up earlier). Eliezer might put less weight on the tail of the time-to-AGI distribution, and might therefore need a correspondingly shorter stretch of no-AGI before considering impossible AGI seriously. (A toy model of this updating is sketched after this comment.)
If we have had von Neumann-level AGI for a while and still have no idea how to make a more efficient AGI, my update towards “superintelligence is impossible” would be very much quicker than the update towards “AGI is impossible” in the above scenario, I think. [ETA: Of course I still expect you can run it faster than a biological human, but I can conceive of a scenario where it’s within a few orders of magnitude of a von Neumann WBE, the remaining difference coming from the emulation overhead and from inefficiencies in the human brain that the AGI doesn’t have but whose absence doesn’t lead to super-large improvements.]
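The tail-weight reasoning in the comment above can be made concrete with a toy Bayesian calculation. This is only an illustrative sketch, not anything claimed in the thread: the 1% prior on “AGI is impossible”, the lognormal time-to-AGI distributions, and their parameters are all made-up assumptions chosen to show the qualitative effect.

# Toy model (illustrative assumptions only): how quickly "still no AGI after t years"
# should move you toward "AGI is impossible" depends on how heavy the tail of your
# time-to-AGI distribution is.
from scipy.stats import lognorm

def p_impossible_given_no_agi(years, prior_impossible, time_dist):
    # Posterior P(AGI impossible | no AGI after `years`), where time_dist is the
    # distribution of time-to-AGI conditional on AGI being possible.
    survival = time_dist.sf(years)  # P(AGI takes longer than `years` | AGI possible)
    p, q = prior_impossible, 1 - prior_impossible
    return p / (p + q * survival)

prior = 0.01                            # small, made-up prior that AGI is impossible
heavy_tail = lognorm(s=2.0, scale=100)  # wide uncertainty: lots of mass past 1,000 years
thin_tail = lognorm(s=0.5, scale=100)   # narrow: almost all mass before ~300 years

for years in (100, 300, 1000, 3000):
    print(years,
          round(p_impossible_given_no_agi(years, prior, heavy_tail), 3),
          round(p_impossible_given_no_agi(years, prior, thin_tail), 3))

With these made-up numbers, the heavy-tailed distribution still assigns under 20% to impossibility after 3,000 AGI-free years, while the thin-tailed one is all but certain of impossibility after 1,000, matching the point that putting less weight on the tail warrants a correspondingly shorter wait before taking “AGI is impossible” seriously.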
See my reply to diegocaleiro.
Aron, what makes you think otherwise?
Not sure whether I do think otherwise. But if Luke had written “smarter-than-human machine intelligence” instead, I probably wouldn’t have reacted. In comparison, “machine superintelligence singleton” is much more specific, indicating both (i) that the machine intelligence will be vastly smarter than us, and (ii) that multipolar outcomes are very unlikely. Though perhaps there are very convincing arguments for both of these claims.
If you think (as I do) that a machine superintelligence singleton is largely inevitable (Bostrom forthcoming)
I can grant the “machine superintelligence” part as largely inevitable, but why “singleton”? Are you suggesting that Bostrom has a good argument for the inevitability of such a singleton, one that he hasn’t written down anywhere except in his forthcoming book?
To some degree, yes (I’ve seen a recent draft). But my point goes through without this qualification, so I’ve edited my original comment to remove “singleton.”
On this specific question (AI risk strategy and math vs. AMF), I have similar intuitions, though maybe with somewhat less confidence. But see my comment exchange with Holden Karnofsky here.