Your PredictionBook link says you reckon 10% probability of humanity still being around in ~100 years, but that’s not the same thing as MIRI succeeding. Superhuman AI might turn out to be beyond human capability, so we could survive without MIRI achieving anything. Superhuman UFAI might be feasible, and MIRI might successfully stop it from happening (in which case I’d say they succeeded), but FAI might be just too hard or too weak, and we might then get wiped out by something else. (I agree that that seems low probability.)
The full sentence reads: “MIRI exists to ensure that the creation of smarter-than-human intelligence has a positive impact.” (emphasis added) Clearly, if smarter-than-human intelligence ends up having a positive impact independently of (or in spite of) MIRI’s efforts, that would count as a success only in a Pickwickian sort of sense. To succeed in the sense obviously intended by the authors of the mission statement, MIRI would have to be at least partially causally implicated in the process leading to the creation of FAI.
So the question remains: on what grounds do you believe that, if smarter-than-human intelligence ends up having a positive impact, this will necessarily be at least partly due to MIRI’s efforts? I find that view implausible, and instead agree with Carl Shulman that “the impact of MIRI in particular has to be [a] far smaller subset of the expected impact of the cause as a whole,” for the reasons he mentions.
Could you clarify your definition of success?
From MIRI’s mission statement: “the creation of smarter-than-human intelligence has a positive impact.”
I see smarter-than-human intelligence as required to overcome the combined threat of existential risks in the long run.
I subscribe to the view that AGI is bad by default, and I don’t see anyone else working on the friendliness problem.