1.) What do you base the 10% estimate on?
2.) Eliezer is very much against the idea of supporting MIRI based on a “low probability of really high impact” argument. What do you think?
I hate to put words in his mouth, but I think he means 0.0001% chance, not 10% chance. 10% is well within the range of probabilities humans can reason about (to the extent that humans can reason about any probabilities).
Eliezer thinks the case for MIRI does not depend on very small chances, and furthermore, is sceptical that these arguments are used in practice by x-risk organisations, etc. He wouldn't necessarily turn away money from someone who said "I'm donating because of a 10^-10 chance" (though equally he might, for PR/paternalistic reasons).
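(To make that distinction concrete, here is a toy expected-value comparison; the payoff figure and the alternative probabilities below are purely hypothetical and not drawn from anyone in this exchange.)

```python
# Toy expected-value comparison. IMPACT is an arbitrary made-up payoff;
# none of these numbers are claims by anyone in the conversation.
IMPACT = 1e12  # hypothetical value if the work succeeds, arbitrary units

for p in (0.1, 1e-4, 1e-10):
    print(f"P(success) = {p:<7g} -> expected value = {p * IMPACT:g}")

# At 10%, this is an ordinary expected-value claim.
# At 1e-10, it leans entirely on multiplying a huge payoff by a
# vanishingly small probability -- the kind of argument being disclaimed.
```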
Where does this 10% probability come from?
Anchoring from my butt-number?
I believe the correct term is “ass-pull number.” :)
I base it on everything I’ve read and seen on technology, human nature, historical uses of power, trends in tech capabilities, the effects of intelligence, MIRI’s mission, team, and focus, and the greater realm of philanthropic endeavors.
If you mean, ‘you pulled that number out of your butt, and therefore I call you on it,’ then I’ll have to admit defeat due to inability to break it down quantitatively. Sorry.
I think that’s taken out of context. The way I understand it, he means superintelligence will have a really high impact regardless (near 100% probability), and is therefore a ‘lever point’: anyone paying attention to it has a higher probability of affecting how it goes. Since MIRI is one of the very few groups paying attention, they have a medium probability of being such an impactor.
Yeah. On one hand, I think there is something to be said for needing to make these fast and loose estimates, and there’s some basis for them. But on the other hand, I think one needs to recognize just how fast and loose they are. I think our error bars on MIRI’s chance of success are really wide.
~
Let me put that in premise-conclusion form:
P1: Superintelligence will, with probability greater than 99.999%, dramatically impact the future.
P2: One can change how superintelligence will unfold by working on superintelligence.
C3: Therefore from P1 and P2, working on superintelligence will dramatically impact the future.
P4: MIRI is one of the only groups working on superintelligence.
C5: Therefore from C3 and P4, MIRI will dramatically impact the future.
Do you think that’s right?
If so, I think P2 could be false, but I’ll accept it for the sake of argument. The real problem, I think, is that C5 doesn’t follow from C3 and P4: the inference assumes either that any work in the domain will affect how superintelligence unfolds in a controlled way (which seems false) or that MIRI’s work in particular will have an impact (which seems unproven).
P1 is almost certainly an overestimate: independent of everything else, there’s almost certainly a greater than 0.001% chance that a civilization-ending event will occur before anyone gets around to building a superintelligence. The potential importance of AI research by way of this chain of logic wouldn’t be lowered too much if you used 80 or 90%, though.
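(A minimal sketch of that sensitivity, with made-up placeholder probabilities for the factors the argument chains together; treating them as independent is a simplification.)

```python
# Rough chance that MIRI's work dramatically impacts the future, modeled
# as the product of three chained factors. All numbers are placeholders.

def p_dramatic_impact(p_super_matters, p_work_can_steer, p_this_work_succeeds):
    # Treats the three factors as independent -- a simplification.
    return p_super_matters * p_work_can_steer * p_this_work_succeeds

# Lowering P1 from 99.999% to 80% only scales the result by about 0.8x:
for p1 in (0.99999, 0.9, 0.8):
    print(f"P1 = {p1:<7} -> {p_dramatic_impact(p1, 0.5, 0.1):.3f}")

# The factor C5 leaves implicit -- whether this particular work has the
# intended effect -- moves the estimate far more:
for p_succ in (0.5, 0.1, 0.01):
    print(f"P(work succeeds) = {p_succ:<4} -> {p_dramatic_impact(0.9, 0.5, p_succ):.3f}")
```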
I’m not sure which fallacy you’re invoking, but (to paraphrase) ‘superintelligence is likely difficult to aim’ and ‘MIRI’s work may not have an impact’ are certainly possible, and both already factor into my estimates.
I think a fair number of people argue that because a cause is important, anyone working on that cause must be doing important work.
The method is even more important (practice vs. perfect practice; philanthropy vs. GiveWell). I believe in the mission, not MIRI per se. If Eliezer decided that magic was the best way to achieve FAI and started searching for the right wand and hand gestures rather than doing math and decision theory, I would look elsewhere.