Their logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal’s Wager.
They’re aware of this and have written about it. The argument is “just because something looks like a known fallacy doesn’t mean it’s fallacious.” If you wanted to reason about existential risks (that is, small probabilities that all humans will die), could you come up with a way to discuss them that didn’t sound like Pascal’s Wager? If so, I would honestly greatly enjoy hearing it, so I have something to contrast to their method.
It could as easily come to pass that the Institute’s activities make matters worse.
It’s not clear to me that the two outcomes are equally likely, and I think that’s where your counterargument breaks down. If they have a 2e-6 chance of making things better and a 1e-6 chance of making things worse, then they’re still ahead by 1e-6. With Pascal’s Wager, you don’t have any external information about which god is actually going to be doing the judging; with SIAI, you do have some information about whether Friendliness is better than Unfriendliness. It’s like praying to the set of all benevolent gods instead of picking Jesus over Buddha; there’s still a chance that a malevolent god is the one you end up with, but it’s a better bet than picking a single god (and you’re screwed anyway if you get a malevolent god).
I agree with you that it’s not clear that SIAI actually increases the chance of FAI occurring, but I think it more likely that any non-zero effect is positive rather than negative.
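To make that arithmetic concrete, here is a minimal sketch; the 2e-6 and 1e-6 figures are just the illustrative numbers above, not actual estimates of anything.

```python
# Toy expected-effect calculation using the illustrative figures above.
# The probabilities are hypothetical placeholders, not anyone's real estimates.
p_better = 2e-6      # chance the intervention makes things better
p_worse = 1e-6       # chance it makes things worse
value_better = 1.0   # normalized value of a good outcome
value_worse = -1.0   # normalized value of a bad outcome

net_effect = p_better * value_better + p_worse * value_worse
print(net_effect)    # 1e-06 -- positive only because p_better exceeds p_worse
```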
Reply to Vaniver:

The referenced essay by Eliezer didn’t deal with the present argument. Eliezer said, correctly, that the key to Pascal’s Wager lies in the balanced potential outcomes, not in the use of infinity. But my argument doesn’t rely on infinities.
Tellingly, Eliezer ultimately flubs Pascal’s Wager itself when he states (incredibly) that praying to various benevolent gods obviates the Wager. This should tell you (and him) that he hasn’t completely grasped the Wager. If you or other posters agree with Eliezer’s argument against the Wager, I’ll clarify, but at the moment the point looks so obvious as to make explanation otiose.
Now to your main point, which other posters also voice: that we have some reason to think preparing for AIs will help avert disaster, at least with greater likelihood than the reverse. I think one poster provided part of the refutation when he said we are intellectually unable to make intuitive estimates of exceedingly small probabilities. Combining this with the Pascal argument (which I was tempted to make explicit in my presentation but decided against, to avoid excessive complication at the outset), there’s no rational basis for assuming the minuscule probability we’re debating is positive.
Pascal is relevant because (if I’m right) the only reason to accept the minuscule probability, when probabilities are so low, goes something like this: if we strive to avert disaster, it will certainly be the case that, to whatever small extent, we’re more likely to succeed than to make things worse. But nobody can seriously claim to have made a probability estimate as low as the bottom limit SI offers. The reasoning goes from the inevitability of some difference in probability. The only thing the SI estimate has in its favor is that it’s so small that the existence of such tiny differences can be presupposed. Which is true, but reasoning from the inevitability of some difference doesn’t lead to any conclusion about the effect’s direction. If the probability were as low as the lower limit, there could be no rational basis for intuitively making a positive estimate of its magnitude.
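To put the symmetry point in miniature (the magnitude here is an arbitrary hypothetical, chosen only for illustration):

```python
# The symmetry point in miniature: grant that *some* tiny nonzero effect
# exists, but assume no information about its sign. The magnitude is an
# arbitrary hypothetical.
epsilon = 1e-9      # some tiny effect, direction unknown
p_positive = 0.5    # no basis for favoring either direction
p_negative = 0.5

expected_effect = p_positive * epsilon + p_negative * (-epsilon)
print(expected_effect)  # 0.0 -- the mere existence of a difference settles nothing
```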
Here’s an analogy. I flip a coin and concentrate very hard on ‘heads.’ I say my concentration has to make some difference. And this is undoubtedly true, if you’re willing to entertain sufficiently small probabilities. (My thoughts, being physical processes, have some effect on their surroundings. They even interact minusculely with the heads-or-tails outcome.) But no matter how strong my intuition that the effect goes the way I hope, I have no rational basis for accepting that intuition, the ultimate reason being that if so tiny a difference in fact existed, its estimation would be far beyond my intuitive capacities.

If I had an honest hunch about the coin’s bias, even a small one, then absent other evidence I would rationally follow my intuition: there’s some probability it’s right, because my intuitions are generally more often correct than not. If I think the coin is slightly biased, there’s some chance, however small, that I have managed, I know not how, to intuit this tiny bias. But this holds only down to some point, and that point lies certainly far above the Singularity Institute’s lower bound for the probability that they’d make a difference. Below that point it becomes absurd (as opposed to merely foolish) to rely on my intuition, because I can have no intuition valid to the slightest degree about quantities so low I can’t grasp them intuitively; nor can I hope to predict effects so terribly small that, if real, chaos effects would surely wipe them out.
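For a sense of scale in the coin example, a rough back-of-the-envelope sketch (the bias magnitudes are arbitrary): even detecting a bias that small from data, never mind intuiting it, would require an astronomical number of flips.

```python
# Rough sample sizes needed to even detect a coin bias of size delta:
# the standard error of the observed frequency is about 0.5 / sqrt(n),
# so distinguishing a bias delta takes on the order of 0.25 / delta**2 flips.
# The delta values are arbitrary illustrations.
for delta in (1e-2, 1e-6, 1e-9):
    flips_needed = 0.25 / delta**2
    print(f"bias {delta:g}: roughly {flips_needed:.1e} flips")
# bias 0.01  -> 2.5e+03 flips
# bias 1e-06 -> 2.5e+11 flips
# bias 1e-09 -> 2.5e+17 flips
```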
I’ve seen comments questioning my attitude and motives, so I should probably say something about why I’m a bit hostile to this project; it’s not a matter of hypocrisy alone. The Singularity Institute competes with other causes for contributions, and it should concern people that it does so using specious argument. If SI intuits that the likelihood could be as low as its lower probability estimate for success, the only honest practice is to call the probability zero.