The posting above, which begins with an absolutely silly argument, managed to receive 11 votes.
Envy is unbecoming; I recommend against displaying it. You’d be better off starting with your 3rd sentence and cutting the word “silly.”
I would ask this question: what comparison did donors make to decide that the Singularity Institute is a better recipient than the one mentioned in Yvain’s preceding entry, where each $500 saves a human life?
They have worked out this math, and it’s available in most of their promotional stuff that I’ve seen. Their argument is essentially “instead of operating on the level of individuals, we will either save all of humanity, present and future, or not.” And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it’s a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
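To make the arithmetic behind that comparison concrete, here is a minimal sketch. The $500-per-life figure and the 1-in-7-billion marginal probability are the ones quoted above; restricting the count to present lives is an assumption of the illustration.

```python
# Rough expected-value comparison using the figures quoted above.
# The 1-in-7-billion marginal probability is SIAI's own framing, not a measured number.

donation = 500                      # dollars, in both cases
lives_direct = 1                    # one present life, essentially guaranteed

p_bump = 1 / 7e9                    # assumed increase in chance of success per $500
present_population = 7e9            # counting present lives only
expected_lives_siai = p_bump * present_population

print(expected_lives_siai)          # ~1.0 -- already ties the direct donation
# Any weight at all on future lives then tips the expected value toward SIAI,
# which is the force of the argument summarized above.
```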
Whether SIAI is the best way to nudge the entire future of humanity is a separate question from whether it’s a better bet than preventing malaria deaths. I don’t know if SIAI folks have made quantitative comparisons to other x-risk reduction plans, but I strongly suspect that if they have, a key feature of the comparison is that if we stop the Earth from getting hit by an asteroid, we just prevent bad stuff. If we get Friendly AI, we get unimaginably good stuff (and if we prevent Unfriendly AI without getting Friendly AI, we also prevent bad stuff).
They have worked out this math, and it’s available in most of their promotional stuff that I’ve seen. Their argument is essentially “instead of operating on the level of individuals, we will either save all of humanity, present and future, or not.” And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it’s a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
Their logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal’s Wager. Pascal argued that if belief in God provided even the most minuscule increase in the likelihood of being heaven-bound, worship was prudent in light of heaven’s infinite rewards. One of the argument’s fatal flaws is that there is no reason to think worshipping this god will avoid reprisals by the real god, or any number of equally improbable alternative outcomes.
The Singularity Institute imputes only finite utiles, but the flaw is the same. It could as easily come to pass that the Institute’s activities make matters worse. They aren’t entitled to assume their efforts to control matters won’t have effects the reverse of the ones intended, any more than Pascal had the right to assume that worshipping this god isn’t precisely what will send one to hell. We just don’t know (can’t know) about god’s nature merely by postulating his possible existence: we can’t know that the minuscule effects don’t run the other way. Similarly (if not identically), there’s no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.
When the only reason an expectation seems to have any probability at all lies in its extreme tininess, the reverse outcome must be allowed the same benefit, and the two cancel out.
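In expected-value terms, the cancellation claim can be written out as follows; the probability and stake below are invented purely to illustrate the symmetry.

```python
# If the only ground for a tiny probability eps is "some effect must exist",
# the same ground supports an effect of the opposite sign with the same stake U.

eps = 1e-10      # assumed "it must make *some* difference" probability
U = 7e9          # assumed stake (say, all present lives)

ev_hoped_for = eps * U           # postulated benefit
ev_reverse = eps * (-U)          # equally well-supported postulated harm
print(ev_hoped_for + ev_reverse)   # 0.0 -- the two postulated effects cancel
```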
there’s no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.
When I get in my car to drive to the grocery store, do you think there is any reason to favor the hypothesis that I will arrive at the grocery store over all the a priori equally unlikely hypotheses that I arrive at some other destination?
Depends. Do you know where the grocery store actually is? Do you have an accurate map of how to get there? Have you ever gone to the grocery store before?
Or is the grocery store an unknown, unsignposted location which no human being has ever visited or even knows how to visit?
Because if it were the latter, I’d bet pretty strongly against you getting there...
The point of the analogy is that probability mass is concentrated towards the desired outcome, not that the desired outcome becomes more likely than not.
In a case where no examples of grocery stores have ever been seen, when intelligent, educated people even doubt the possibility of the existence of a grocery store, and when some people who are looking for grocery stores are telling you you’re looking in the wrong direction, I’d seriously doubt that the intention to drive there was affecting the probability mass in any measurable amount.
If you were merely wandering aimlessly with the hope of encountering a grocery store, it would only affect your chance of ending up there insofar as you’d intentionally stop looking if you arrived at one, and not if you didn’t. But our grocery seeker is not operating in a complete absence of evidence with regard to how to locate groceries, should they turn out to exist, so the search is, if not well focused, at least not actually aimless.
I usually think about this not as expected-utility calculations based on negligible probabilities of vast outcomes being just as likely as their negations, but as such calculations being altogether unreliable, because our numerical intuitions are unreliable outside the ranges we’re calibrated for.
For example, when trying to evaluate the plausibility of an extra $500 giving SIAI an extra 1 out of 7 billion chance of succeeding, there is something in my mind that wants to say “well, geez, 1e-10 is such a tiny number, why not?”
Which demonstrates that my brain isn’t calibrated to work with numbers in that range, which is no surprise.
So I do best to set aside my unreliable numerical intuitions and look for other tools with which to evaluate that claim.
Their logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal’s Wager.
They’re aware of this and have written about it. The argument is “just because something looks like a known fallacy doesn’t mean it’s fallacious.” If you wanted to reason about existential risks (that is, small probabilities that all humans will die), could you come up with a way to discuss them that didn’t sound like Pascal’s Wager? If so, I would honestly greatly enjoy hearing it, so I have something to contrast to their method.
It could as easily come to pass that the Institute’s activities make matters worse.
It’s not clear to me that it could “as easily” make things worse, and I think that’s where your counterargument breaks down. If they have a 2e-6 chance of making things better and a 1e-6 chance of making things worse, then they’re still ahead by 1e-6. With Pascal’s Wager, you don’t have any external information about which god is actually going to be doing the judging; with SIAI, you do have some information about whether Friendliness is better than Unfriendliness. It’s like praying to the set of all benevolent gods instead of picking Jesus over Buddha; there’s still a chance a malevolent god is the one you end up with, but it’s a better bet than picking solo (and you’re screwed anyway if you get a malevolent god).
I agree with you that it’s not clear that SIAI actually increases the chance of FAI occurring, but I think it more likely that a non-zero effect is positive rather than negative.
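A minimal sketch of why the asymmetry matters, using the illustrative 2e-6 / 1e-6 figures from the reply above (they are not SIAI’s actual estimates) and treating the good and bad outcomes as equal in size:

```python
p_better = 2e-6       # assumed chance the donation makes the outcome better
p_worse = 1e-6        # assumed chance it makes the outcome worse
value_better = 1.0    # normalize the two outcomes to equal magnitude
value_worse = -1.0

net_effect = p_better * value_better + p_worse * value_worse
print(net_effect)     # ~1e-06 > 0: positive in expectation even though both probabilities are tiny
```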
Reply to Vaniver:

The referenced essay by Eliezer didn’t deal with the present argument. Eliezer said, correctly, that the key to Pascal’s Wager is in the balanced potential outcomes, not in the use of infinity. But my argument doesn’t rely on infinities.
Tellingly, Eliezer ultimately flubs Pascal’s Wager itself when he states (incredibly) that praying to various benevolent gods obviates the Wager. This should tell you (and him) that he hasn’t completely grasped it. If you or other posters agree with Eliezer’s argument against the Wager, I’ll clarify, but at the moment the point looks so obvious as to make explanation otiose.
Now to your main point, which other posters also voice: that we have some reason to think preparing for AIs will help avert disaster, at least with greater likelihood than the reverse. I think one poster provided part of the refutation when he said we are intellectually unable to make intuitive estimates of exceedingly small probabilities. Combine this with the Pascal argument (which I was tempted to make explicit in my presentation but decided against to avoid excessive complication at the outset), and there’s no rational basis for assuming the minuscule probability we’re debating is positive.
Pascal is relevant because (if I’m right) the only reason to accept the minuscule probability, when probabilities are so low, goes something like this: if we strive to avert disaster, it will certainly be the case that, to whatever small extent, we’re more likely to succeed than to make things worse. But nobody can seriously claim to have made a probability estimate as low as the bottom limit SI offers. The reasoning goes from the inevitability of some difference in probability. The only thing the SI estimate has in its favor is that it’s so small that the existence of such tiny differences can be presupposed. Which is true, but reasoning from the inevitability of some difference doesn’t lead to any conclusion about the effect’s direction. If the probability were as low as the lower limit, there could be no rational basis for intuitively making a positive estimate of its magnitude.
Here’s an analogy. I flip a coin and concentrate very hard on ‘heads.’ I say my concentration has to make some difference. And this is undoubtedly true, if you’re willing to entertain sufficiently small probabilities. (My thoughts, being physical processes, have some effect on their surroundings; they even interact minusculely with the heads-or-tails outcome.) But no matter how strong my intuition that the effect goes the way I hope, I have no rational basis for accepting that intuition, the ultimate reason being that if so tiny a difference in fact existed, estimating it would be far beyond my intuitive capacities. If I had an honest hunch about the coin’s bias, even a small one, then absent other evidence I would rationally follow my intuition: there’s some probability it’s right, because my intuitions are more often correct than not. If I think the coin is slightly biased, there’s some chance I’m right; some chance, that is, however small, that I have managed (I know not how) to intuit this tiny bias. But there is a point, certainly far above the Singularity Institute’s lower bound for the probability that they’d make a difference, at which it becomes absurd (as opposed to merely foolish) to rely on my intuition, because I can have no intuition valid to the slightest degree when the quantities are too low for me to grasp intuitively; nor can I hope to predict effects so terribly small that, if real, chaos effects would surely wipe them out.
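To get a rough sense of how far beyond detection such an effect would be, here is a small sketch; the bias sizes are assumed for illustration, and the sample-size rule of thumb is the standard one for a binomial proportion.

```python
import math

# The standard error of the observed heads frequency after n flips is about
# 0.5 / sqrt(n), so distinguishing a bias of size eps from a fair coin at
# roughly two standard errors needs on the order of 1 / eps**2 flips.

def flips_needed(eps, z=2.0):
    """Approximate flips needed to resolve a bias of size eps at z standard errors."""
    return math.ceil((z * 0.5 / eps) ** 2)

print(flips_needed(1e-2))    # 10000 flips for a 1% bias
print(flips_needed(1e-10))   # on the order of 1e20 flips for an effect of the size discussed above
# No intuition, and no feasible experiment, can resolve effects that small.
```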
I’ve seen comments questioning my attitude and motives, so I should probably say something about why I’m a bit hostile to this project; it’s not a matter of hypocrisy alone. The Singularity Institute competes with other causes for contributions, and it should concern people that it does so using specious arguments. If SI intuits that the likelihood could be as low as its lower probability estimate for success, the only honest practice is to call the probability zero.