This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes. Don’t tell me there isn’t irrational prejudice here!
The argument that any donation is subject to similar objections is silly because it’s obvious that a human-welfare maximizer would plug for the donation the donor believes best, despite the unlikelihood of finding the absolute best. It should also be obvious that my argument is that it’s unlikely that the Singularity Institute comes anywhere near the best donation, and one reason it’s unlikely is related to the unlikelihood of picking the best, even if you have to forgo the literal very best!
Numerous posters wouldn’t pick this particular charity, even if it happened to be among the best, unless they were motivated by signaling aspirations rather than the rational choice of the best recipient. As Yvain said in the previous entry: “Deciding which charity is the best is hard.” Rationalists should detect the irrationality of making an exception when one option is the Singularity Institute.
(As to whether signaling is rational: that’s completely irrelevant to the discussion, as we’re talking about the best donation from a human-welfare standpoint. To argue that the contribution makes sense because signaling might be as rational as donating, even if plausible, merely changes the subject rather than responding to the argument.)
Another argument for the Singularity Institute donation I can’t dismiss so easily. I read the counter-argument as saying that the Singularity Institute is clearly the best donation conceivable. To that I don’t have an answer, any more than I have a counter-argument for many outright delusions. I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain’s preceding entry, where each $500 saves a human life?
Before downvoting this, ask yourself whether you’re saying my point is unintelligent or shouldn’t be raised for other reasons. (Ask yourself whether my point should be made, whether it was made by anyone else, and whether it isn’t better than at least 50% of the postings here. Ask yourself whether it’s rational to upvote the critic and his silly argument, and whether the many donors arrived at their views about the Singularity Institute’s importance based on the representativeness heuristic, the aura effect surrounding Eliezer, ignoring the probability of delivering any benefit, and a multitude of other errors in reasoning.)
This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes.
Envy is unbecoming; I recommend against displaying it. You’d be better off starting with your 3rd sentence and cutting the word “silly.”
I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain’s preceding entry, where each $500 saves a human life?
They have worked out this math, and it’s available in most of their promotional stuff that I’ve seen. Their argument is essentially “instead of operating on the level of individuals, we will either save all of humanity, present and future, or not.” And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it’s a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
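To make the arithmetic concrete, here is a minimal sketch of the comparison being described; the probability increment and the future-lives figure are illustrative assumptions, not numbers taken from SIAI’s materials.

```python
# Expected lives saved per $500, under the argument sketched above.
# All numbers are illustrative assumptions, not official SIAI estimates.

p_increment = 1 / 7e9      # assumed extra chance of success bought by one $500 donation
present_lives = 7e9        # people alive today
future_lives = 1e12        # hypothetical stand-in for all future lives at stake

ev_direct = 1.0                                    # Yvain's benchmark: $500 saves ~1 life
ev_siai_present = p_increment * present_lives      # ~1.0: the tiny probability offsets the huge stake
ev_siai_with_future = p_increment * (present_lives + future_lives)   # ~143.9

print(ev_direct, ev_siai_present, ev_siai_with_future)
# The "better bet" conclusion only follows once future lives are counted.
```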
The question as to whether SIAI is the best way to nudge the entire future of humanity is a separate question from whether or not SIAI is a better bet than preventing malaria deaths. I don’t know if SIAI folks have made quantitative comparisons to other x-risk reduction plans, but I strongly suspect that if they have, a key feature of the comparison is that if we stop the Earth from getting hit by an asteroid, we just prevent bad stuff. If we get Friendly AI, we get unimaginably good stuff (and if we prevent Unfriendly AI without getting Friendly AI, we also prevent bad stuff).
They have worked out this math, and it’s available in most of their promotional stuff that I’ve seen. Their argument is essentially “instead of operating on the level of individuals, we will either save all of humanity, present and future, or not.” And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it’s a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
Their logic is unsound due to an arbitrary premise; their argument bears a striking resemblance to Pascal’s Wager. Pascal argued that if belief in God provided even the most minuscule increase in the likelihood of being heaven-bound, worship was prudent in light of heaven’s infinite rewards. One of the argument’s fatal flaws is that there is no reason to think worshipping this god will avoid reprisals by the real god—or any number of equally improbable alternative outcomes.
The Singularity Institute imputes only finite utiles, but the flaw is the same. It could as easily come to pass that the Institute’s activities make matters worse. They aren’t entitled to assume their efforts to control matters won’t have effects the reverse of the ones intended, any more than Pascal had the right to assume worshipping this god isn’t precisely what will send one to hell. We just don’t know (can’t know) about god’s nature by merely postulating his possible existence: we can’t know that the minuscule effects don’t run the other way. Similarly, if not identically, there’s no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.
When the only reason an expectation seems to have any probability at all lies in the extreme tininess of that probability, the reverse outcome must be allowed the same benefit, and the two cancel out.
there’s no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.
When I get in my car to drive to the grocery store, do you think there is any reason to favor the hypothesis that I will arrive at the grocery store over all the a priori equally unlikely hypotheses that I arrive at some other destination?
Depends. Do you know where the grocery store actually is? Do you have an accurate map of how to get there? Have you ever gone to the grocery store before?
Or is the grocery store an unknown, unsignposted location which no human being has ever visited or even knows how to visit?
Because if it were the latter, I’d bet pretty strongly against you getting there...
The point of the analogy is that probability mass is concentrated towards the desired outcome, not that the desired outcome becomes more likely than not.
In a case where no examples of grocery stores have ever been seen, when intelligent, educated people even doubt the possibility of the existence of a grocery store, and when some people who are looking for grocery stores are telling you you’re looking in the wrong direction, I’d seriously doubt that the intention to drive there was affecting the probability mass in any measurable amount.
If you were merely wandering aimlessly with the hope of encountering a grocery store, it would only affect your chance of ending up there insofar as you’d intentionally stop looking if you arrived at one, and not if you didn’t. But our grocery seeker is not operating in a complete absence of evidence with regard to how to locate groceries, should they turn out to exist, so the search is, if not well focused, at least not actually aimless.
I usually think about this not in terms of expected-utility calculations based on negligible probabilities of vast outcomes being just as likely as their negations, but in terms of such calculations being altogether unreliable, because our numerical intuitions are unreliable outside the ranges we’re calibrated for.
For example, when trying to evaluate the plausibility of an extra $500 giving SIAI an extra 1 out of 7 billion chance of succeeding, there is something in my mind that wants to say “well, geez, 1e-10 is such a tiny number, why not?”
Which demonstrates that my brain isn’t calibrated to work with numbers in that range, which is no surprise.
So I do best to set aside my unreliable numerical intuitions and look for other tools with which to evaluate that claim.
Their logic is unsound due to an arbitrary premise; their argument bears a striking resemblance to Pascal’s Wager.
They’re aware of this and have written about it. The argument is “just because something looks like a known fallacy doesn’t mean it’s fallacious.” If you wanted to reason about existential risks (that is, small probabilities that all humans will die), could you come up with a way to discuss them that didn’t sound like Pascal’s Wager? If so, I would honestly greatly enjoy hearing it, so I have something to contrast to their method.
It could as easily come to pass that the Institute’s activities make matters worse.
It’s not clear to me that it could ‘as easily’ come to pass, and I think that’s where your counterargument breaks down. If they have a 2e-6 chance of making things better and a 1e-6 chance of making things worse, then they’re still ahead by 1e-6. With Pascal’s Wager, you don’t have any external information about which god is actually going to be doing the judging; with SIAI, you do have some information about whether or not Friendliness is better than Unfriendliness. It’s like praying to the set of all benevolent gods instead of picking Jesus over Buddha; there’s still a chance a malevolent god is the one you end up with, but it’s a better bet than picking solo (and you’re screwed anyway if you get a malevolent god).
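A minimal sketch of the asymmetric-probabilities point (the 2e-6 and 1e-6 figures come from the paragraph above; the assumption of symmetric stakes is added here for simplicity):

```python
# Net expected value when a tiny chance of a huge gain is paired with a tiny
# chance of an equally huge loss. Symmetric stakes are assumed for simplicity.

def net_ev(p_better, p_worse, stake=1.0):
    return (p_better - p_worse) * stake

print(net_ev(2e-6, 1e-6))   # ~1e-06: positive, given any reason to think p_better > p_worse
print(net_ev(1e-6, 1e-6))   # 0.0: the cancellation case the critic describes
```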
I agree with you that it’s not clear that SIAI actually increases the chance of FAI occurring, but I think it more likely that a non-zero effect is positive than negative.
Reply to Vaniver:
The referenced essay by Eliezer didn’t deal with the present argument. Eliezer said, correctly, that the key flaw in Pascal’s Wager lies in the balanced potential outcomes, not in the use of infinity. But my argument doesn’t rely on infinities.
Tellingly, Eliezer ultimately flubs Pascal’s Wager itself when he states (incredibly) that praying to various benevolent gods obviates the Wager argument. This should tell you (and him) that he hasn’t completely grasped the Wager. If you or other posters agree with Eliezer’s argument against the Wager, I’ll clarify, but at the moment the point looks so obvious as to make explanation otiose.
Now to your main point, which other posters also voice: that we have some reason to think preparing for AIs will help avert disaster, at least with greater likelihood than the reverse. I think one poster provided part of the refutation when he said we are intellectually unable to make intuitive estimates of exceedingly small probabilities. Combining this with the Pascal argument (which I was tempted to make explicit in my presentation but decided against to avoid excessive complication at the outset), I conclude there’s no rational basis for assuming the minuscule probability we’re debating is positive.
Pascal is relevant because (if I’m right) the only reason to accept the minuscule probability, when probabilities are so low, goes something like this: if we strive to avert disaster, it will certainly be the case that, to whatever small extent, we’re more likely to succeed than to make things worse. But nobody can seriously claim to have made a probability estimate as low as the bottom limit SI offers. The reasoning goes from the inevitability of some difference in probability. The only thing the SI estimate has in its favor is that it’s so small that the existence of such tiny differences can be presupposed. That much is true, but reasoning from the inevitability of some difference doesn’t lead to any conclusion about the effect’s direction. If the probability were as low as that lower limit, there could be no rational basis for intuitively making a positive estimate of its magnitude.
Here’s an analogy. I flip a coin and concentrate very hard on ‘Heads.’ I say my concentration has to make some difference. And this is undoubtedly true, if you’re willing to entertain sufficiently small probabilities. (My thoughts, being physical processes, have some effect on their surroundings; they even interact, to a minuscule degree, with the heads-or-tails outcome.) But no matter how strong my intuition that the effect goes the way I hope, I have no rational basis for accepting that intuition, the ultimate reason being that if so tiny a difference in fact existed, its estimation would be far beyond my intuitive capacities. If I had an honest hunch about the coin’s bias, even a small one, then absent other evidence I would rationally follow my intuition: there’s some probability it’s right, because my intuitions are generally more often correct than not. If I think the coin is slightly biased, there’s some chance I’m right; some chance, that is, however small, that I have managed, I know not how, to intuit this tiny bias. But at some point this breaks down, a point certainly far above the Singularity Institute’s lower bound for the probability that they’d make a difference. At that point it becomes absurd (as opposed to merely foolish) to rely on my intuition, because I can have no intuition valid to the slightest degree when the quantities are so low I can’t grasp them intuitively; nor can I hope to predict effects so terribly small that, if real, chaos effects would surely wipe them out.
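As a rough check on that claim, here is a back-of-the-envelope sketch: distinguishing a bias of size eps from a fair coin takes on the order of 1/eps² flips, so for biases anywhere near the magnitudes under discussion the required evidence is astronomically beyond anything intuition could have been trained on. (The detection threshold below is a standard statistical rule of thumb, not something taken from this thread.)

```python
# Rough order-of-magnitude check: how many flips before a bias of size eps is
# distinguishable from sampling noise? The noise in the observed frequency
# shrinks like 0.5 / sqrt(n), so n must be on the order of 1 / eps**2.

def flips_needed(eps, z=2.0):
    # z ~ 2 corresponds roughly to a 95% detection threshold
    return (z * 0.5 / eps) ** 2

for eps in (0.1, 0.01, 1e-6, 1e-10):
    print(f"bias {eps:g}: about {flips_needed(eps):.0e} flips")
# bias 0.1: about 1e+02 flips
# bias 0.01: about 1e+04 flips
# bias 1e-06: about 1e+12 flips
# bias 1e-10: about 1e+20 flips
```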
I’ve seen comments questioning my attitude and motives, so I should probably say something about why I’m a bit hostile to this project; it’s not a matter of hypocrisy alone. The Singularity Institute competes with other causes for contributions, and it should concern people that it does so using specious argument. If SI intuits that the likelihood could be as low as its lower probability estimate for success, the only honest practice is to call the probability zero.
Numerous posters wouldn’t pick this particular charity, even if it happened to be among the best, unless they were motivated by signaling aspirations rather than the rational choice of the best recipient.
You know this is a blog started by and run by Eliezer Yudkowsky—right? Many of the posters are fans. Looking at the rest of this thread, signaling seems to be involved in large quantities—but consider also the fact that there is a sampling bias.
Do you have any argument for why the SIAI is unlikely to be the best other than the sheer size of the option space?
This is a community where a lot of the members have put substantial thought into locating the optimum in that option space, and have well developed reasons for their conclusion. Further, there are not a lot of real charities clustered around that optimum. Simply claiming a low prior probability of picking the right charity is not a strong argument here. If you have additional arguments, I suggest you explain them further.
(I’ll also add that I personally arrived at the conclusion that an SIAI-like charity would be the optimal recipient for charitable donations before learning that it existed, or encountering Overcoming Bias, Less Wrong, or any of Eliezer’s writings, and in fact can completely discount the possibility that my rationality in reaching my conclusion was corrupted by an aura effect around anyone I considered to be smarter or more moral than myself.)
It is obvious that a number of smart people have decided that SIAI is currently the most important cause to devote their time and money to. This in itself constitutes an extremely strong form of evidence. This is, or at least was, basically Eliezer’s blog; if the thing that unites its readers is respect for his intelligence and judgment, then you should be completely unsurprised to see that many support SIAI. It is not clear how this is a form of irrationality, unless you are claiming that the facts are so clearly against the SIAI that we should be interpreting them as evidence against the intelligence of supporters of the SIAI.
Someone who is trying to have an effect on the course of an intelligence explosion is more likely to have one than someone who isn’t. I think many readers (myself included) believe very strongly that an intelligence explosion is almost certainly going to happen eventually and that how it occurs will have a dominant influence on the future of humanity. I don’t know if the SIAI will have a positive, negative, or negligible influence, but based on my current knowledge all of these possibilities are still reasonably likely (where even 1% is way more than likely enough to warrant attention).
It is obvious that a number of smart people have decided that SIAI is currently the most important cause to devote their time and money to. This in itself constitutes an extremely strong form of evidence.
Upvoting but nitpicking one aspect:
No. It isn’t very strong evidence by itself. Jonathan Sarfati is a chess master, a published chemist, and a prominent young-earth creationist. A list of all the major anti-evolutionists would easily include not just Sarfati but also William Dembski, Michael Behe, and Jonathan Wells, all of whom are pretty intelligent. There are some people less prominently involved who are also very smart, such as Forrest Mims.
This is not the only example of this sort. In general, we live in a world where there are many, many smart people. That multiple smart people care about something can’t do much beyond locate the hypothesis. One distinction is that most smart people who have looked at the SIAI have come away not thinking they are crazy, which is a very different situation from the sort of example given above, but by itself smart people having an interest is not strong evidence.
(Also, on a related note, see this subthread here, which made it clear that what smart people think, even when there is a general consensus among smart people, is not terribly reliable.)
There are several problems with what I said.
My use of “extremely” was unequivocally wrong.
I don’t really mean “smart” in the sense that a chess player proves their intelligence by being good at chess, or a mathematician proves their intelligence by being good at math. I mean smart in the sense of being good at forming true beliefs and acting on them. If Nick Bostrom were to profess his belief that the world was created 6000 years ago, then I would say this constitutes reasonably strong evidence that the world was created 6000 years ago (when combined with existing evidence that Nick Bostrom is good at forming correct beliefs and reporting them honestly). Of course, there is much stronger evidence against this hypothesis (and it is extremely unlikely that I would have only Bostrom’s testimony—if he came to such a belief legitimately, I would strongly expect there to be additional evidence he could present), so if he were to come out and say such a thing it would mostly just decrease my estimate of his intelligence rather than my estimate of the age of the Earth. The situation with SIAI is very different: I know of little convincing evidence bearing one way or the other on the question, and there are good reasons that intelligent people might not be able to produce easily understood evidence justifying their positions (since that evidence basically consists of a long thought process which they claim to have worked through over years).
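A minimal numerical sketch of that updating pattern, with a made-up prior and likelihood ratio purely for illustration:

```python
# Bayesian sketch: testimony with a substantial likelihood ratio barely moves
# a hypothesis that starts out overwhelmingly improbable. Both numbers below
# are made up for illustration.

def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

p_young_earth = 1e-15    # assumed prior that the Earth is 6000 years old
lr_testimony = 1e3       # assumed evidential strength of one trusted person's say-so

print(posterior(p_young_earth, lr_testimony))
# ~1e-12: still negligible, so most of the update lands on the testifier's
# reliability rather than on the age of the Earth.
```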
Finally, though you didn’t object, I shouldn’t really have said “obvious.” There are definitely other plausible explanations for the observed behavior of SIAI supporters than their honest belief that it is the most important cause to support.
One distinction is that most smart people who have looked at the SIAI have come away not thinking they are crazy
There is a strong selection effect. Most people won’t even look too closely, or comment on their observations. I’m not sure in what sense we can expect what you wrote to be correct.
This comment, on this post, in this blog, comes across as a textbook example of the Texas Sharpshooter Fallacy. You don’t form your hypothesis after you’ve looked at the data, just as you don’t prove what a great shot you are by drawing a target around the bullet hole.
You don’t form your hypothesis after you’ve looked at the data, just as you don’t prove what a great shot you are by drawing a target around the bullet hole.
I normally form hypotheses after I’ve looked at the data, although before placing high credence in them I would prefer to have confirmation using different data.
I agree that I made at least one error in that post (as in most things I write). But what exactly are you calling out?
I believe an intelligence explosion is likely (and have believed this for a good decade). I know the SIAI purports to try to positively influence an explosion. I have observed that some smart people are behind this effort and believe it is worth spending their time on. This is enough motivation for me to seriously consider how effective I think that the SIAI will be. It is also enough for me to question the claim that many people supporting SIAI is clear evidence of irrationality.
Yes, but here you’re using your data to support the hypothesis you’ve formed.
If I believe X and you ask me why I believe X, surely I will respond by providing you with the evidence that caused me to believe X?
External reality is not changed by the temporal location of hypothesis formation.
No, but when hypotheses are formed is relevant to evaluating their likelihood given standard human cognitive biases.