Okay, so, I agree with some of what you say above. I think I should have been more precise.
A claim of the type “Eliezer is likely to build a Friendly AI” requires (at least in part) a supporting claim of the type “Eliezer is in group X where people in group X are likely to build a Friendly AI.” Even if one finds such a group X, this may not be sufficient because Eliezer may belong to some subgroup of X which is disproportionately unlikely to build a Friendly AI. But one at least has to be able to generate such a group X.
At present I see no group X that qualifies.
1. Taking X to be “humans in the developed world” doesn’t work because the average member of X is extremely unlikely to build a Friendly AI.
2. Taking X to be “people with PhDs in a field related to artificial intelligence” doesn’t work because Eliezer doesn’t have a PhD in any such field.
3. Taking X to be “programmers” doesn’t work because Eliezer is not a programmer.
4. Taking X to be “people with very high IQ” is a better candidate, but still doesn’t yield a very high probability estimate, because very high IQ is not very strongly correlated with technological achievement.
5. Taking X to be “bloggers about rationality” doesn’t work because there’s very little evidence that blogging about rationality is correlated with the skills conducive to building a Friendly AI.
Which suitable group X do you think that Eliezer falls into?
How about “people who have publicly declared an intention to try to build an FAI”? That seems like a much more relevant reference class, and it’s tiny. (I’m not sure how tiny, exactly, but it’s certainly smaller than 10^3 people right now.) And if someone else makes a breakthrough that suddenly brings AGI within reach, they’ll almost certainly choose to recruit help from that class.
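As a rough illustration of the base-rate arithmetic that a reference class like this invites, here is a minimal sketch in Python; every number is a hypothetical placeholder except the 10^3 class-size bound mentioned above.

```python
# Rough base-rate arithmetic for P(a given class member builds a Friendly AI).
# All inputs are hypothetical placeholders, except the 10^3 class-size bound
# taken from the comment above.

class_size = 10**3        # upper bound on "people who have publicly declared an intention to build an FAI"
p_any_success = 0.1       # hypothetical: probability that anyone in the class ever succeeds
p_per_member = p_any_success / class_size   # naive base rate, treating members as interchangeable

print(f"naive per-member base rate: {p_per_member:.1e}")   # prints 1.0e-04 with these placeholders
```

Adjusting that naive base rate up or down for a particular member is exactly where the disagreement below comes in.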
I agree that the class that you mention is a better candidate than the ones that I listed. However:
1. I find it fairly likely that the class will expand dramatically if there’s a breakthrough that brings AGI within reach.
2. Announcing an interest in FAI does not entail having the skills necessary to collaborate with the people working on an AGI to make it Friendly.
In addition to these points, there’s a factor which makes Eliezer less qualified than the typical member of the class, namely his public relations difficulties. As he says here: “I feel like I’m being held to an absurdly high standard … like I’m being asked to solve PR problems that I never signed up for.” As a matter of reality, PR matters in this world. If there were a breakthrough that prompted a company like IBM to decide to build an AGI, I have difficulty imagining them recruiting Eliezer, the reason being that Eliezer says things that sound strange and is far out of the mainstream. However, of course:
(i) I could imagine SIAI’s public relations improving substantially in the future—this would be good and would raise the chances of Eliezer being able to work with the researchers who build an AGI.
(ii) There may of course be other factors which make Eliezer more likely than other members of the class to be instrumental to building a Friendly AI.
Despite factors (i) and (ii), putting all of the information that I have together, my estimate of 10^(-9) still feels about right to me. I’d be happy to continue trading information with you with a view toward syncing up our probabilities if you’re so inclined.
I find it fairly likely that the class will expand dramatically if there’s a breakthrough that brings AGI within reach.
I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
Despite factors (i) and (ii), putting all of the information that I have together, my estimate of 10^(-9) still feels about right to me.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
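To make the request concrete, a decomposition of the kind being asked for might look like the following sketch; every figure is a made-up placeholder rather than anyone’s actual estimate.

```python
# One way to decompose the estimate into "manageable pieces with numbers".
# Every figure below is a made-up placeholder; the point is the structure,
# not the particular values or the resulting product.

p_agi_within_reach = 0.5          # hypothetical: AGI becomes feasible at all
p_recruited_to_project = 0.01     # hypothetical: conditional on feasibility, he is involved in the project
p_friendliness_succeeds = 0.05    # hypothetical: conditional on involvement, the Friendliness part works

estimate = p_agi_within_reach * p_recruited_to_project * p_friendliness_succeeds
print(f"decomposed estimate: {estimate:.1e}")   # prints 2.5e-04 with these placeholders
```

Whether 10^(-9) or something many orders of magnitude larger comes out depends entirely on what numbers go into pieces like these, which is the point of asking for them.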
I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
Two points:
1. It seems very likely to me that there’s a string of breakthroughs which will lead to AGI, and that it will gradually become clear to people that they should be thinking about friendliness issues.
2. Even if there’s a single crucial breakthrough, I find it fairly likely that the person who makes it will not have friendliness concerns in mind.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
I believe that the human brain is extremely poorly calibrated for determining probabilities through the explicit process that you describe, and that the brain’s intuition is often more reliable for such purposes. My attitude is in line with Holden’s comments 14 and 16 on the GiveWell Singularity Summit thread.
In line with the last two paragraphs of one of my earlier comments, I find it disturbing how quickly you assume that my thinking on these matters stems from motivated cognition. Of course, I may be exhibiting motivated cognition, but the same is true of you, and your ungrounded confidence in your superiority to me is truly unsettling. As such, I will stop communicating with you unless you resolve to stop confidently asserting that I’m exhibiting motivated cognition.
If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
You are so concerned about the possibility of failure that you want to slow down research, publication and progress in the field—in order to promote research into safety?
Do you think all progress should be slowed down—or just progress in this area?
The costs of stupidity are a million road deaths a year, and goodness knows how many deaths in hospitals. Intelligence would have to be pretty damaging to outweigh that.
There is an obvious good associated with publication: the more the knowledge about intelligent machines is concentrated in one place, the greater the wealth inequality that is likely to result, and the harder it would be for the rest of society to deal with a dominant organisation. Spreading knowledge helps spread out the power, which reduces the chance of any one group of people becoming badly impoverished. Such altruistic measures may help to prevent a bloody revolution from occurring.
P(SIAI will be successful) may be smaller than 10^(-3^^^^3)!
I don’t think that’s the right way to escape from a Pascal’s mugging. In the case of SIAI, there isn’t really clear evidence that the organisation is having any positive effect, let alone SAVING THE WORLD. When the benefit could plausibly be small, zero, or indeed negative, one does not need to invoke teeny tiny probabilities to offset it.
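A minimal sketch of that last point: if the estimated benefit plausibly spans negative, zero, and positive values, the expected value can come out near zero or negative without invoking astronomically small probabilities. All figures below are hypothetical placeholders.

```python
# Expected-value sketch for a donation whose benefit estimate spans
# negative, zero, and positive values. All figures are hypothetical placeholders.

p_has_effect = 1e-3                       # hypothetical probability the organisation has its intended effect
candidate_benefits = [-1e6, 0.0, 1e6]     # hypothetical payoffs in arbitrary units: harmful, null, helpful

for benefit in candidate_benefits:
    expected_value = p_has_effect * benefit
    print(f"benefit = {benefit:+.0e}, expected value = {expected_value:+.1e}")
```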