But why is the estimate that I gave obviously on the wrong order of magnitude?
The original statement was
“I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you’re working on.”
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
Assume we accept this lower probability of 10^-2 for the first piece. For the second piece, as simplifying assumptions, assume there are only 10^1 “critical role” slots, and that they’re assigned randomly out of all the people who might plausibly work on Friendly AI. (Since we’re only going for an order of magnitude, we’re allowed to make simplifying assumptions like this; and we have to do so, because otherwise the problem is intractable.) In order to get a probability of 10^-9, you would need to come up with 10^8 candidate AGI researchers, each qualified to a degree similar to Eliezer. By comparison, there are 3.3x10^6 people working in all computer and mathematical science occupations put together, of whom maybe 1 in 10^2 has even heard of FAI and none have dedicated their lives to it.
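As a quick sanity check on this arithmetic, here is a minimal sketch in Python. The figures are just the ones used above (10 critical-role slots, a 10^-2 lower bound on the first piece, 3.3x10^6 people in computer and mathematical occupations); nothing here is independent data.

```python
# Sketch of the arithmetic above: how large a pool of equally qualified
# candidates would be needed for the combined estimate to come out at 1e-9.

p_agi_soon = 1e-2        # deliberately generous lower bound on the first piece
critical_slots = 10      # assumed number of "critical role" slots
target = 1e-9            # the probability being criticized

# Under random assignment, P(critical role | AGI soon) = slots / pool,
# so the pool needed is slots * p_agi_soon / target.
pool_needed = critical_slots * p_agi_soon / target
print(f"candidates needed for 1e-9: {pool_needed:.0e}")   # 1e+08

# Compare with the pool that plausibly exists.
cs_and_math_workers = 3.3e6
heard_of_fai = cs_and_math_workers / 100   # ~1 in 10^2 has even heard of FAI
print(f"plausible candidate pool:   {heard_of_fai:.0e}")  # ~3e+04
```

Even on these deliberately unfavorable assumptions, the implied pool of 10^8 comparable candidates is several orders of magnitude larger than the number of people who have even heard of the problem.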
It is possible to disagree on probabilities, but it is not possible to disagree by more than a couple orders of magnitude unless someone is either missing crucial information, has made a math error, or doesn’t know how to compute probabilities. And not knowing how to compute probabilities is the norm; it’s a rare skill that has to be specifically cultivated, and there’s no shame in not having it yet. But it is a prerequisite for some (though certainly not all) of the discussions that take place here. And the impression that I got was that you jumped into a discussion you weren’t ready for, and then let the need to be self-consistent guide your arguments in untruthful directions. I think others got the same impression as well. We call this motivated cognition—it’s a standard bias that everyone suffers from to some degree—and avoiding it is also a rare skill that must be specifically cultivated, and there is no shame in not having that skill yet, either. But until you develop your rationality skills further, Eliezer isn’t going to engage with you, and it would be a mistake for him to do so.
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
I can’t engage with your statement here unless you quantify the phrase “not-too-distant future.”
For the second piece, as simplifying assumptions, assume there are only 10^1 “critical role” slots, and that they’re assigned randomly out of all the people who might plausibly work on Friendly AI. (Since we’re only going for an order of magnitude, we’re allowed to make simplifying assumptions like this; and we have to do so, because otherwise the problem is intractable.) In order to get a probability of 10^-9, you would need to come up with 10^8 candidate AGI researchers, each qualified to a degree similar to Eliezer. By comparison, there are 3.3x10^6 people working in all computer and mathematical science occupations put together, of whom maybe 1 in 10^2 has even heard of FAI and none have dedicated their lives to it.
Two points here:

•Quoting a comment that I wrote in July:

I’m very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (by virtue of the fact that the most talented researchers are very rare, that very few people are currently thinking about these things, and that, I believe, the correlation between currently thinking about these things and having talent is weak).
•You seem to be implicitly assuming that Friendly AI will be developed before unFriendly AI. This implicit assumption is completely ungrounded.
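To put a rough picture behind the “many orders of magnitude” claim in the quoted comment, here is a small illustrative simulation. The log-normal distribution and its spread parameter are my assumptions for the sketch, not anything asserted in the discussion.

```python
import math
import random

# Illustrative only: sample researcher "productivities" from a heavy-tailed
# (log-normal) distribution and see how unevenly total output is spread.
# The distribution and sigma are assumptions chosen to span several orders
# of magnitude, as the quoted comment suggests happens in pure mathematics.

random.seed(0)
n_researchers = 10_000
sigma = 3.0  # log-scale spread; larger sigma -> wider range of productivity

productivities = sorted(
    (math.exp(random.gauss(0.0, sigma)) for _ in range(n_researchers)),
    reverse=True,
)

total = sum(productivities)
top_ten_share = sum(productivities[:10]) / total
spread = productivities[0] / productivities[-1]

print(f"top 10 researchers' share of total output: {top_ten_share:.1%}")
print(f"most/least productive ratio:               {spread:.1e}")
```

With these parameters the sampled productivities typically span around ten orders of magnitude, which is the scale of variability the comment is pointing at.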
It is possible to disagree on probabilities, but it is not possible to disagree by more than a couple orders of magnitude unless someone is either missing crucial information, has made a math error, or doesn’t know how to compute probabilities. And not knowing how to compute probabilities is the norm; it’s a rare skill that has to be specifically cultivated, and there’s no shame in not having it yet. But it is a prerequisite for some (though certainly not all) of the discussions that take place here.
I agree with all of this.
And the impression that I got was that you jumped into a discussion you weren’t ready for, and then let the need to be self-consistent guide your arguments in untruthful directions. I think others got the same impression as well.
I can understand how you might have gotten this impression. But I think that it’s important to give people the benefit of the doubt up to a certain point. Too much willingness to dismiss what people say on account of doubting their rationality is conducive to groupthink and confirmation bias.
But until you develop your rationality skills further, Eliezer isn’t going to engage with you, and it would be a mistake for him to do so.
In line with my comment above, I’m troubled by the fact that you’ve so readily assumed that my rationality skills are insufficiently developed for it to be worth Eliezer’s time to engage with me.
I’m very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude.
Not only that, but sophisticated pure mathematics will surely supply the substance of FAI theory. I’m thinking especially of Ketan Mulmuley’s research program, applying algebraic geometry to computational complexity theory. Many people think it’s the most promising approach to P vs NP.
It has been suggested that the task of Friendly AI boils down to extracting the “human utility function” from the physical facts, and then “renormalizing” this using “reflective decision theory” to produce a human-relative friendly utility function, and then implementing this using a cognitive architecture which is provably stable under open-ended self-directed enhancement. The specification of the problem is still a little handwavy and intuitive, but it’s not hard to see solid, well-defined problems lurking underneath the suggestive words, and it should be expected that the exact answers to those problems will come from a body of “theory” as deep and as lucid as anything presently existing in pure math.
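Purely as an illustration of “well-defined problems lurking underneath the suggestive words,” the three stages named above can at least be written down as input/output signatures. Every name in this sketch is hypothetical; it is not an existing formalism, only a way of showing that each stage has a definite type of problem attached to it.

```python
from typing import Callable, Protocol

# Hypothetical stand-ins for the objects the paragraph above talks about.
WorldModel = dict                              # "the physical facts"
UtilityFunction = Callable[[object], float]    # maps outcomes to values


def extract_human_utility(world: WorldModel) -> UtilityFunction:
    """Stage 1: infer a human utility function from physical/behavioral data."""
    raise NotImplementedError  # the open research problem


def renormalize(u: UtilityFunction) -> UtilityFunction:
    """Stage 2: the 'reflective' correction -- what we would want on reflection."""
    raise NotImplementedError


class StableAgent(Protocol):
    """Stage 3: an architecture whose goals provably survive self-modification."""

    def act(self, observation: object) -> object: ...
```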
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
We have to assign probabilities to artificial intelligence being first created on Earth over the Earth’s entire lifetime.
So what probability should we give to the first non-biological intelligence being created (not necessarily by humans) in the time period between 3 million years and 3 million and 50 years from now? Would it be greater than or less than 10^-2? If less than that, what justifies your confidence in that statement but not in a similarly small probability that it will be created soon?
We have to get all these probabilities to sum to the chance we assign to AI ever being created, over the lifetime of the Earth. So I don’t see how we can avoid assigning very small probabilities to AI being created in particular periods.
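A rough illustration of this normalization point, with numbers that are mine rather than anything stated above: suppose the Earth remains potentially habitable for another ~10^9 years and we carve that into disjoint 50-year windows.

```python
# Sketch of the normalization argument: probabilities over disjoint 50-year
# windows must sum to (at most) the probability that AI is ever created.
# The habitable-lifetime figure, the total probability, and the uniform
# prior are illustrative assumptions, not claims from the comment above.

years_remaining = 1e9    # assumed remaining habitable lifetime of Earth
window = 50              # years per window
p_ai_ever = 0.9          # assumed total probability that AI is ever created

n_windows = years_remaining / window
p_per_window = p_ai_ever / n_windows   # uniform assignment across windows

print(f"number of 50-year windows: {n_windows:.0e}")    # 2e+07
print(f"uniform prior per window:  {p_per_window:.2e}")  # ~4.5e-08
```

On a uniform assignment, each individual window does get a very small probability, on the order of 10^-8; any window that gets much more than that has to take its probability mass from the others.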