might lead him to focus on inspiring others and on existential risk reduction advocacy (things that he has demonstrated capacity to do very well) rather than Friendly AI research
That would absolutely be a waste. If for some reason he were only to engage in advocacy from now on, it should specifically be Friendly AI advocacy. I point again to the huge gaping absence of other people who specialize in this problem and who have worthwhile ideas. The other “existential risks” have their specialized advocates. No-one else remotely comes close to filling that role for the risks associated with superintelligence.
In other words, the important question is not, what are Eliezer’s personal chances of success; the important question is, who else is offering competent leadership on this issue? Like wedrifid, I don’t even recall hearing a guess from Eliezer about what he thinks the odds of success are. But such guesses are of secondary importance compared to the choice of doing something or doing nothing, in a domain where no-one else is acting. Until other people show up, you have to just go out there and do your best.
I’m pretty sure Eric Drexler went through this already, with nanotechnology. There was a time when Drexler was in a quite unique position, of appreciating the world-shaking significance of molecular machines, having an overall picture of what they imply and how to respond, and possessing a platform (his Foresight Institute) which gave him a little visibility. The situation is very different now. We may still be headed for disaster on that front as well, but at least the ability of society to think about the issues is greatly improved, mostly because broad technical progress in chemistry and nanoscale technology has made it easier for people to see the possibilities and has also clarified what can and can’t be done.
As computer science, cognitive science, and neuroscience keep advancing, the same thing will happen in artificial intelligence, and a lot of Eliezer’s ideas will seem more natural and constructive than they may now appear. Some of them will be reinvented independently. All of them (that survive) should take on much greater depth and richness (compare the word pictures in Drexler’s 1986 book with the calculations in his 1992 book).
Despite all the excesses and distractions, work is being done and foundations for the future are being laid. Also, Eliezer and his colleagues do have many lines into academia, despite the extent to which they exist outside it. So in terms of process, I do consider them to be on track, even if the train shakes violently at times.
Like wedrifid, I don’t even recall hearing a guess from Eliezer about what he thinks the odds of success are.
Eliezer took exception to my estimate linked in my comment here.
If for some reason he were only to engage in advocacy from now on, it should specifically be Friendly AI advocacy.
Quite possibly you’re right about this.
Despite all the excesses and distractions, work is being done and foundations for the future are being laid. Also, Eliezer and his colleagues do have many lines into academia, despite the extent to which they exist outside it.
On this point I agree with SarahC’s second comment here.
I would again return to my point that it is important, for public relations purposes, that Eliezer have an accurate view of his abilities and his likelihood of success.
Eliezer took exception to my estimate linked in my comment here.
Less than 1 in 1 billion! :-) May I ask exactly what the proposition was? At the link you say “probability of … you succeeding in playing a critical role on the Friendly AI project that you’re working on”. Now by one reading that probability is 1, since he’s already the main researcher at SIAI.
Suppose we analyse your estimate in terms of three factors:
(probability that anyone ever creates Friendly AI) ×
(conditional probability that SIAI contributed) ×
(conditional probability that Eliezer contributed)
Can you tell us where the bulk of the 10^-9 is located?
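To make the question concrete, here is a minimal sketch of how such a decomposition multiplies out. The three factor values are placeholders chosen only to show one way the product could come to 10^-9; they are not anyone’s actual estimates.

```python
# Minimal sketch of the three-factor decomposition above.
# All numbers are placeholders, not anyone's actual estimates; the point is
# only that the 10^-9 has to be distributed somehow across the three factors.
p_fai_ever = 1e-2        # hypothetical: anyone ever creates Friendly AI
p_siai_given_fai = 1e-3  # hypothetical: SIAI contributed, given FAI is created
p_ey_given_siai = 1e-4   # hypothetical: Eliezer contributed, given SIAI did

p_total = p_fai_ever * p_siai_given_fai * p_ey_given_siai
print(f"{p_total:.0e}")  # -> 1e-09 with these placeholder values
```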
Eliezer took exception to my estimate linked in my comment here.
And he was right to do so, because that estimate was obviously on the wrong order of magnitude. To make an analogy, if someone says that you weigh 10^5 kg, you don’t have to reveal your actual weight (or even measure it) to know that 10^5 was wrong.
if someone says that you weigh 10^5 kg, you don’t have to reveal your actual weight (or even measure it) to know that 10^5 was wrong.
I agree with this. But why is the estimate that I gave obviously on the wrong order of magnitude?
From my point of view, his reaction is an indication that his estimate is obviously on the wrong order of magnitude. But I’m still willing to engage with him and hear what he has to say, whereas he doesn’t seem willing to engage with me and hear what I have to say.
But why is the estimate that I gave obviously on the wrong order of magnitude?
The original statement was
“I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you’re working on.”
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
Assume we accept this lower probability of 10^-2 for the first piece. For the second piece, as simplifying assumptions, assume there are only 10^1 “critical role” slots, and that they’re assigned randomly out of all the people who might plausibly work on Friendly AI. (Since we’re only going for an order of magnitude, we’re allowed to make simplifying assumptions like this; and we have to do so, because otherwise the problem is intractable.) In order to get a probability of 10^-9, you would need to come up with 10^8 candidate AGI researchers, each qualified to a degree similar to Eliezer. By comparison, there are 3.3×10^6 people working in all computer and mathematical science occupations put together, of whom maybe 1 in 10^2 has even heard of FAI and none have dedicated their lives to it.
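For concreteness, here is a rough sketch of the arithmetic above, under that comment’s own stated assumptions (the 10^-2 lower bound for the first piece, ten randomly assigned “critical role” slots, and the 3.3×10^6 figure for computer and mathematical science workers). It is an order-of-magnitude illustration, not a careful model.

```python
# Rough sketch of the order-of-magnitude argument above, under its own
# simplifying assumptions (not a careful model).
p_agi_soon = 1e-2              # the 10^-2 lower bound accepted for the first piece
critical_slots = 10            # assumed number of "critical role" slots
workers = 3.3e6                # computer and mathematical science occupations
aware_of_fai = workers * 1e-2  # ~1 in 10^2 has even heard of FAI -> ~3.3e4 people

p_critical_role = p_agi_soon * critical_slots / aware_of_fai
print(f"{p_critical_role:.0e}")  # -> ~3e-06 under these assumptions

# To drive the product down to 10^-9 you would instead need a pool of roughly
# 10^8 comparably qualified candidates:
candidates_needed = p_agi_soon * critical_slots / 1e-9
print(f"{candidates_needed:.0e}")  # -> 1e+08
```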
It is possible to disagree on probabilities, but it is not possible to disagree by more than a couple orders of magnitude unless someone is either missing crucial information, has made a math error, or doesn’t know how to compute probabilities. And not knowing how to compute probabilities is the norm; it’s a rare skill that has to be specifically cultivated, and there’s no shame in not having it yet. But it is a prerequisite for some (though certainly not all) of the discussions that take place here. And the impression that I got was that you jumped into a discussion you weren’t ready for, and then let the need to be self-consistent guide your arguments in untruthful directions. I think others got the same impression as well. We call this motivated cognition—it’s a standard bias that everyone suffers from to some degree—and avoiding it is also a rare skill that must be specifically cultivated, and there is no shame in not having that skill yet, either. But until you develop your rationality skills further, Eliezer isn’t going to engage with you, and it would be a mistake for him to do so.
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
I can’t engage with your statement here unless you quantify the phrase “not-too-distant future.”
For the second piece, as simplifying assumptions, assume there are only 10^1 “critical role” slots, and that they’re assigned randomly out of all the people who might plausibly work on Friendly AI. (Since we’re only going for an order of magnitude, we’re allowed to make simplifying assumptions like this; and we have to do so, because otherwise the problem is intractable.) In order to get a probability of 10^-9, you would need to come up with 10^8 candidate AGI researchers, each qualified to a degree similar to Eliezer. By comparison, there are 3.3×10^6 people working in all computer and mathematical science occupations put together, of whom maybe 1 in 10^2 has even heard of FAI and none have dedicated their lives to it.
Two points here:
•Quoting a comment that I wrote in July:
I’m very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (by virtue of the fact that the most talented researchers are very rare, very few people are currently thinking about these things, and my belief that the correlation between currently thinking about these things and having talent is weak).
•You seem to be implicitly assuming that Friendly AI will be developed before unFriendly AI. This implicit assumption is completely ungrounded.
It is possible to disagree on probabilities, but it is not possible to disagree by more than a couple orders of magnitude unless someone is either missing crucial information, has made a math error, or doesn’t know how to compute probabilities. And not knowing how to compute probabilities is the norm; it’s a rare skill that has to be specifically cultivated, and there’s no shame in not having it yet. But it is a prerequisite for some (though certainly not all) of the discussions that take place here.
I agree with all of this.
And the impression that I got was that you jumped into a discussion you weren’t ready for, and then let the need to be self-consistent guide your arguments in untruthful directions. I think others got the same impression as well.
I can understand how you might have gotten this impression. But I think that it’s important to give people the benefit of the doubt up to a certain point. Too much willingness to dismiss what people say on account of doubting their rationality is conducive to groupthink and confirmation bias.
But until you develop your rationality skills further, Eliezer isn’t going to engage with you, and it would be a mistake for him to do so.
In line with my comment above, I’m troubled by the fact that you’ve so readily assumed that my rationality skills are insufficiently developed for it to be worth Eliezer’s time to engage with me.
I’m very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude.
Not only that, but sophisticated pure mathematics will surely supply the substance of FAI theory. I’m thinking especially of Ketan Mulmuley’s research program, applying algebraic geometry to computational complexity theory. Many people think it’s the most promising approach to P vs NP.
It has been suggested that the task of Friendly AI boils down to extracting the “human utility function” from the physical facts, and then “renormalizing” this using “reflective decision theory” to produce a human-relative friendly utility function, and then implementing this using a cognitive architecture which is provably stable under open-ended self-directed enhancement. The specification of the problem is still a little handwavy and intuitive, but it’s not hard to see solid, well-defined problems lurking underneath the suggestive words, and it should be expected that the exact answers to those problems will come from a body of “theory” as deep and as lucid as anything presently existing in pure math.
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
We have to assign probabilities to artificial intelligence being first created on Earth over the Earth’s entire lifetime.
So what probability should we give to the first non-biological intelligence being created in the period between 3 million years and 3 million and 50 years from now (not necessarily by humans)? Would it be greater than or less than 10^-2? If less than that, what justifies your confidence in that statement rather than your confidence that it will be created soon?
We have to get all these probabilities to sum to the chance we assign to AI ever being created over the lifetime of the Earth. So I don’t see how we can avoid assigning very small probabilities to AI being created at certain times.
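As a minimal illustration of this point (with made-up figures, and a uniform spread that nobody in the thread is actually proposing): if the total probability of AI ever being created is spread over the Earth’s remaining lifetime, any single 50-year window far in the future can only receive a tiny share.

```python
# Minimal illustration of spreading a fixed total probability over many
# 50-year windows. The uniform spread and the figures are illustrative
# assumptions, not a forecast anyone in the thread has endorsed.
p_ai_ever = 0.8        # hypothetical total probability of AI ever being created
remaining_years = 1e9  # rough order of magnitude for Earth's remaining habitable lifetime
windows = remaining_years / 50

p_per_window = p_ai_ever / windows
print(f"{p_per_window:.0e}")  # -> 4e-08 per 50-year window under a uniform spread
```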