I actually think self-driving cars are more interesting than strong Go-playing programs (but they don’t worry me much either).
I guess I am not sure why I should pay attention to EY’s opinion on this. I do ML-type stuff for a living. Does EY have an unusual track record for predicting anything? All I see is a long tail of vaguely silly things he says online that he later renounces (e.g. “ignore stuff EY_2004 said”). To be clear: moving away from bad opinions is great! That is not the issue.
edit: In general I think LW really, really doesn’t listen to experts enough (I don’t even mean myself; I just mean that the sensible Bayesian thing to do is to go with the expert-opinion prior on almost everything). EY et al. take great pains to move people away from that behavior, talking about how the world is mad, about civilizational inadequacy, etc. In other words: don’t trust experts, they are crazy anyway.
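To make the “expert opinion as prior” point concrete, here is a minimal sketch of the Bayesian move being described, in Python and with entirely made-up numbers: take an aggregate expert estimate as your prior and update it by a likelihood ratio for whatever new evidence comes in.

```python
# Illustrative sketch only: treat an aggregate expert forecast as a prior
# and update on one piece of evidence. All numbers here are made up.

def bayes_update(prior_p, likelihood_ratio):
    """Convert the prior to odds, multiply by the likelihood ratio, convert back."""
    prior_odds = prior_p / (1 - prior_p)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.30  # hypothetical probability taken from an expert survey
lr = 2.0      # hypothetical evidence judged 2x more likely if the claim is true

print(round(bayes_update(prior, lr), 2))  # ~0.46: a modest, not dramatic, update
```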
I’m not going to argue that you should pay attention to EY. His arguments convince me, but if they don’t convince you, I’m not gonna do any better.
What I’m trying to get at is, when you ask “is there any evidence that will result in EY ceasing to urgently ask for your money?”… I mean, I’m sure there is such evidence, but I don’t wish to speak for him. But it feels to me that by asking that question, you possibly also think of EY as the sort of person who says: “this is evidence that AI risk is near! And this is evidence that AI risk is near! Everything is evidence that AI risk is near!” And I’m pointing out that no, that’s not how he acts.
While we’re at it, this exchange between us seems relevant. (“Eliezer has said that security mindset is similar, but not identical, to the mindset needed for AI design.” “Well, what a relief!”) You seem surprised, and I’m not sure what about it was surprising to you, but I don’t think you should have been surprised.
Basically, even if you’re right that he’s wrong, I feel like you’re wrong about how he’s wrong. You seem to have a model of him which is very different from my model of him.
(Btw, his opinion seems to be that it’s AlphaGo’s methods, not its results, that make it more of a leap than a self-driving car or Deep Blue. Not sure whether that affects your position.)
In particular, he apparently mentioned Go play as an indicator before (and, like many other people, assumed it was somewhat more distant), and is now following up on that threshold. What else would you expect? That he not name a limited number of relevant events? (I assume the number is limited; I didn’t know of this specific one before.)
I think you misunderstood me (but that’s my fault for being opaque; cadence is hard to convey in text). I was being sarcastic. In other words, I don’t need EY’s opinion, I can just look at the problem myself (as you guys say, “argument screens authority”).
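For readers unfamiliar with the “argument screens authority” slogan, here is a toy illustration (my own construction, not anyone’s actual model): a small joint distribution in which argument quality drives both the truth of a claim and an authority’s endorsement, so that once you have evaluated the argument yourself, the endorsement carries no further information.

```python
# Toy joint distribution illustrating "argument screens off authority".
# Structure (purely illustrative): argument quality G influences both whether
# the claim is true (T) and whether an authority endorses it (A).

from itertools import product

P_G = {1: 0.5, 0: 0.5}          # argument is good / bad
P_T_given_G = {1: 0.8, 0: 0.3}  # P(claim true | argument quality)
P_A_given_G = {1: 0.9, 0: 0.2}  # P(authority endorses | argument quality)

joint = {}
for g, t, a in product([0, 1], repeat=3):
    p_t = P_T_given_G[g] if t else 1 - P_T_given_G[g]
    p_a = P_A_given_G[g] if a else 1 - P_A_given_G[g]
    joint[(g, t, a)] = P_G[g] * p_t * p_a

def cond(event, given):
    """P(event | given), where both are predicates over (g, t, a)."""
    num = sum(p for k, p in joint.items() if event(*k) and given(*k))
    den = sum(p for k, p in joint.items() if given(*k))
    return num / den

# Authority alone is evidence about the claim...
print(cond(lambda g, t, a: t == 1, lambda g, t, a: a == 1))             # ~0.71
print(cond(lambda g, t, a: t == 1, lambda g, t, a: True))               # 0.55
# ...but once you condition on the argument, it adds nothing:
print(cond(lambda g, t, a: t == 1, lambda g, t, a: g == 1))             # 0.80
print(cond(lambda g, t, a: t == 1, lambda g, t, a: g == 1 and a == 1))  # 0.80
```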
Look, I met EY and chatted with him. I don’t think EY is “evil,” exactly, in the way that L. Ron Hubbard was. I think he mostly believes his line (but humans are great at self-deception). I think he’s a flawed person, like everyone else. It’s just that he has an enormous influence on the rationalist community, which immensely magnifies the damage his normal human flaws and biases can do.
I always said that the way to repair human frailty issues is to treat rationality as a job (rather than a social club), and fellow rationalists as coworkers (rather than tribe members). I also think MIRI should stop hitting people up for money and get a normal funding stream going. You know, let their ideas of how to avoid UFAI compete in the normal marketplace of ideas.
Currently MIRI gets its funding from (1) donations and (2) grants. Isn’t that exactly the normal funding stream for non-profits?
Sure. Scientology probably has non-profits, too. I am not saying MIRI is anything like Scientology, merely that it isn’t enough to just determine legal status and call it a day; we have to look at the type of thing the non-profit is.
MIRI is a research group. They call themselves an institute, but they aren’t, really. Institutes are large. They are working on some neat theory stuff (from what Benja/EY explained to me) somewhat outside the mainstream. Which is great! They have some grant funding, actually, last I checked. Which is also great!
They are probably not yet financially secure enough to stop asking for money, which is also okay.
I think all I am saying is that, in my view, the success condition is that they “achieve orbit” and stop asking, because basically what they are working on is considered sufficiently useful research that they can operate like a regular research group. If they never stop asking, I think that’s a bit weird, because either their direction isn’t perceived as good and they can’t get enough funding bandwidth without donations, or they do have enough bandwidth but want more revenue anyway, which I personally would find super weird and unsavory.
Who is? Last I checked, Harvard was still asking alums for donations, which suggests to me that asking is driven by getting money more than it’s driven by needing money.
I think comparing Harvard to a research group is a type error, though. Research groups don’t typically do this. I am not going to defend Unis shaking alums down for money, especially given what they do with it.
I know several research groups where the PI’s sole role is fundraising, despite them having much more funding than the average research group.
My point was more generic—it’s not obvious to me why you would expect groups to think “okay, we have enough resources, let’s stop trying to acquire more” instead of “okay, we have enough resources to take our ambitions to the next stage.” The American Cancer Society has about a billion dollar budget, and yet they aren’t saying “yeah, this is enough to deal with cancer, we don’t need your money.”
(It may be the case that a particular professor stops writing grant applications, because they’re limited by attention they can give to their graduate students. But it’s not like any of those professors will say “yeah, my field is big enough, we don’t need any more professor slots for my students to take.”)
In my experience, research groups exist inside universities or a few corporations like Google. The senior members are employed and paid for by the institution, and only the postgrads, postdocs, and equipment beyond basic infrastructure are funded by research grants. None of them fly “in orbit” by themselves but only as part of a larger entity. Where should an independent research group like MIRI seek permanent funding?
By “in orbit” I mean “funded by grants rather than charity.” If a group has a steady stream of research grants, that means they are doing good enough work that funding agencies continue to give them money. This is the standard way for a research group to be self-sustaining.
What would make you worried that strong AI is near?
This is a good question. I think: lots of funding incentives to build integrated systems (like self-driving cars, but for other domains), and enough of a talent pipeline to start making that stuff happen and to create incremental improvements. People in general underestimate the systems-engineering aspect of getting artificially intelligent agents to work in practice, even in fairly limited settings like driving a car.
Go is a hard game, but it is a toy problem in a way that dealing with the real world isn’t. I am worried about economic incentives making it worth people’s while to keep throwing money and people at real, actual systems that do intelligent things in the world, and iterating on them. Even fairly limited things at first.
What do you mean by this exactly? That the real world has much wider combinatorial problems, or that dealing with the real world does not reduce well to search in a tree of possible actions?
I think getting this working took a lot of effort and insight, and I don’t mean to discount this effort or insight at all. I couldn’t do what these guys did. But what I mean by “toy problem” is that it avoids a lot of the stuff about the physical world, hardware, laws, economics, etc. that comes up when you try to build real things like cars, robots, or helicopters.
In other words, I think it’s great people figured out the ideal rocket equation. But somehow it will take a lot of elbow grease (that Elon Musk et al are trying to provide) to make this stuff practical for people who are not enormous space agencies.
I don’t think that’s a fair criticism on that point. As far as I understand, MIRI did run the biggest survey of AI experts asking when those experts predict AGI will arrive:
A recent set of surveys of AI researchers produced the following median dates:
for human-level AI with 10% probability: 2022
for human-level AI with 50% probability: 2040
for human-level AI with 90% probability: 2075
When EY says that this news shows that we should put a significant amount of our probability mass before 2050, that doesn’t contradict expert opinion.
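As a rough sanity check on that reading of the survey numbers quoted above, one can linearly interpolate between the reported percentiles. The linearity assumption is mine and this is only a sketch, but it puts roughly 60% of the surveyed probability mass before 2050.

```python
# A crude check using only the three survey medians quoted above
# (10% by 2022, 50% by 2040, 90% by 2075) and straight-line interpolation
# between them. The linearity assumption is mine, not the survey's.

percentiles = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]

def prob_by(year):
    """Linearly interpolate cumulative probability of human-level AI by `year`."""
    if year <= percentiles[0][0]:
        return percentiles[0][1]
    for (y0, p0), (y1, p1) in zip(percentiles, percentiles[1:]):
        if year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)
    return percentiles[-1][1]

print(round(prob_by(2050), 2))  # ~0.61 under these (strong) assumptions
```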
Sure, but it’s not just about what experts say on a survey about human-level AI. It’s also about what information a good Go program provides for this question, and whether MIRI’s program makes any sense (and whether it should take people’s money). People here didn’t say “oh, experts said X, I am updating”; they said “EY said X on facebook, time for me to change my opinion.”
My reaction was more “oh, EY made a good argument about why this is a big deal, so I’ll take that argument into account”.
Presumably a lot of others felt the same way; attributing the change in opinion to just a deference for tribal authority seems uncharitable.
Say I am worried about this tribal thing happening a lot—what would put my mind more at ease?
I don’t know your mind, you tell me? What exactly is it that you find worrying?
My possibly-incorrect guess is that you’re worried about something like “the community turning into an echo chamber that only promotes Eliezer’s views and makes its members totally ignore expert opinion when forming their views”. But if that was your worry, the presence of highly upvoted criticisms of Eliezer’s views should do a lot to help, since it shows that the community does still take into account (and even actively reward!) well-reasoned opinions that show dissent from the tribal leaders.
So since you still seem to be worried despite the presence of those comments, I’m assuming that your worry is something slightly different, but I’m not entirely sure of what.
One problem is that the community has few people actually engaged enough with cutting-edge AI / machine learning / whatever-the-respectable-people-call-it-this-decade research to have opinions that are grounded in where the actual research is right now. So a lot of the discussion is going to consist of people either staying quiet or giving uninformed opinions to keep the conversation going. And what incentive structures exist here mostly work for a social club, so there aren’t really that many checks and balances keeping things from drifting further away from actual reality and toward the local social reality.
Ilya actually is working with cutting edge machine learning, so I pay attention to his expressions of frustration and appreciate that he persists in hanging out here.
Agreed both with this being a real risk, and it being good that Ilya hangs out here.
Who do you think said “EY said X on facebook, time for me to change my opinion” in this case?
Just to be clear about your position, what do you think are reasonable values for human-level AI with 10% probability, human-level AI with 50% probability, and human-level AI with 90% probability?
I think the question in this thread is about how much the deep-learning Go program should move my beliefs about this, whatever they may be. My answer is “very little in a sooner direction” (just because it is a successful example of getting a complex thing working). The question wasn’t “what are your beliefs about how far off human-level AI is” (mine are centered fairly far out).
I think this debate is quite hard with vague terms like “very little” and “far out”. I really do think it would be helpful for other people trying to understand your position if you put down your numbers for those predictions.
The point is how much we should update our AI future timeline beliefs (and associated beliefs about whether it is appropriate to donate to MIRI and how much) based on the current news of DeepMind’s AlphaGo success.
There is a difference between “Gib moni plz because the experts say that there is a 10% probability of human-level AI within 2022” and “Gib moni plz because of AlphaGo”.
I understand IlyaShpitser to claim that there are people who update their AI future timeline beliefs in a way that isn’t appropriate because of EY statements. I don’t think that’s true.
I don’t have a source on this, but I remember an anecdote from Kurzweil that scientists who worked on early transistors were extremely skeptical about the future of the technology. They were so focused on solving specific technical problems that they didn’t see the big picture. Whereas an outsider could have just looked at the general trend and predicted a doubling every 18 months, and they would have been accurate for at least 50 years.
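For what the anecdote’s arithmetic is worth, a doubling every 18 months sustained for 50 years works out to roughly a ten-billion-fold improvement; a two-line check:

```python
# Back-of-the-envelope arithmetic for the anecdote above: a doubling every
# 18 months sustained for 50 years.
doublings = 50 * 12 / 18
print(doublings)       # ~33.3 doublings
print(2 ** doublings)  # ~1e10, i.e. roughly a ten-billion-fold improvement
```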
So that’s why I wouldn’t trust various ML experts like Ng who have said not to worry about AGI. No, the specific algorithms they work on are not anywhere near human level. But the general trend, and the proof that humans aren’t really that special, are concerning.
I’m not saying that you should just trust Yudkowsky or me instead. And expert opinion still has value. But maybe pick an expert who is more “big picture” focused? Perhaps Jürgen Schmidhuber, who has done a lot of notable work on deep learning and ML, but also has an interest in general intelligence and self-improving AIs.
I don’t have any specific prediction from him on when we will reach AGI. But he did say last year that he believes we will reach monkey-level intelligence in 10 years, which would be a huge milestone.
Another candidate might be the group being discussed in this thread, DeepMind. They are focused on reaching general AI instead of just typical machine-vision work. That’s why they have such a strong interest in game playing. I don’t have any specific predictions from them either, but I do get the impression they are very optimistic.
As for the claim that an outsider could have just looked at the general trend and predicted a doubling every 18 months: I’m not buying this.
There are tons of cases where people look at the current trend and predict it will continue unabated into the future. Occasionally they turn out to be right, mostly they turn out to be wrong. In retrospect it’s easy to pick “winners”, but do you have any reason to believe it was more than a random stab in the dark which got lucky?
If you were trying to predict the future of flight in 1900, you’d do pretty terribly by surveying experts. You would do far better by taking a Kurzweil-style approach where you put combustion-engine performance on a chart and compare it to estimates of the power-to-weight ratios required for flight.
The point of that comment wasn’t to praise predicting with trends. It was to show an example where experts are sometimes overly pessimistic and not looking at the big picture.
When people say that current AI sucks, and progress is really hard, and they can’t imagine how it will scale to human level intelligence, I think it’s a similar thing. They are overly focused on current methods and their shortcomings and difficulties. They aren’t looking at the general trend that AI is rapidly making a lot of progress. Who knows what could be achieved in decades.
I’m not talking about specific extrapolations like Moore’s law, or even ImageNet benchmarks, just the general sense of progress every year.
This claim doesn’t make much sense from the outset. Look at your specific example of transistors. In 1965, an electronics magazine wanted to figure out what would happen over time with electronics and transistors, so they called up an expert: the director of research at Fairchild Semiconductor. Gordon Moore (that director of research) proceeded to coin Moore’s law and tell them the doubling would continue for at least a decade, probably more. Moore wasn’t an outsider; he was an expert.
You then generalize from an incorrect anecdote.
I never said that every engineer at every point in time was pessimistic, just that many of them were at one time. And I said it was a second-hand anecdote, so take that for what it’s worth.
You have to be more specific with the timeline. The transistor was first patented in 1925 but received little interest due to many technical problems. It took three decades of research before the first commercial transistors were produced by Texas Instruments in 1954.
Gordon Moore formulated his eponymous law in 1965, while he was director of R&D at Fairchild Semiconductor, a company whose entire business consisted of manufacturing transistors and integrated circuits. By that time, tens of thousands of transistor-based computers were in active commercial use.
It wouldn’t have made a lot of sense to predict any doublings for transistors in an integrated circuit before 1960, because integrated circuits were only invented in 1958–59.
In what specific areas do you think LWers are making serious mistakes by ignoring or not accepting strong enough priors from experts?
As I said, the ideal is to use expert opinion as a prior unless you have a lot of good information, or you think something is uniquely dysfunctional about an area (it’s rationalist folklore that a lot of areas are dysfunctional, “the world is mad”, but I think people are being silly about this). Experts really do know a lot.
You also need to figure out who the actual experts are and what they actually say. That’s a non-trivial task: reading reports on science in the mainstream media will just stuff your head with nonsense.
It’s true, reading/scholarship is hard (even for scientists).
It’s actually much worse than that, because huge breakthroughs themselves are what create new experts. So on the eve of a huge breakthrough, the currently recognized experts invariably predict that it is far away, simply because they can’t see the novel path towards the solution.
In this sense, everyone who is currently an AI expert is, trivially, someone who has so far failed to create AGI. The only experts who have any sort of clear understanding of how far off AGI is are either not currently recognized or do not yet exist.
Btw, I don’t consider myself an AI expert. I am not sure what “AI expertise” entails, I guess knowing a lot about lots of things that include stuff like stats/ML but also other things, including a ton of engineering. I think an “AI expert” is sort of like “an airplane expert.” Airplanes are too big for one person—you might be an expert on modeling fluids or an expert on jet engines, but not an expert on airplanes.
AI, general singularitarianism, cryonics, life extension?
And the many-worlds interpretation of quantum mechanics. That is, all EY’s hobby horses. Though I don’t know how common these positions are among the unquiet spirits that haunt LessWrong.
My thoughts exactly.