Why do you think it’s so near? I don’t see many others taking that position even among those who are already concerned about AGI (like around here).
This is my adopted long-term field—though professionally I work as a bitcoin developer right now—and those estimates are my own. 1-2 decades is based on existing AGI work such as OpenCog, and on what is known about the work on generalizing narrow AI being done by Google and a few smaller startups. These are reasonable extrapolations based on published project plans, the authors’ opinions, and, in the case of OpenCog, my own evaluation of the code. 5 years is what it would take if money were not a concern. 2 years is based on my own, unpublished simplification of the CogPrime architecture meant as a blitz to seed-stage oracle AGI, under the same money-is-no-concern conditions.
The only extrapolations I’ve seen around here, e.g. by lukeprog, involve statistically sampling AI researchers’ opinions. Stuart Armstrong showed a year or two ago just how inaccurate this method has been historically, and gave concrete reasons why such statistical methods are useless in this case.
You rate your ability to predict AI above AI researchers? It seems to me that at best, I as an independent observer should give your opinion about as much weight as any AI researcher. Any concerns with the predictions of AI researchers in general should also apply to your estimate. (With all due respect.)
This is required reading for anyone wanting to extrapolate AI researcher predictions:
https://intelligence.org/files/PredictingAI.pdf
In short, asking AI researchers (including myself) their opinions is probably the worst way to get an answer here. What you need to do instead is learn the field, try your hand at it yourself, ask AI researchers what they feel are the remaining unsolved problems, investigate those answers, and most critically form your own opinion. That’s what I did, and where my numbers came from.
If several people follow this procedure, I would expect to get a better estimate from averaging their results than trying it out for myself.
That’s a reasonable expectation. But inasmuch as one can expect AI researchers to have gone through this exercise in the past (and this is where the problem lies, I think), the data is apparently not predictive. Kaj Sotala and Stuart Armstrong looked at this in some detail, with MIRI funding. Some highlights:
“There is little difference between experts and non-experts”
“There is little difference between current predictions, and those known to have been wrong previously”
“It is not unlikely that recent predictions are suffering from the same biases and errors as their predecessors”
http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/
https://intelligence.org/files/PredictingAI.pdf
In other words, asking AI experts is about as useless as it can get when it comes to making predictions about future AI developments. This includes myself, objectively. What I advocate people do instead is what I did: investigate the matter yourself and make your own evaluation.
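To make the averaging point concrete, here is a minimal, purely illustrative sketch (the spreads and sample sizes are made-up numbers, not anything measured about AI forecasts): averaging many opinions only helps when the errors are independent; if every forecaster shares the same bias, the average of a hundred opinions inherits that bias almost untouched, which is the failure mode the Sotala and Armstrong findings point at.

    import random

    def mean_abs_error_of_average(n_experts, shared_bias_sd, private_noise_sd, trials=10000):
        # Each expert's error = a bias shared by everyone + that expert's private noise.
        # Returns the average absolute error of the pooled (averaged) estimate.
        total = 0.0
        for _ in range(trials):
            shared_bias = random.gauss(0, shared_bias_sd)
            errors = [shared_bias + random.gauss(0, private_noise_sd)
                      for _ in range(n_experts)]
            total += abs(sum(errors) / n_experts)
        return total / trials

    # Independent errors only: averaging 100 opinions shrinks the error dramatically.
    print(mean_abs_error_of_average(100, shared_bias_sd=0.0, private_noise_sd=10.0))
    # A shared bias dominates: averaging 100 opinions barely helps.
    print(mean_abs_error_of_average(100, shared_bias_sd=10.0, private_noise_sd=10.0))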
It sounds to me as though you are aware that your estimate for when AI will arrive is earlier than most estimates, but you’re also aware that the reference class your estimate is part of is not especially reliable. So instead of pushing your estimate as the one true estimate, you’re encouraging others to investigate in case they discover what you discovered (because if your estimate is accurate, that would be important information). That seems pretty reasonable. Another thing you could do is create a discussion post where you lay out in detail the specific steps you took to reach the conclusion that AI will come relatively early, and get others to check your work directly that way. It could be especially persuasive if you were to contrast your procedure with the one you think was used to generate other estimates and explain why you think that procedure was flawed.
“What I discovered” was that all the pieces for a seed AGI exist, have been demonstrated to work as advertised, and could be assembled together rather quickly if adequate resources were available to do so. Really, all that is required is rolling up our sleeves and doing the major integrative work of putting the pieces together.
With designs that are public knowledge (albeit not contained in one place), this could be done as a well-funded project on the order of 5 years—an assessment that concurs with what the leaders of the project I am thinking of have said as well.
My own unpublished contribution is a refinement of this particular plan which strips out those pieces not strictly needed for a seed UFAI (those components being learnt by the AI rather than hand-coded), and tweaks the remaining structure slightly in order to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but, more scarily, the actual resources needed are quite small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.
But here’s the conundrum, as was mentioned in one of the other sub-threads: how do I convince you of that without walking you through the steps involved in creating a UFAI? If I am right, I would then have posted on the internet blueprints for the destruction of humankind. Then the race would really be on.
So what can I do, except encourage people to walk the same path I did, and see if they come to the same conclusions?
That’s assuming people take you seriously. Even if your plan is solid, probably most people will write you off as another Crackpot Who Thinks He’s Solved an Important Problem.
But I do agree it’s a bit of a conundrum. If you have what you think is an important idea, it’s natural to worry that people will either (1) steal your idea or (2) criticize it not because it’s not a great idea but because they want to feel superior.
I think you entirely missed the point.
I would agree with this in the sense that my stated reasons for the “conundrum” are a bit different from yours.
Well perhaps instead of insinuating motives, you could share your thoughts about the actual stated reason? At what point does one have a moral obligation not to share information about a dangerous idea on a public forum?
I was thinking of my own motives in similar situations, sorry if you took it as a characterization of yours. I do see it could have been read that way.
I would suggest you e-mail your blueprint to a few of the posters here with the understanding they keep it to themselves. If even one long-term poster says “I’ve read Friedenbach’s arguments and while they are confidential, I now agree that his estimate of the time to AI is actually pretty good,” then I think your argument is starting to become persuasive.
Sorry I didn’t mean to come off so abrasively either. I was just being unduly snarky. The internet is not good for conveying emotional state :\
If you’ve solved stable self-improvement issues, that’s FAI work, and you should damn well share that component.
[retracted]
Read the OP; I didn’t make any boastful claims. I simply said UFAI is 2-5 years away with focused effort, and 10-20 years away otherwise. I therefore believe it is important that FAI research be refocused on near-term solutions. I state this publicly in order to counter the entrenched meme that seems to have infected everyone here, the one that says AI is X years away, where X is some arbitrary number that by golly seems like a lot, in the hope that some people who encounter the post will consider refocusing on near-term work. What’s wrong with that?
Disregard my reply. I really shouldn’t be posting from my phone at 2 AM. Such a venture rarely ends well.
Yeah, I’ve been there before. No worries ;)
Hey, speaking as an AI layman, how do you rate the odds that a design based on OpenCog could foom? I haven’t really dug into that codebase, but from reading the Wiki my impression is that it’s a bit of a heap left behind by multiple contributors trying to make different parts of it work for their own ends, and that if a coherent whole could be wrought from it, it would be too complex to feasibly understand itself. In that sense: how far out do you think OpenCog is from containing a complete operational causal model of its own codebase and operation? How much of it would have to be modified or rewritten to reach this point?
I don’t entirely endorse the algorithms behind OpenCog and such, but I do share the forecasting timeline. Modern work in hierarchical learning, probabilities over sentences (and thus learning and inference over structured knowledge), planning as inference… basically, I’ve been reading enough papers to say that we’re definitely starting to see the pieces emerge that embody algorithms for actual, human-level cognition. We will soon confront the question, “Yes, we have all these algorithms, but how do we put them together into an agent?”
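For readers unfamiliar with the phrase, planning as inference can be shown with a deliberately tiny sketch: pick actions by conditioning on success and asking which first action the successful trajectories favor. The grid world, horizon, and sample counts below are all invented for the example; this is a crude rejection-sampling illustration, not anyone's actual system.

    import random
    from collections import defaultdict

    # Toy 1-D world: states 0..GOAL, actions move left or right, rollouts last HORIZON steps.
    GOAL, HORIZON, SAMPLES = 5, 8, 20000

    def step(state, action):
        return max(0, min(GOAL, state + (1 if action == "right" else -1)))

    def plan_as_inference(start):
        # Infer P(first action | goal reached) by sampling rollouts under a uniform
        # action prior and keeping only those that end at the goal.
        counts = defaultdict(int)
        for _ in range(SAMPLES):
            state, first = start, None
            for _ in range(HORIZON):
                action = random.choice(["left", "right"])
                if first is None:
                    first = action
                state = step(state, action)
            if state == GOAL:  # conditioning on success
                counts[first] += 1
        total = sum(counts.values()) or 1
        return {a: c / total for a, c in counts.items()}

    print(plan_as_inference(start=2))  # "right" comes out far more probable than "left"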
I also think that most if not all of the parts needed for AGI are already there and ‘only’ need to be integrated. But that is actually the hard part. It is kind of comparable to our understanding of the human brain: we know how most modules work—or at least how we can produce comparable results—but not how these are integrated. Just adding a meta level to Cog and plugging in domain-specific modules wouldn’t do, at least not by itself.
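To give a feel for what “integration” means beyond bolting plugins onto a meta level, here is a toy blackboard-style sketch (it has nothing to do with OpenCog’s actual API; every module name and fact in it is made up): modules communicate only through a shared store of facts, and a controller loop decides when each one has something new to contribute. Even at this scale you are forced to agree on a common representation and a control policy, and that is exactly the part that stays hard at full scale.

    # Modules communicate only through a shared store of facts; a controller loop
    # keeps cycling until no module has anything new to contribute.
    blackboard = {("percept", "it_is_raining")}  # seed fact, invented for the example

    def inference_module(facts):
        # Derives a belief from a percept.
        if ("percept", "it_is_raining") in facts:
            return {("belief", "ground_is_wet")}
        return set()

    def planning_module(facts):
        # Proposes an action once the relevant belief exists.
        if ("belief", "ground_is_wet") in facts:
            return {("action", "take_umbrella")}
        return set()

    modules = [inference_module, planning_module]

    changed = True
    while changed:
        changed = False
        for module in modules:
            new_facts = module(blackboard) - blackboard
            if new_facts:
                blackboard |= new_facts
                changed = True

    print(blackboard)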