AGI is only 1-2 decades away. Or 2-5 years if a well-funded project started now. I don’t think that is enough time for a meaningful reaction by society, even just its upper echelons.
I would be very concerned about the “out of nowhere” outcome, especially now that the AI winter has thawed. We have the tools, and we have the technology to do AGI now. Why assume that it is decades away?
This is my adopted long-term field—though professionally I work as a bitcoin developer right now—and those estimates are my own. 1-2 decades is based on existing AGI work such as OpenCog, and what is known about generalizations of narrow AI being done by Google and a few smaller startups. These are reasonable extrapolations based on published project plans, the authors’ opinions, and my own evaluation of the code in the case of OpenCog. 5 years is what it would take if money were not a concern. 2 years is based on my own, unpublished simplification of the CogPrime architecture, intended as a blitz path to a seed-stage oracle AGI, under the same money-is-no-concern conditions.
The only extrapolations I’ve seen around here, e.g. by lukeprog, involve statistically sampling AI researchers’ opinions. Stuart Armstrong showed a year or two ago just how historically inaccurate this method is, and gave concrete reasons why such statistical methods are useless in this case.
You rate your ability to predict AI above AI researchers? It seems to me that at best, I as an independent observer should give your opinion about as much weight as any AI researcher. Any concerns with the predictions of AI researchers in general should also apply to your estimate. (With all due respect.)
In short, asking AI researchers (including myself) their opinions is probably the worst way to get an answer here. What you need to do instead is learn the field, try your hand at it yourself, ask AI researchers what they feel are the remaining unsolved problems, investigate those answers, and most critically form your own opinion. That’s what I did, and where my numbers came from.
If several people follow this procedure, I would expect to get a better estimate from averaging their results than trying it out for myself.

That’s a reasonable expectation. But inasmuch as one can expect AI researchers to have gone through this exercise in the past (this is where the problem is, I think), the data is apparently not predictive. Kaj Sotala and Stuart Armstrong looked at this in some detail, with MIRI funding (see http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ and https://intelligence.org/files/PredictingAI.pdf). Some highlights:
“There is little difference between experts and non-experts”
“There is little difference between current predictions, and those known to have been wrong previously”
“It is not unlikely that recent predictions are suffering from the same biases and errors as their predecessors”
In other words, asking AI experts is about as useless as it can get when it comes to making predictions about future AI developments. This includes myself, objectively. What I advocate people do instead is what I did: investigate the matter yourself and make your own evaluation.
It sounds to me as though you are aware that your estimate for when AI will arrive is earlier than most estimates, but you’re also aware that the reference class of which your estimate is a part is not especially reliable. So instead of pushing your estimate as the one true estimate, you’re encouraging others to investigate in case they discover what you discovered (because if your estimate is accurate, that would be important information). That seems pretty reasonable. Another thing you could do is create a discussion post where you lay out in detail the specific steps you took to reach the conclusion that AI will come relatively early, and get others to check your work directly that way. It could be especially persuasive if you were to contrast that with the procedure you think was used to generate other estimates and explain why you think that procedure was flawed.
“What I discovered” was that all the pieces for a seed AGI exist, are demonstrated to work as advertised, and could be assembled together rather quickly if adequate resources were available to do so. Really all that is required is rolling up our sleeves and doing some major integrative work in putting the pieces together.
With designs that are public knowledge (albeit not contained in one place), this could be done as a well-funded project on the order of 5 years—an assessment that concurs with what the leaders of the project I am thinking of say as well.
My own unpublished contribution is a refinement of this particular plan which strips out those pieces not strictly needed for a seed UFAI (these components being learnt by the AI rather than hand coded), and tweaks the remaining structure slightly in order to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but more scarily the actual resources needed are quite small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.
But here’s the conundrum, as was mentioned in one of the other sub-threads: how do I convince you of that, without walking you through the steps involved in creating a UFAI? If I am right, I would then have posted on the internet blueprints for the destruction of humankind. Then the race would really be on.
So what can I do, except encourage people to walk the same path I did, and see if they come to the same conclusions?
That’s assuming people take you seriously. Even if your plan is solid, probably most people will write you off as another Crackpot Who Thinks He’s Solved an Important Problem.
But I do agree it’s a bit of a conundrum. If you have what you think is an important idea, it’s natural to worry that people will either (1) steal your idea or (2) criticize it not because it’s not a great idea but because they want to feel superior.
I think you entirely missed the point.

I would agree with this in the sense that my stated reasons for the “conundrum” are a bit different from yours.
Well perhaps instead of insinuating motives, you could share your thoughts about the actual stated reason? At what point does one have a moral obligation not to share information about a dangerous idea on a public forum?
I was thinking of my own motives in similar situations, sorry if you took it as a characterization of yours. I do see it could have been read that way.
you could share your thoughts about the actual stated reason?
I would suggest you e-mail your blueprint to a few of the posters here with the understanding they keep it to themselves. If even one long-term poster says “I’ve read Friedenbach’s arguments and while they are confidential, I now agree that his estimate of the time to AI is actually pretty good,” then I think your argument is starting to become persuasive.
My own unpublished contribution is a refinement of this particular plan which strips out those pieces not strictly needed for a seed UFAI (these components being learnt by the AI rather than hand coded), and tweaks the remaining structure slightly in order to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but more scarily the actual resources needed are quite small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.
If you’ve solved stable self-improvement issues, that’s FAI work, and you should damn well share that component.
Read the OP; I didn’t make any boastful claims. I simply said UFAI is 2-5 years away with focused effort, and 10-20 years away otherwise. I therefore believe it important that FAI research be refocused on near-term solutions. I state so publicly in order to counter the entrenched meme that seems to have infected everyone here, saying that AI is X years away, where X is some arbitrary number that by golly seems like a lot, in the hope that some people who encounter the post consider refocusing on near-term work. What’s wrong with that?
Hey, speaking as an AI layman, how do you rate the odds that a design based on OpenCog could foom? I haven’t really dug into that codebase, but from reading the Wiki it’s my impression that it’s a bit of a heap left behind by multiple contributors trying to make different parts of it work for their own ends, and if a coherent whole could be wrought from it it would be too complex to feasibly understand itself. In that sense: how far out do you think OpenCog is from containing a complete operational causal model of its own codebase and operation? How much of it would have to be modified or rewritten to reach this point?
This is my adopted long-term field—though professionally I work as a bitcoin developer right now—and those estimates are my own. 1-2 decades is based on existing AGI work such as OpenCog, and what is known about generalizations of narrow AI being done by Google and a few smaller startups. These are reasonable extrapolations based on published project plans, the authors’ opinions, and my own evaluation of the code in the case of OpenCog. 5 years is what it would take if money were not a concern. 2 years is based on my own, unpublished simplification of the CogPrime architecture, intended as a blitz path to a seed-stage oracle AGI, under the same money-is-no-concern conditions.
I don’t really entirely endorse the algorithms behind OpenCog and such, but I do share the forecasting timeline. Modern work in hierarchical learning, probabilities over sentences (and thus: learning and inference over structured knowledge), planning as inference… basically, I’ve been reading enough papers to say that we’re definitely starting to see the pieces emerge that embody algorithms for actual, human-level cognition. We will soon confront the question, “Yes, we have all these algorithms, but how do we put them together into an agent?”
I also think that most if not all parts needed for AGI are already there and ‘only’ need to be integrated. But that is actually the hard part. It is comparable to our understanding of the human brain: we know how most modules work—or at least how we can produce comparable results—but not how they are integrated. Just adding a meta level to Cog plus plugins for domain-specific modules wouldn’t be enough.
20 years is on the very soon end of plausible, but 2-5 years is absolutely impossible. We just don’t have the slightest notion how we would do that, regardless of funding.
We do not have the tools or technology right now; it won’t come out of the blue.
We just don’t have the slightest notion how we would do that, regardless of funding.
Really? And what’s that opinion based on? Are you an expert in the field? I very often see this meme quoted, but no explanation to back it up.
I’m a computer scientist who has been following the AI / AGI literature for years. I have been doing my own private research (since publishing AGI work is too dangerous) based on OpenCog, pretty much since it was first open sourced, and a few other projects. I’ve looked at the issues involved in creating a seed AGI, while creating my own design for just such a system. And they are all solvable, or more often already solved but not yet integrated.
I’m a computer scientist who has been in a machine learning and natural language processing PhD program quite recently. I have an in-depth knowledge of machine learning, NLP and text mining.
In particular, I know that the broadest existing knowledge bases in the real world (e.g. Google’s Knowledge Graph) are built on a hodge-podge of text parsing and logical inference techniques. These systems can be huge in scale and very useful, and reveal that a lot of knowledge is quite shallow even if it appears deeper, but they also reveal the difficulty of dealing with knowledge that genuinely is deeper, by which I mean knowledge that relies on complex models of the world.
I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.
The pitfall with private research is that nobody sees your work, meaning there’s nobody to criticize it or tell you your assessment “the issues are solvable or solved but not yet integrated” is incorrect. Or, if it is correct and I’m dead wrong in my pessimism, nobody can know that either. Why would publishing it be dangerous (yeah, I get the general “AGI can be dangerous” thing, but what would be the actual marginal danger vs. not publishing and being left out of important conversations when they happen, assuming you’ve got something)?
In terms of practicalities, AI and AGI share two letters in common, and that’s about it. OpenCog / CogPrime is at core nothing more than an interface language specification built on hypergraphs which is capable of storing inputs, outputs, and trace data for any kind of narrow AI application. It is most importantly a platform for integrating narrow AI techniques. (If you read any of the official documentation, you’ll find most of it covers the specific narrow AI components they’ve selected, and the specific interconnect networks they are deploying. But those are secondary details to the more important contribution: the universal hypergraph language of the atomspace.)
So when you say:
I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.
It doesn’t really make sense. OpenCog solves these issues in the same way: through traditional text parsing and logical inference techniques. What’s different is that the inputs, outputs, and the way in which these components are used are fully specified inside of the system, in a data structure that is self-modifying. Think LISP: code is data (albeit using a weird hypergraph language instead of s-expressions), data is code, and the machine has access to its own source code.
That’s mostly what AGI is about: the interconnects and reflection layers which allow an otherwise traditional narrow AI program to modify itself in order to adapt to circumstances outside of its programmed expertise.
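For readers who want the “code is data” idea made concrete, here is a deliberately tiny sketch in Python (not OpenCog’s actual Atomspace API) of a hypergraph store in which ordinary knowledge and the procedures that rewrite it are both just atoms the system can enumerate and modify:

```python
# Toy illustration only -- not OpenCog's real Atomspace API.
# Both "data" atoms and the procedures that manipulate them live in the
# same store, so the system can inspect and rewrite its own machinery.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=()):
        self.type = atom_type          # e.g. "ConceptNode", "InheritanceLink"
        self.name = name               # only nodes carry names
        self.outgoing = list(outgoing) # links point at other atoms

    def __repr__(self):
        if self.name is not None:
            return f"{self.type}({self.name})"
        return f"{self.type}{self.outgoing}"

atomspace = []

def add(atom_type, name=None, outgoing=()):
    atom = Atom(atom_type, name, outgoing)
    atomspace.append(atom)
    return atom

# Ordinary knowledge: "cat inherits from animal", "animal inherits from mammal".
cat = add("ConceptNode", "cat")
animal = add("ConceptNode", "animal")
mammal = add("ConceptNode", "mammal")
add("InheritanceLink", outgoing=(cat, animal))
add("InheritanceLink", outgoing=(animal, mammal))

# A "procedure" registered as an atom of its own.  A real system would
# encode the rule body in the graph as well; here it is a Python handler.
def transitive_closure():
    inh = [a for a in atomspace if a.type == "InheritanceLink"]
    for a in inh:
        for b in inh:
            if a.outgoing[1] is b.outgoing[0]:
                add("InheritanceLink", outgoing=(a.outgoing[0], b.outgoing[1]))

add("GroundedProcedureNode", "transitive_closure")

transitive_closure()
print([a for a in atomspace if a.type == "InheritanceLink"])
```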
My two cents here are just:

1) Narrow AI is still the bottleneck to Strong AI, and a feedback loop of development, especially in the area of NLP, is what’s going to eventually crack the hardest problems.
2) OpenCog’s hypergraphs do not seem especially useful. The power of a language cannot overcome the fact that, without sufficiently strong self-modification techniques, a system will never be able to self-modify into anything useful. Interconnects and reflection just allow a program to mess itself up, not become more useful, and scale or better NLP modules alone aren’t a solution.
That’s mostly what AGI is about: the interconnects and reflection layers which allow an otherwise traditional narrow AI program to modify itself in order to adapt to circumstances outside of its programmed expertise.
Actually, what AGI is about, by definition, is to achieve human-level or higher performance in a broad variety of cognitive tasks. Whether self-modification is useful or necessary to achieve such goal is questionable.
Even if self-modification turns out to be a core enabling technology for AGI, we are still quite far from getting it to work. Just having a language or platform that allows introspection and runtime code generation isn’t enough: LISP didn’t lead to AGI. Neither did Eurisko. And, while I’m not very familiar with OpenCog, frankly I can’t see any fundamental innovation in it.
Representing code as data is trivial. The hard problem is making a machine reason about code. Automatic program verification is only barely starting to become commercially useful in a few restricted application domains, and automatic programming is still largely undeveloped with very little progress being made beyond optimizing compilers.
Having a machine write code at the level of a human programmer in 2-5 years is completely unrealistic, and 20 years looks like the bare minimum, with the realistic expectation being higher.
“Having a machine write code at the level of a human programmer” is a strawman. One can already think of machine learning techniques as the computer writing its own classification programs. These machines already “write code” (classifiers) better than any human could under the same circumstances; it just doesn’t look like code a human would write.
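As a modest concrete illustration of that point, a decision tree fit by scikit-learn can be dumped as nested if/else rules: the learned artifact is effectively a small program the machine wrote, just not one a human would write. A minimal sketch (the toy data is made up, and this assumes scikit-learn is installed):

```python
# Minimal sketch: a learned classifier rendered as human-readable rules.
# Requires scikit-learn; the training data here is a toy stand-in.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training set: [hours_studied, hours_slept] -> passed exam (1) or not (0)
X = [[1, 4], [2, 8], [6, 5], [8, 7], [3, 3], [9, 8]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The fitted tree is, in effect, a small program the machine "wrote":
print(export_text(tree, feature_names=["hours_studied", "hours_slept"]))
print(tree.predict([[7, 6]]))  # apply the learned program to a new case
```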
A significant piece of my own architecture is basically doing the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language, and then operated on by other reflective agents who are able to reason about the code due to its strong type system. This isn’t the way humans write code, and it doesn’t produce an output which looks like “source code” as we know it. But it does result in programs writing programs faster, better, and cheaper than humans writing those same programs.
Regarding what AGI is “about”, yes that is true in the strictest, definitional sense. But what I was trying to convey is how AGI is separate from narrow AI in that it is basically a field of meta-AI. An AGI approaches a problem by first thinking about how to solve the problem. It first thinks about thinking, before it thinks.
And yes, there are generally multiple ways it can actually accomplish that; e.g. rather than solving the problem directly or modifying itself to solve it, the AGI could instead output the source code for a narrow AI which does so efficiently. But if you draw the system boundary large enough, it’s effectively the same thing.
“Having a machine write code at the level of a human programmer” is a strawman. One can already think of machine learning techniques as the computer writing its own classification programs. These machines already “write code” (classifiers) better than any human could under the same circumstances; it just doesn’t look like code a human would write.
Yes, and my pocket calculator can compute cosines faster than Newton could. Therefore my pocket calculator is better at math than Newton.
A significant piece of my own architecture is basically doing the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language, and then operated on by other reflective agents who are able to reason about the code due to its strong type system.
Lots of commonly used classifiers are “nearly Turing-complete”. Specifically, non-linear SVMs, feed-forward neural networks and the various kinds of decision tree methods can represent arbitrary Boolean functions, while recurrent neural networks can represent arbitrary finite state automata when implemented with finite precision arithmetic, and they are Turing-complete when implemented with arbitrary precision arithmetic.
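The Boolean-function claim, for instance, is easy to illustrate with a hand-wired two-layer threshold network computing XOR; no learning is involved, this just demonstrates representational capacity:

```python
# A fixed two-layer threshold network computing XOR, illustrating that
# feed-forward nets can represent arbitrary Boolean functions.
def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # hidden unit acting as OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit acting as AND
    return step(h_or - h_and - 0.5)  # output: OR and not AND == XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```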
But we don’t exactly observe hordes of unemployed programmers begging in the streets after losing their jobs to some machine learning algorithm, do we? Useful as they are, current machine learning algorithms are still very far from performing automatic programming.
But it does result in programs writing programs faster, better, and cheaper than humans writing those same programs.
Really? Can your system provide a correct implementation of the FizzBuzz program starting from a specification written in English? Can it play competitively in a programming contest?
Or, even if your system is restricted to machine learning, can it beat random forests on a standard benchmark?
If it can do no such thing perhaps you should consider avoiding such claims, in particular when you are unwilling to show your work.
And yes, there are generally multiple ways it can actually accomplish that; e.g. rather than solving the problem directly or modifying itself to solve it, the AGI could instead output the source code for a narrow AI which does so efficiently. But if you draw the system boundary large enough, it’s effectively the same thing.
Which we are currently very far from accomplishing.
I’m not disagreeing with the general thrust of your comment, which I think makes a lot of sense.
But it is not at all required that an AGI start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth gain the ability to interpret and model human thoughts and languages.
We consider “write FizzBuzz from a description” to be a basic task of intelligence because it is for humans. But humans are the most complicated machines in the solar system, and we are naturally good at dealing with other humans because we instinctively understand them to some extent. An AGI may be able to accomplish quite a lot before it can comprehend human-style intelligence using raw general intelligence and massive amounts of data and study.
I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult. Think of making first contact with an undiscovered human civilization, or better, a civilization of space-faring aliens.
… raw general intelligence …
Note that it is unclear whether there is any way to achieve “general intelligence” other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence. I mean, Solomonoff induction, AIXI and the like do certainly look interesting on paper, but the extent they can be applied to real problems (if it is even possible) without any specialization is not known.
The human brain is based on a fairly general architecture (biological neural networks), instantiated into thousands of specialized modules. You could argue that biological evolution should be included into human intelligence at a meta level, but biological evolution is not a goal-directed process, and it is unclear whether humans (or human-like intelligence) was a likely outcome or a fortunate occurrence.
Anyway, even if it turns out that “universal induction” techniques are actually applicable to a practical human-made AGI, given the economic interests of humans I think that before seeing a full AGI we should see lots of improvements in narrow AI applications.
I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult.
I think we’re now saying the same thing, but to be clear: I don’t think it follows at all that an AGI needs to be good at X, for any interesting X, in order to be considered an AGI. No, it has the meta-level condition instead: it must be able to become good at X, if doing so accomplishes its goals and it is given suitable inputs and processing power to accomplish that learning task.
Indeed, my blitz AGI design involves no natural language processing components, at all. The initial goal loading and debug interfaces would be via a custom language best described as a cross between vocabulary-limited Lojban and a strongly typed functional programming language. Having looked at the best approaches to NLP so far (Watson et al), and expert opinions on what would be required to go beyond that and build a truly human-level understanding of language, I found nothing that could not be rediscovered and developed by a less capable seed AI, if given sufficient resources and time.
Note that it is unclear whether there is any way to achieve “general intelligence” other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence.
Ok, try this experiment: start with a high-level diagram of what you would consider to be a complete human-level AGI design, e.g. able to do everything a human can do, as well or better. I think we’re on the same page in assuming that at least on one level it would consist of a ton of little specialized programs handling the various specialized aspects of human intelligence. Enumerate all of these, and take a guess at how they are interconnected. I doubt you’ll be able to fit it all on one sheet of paper, or even 10. Here’s a start based on OpenCog, but there are lots more details you will need to fill in:
Now consider each component in turn. If you cut that component out of the diagram (perhaps rearranging some of the connections as necessary), could you reliably recreate it with the remaining pieces, if tasked with doing so and given the necessary inputs and processing power? If so, get rid of it. If not, ask: what are the minimum (less than human-level) capabilities required, which let you recreate the rest? Replace with that. Continue until the design can’t be simplified further.
This experiment is a form of local search, and you may have to repeat from different starting points, or employ other global search methods to be sure that you are arriving at something close to the global minimum seed AGI design, but as an exercise I hope it gets the point across.
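For concreteness, the pruning procedure described above might be sketched as follows; the helper predicates `recreatable_by_rest` and `minimal_replacement` are hypothetical stand-ins for the judgment calls about what the remaining system could relearn:

```python
# Illustrative sketch of the "prune components until the design is minimal"
# local search described above.  The two callables stand in for the (human)
# judgment calls about what the remaining system could relearn on its own.

def prune_design(components, recreatable_by_rest, minimal_replacement):
    """components: dict name -> spec; returns a locally minimal design."""
    design = dict(components)
    changed = True
    while changed:
        changed = False
        for name in list(design):
            rest = {k: v for k, v in design.items() if k != name}
            if recreatable_by_rest(name, rest):
                del design[name]            # the rest can relearn this piece
                changed = True
            else:
                smaller = minimal_replacement(name, design[name], rest)
                if smaller is not None and smaller != design[name]:
                    design[name] = smaller  # keep only a sub-human stub
                    changed = True
    return design
```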
The basic AGI design I arrived at involved a dozen different “universal induction” techniques with different strengths, a meta-architecture for linking them together, a generic and powerful internal language for representing really anything, and basic scaffolding to stand in for the rest. It’s damn slow and inefficient at first, but like a human infant a good portion of its time would be spent “dreaming”, where it analyzes its acquired memories and seeks improvements to its own processes… and gains there have multiplying effects. Don’t discount the importance of power-law mechanisms.
A significant piece of my own architecture is basically doing the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language, and then operated on by other reflective agents who are able to reason about the code due to its strong type system.
Hmmm… Do you have a completeness result? I mean, I can see that if you make it a total language, you can just use coinduction to reason about indefinite computing processes, but I’m wondering what sort of internal logic you’re using that would allow complete reasoning over programs in the language and decidable typing (since to have the agent rewrite its own code it will also have to type-check its own code).
Current theorem-proving systems like Coq that work in logics this advanced usually have undecidable type inference somewhere, and require humans to add type annotations sometimes.
Personal opinion: OpenCog is attempting to get as general as it can within the logic-and-discrete-maths framework of Narrow AI. They are going to hit a wall as they try to connect their current video-game like environment to the real world, and find that they failed to integrate probabilistic approaches reasonably well. Also, without probabilistic approaches, you can’t get around Rice’s Theorem to build a self-improving agent.
Wellll.… the agent could make “narrow” self-improvements. It could build a formal specification for a few of its component parts and then perform the equivalent of provable compiler optimizations. But it would have a very hard time strengthening its core logic, as Rice’s Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.
But it would have a very hard time strengthening its core logic, as Rice’s Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.
This seems like the wrong conclusion to draw. Rice’s theorem (and other undecidability results) implies that there exist optimizations that are safe but cannot be proven to be safe. It doesn’t follow that most optimizations are hard to prove. One imagines that software could do what humans do—hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won’t necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.
One imagines that software could do what humans do—hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won’t necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.
To do that it’s going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
Uh, what were you looking at? The basic foundation of OpenCog is a probabilistic logic called PLN (the wrong one to be using, IMHO, but a probabilistic logic nonetheless). Everything in OpenCog is expressed and reasoned about in probabilities.
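For flavor, here is a toy sketch of reasoning with (strength, confidence) truth values in the general spirit of a probabilistic logic; these are not PLN’s actual formulas, just a textbook deduction step under explicit independence assumptions:

```python
# Flavor-only sketch of (strength, confidence) truth values, roughly in the
# spirit of probabilistic logics like PLN.  These are NOT PLN's actual rules.
# The deduction step applies
#   P(C|A) = P(C|B) P(B|A) + P(C|not B) P(not B|A)
# under the assumptions that B screens off A from C and that P(C|not B)
# is approximated by the base rate P(C).
from collections import namedtuple

TV = namedtuple("TV", ["strength", "confidence"])

def deduce(ab: TV, bc: TV, base_rate_c: float) -> TV:
    strength = bc.strength * ab.strength + base_rate_c * (1.0 - ab.strength)
    confidence = min(ab.confidence, bc.confidence)  # crude placeholder
    return TV(strength, confidence)

# "Ravens are birds" and "birds fly", with a base rate for "flies":
print(deduce(TV(0.98, 0.9), TV(0.85, 0.8), base_rate_c=0.3))
```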
To do that it’s going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
I don’t see why this follows. It might be that mildly smart random search, plus a theorem prover with a fixed timeout, plus a benchmark, delivers a steady stream of useful optimizations. The probabilistic reasoning and utility calculation might be implicit in the design of the “self-improvement-finding submodule”, rather than an explicit part of the overall architecture. I don’t claim this is particularly likely, but neither does undecidability seem like the fundamental limitation here.
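Schematically, the loop being described might look like the sketch below; `propose_rewrite`, `try_prove_equivalent`, and `benchmark` are hypothetical stand-ins for the genuinely hard components:

```python
# Schematic "self-optimization" loop: propose a rewrite, attempt a proof of
# equivalence within a time budget, and keep it only if it is both proven
# safe and measurably faster.  The three callables are hypothetical.
import random

def improve(program, propose_rewrite, try_prove_equivalent, benchmark,
            attempts=1000, proof_timeout=5.0):
    best, best_time = program, benchmark(program)
    for _ in range(attempts):
        candidate = propose_rewrite(best, random)
        # Undecidability means the prover may fail on safe rewrites too;
        # we simply move on rather than trying to decide the undecidable.
        if not try_prove_equivalent(best, candidate, timeout=proof_timeout):
            continue
        t = benchmark(candidate)
        if t < best_time:
            best, best_time = candidate, t
    return best
```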
I have trouble trusting your expert opinion because it is not clear to me that you are an expert in the field, though you claim to be. Google doesn’t point to any of your research in the area, and I can find no mention of your work beyond bitcoin by any (other) AI researchers. Feel free to link to anything corroborating your claims.
I have as much credibility as Eliezer Yudkowsky in that regard, and for the same reason. As I mention in the post you replied to, my work is private and unpublished. None of my work is accessible to the internet, as it should be. I consider it unethical to be publishing AGI research given what is at stake.
I have as much credibility as Eliezer Yudkowsky in that regard
That is, not very much. But at least Eliezer Yudkowsky and pals have made an effort to publish arguments for their position, even if they haven’t published in peer-reviewed journals or conferences (except some philosophical “special issue” volumes, IIRC).
Your “Trust me, I’m a computer scientist and I’ve fiddled with OpenCog in my basement, but I can’t show you my work because humans are not ready for it” gives you even less credibility.
No, I wouldn’t feel qualified to make predictions on novel narrow AI developments. I stay up to date with what’s being published chiefly because my own design involves integrating a handful of narrow AI techniques, and new developments have ramifications for that. But I have no inside knowledge about what frontiers are being pushed next.
Edit: narrow AI and general AI are two very different fields, in case you didn’t know.
This whole debate makes me wonder if we can have any certainty in AI predictions.

Almost all of it is based on personal opinions, highly susceptible to biases. And even people with huge knowledge about these biases aren’t safe. I don’t think anyone can trace their prediction back to empirical data; it all comes from our minds’ black boxes, to which biases have full access and which we can’t examine with our consciousness.
While I find Mark’s prediction far from accurate, I know it might be just because I wouldn’t like it. I like to think that I would have some impact on AGI research, that some new insights are needed rather than just pumping more and more money into Siri-like products. Development of AI in the next 10-15 years would mean that no qualitative research was needed and that all that remains to be done is honing current technology. It would also mean there was little time for thorough development of friendliness, and we may end up with an AI catastrophe.
While I guess human-level AI will arise around the 2070s, I know I would LIKE it to happen in the 2070s. And I base this prediction on no solid foundation.

Can anybody point me to any near-empirical data concerning when AGI may be developed? Anything more solid than the hunch of even the most prominent AI researcher?

Applying Moore’s law seems a bit magical; it no doubt has some Bayesian weight, but with little certainty.
The best thing I can think of is that we can all agree that AI will not be developed tomorrow. Or in a month. Why do we think that? It seems to come from some very reliable empirical data.

If we can identify the factors which make us near-certain AI will not be created within a few months from now, then maybe, upon closer look, they may provide us with some less shaky predictions for the further future.
Honestly, the best empirical data I know of is Ray Kurzweil’s extrapolations, which place 2045 generically as the date of the singularity, although he places human-level AI earlier, around 2029 (obviously he does not lend credence to a FOOM). You have to take some care in using these predictions, as individual technologies eventually hit hard limits and leave the exponential portion of the S-curve, but molecular and reversible computation shows that there is plenty of room at the bottom here.
2070 is a crazy late date. If you assume the worst case, that we will be unable to build AGI any faster than direct neural simulation of the human brain, that becomes feasible in the 2030s on technological pathways that can be foreseen today. If you assume that our neural abstractions are all wrong and that we need to do a full simulation including the inner working details of neural cells and transport mechanisms, that’s possible in the 2040s. Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.
The best thing I can think of is that we can all agree that AI will not be developed tomorrow. Or in a month. Why do we think that? It seems to come from some very reliable empirical data. If we can identify the factors which make us near-certain AI will not be created within a few months from now, then maybe, upon closer look, they may provide us with some less shaky predictions for the further future.
I’m not sure what you’re saying here. That we can assume AI won’t arrive next month because it didn’t arrive last month, or the month before last, etc.? That seems like shaky logic.
If you want to find out how long it will take to make a self-improving AGI, then (1) find or create a design for one, and (2) construct a project plan. Flesh that plan out in detail by researching and eliminating as much uncertainty as you are able to, and fully specify dependencies. Then find the critical path.
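For the last step, the critical path is simply the longest-duration path through the task dependency DAG; a minimal sketch with made-up task names and durations:

```python
# Minimal critical-path calculation over a task dependency DAG.
# Task durations and dependencies here are made-up placeholders.
from functools import lru_cache

durations = {"design": 6, "infra": 3, "learning_core": 12,
             "integration": 8, "testing": 4}          # months
depends_on = {"design": [], "infra": ["design"],
              "learning_core": ["design"],
              "integration": ["infra", "learning_core"],
              "testing": ["integration"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    start = max((earliest_finish(d) for d in depends_on[task]), default=0)
    return start + durations[task]

critical_length = max(earliest_finish(t) for t in durations)
print(f"critical path length: {critical_length} months")
```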
Edit: There’s a larger issue which I forgot to mention: I find it a little strange to think of AGI arriving in 2070 vs the near future as comforting. If you assume the AI has evil intentions, then it needs to do a lot of computational legwork before it is able to carry out any of its plans. With today’s technology it’s not really possible to do that and remain hidden. It could take over a botnet, sure, but the level of HPC computing required to develop new computational technology (e.g. molecular nanotechnology) requires data centers today. In 2070 though, either that technology already exists or a home network of PCs would be sufficient. By being released earlier, the UFAI has more legwork it needs to do in the event of a breakout scenario, giving higher chances of detection and more of a buffer for humanity.
I’m not willing to engage in a discussion where I defend my guesses and attack your prediction. I don’t have sufficient knowledge, nor a desire to do that.

My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.

I’ll use this post to address some of your claims, but don’t treat that as an argument for when AI will be created:

How are Ray Kurzweil’s extrapolations empirical data?

If I’m not wrong, all he takes into account is computational power. Why would that be enough to allow for AI creation? By 1900 the world had enough resources to create computers, and yet it wasn’t possible, because the technology wasn’t known. By 2029 we may have the proper resources (computational power) but still lack the knowledge of how to use them (what programs to run on those supercomputers).
I’m not sure what you’re saying here. That we can assume AI won’t arrive next month because it didn’t arrive last month, or the month before last, etc.? That seems like shaky logic.
I’m saying that, I guess, everybody would agree that AI will not arrive in a month. I’m interested in the basis on which we’re making such a claim.

I’m not trying to make an argument about when AI will arrive; I’m genuinely asking.

You’re right about the comforting factor of AI coming soon; I hadn’t thought of that.

But still, development of AI in the near future would probably mean that its creators haven’t solved the friendliness problem. Current methods are very black-box.

More than that, I’m a bit concerned about current morality and government control. I’m a bit scared of what the people of today might do with such power. You don’t like gay marriage? AI can probably “solve” that for you. Or maybe you want financial equality for humanity? Same story.

I would agree, though, that it’s hard to tell where our preferences would point.
If you assume the worst case, that we will be unable to build AGI any faster than direct neural simulation of the human brain, that becomes feasible in the 2030s on technological pathways that can be foreseen today.

Are you taking into account that to this day we don’t truly understand the biological mechanisms of memory formation and the development of neuronal connections?

Can you point me to any predictions made by brain researchers about when we may expect technology allowing for a full scan of the human connectome, and how close we are to understanding brain dynamics? (Creation of new synapses, control of their strength, etc.)
Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.
I’m tempted to call that bollocks.
Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them?

Humans can’t even understand a nematode’s neural network. You expect them to understand the whole 100-billion-neuron human brain?

Sorry for the above; it would need a much longer discussion, but I really don’t have the strength for that.
No, but a sufficiently morally depraved research program can certainly do a hard take-off based on direct simulations and “Best guess butchery” alone. Once you have a brain running in code, you can do experimental neurosurgery with a reset button and without the constraints of physicality, biology or viability stopping you. A thousand simulated man-years of virtual people dying horrifying deaths later… This isn’t a very desirable future, but it is a possible one.
I’m tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them?
Don’t underestimate the rapid progress that can be achieved with very short feedback loops. (In this case, probably rapid progress into a wireheading attractor, but still.)
AGI is already 1-2 decades away. Or 2-5 years if a well-funded project started now. I don’t think that is enough time for a meaningful reaction by society, even just its upper echelons.
I would be very concerned about the “out of nowhere” outcome, especially now that the AI winter has thawed. We have the tools, and we have the technology to do AGI now. Why assume that it is decades away?
Why do you think it’s so near? I don’t see many others taking that position even among those who are already concerned about AGI (like around here).
This is my adopted long-term field—though professionally I work as a bitcoin developer right now—and those estimates are my own. 1-2 decades is based on existing AGI work such as OpenCog, and what is known about generalizations to narrow AI being done by Google and a few smaller startups. It is reasonable extrapolations based on published project plans, the authors’ opinions, and my own evaluation of the code in the case of OpenCog. 5 years is what it would take if money were not a concern. 2-years is based on my own, unpublished simplification of the CogPrime architecture meant as a blitz to seed-stage oracle AGI, under the same money-is-no-concern conditions.
The only extrapolations I’ve seen around here, e.g. by lukeprog, involve statistically sampling AI researchers’ opinions. Stuart Armstrong showed a year or two ago just how inaccurate this method is historically, as well as concrete reasons for why such statistical methods are useless in this case.
You rate your ability to predict AI above AI researchers? It seems to me that at best, I as an independent observer should give your opinion about as much weight as any AI researcher. Any concerns with the predictions of AI researchers in general should also apply to your estimate. (With all due respect.)
This is required reading for anyone wanting to extrapolate AI researcher predictions:
https://intelligence.org/files/PredictingAI.pdf
In short, asking AI researchers (including myself) their opinions is probably the worst way to get an answer here. What you need to do instead is learn the field, try your hand at it yourself, ask AI researchers what they feel are the remaining unsolved problems, investigate those answers, and most critically form your own opinion. That’s what I did, and where my numbers came from.
If several people follow this procedure, I would expect to get a better estimate from averaging their results than trying it out for myself.
That’s a reasonable expectation. But in as much as one can expect AI researchers to have gone through this exercise in the past (this is where the problem is, I think), the data is apparently not predictive. Kaj Sotala and Stuart Armstrong looked at this in some detail, with MIRI funding. Some highlights:
“There is little difference between experts and non-experts” “There is little difference between current predictions, and those known to have been wrong previously” “It is not unlikely that recent predictions are suffering from the same biases and errors as their predecessors”
http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ https://intelligence.org/files/PredictingAI.pdf
In other words, asking AI experts is about as useless as it can get when it comes to making predictions about future AI developments. This includes myself, objectively. What I advocate people do instead is what I did: investigate the matter yourself and make your own evaluation.
It sounds to me as though you are aware that your estimate for when AI will arrive is earlier than most estimates, but you’re also aware that the reference class of which your estimate is a part of is not especially reliable. So instead of pushing your estimate as the one true estimate, you’re encouraging others to investigate in case they discover what you discovered (because if your estimate is accurate, that would be important information). That seems pretty reasonable. Another thing you could do is create a discussion post where you lay out the specific steps you took to come to the conclusion that AI will come relatively early in detail, and get others to check your work directly that way. It could be especially persuasive if you were to contrast the procedure you think was used to generate other estimates and explain why you think that procedure was flawed.
“What I discovered” was that all the pieces for a seed AGI exist, are demonstrated to work as advertised, and could be assembled together rather quickly if adequate resources were available to do so. Really all that is required is rolling up our sleeves and doing some major integrative work in putting the pieces together.
With designs that are public knowledge (albeit not contained in one place), this could be done as well-funded project in the order of 5 years—an assessment that concurs with what is said by the leaders of the project I am thinking of as well.
My own unpublished contribution is a refinement of this particular plan which strips out those pieces not strictly needed for a seed UFAI (these components being learnt by the AI rather than hand coded), and tweaks the remaining structure slightly in order to favor self-modifying agents. The critical path here is 2 years assuming infinite resources, but more scarily the actual resources needed are quire small. With the right people it could be done in a basement in maybe 3-4 years and take the world by storm.
But here’s the conundrum, as was mentioned in one of the other sub-threads: how do I convince you of that, without walking you through the steps involved in creating an UFAI? If I am right, I would then have posted on the internet blueprints for the destruction of humankind. Then the race would really be on.
So what can I do, except encourage people to walk the same path I did, and see if they come to the same conclusions?
That’s assuming people take you seriously. Even if your plan is solid, probably most people will write you off as another Crackpot Who Thinks He’s Solved an Important Problem.
But I do agree it’s a bit of a conundrum. If you have what you think is an important idea, it’s natural to worry that people will either (1) steal your idea or (2) criticize it not because it’s not a great idea but because they want to feel superior.
I think you entirely missed the point.
I would agree with this in the sense that my stated reasons for the “conundrum” are a bit different from yours.
Well perhaps instead of insinuating motives, you could share your thoughts about the actual stated reason? At what point does one have a moral obligation not to share information about a dangerous idea on a public forum?
I was thinking of my own motives in similar situations, sorry if you took it as a characterization of yours. I do see it could have been read that way.
I would suggest you e-mail your blueprint to a few of the posters here with the understanding they keep it to themselves. If even one long-term poster says “I’ve read Friedenbach’s arguments and while they are confidential, I now agree that his estimate of the time to AI is actually pretty good,” then I think your argument is starting to become persuasive.
Sorry I didn’t mean to come off so abrasively either. I was just being unduly snarky. The internet is not good for conveying emotional state :\
If you’ve solved stable self-improvement issues, that’s FAI work, and you should damn well share that component.
[retracted]
Read the OP, I didn’t make any boisterous claims. I simply said UFAI is 2-5 years away, focused effort, and 10-20 years away otherwise. I therefore believe it important that FAI research be refocused on near-term solutions. I state so publicly in order to counter the entrenched meme that seems to have infected everyone here, saying that AI is X years away, where X is some arbitrary number that by golly seems like a lot, in the hope that some people who encounter the post consider refocusing on near-term work. What’s wrong with that?
Disregard my reply. I really shouldn’t be posting from my phone at 2 AM. Such a venture rarely ends well.
Yeah, I’ve been there before. No worries ;)
Hey, speaking as an AI layman, how do you rate the odds that a design based on OpenCog could foom? I haven’t really dug into that codebase, but from reading the Wiki it’s my impression that it’s a bit of a heap left behind by multiple contributors trying to make different parts of it work for their own ends, and if a coherent whole could be wrought from it it would be too complex to feasibly understand itself. In that sense: how far out do you think OpenCog is from containing a complete operational causal model of its own codebase and operation? How much of it would have to be modified or rewritten to reach this point?
I don’t really entirely endorse the algorithms behind OpenCog and such, but I do share the forecasting timeline. Modern work in hierarchical learning, probabilities over sentences (and thus: learning and inference over structured knowledge), planning as inference… basically, I’ve been reading enough papers to say that we’re definitely starting to see the pieces emerge that embody algorithms for actual, human-level cognition. We will soon confront the question, “Yes, we have all these algorithms, but how do we put them together into an agent?”
I also think that most if not all parts needed for AGI are already there and ‘only’ need to be integrated. But that is actually a hard part. Kind of comparable to our understanding of the human brain: We know how most modules work—or at least how we can produce comparable results—but not how these are integrated. Just adding a meta level to Cog and plugins for domain specific modules at least wouldn’t do.
20 years is on the very soon end of plausible; but 2-5 years is absolutely impossible. We just don’t have the slightest notion how we would do that, regardless of fundingn.
We do not have the tools or technology right now; it won’t come out of the blue.
Really? And what’s that opinion based on? Are you an expert in the field? I very often see this meme quoted, but no explanation to back it up.
I’m a computer scientist that has been following the AI / AGI literature for years. I have been doing my own private research (since publishing AGI work is too dangerous) based on OpenCog, pretty much since it was first open sourced, and a few other projects. I’ve looked at the issues involved in creating a seed AGI, while creating my own design for just such a system. And they are all solvable, or more often already solved but not yet integrated.
I’m a computer scientist who has been in a machine learning and natural language processing PhD program quite recently. I have an in-depth knowledge of machine learning, NLP and text mining.
In particular, I know that the broadest existing knowledge bases in the real-world (e.g. Google’s knowledge Graph) are built on a hodge-podge of text parsing and logical inference techniques. These systems can be huge in scale and very useful, and reveal that a lot of knowledge is quite shallow even if it is apparently deeper, but also reveal the difficulty in dealing with knowledge that genuinely is deeper, by which I mean it relies on complex models of he world.
I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.
The pitfall with private research is that nobody sees your work, meaning there’s nobody to criticize it or tell you your assessment “the issues are solvable or solved but not yet integrated” is incorrect. Or, if it is correct and I’m dead wrong in my pessimism, nobody can know that either. Why would publishing it be dangerous (yeah, I get the general “AGI can be dangerous” thing, but what would be the actual marginal danger vs. not publishing and being left out of important conversations when they happen, assuming you’ve got something)?
In terms of practicalities, AI and AGI share two letters in common, and that’s about it. OpenCog / CogPrime is at core nothing more than an interface language specification built on hypergraphs which is capable of storing inputs, outputs, and trace data for any kind of narrow AI application. It is most importantly a platform for integrating narrow AI techniques. (If you read any of the official documentation, you’ll find most of it covers the specific narrow AI components they’ve selected, and the specific interconnect networks they are deploying. But those are secondary details to the more important contribution: the universal hypergraph language of the atomspace.)
So when you say:
It doesn’t really make sense. OpenCog solves these issues in the same way: through traditional text parsing and logical inference techniques. What’s different is that the inputs, outputs, and the way in which these components are used are fully specified inside of the system, in a data structure that is self-modifying. Think LISP: code is data (albeit using a weird hypergraph language instead of s-expressions), data is code, and the machine has access to its own source code.
That’s mostly what AGI is about: the interconnects and reflection layers which allow an otherwise traditional narrow AI program to modify itself in order to adapt to circumstances outside of its programmed expertise.
My two cents here are just:
1) Narrow AI is still the botteneck to Strong AI, and a feedback loop of development especially in the area of NLP is what’s going to eventualy crack the hardest problems.
2) OpenCog’s Hypergraphs do not seem especially useful. The power of a language cannot overcome the fact that without sufficiently strong self-modification techniques, it will never be able to self-modify into anything useful. Interconnects and reflection just allow a program to mess itself up, not become more useful, and scale or better NLP modules alone aren’t a solution.
Actually, what AGI is about, by definition, is to achieve human-level or higher performance in a broad variety of cognitive tasks.
Whether self-modification is useful or necessary to achieve such goal is questionable.
Even if self-modification turns out to be a core enabling technology for AGI, we are still quite far from getting it to work.
Just having a language or platform that allows introspection and runtime code generation isn’t enough: LISP didn’t lead to AGI. Neither did Eurisko. And, while I’m not very familiar with OpenCog, frankly I can’t see any fundamental innovation in it.
Representing code as data is trivial. The hard problem is making a machine reason about code.
Automatic program verification is only barely starting to become commercially useful in a few restricted application domains, and automatic programming is still largely undeveloped with very little progress being made beyond optimizing compilers.
Having a machine write code at the level of a human programmer in 2 − 5 years is completely unrealistic, and 20 years looks like the bare minimum, with the realistic expectation being higher.
“Having a machine write code at the level of a human programmer” is a strawman. One can already think about machine learning techniques as the computer writing its own classification programs. These machines already “write code” (classifiers) better than any human could under the same circumstances.. it just doesn’t look like code a human would write.
A significant pieces of my own architecture is basically doing the same thing but with the classifiers themselves composed in a nearly turing-complete total functional language, which are then operated on by other reflective agents who are able to reason about the code due to its strong type system. This isn’t the way humans write code, and it doesn’t produce an output which looks like “source code” as we know it. But it does result in programs writing programs faster, better, and cheaper than humans writing those same programs.
Regarding what AGI is “about”, yes that is true in the strictest, definitional sense. But what I was trying to convey is how AGI is separate from narrow AI in that it is basically a field of meta-AI. An AGI approaches a problem by first thinking about how to solve the problem. It first thinks about thinking, before it thinks.
And yes, there are generally multiple ways it can actually accomplish that, e.g. the AGI could not actually solve the problem or modify itself to solve the problem, but instead output the source code for a narrow AI which efficiently does so. But if you draw the system boundary large enough, it’s effectively the same thing.
Yes, and my pocket calculator can compute cosines faster than Newton could. Therefore my pocket calculator is better at math than Newton.
Lots of commonly used classifiers are “nearly Turing-complete”.
Specifically, non-linear SVMs, feed-forward neural networks and the various kinds of decision tree methods can represent arbitrary Boolean functions, while recurrent neural networks can represent arbitrary finite state automata when implemented with finite precision arithmetic, and they are Turing-complete when implemented with arbitrary precision arithmetic.
But we don’t exactly observe hordes of unemployed programmers begging in the streets after losing their jobs to some machine learning algorithm, do we?
Useful as they are, current machine learning algorithms are still very far from performing automatic programming.
Really? Can your system provide a correct implementation of the FizzBuzz program starting from a specification written in English?
Can it play competitively in a programming contest?
Or, even if your system is restricted to machine learning, can it beat random forests on a standard benchmark?
If it can do no such thing perhaps you should consider avoiding such claims, in particular when you are unwilling to show your work.
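For concreteness, this is roughly what the last of those hurdles (beating random forests on a standard benchmark) looks like in practice; a minimal sketch using stock scikit-learn components, not any particular benchmark named in this thread.

```python
# What the "beat random forests on a standard benchmark" hurdle looks like
# in practice: cross-validated accuracy of a stock random forest on the
# scikit-learn digits dataset. A competing system would need to clear
# numbers like these under the same protocol.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```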
Which we are currently very far from accomplishing.
I’m not disagreeing with the general thrust of your comment, which I think makes a lot of sense.
But an AGI is not at all required to start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth gain the ability to interpret and model human thoughts and languages.
We consider “write FizzBuzz from a description” to be a basic task of intelligence because it is one for humans. But humans are the most complicated machines in the solar system, and we are naturally good at dealing with other humans because we instinctively understand them to some extent. An AGI may be able to accomplish quite a lot before it can comprehend human-style intelligence, using raw general intelligence plus massive amounts of data and study.
I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult.
Think of making first contact with an undiscovered human civilization, or better, a civilization of space-faring aliens.
Note that it is unclear whether there is any way to achieve “general intelligence” other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence.
I mean, Solomonoff induction, AIXI and the like do certainly look interesting on paper, but the extent they can be applied to real problems (if it is even possible) without any specialization is not known.
The human brain is based on a fairly general architecture (biological neural networks), instantiated into thousands of specialized modules. You could argue that biological evolution should be included in human intelligence at a meta level, but biological evolution is not a goal-directed process, and it is unclear whether humans (or human-like intelligence) were a likely outcome or a fortunate occurrence.
Anyway, even if it turns out that “universal induction” techniques are actually applicable to a practical human-made AGI, given the economic interests of humans I think that before seeing a full AGI we should see lots of improvements in narrow AI applications.
I think we’re now saying the same thing, but to be clear: I don’t think it follows at all that an AGI needs to be good at X, for any interesting X, in order to be considered an AGI. No, it has the meta-level condition instead: it must be able to become good at X, if doing so accomplishes its goals and it is given suitable inputs and processing power to accomplish that learning task.
Indeed, my blitz AGI design involves no natural language processing components, at all. The initial goal loading and debug interfaces would be via a custom language best described as a cross between vocabulary-limited Lojban and a strongly typed functional programming language. Having looked at the best approaches to NLP so far (Watson et al), and expert opinions on what would be required to go beyond that and build a truly human-level understanding of language, I found nothing that could not be rediscovered and developed by a less capable seed AI, if given sufficient resources and time.
Ok, try this experiment: start with a high-level diagram of what you would consider a complete human-level AGI design, e.g. one able to do everything a human can do, as well as or better. I think we’re on the same page in assuming that at least on one level it would consist of a ton of little specialized programs handling the various specialized aspects of human intelligence. Enumerate all of these, and take a guess at how they are interconnected. I doubt you’ll be able to fit it all on one sheet of paper, or even ten. Here’s a start based on OpenCog, but there is lots more detail you will need to fill in:
http://goertzel.org/MonsterDiagram.jpg
Now consider each component in turn. If you cut that component out of the diagram (perhaps rearranging some of the connections as necessary), could you reliably recreate it with the remaining pieces, if tasked with doing so and given the necessary inputs and processing power? If so, get rid of it. If not, ask: what are the minimum (less than human-level) capabilities required, which let you recreate the rest? Replace with that. Continue until the design can’t be simplified further.
This experiment is a form of local search, and you may have to repeat from different starting points, or employ other global search methods to be sure that you are arriving at something close to the global minimum seed AGI design, but as an exercise I hope it gets the point across.
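One possible reading of that pruning exercise, phrased as a greedy local search; `can_recreate` and `minimal_replacement` are hypothetical stand-ins for the judgment calls a designer would make by hand, not functions anyone has actually written.

```python
# One reading of the pruning exercise above, phrased as greedy local search.
# `can_recreate(part, rest)` and `minimal_replacement(part, rest)` are
# hypothetical stand-ins for the judgment calls a designer makes by hand.
def prune_design(components, can_recreate, minimal_replacement):
    """Simplify a set of AGI components until no single step helps."""
    design = set(components)
    changed = True
    while changed:
        changed = False
        for part in sorted(design):
            if part not in design:
                continue  # already removed earlier in this pass
            rest = design - {part}
            if can_recreate(part, rest):
                # The remaining pieces could re-derive this one later: drop it.
                design = rest
                changed = True
            else:
                # Otherwise try a weaker, sub-human stub that still lets the
                # full capability be re-derived given time and resources.
                stub = minimal_replacement(part, rest)
                if stub is not None and stub != part:
                    design = rest | {stub}
                    changed = True
    return design
```

The point is only to make the “remove, replace, repeat” loop explicit; in practice each oracle call is days of design work, not a function call.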
The basic AGI design I arrived at involved a dozen different “universal induction” techniques with different strengths, a meta-architecture for linking them together, a generic and powerful internal language for representing really anything, and basic scaffolding to stand in for the rest. It’s damn slow and inefficient at first, but like a human infant a good portion of its time would be spent “dreaming”, analyzing its acquired memories and seeking improvements to its own processes… and gains there have multiplying effects. Don’t discount the importance of power-law mechanisms.
On the subject of recurrent neural networks, keep in mind that you are such a network, and training you to write code and write it well took years.
Hmmm… Do you have a completeness result? I mean, I can see that if you make it a total language, you can just use coinduction to reason about indefinite computing processes, but I’m wondering what sort of internal logic you’re using that would allow complete reasoning over programs in the language and decidable typing (since to have the agent rewrite its own code it will also have to type-check its own code).
Current theorem-proving systems like Coq that work in logics this advanced usually have undecidable type inference somewhere, and require humans to add type annotations sometimes.
Personal opinion: OpenCog is attempting to get as general as it can within the logic-and-discrete-maths framework of Narrow AI. They are going to hit a wall as they try to connect their current video-game like environment to the real world, and find that they failed to integrate probabilistic approaches reasonably well. Also, without probabilistic approaches, you can’t get around Rice’s Theorem to build a self-improving agent.
Wellll.… the agent could make “narrow” self-improvements. It could build a formal specification for a few of its component parts and then perform the equivalent of provable compiler optimizations. But it would have a very hard time strengthening its core logic, as Rice’s Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.
This seems like the wrong conclusion to draw. Rice’s theorem (and other undecidability results) imply that there exist optimizations that are safe but cannot be proven to be safe. It doesn’t follow that most optimizations are hard to prove. One imagines that software could do what humans do: hunt around in the space of optimizations until one looks plausible, try to find a proof, and if it takes too long, try another. This won’t necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.
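A minimal sketch of that “hunt, try to prove, give up on timeout” loop; `propose_optimization` and `prove_equivalent` are hypothetical placeholders for a rewrite generator and a theorem prover with a time budget.

```python
# A minimal sketch of the "hunt, try to prove, give up on timeout" loop.
# `propose_optimization` and `prove_equivalent` are hypothetical placeholders
# for a rewrite generator and a theorem prover with a time budget.
def find_provable_optimizations(program, propose_optimization, prove_equivalent,
                                attempts=1000, timeout_s=5.0):
    accepted = []
    for _ in range(attempts):
        candidate = propose_optimization(program)
        # Undecidability only means some safe rewrites will time out here;
        # the ones that do get proved are still genuine, usable improvements.
        if prove_equivalent(program, candidate, timeout=timeout_s):
            accepted.append(candidate)
    return accepted
```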
To do that it’s going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
Uh, what were you looking at? The basic foundation of OpenCog is a probabilistic logic called PLN (the wrong one to be using, IMHO, but a probabilistic logic nonetheless). Everything in OpenCog is expressed and reasoned about in probabilities.
Aaaaand now I have to go look at OpenCog again.
I don’t see why this follows. It might be that mildly smart random search, plus a theorem prover with a fixed timeout, plus a benchmark, delivers a steady stream of useful optimizations. The probabilistic reasoning and utility calculation might be implicit in the design of the “self-improvement-finding submodule”, rather than an explicit part of the overall architecture. I don’t claim this is particularly likely, but neither does undecidability seem like the fundamental limitation here.
I have trouble trusting your expert opinion because it is not clear to me that you are an expert in the field, though you claim to be. Google doesn’t point to any of your research in the area, and I can find no mention of your work beyond bitcoin by any (other) AI researchers. Feel free to link to anything corroborating your claims.
I have as much credibility as Eliezer Yudkowsky in that regard, and for the same reason. As I mention in the post you replied to, my work is private and unpublished. None of my work is accessible on the internet, nor should it be. I consider it unethical to publish AGI research given what is at stake.
That is, not very much.
But at least Eliezer Yudkowsky and pals have made an effort to publish arguments for their position, even if they haven’t published in peer-reviewed journals or conferences (except some philosophical “special issue” volumes, IIRC).
Your “Trust me, I’m a computer scientist and I’ve fiddled with OpenCog in my basement but I can’t show you my work because humans not ready for it” gives you even less credibility.
Eliezer published a lot of relevant work, I have seen none from you.
Eliezer has publications in the field of artificial intelligence? Where?
Yudkowsky, Eliezer (2001): Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures.
Yudkowsky, Eliezer (2007): Levels of Organization in General Intelligence. In: Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin, 389–501.
Hanson, Robin and Yudkowsky, Eliezer (2013): The Hanson-Yudkowsky AI-Foom Debate.
...
Don’t make me figure this stuff out and publish the safe bits just to embarrass you guys.
Do you have any predictions of what types of new narrow-AI we are likely to see in the next few years?
No, I wouldn’t feel qualified to make predictions on novel narrow AI developments. I stay up to date with what’s being published chiefly because my own design involves integrating a handful of narrow AI techniques, and new developments have ramifications for that. But I have no inside knowledge about what frontiers are being pushed next.
Edit: narrow AI and general AI are two very different fields, in case you didn’t know.
This whole debate makes me wonder whether we can have any certainty in AI predictions at all. Almost everything is based on personal opinions, which are highly susceptible to biases, and even people with huge knowledge of these biases aren’t safe from them. I don’t think anyone can trace their prediction back to empirical data; it all comes from our minds’ black boxes, to which biases have full access and which we can’t examine with our consciousness.
While I find Mark’s prediction far from credible, I know that might be just because I wouldn’t like it to be true. I like to think that I could have some impact on AGI research, and that some new insights are needed rather than just pumping more and more money into SIRI-like products. Development of AI in the next 10-15 years would mean that no qualitatively new research was needed and that all that remains is honing current technology. It would also mean there wasn’t time for thorough development of friendliness, and we might end up with an AI catastrophe.
While I guess human-level AI will arise around the 2070s, I also know that I would LIKE it to happen in the 2070s. And I base this prediction on no solid ground.
Can anybody point me to any near-empirical data concerning when AGI may be developed? Anything more solid than the hunch of even the most prominent AI researcher? Applying Moore’s law seems a bit magical; it no doubt shifts the probabilities somewhat, but with little certainty.
The best thing I can think of is that we can all agree that AI will not be developed tomorrow. Or in a month. Why do we think that? It seems to come from some very reliable empirical data. If we can identify the factors that make us near-certain AI will not be created within a few months from now, then maybe, on closer inspection, they can provide some less shaky predictions for the further future.
Honestly, the best empirical data I know of is Ray Kurzweil’s extrapolations, which place the singularity generically at 2045, although he places human-level AI earlier, around 2029 (obviously he does not lend credence to a FOOM). You have to take some care in using these predictions, as individual technologies eventually hit hard limits and leave the exponential portion of the S-curve, but molecular and reversible computation show that there is plenty of room at the bottom here.
2070 is a crazy late date. If you assume the worst case, that we will be unable to build AGI any faster than by direct neural simulation of the human brain, that becomes feasible in the 2030s on technological pathways that can be foreseen today. If you assume that our neural abstractions are all wrong and that we need a full simulation including the inner working details of neural cells and transport mechanisms, that’s possible in the 2040s. Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.
I’m not sure what you’re saying here. That we can assume AI won’t arrive next month because it didn’t arrive last month, or the month before last, etc.? That seems like shaky logic.
If you want to find out how long it will take to make a self-improving AGI, then (1) find or create a design for one, and (2) construct a project plan. Flesh that plan out in detail by researching and eliminating as much uncertainty as you are able to, and fully specify dependencies. Then find the critical path.
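As a toy illustration of the “fully specify dependencies, then find the critical path” step, here is a sketch over an entirely made-up task graph; the task names and durations are illustrative, not an actual AGI project plan.

```python
# A toy critical-path calculation over a made-up task graph, illustrating the
# "fully specify dependencies, then find the critical path" step. Task names
# and durations (in months) are purely illustrative.
from functools import lru_cache

durations = {"design": 6, "prototype": 12, "infrastructure": 9,
             "training": 18, "evaluation": 4}
depends_on = {"prototype": ["design"], "infrastructure": ["design"],
              "training": ["prototype", "infrastructure"],
              "evaluation": ["training"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Length of the longest dependency chain ending with `task`."""
    start = max((earliest_finish(d) for d in depends_on.get(task, [])), default=0)
    return start + durations[task]

print("critical path length:", max(earliest_finish(t) for t in durations))
# -> 40 (design -> prototype -> training -> evaluation)
```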
Edit: There’s a larger issue which I forgot to mention: I find it a little strange to think of AGI arriving in 2070 vs the near future as comforting. If you assume the AI has evil intentions, then it needs to do a lot of computational legwork before it is able to carry out any of its plans. With today’s technology it’s not really possible to do that and remain hidden. It could take over a botnet, sure, but the level of HPC computing required to develop new computational technology (e.g. molecular nanotechnology) requires data centers today. In 2070 though, either that technology already exists or a home network of PCs would be sufficient. By being released earlier, the UFAI has more legwork it needs to do in the event of a breakout scenario, giving higher chances of detection and more of a buffer for humanity.
I’m not willing to engage in a discussion where I defend my guesses and attack your prediction. I don’t have sufficient knowledge, nor the desire to do that. My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.
I’ll use this post to address some of your claims, but don’t treat that as an argument about when AI will be created:
How are Ray Kurzweil’s extrapolations empirical data? If I’m not wrong, all he takes into account is computational power. Why would that be enough to allow for AI creation? By 1900 the world had enough resources to create computers, and yet it wasn’t possible, because the technology wasn’t known. By 2029 we may have the necessary resources (computational power) but still lack the knowledge of how to use them (what programs to run on those supercomputers).
I’m saying that, I guess, everybody would agree that AI will not arrive in a month. I’m interested in what basis we have for making such a claim. I’m not trying to make an argument about when AI will arrive; I’m genuinely asking.
You’re right about the comforting factor of AI coming soon; I hadn’t thought of that. But still, development of AI in the near future would probably mean that its creators haven’t solved the friendliness problem. Current methods are very black-box. More than that, I’m a bit concerned about current morality and government control. I’m a bit scared of what the people of today might do with such power. You don’t like gay marriage? AI can probably “solve” that for you. Or maybe you want financial equality for humanity? Same story. I would agree, though, that it’s hard to tell where our preferences would point.
Are you taking into account that to this day we don’t truly understand the biological mechanisms of memory formation and the development of neural connections? Can you point me to any predictions made by brain researchers about when we may expect technology allowing a full scan of the human connectome, and how close we are to understanding brain dynamics (creation of new synapses, control of their strength, etc.)?
I’m tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them? Humans can’t even understand a nematode’s neural network. You expect them to understand a whole 100-billion-neuron human brain?
Sorry for the above; it would need a much longer discussion, but I really don’t have the strength for that.
I hope this is in some way helpful.
No, but a sufficiently morally depraved research program could certainly pull off a hard take-off based on direct simulation and “best-guess butchery” alone. Once you have a brain running in code, you can do experimental neurosurgery with a reset button and without the constraints of physicality, biology or viability stopping you. A thousand simulated man-years of virtual people dying horrifying deaths later… This isn’t a very desirable future, but it is a possible one.
Don’t underestimate the rapid progress that can be achieved with very short feedback loops. (In this case, probably rapid progress into a wireheading attractor, but still.)