We just don’t have the slightest notion how we would do that, regardless of funding.
Really? And what’s that opinion based on? Are you an expert in the field? I very often see this meme quoted, but no explanation to back it up.
I’m a computer scientist who has been following the AI / AGI literature for years. I have been doing my own private research (since publishing AGI work is too dangerous) based on OpenCog, pretty much since it was first open sourced, and a few other projects. I’ve looked at the issues involved in creating a seed AGI while creating my own design for just such a system. And they are all solvable, or more often already solved but not yet integrated.
I’m a computer scientist who has been in a machine learning and natural language processing PhD program quite recently. I have an in-depth knowledge of machine learning, NLP and text mining.
In particular, I know that the broadest existing knowledge bases in the real world (e.g. Google’s Knowledge Graph) are built on a hodge-podge of text parsing and logical inference techniques. These systems can be huge in scale and very useful, and they reveal that a lot of knowledge is quite shallow even when it appears deeper; but they also reveal the difficulty of dealing with knowledge that genuinely is deeper, by which I mean knowledge that relies on complex models of the world.
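As a toy illustration of that “hodge-podge” (a deliberately simplified sketch, not how Google’s Knowledge Graph actually works): pattern-matching pulls triples out of text, and a single hand-written rule does the “logical inference”. The facts come out usable, but there is no model of the world behind them.

```python
import re

# Toy pipeline: shallow text parsing extracts triples, then one hand-written
# inference rule combines them. No world model anywhere.
sentences = [
    "Paris is the capital of France.",
    "France is a country in Europe.",
]

triples = set()
for s in sentences:
    m = re.match(r"(\w+) is the capital of (\w+)\.", s)
    if m:
        triples.add((m.group(1), "capital_of", m.group(2)))
    m = re.match(r"(\w+) is a country in (\w+)\.", s)
    if m:
        triples.add((m.group(1), "located_in", m.group(2)))

# Inference rule: capital_of(x, y) and located_in(y, z)  =>  located_in(x, z)
for (a, r1, b) in list(triples):
    for (c, r2, d) in list(triples):
        if r1 == "capital_of" and r2 == "located_in" and b == c:
            triples.add((a, "located_in", d))

print(triples)  # includes ('Paris', 'located_in', 'Europe')
```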
I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.
The pitfall with private research is that nobody sees your work, meaning there’s nobody to criticize it or tell you your assessment “the issues are solvable or solved but not yet integrated” is incorrect. Or, if it is correct and I’m dead wrong in my pessimism, nobody can know that either. Why would publishing it be dangerous (yeah, I get the general “AGI can be dangerous” thing, but what would be the actual marginal danger vs. not publishing and being left out of important conversations when they happen, assuming you’ve got something)?
In terms of practicalities, AI and AGI share two letters in common, and that’s about it. OpenCog / CogPrime is at core nothing more than an interface language specification built on hypergraphs which is capable of storing inputs, outputs, and trace data for any kind of narrow AI application. It is most importantly a platform for integrating narrow AI techniques. (If you read any of the official documentation, you’ll find most of it covers the specific narrow AI components they’ve selected, and the specific interconnect networks they are deploying. But those are secondary details to the more important contribution: the universal hypergraph language of the AtomSpace.)
So when you say:
I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.
It doesn’t really make sense. OpenCog solves these issues in the same way: through traditional text parsing and logical inference techniques. What’s different is that the inputs, outputs, and the way in which these components are used are fully specified inside of the system, in a data structure that is self-modifying. Think LISP: code is data (albeit using a weird hypergraph language instead of s-expressions), data is code, and the machine has access to its own source code.
That’s mostly what AGI is about: the interconnects and reflection layers which allow an otherwise traditional narrow AI program to modify itself in order to adapt to circumstances outside of its programmed expertise.
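To make the “code is data” point concrete, here is a minimal Python sketch (purely illustrative; it is not how OpenCog’s AtomSpace or hypergraph language works) of a program that reads its own source as a data structure, edits it, and re-instantiates it:

```python
import ast
import inspect

def score(x):
    """A stand-in 'component' the system might want to adapt."""
    return x * 2 + 1

# Code as data: parse the function's own source into a syntax tree...
tree = ast.parse(inspect.getsource(score))

# ...treat that tree as ordinary data (here: bump every integer constant by one)...
for node in ast.walk(tree):
    if isinstance(node, ast.Constant) and isinstance(node.value, int):
        node.value += 1

# ...and turn the modified data back into running code.
namespace = {}
exec(compile(tree, filename="<rewritten>", mode="exec"), namespace)
print(score(10), namespace["score"](10))  # 21 (original) vs 32 (rewritten)
```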
My two cents here are just:
1) Narrow AI is still the bottleneck to Strong AI, and a feedback loop of development, especially in the area of NLP, is what’s going to eventually crack the hardest problems.
2) OpenCog’s hypergraphs do not seem especially useful. The power of a language cannot overcome the fact that, without sufficiently strong self-modification techniques, a system will never be able to self-modify into anything useful. Interconnects and reflection just allow a program to mess itself up, not become more useful, and scale or better NLP modules alone aren’t a solution.
That’s mostly what AGI is about: the interconnects and reflection layers which allow an otherwise traditional narrow AI program to modify itself in order to adapt to circumstances outside of its programmed expertise.
Actually, what AGI is about, by definition, is achieving human-level or higher performance on a broad variety of cognitive tasks. Whether self-modification is useful or necessary to achieve that goal is questionable.
Even if self-modification turns out to be a core enabling technology for AGI, we are still quite far from getting it to work. Just having a language or platform that allows introspection and runtime code generation isn’t enough: LISP didn’t lead to AGI. Neither did Eurisko. And, while I’m not very familiar with OpenCog, frankly I can’t see any fundamental innovation in it.
Representing code as data is trivial. The hard problem is making a machine reason about code. Automatic program verification is only barely starting to become commercially useful in a few restricted application domains, and automatic programming is still largely undeveloped with very little progress being made beyond optimizing compilers.
Having a machine write code at the level of a human programmer in 2–5 years is completely unrealistic, and 20 years looks like the bare minimum, with the realistic expectation being higher.
“Having a machine write code at the level of a human programmer” is a strawman. One can already think of machine learning techniques as the computer writing its own classification programs. These machines already “write code” (classifiers) better than any human could under the same circumstances; it just doesn’t look like code a human would write.
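A toy sketch of that reading of “machines already write code” (not anyone’s actual system; the learner and data are invented for illustration): a trivial learner fits a threshold and then emits ordinary Python source implementing the classifier it learned.

```python
# A trivial learner that "writes" the source code of its own classifier:
# fit a one-dimensional threshold, then emit Python implementing the rule.
def learn_threshold(points, labels):
    best_t, best_acc = None, -1.0
    for t in sorted(points):
        acc = sum((x >= t) == bool(y) for x, y in zip(points, labels)) / len(points)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

points = [0.2, 0.5, 1.1, 1.7, 2.3, 2.9]
labels = [0, 0, 0, 1, 1, 1]
t = learn_threshold(points, labels)

generated_source = f"def classify(x):\n    return 1 if x >= {t} else 0\n"
print(generated_source)            # code no human wrote by hand
namespace = {}
exec(generated_source, namespace)
print(namespace["classify"](2.0))  # -> 1
```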
A significant piece of my own architecture is basically doing the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language; these are then operated on by other reflective agents that are able to reason about the code thanks to its strong type system. This isn’t the way humans write code, and it doesn’t produce an output which looks like “source code” as we know it. But it does result in programs writing programs faster, better, and cheaper than humans writing those same programs.
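For readers wondering what “classifiers in a total language, reasoned about by reflective agents” could even look like, here is a hypothetical sketch of the general idea only (it is not the commenter’s architecture; the term language and the rewrite rule are invented for illustration):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Var:
    name: str

@dataclass
class Const:
    value: float

@dataclass
class Add:
    left: "Term"
    right: "Term"

@dataclass
class GreaterEq:  # the classifier's decision node
    left: "Term"
    right: "Term"

Term = Union[Var, Const, Add, GreaterEq]

def evaluate(t, env):
    """Run a term; every term terminates since the language has no loops or recursion."""
    if isinstance(t, Var):
        return env[t.name]
    if isinstance(t, Const):
        return t.value
    if isinstance(t, Add):
        return evaluate(t.left, env) + evaluate(t.right, env)
    return float(evaluate(t.left, env) >= evaluate(t.right, env))

def simplify(t):
    """A 'reflective agent': rewrites sub-terms it can prove equivalent (constant folding)."""
    if isinstance(t, Add):
        left, right = simplify(t.left), simplify(t.right)
        if isinstance(left, Const) and isinstance(right, Const):
            return Const(left.value + right.value)
        return Add(left, right)
    if isinstance(t, GreaterEq):
        return GreaterEq(simplify(t.left), simplify(t.right))
    return t

classifier = GreaterEq(Var("x"), Add(Const(1.0), Const(0.7)))
print(simplify(classifier))              # GreaterEq(left=Var(name='x'), right=Const(value=1.7))
print(evaluate(classifier, {"x": 2.0}))  # 1.0
```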
Regarding what AGI is “about”, yes that is true in the strictest, definitional sense. But what I was trying to convey is how AGI is separate from narrow AI in that it is basically a field of meta-AI. An AGI approaches a problem by first thinking about how to solve the problem. It first thinks about thinking, before it thinks.
And yes, there are generally multiple ways it can actually accomplish that: e.g. rather than solving the problem itself or modifying itself to solve it, the AGI could instead output the source code for a narrow AI which does so efficiently. But if you draw the system boundary large enough, it’s effectively the same thing.
“Having a machine write code at the level of a human programmer” is a strawman. One can already think of machine learning techniques as the computer writing its own classification programs. These machines already “write code” (classifiers) better than any human could under the same circumstances; it just doesn’t look like code a human would write.
Yes, and my pocket calculator can compute cosines faster than Newton could. Therefore my pocket calculator is better at math than Newton.
A significant piece of my own architecture is basically doing the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language; these are then operated on by other reflective agents that are able to reason about the code thanks to its strong type system.
Lots of commonly used classifiers are “nearly Turing-complete”. Specifically, non-linear SVMs, feed-forward neural networks and the various kinds of decision tree methods can represent arbitrary Boolean functions, while recurrent neural networks can represent arbitrary finite state automata when implemented with finite precision arithmetic, and they are Turing-complete when implemented with arbitrary precision arithmetic.
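As a quick sanity check of the representability claim (weights are set by hand here, nothing is learned): a two-layer feed-forward network with step activations computing XOR, a Boolean function no single linear unit can represent.

```python
import numpy as np

def step(z):
    return (z >= 0).astype(float)

def xor_net(x1, x2):
    x = np.array([x1, x2], dtype=float)
    # Hidden layer: unit A fires for "x1 OR x2", unit B fires for "x1 AND x2".
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    h = step(W1 @ x + b1)
    # Output unit: OR minus AND, i.e. XOR.
    w2 = np.array([1.0, -1.0])
    b2 = -0.5
    return int(step(w2 @ h + b2))

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```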
But we don’t exactly observe hordes of unemployed programmers begging in the streets after losing their jobs to some machine learning algorithm, do we? Useful as they are, current machine learning algorithms are still very far from performing automatic programming.
But it does result in programs writing programs faster, better, and cheaper than humans writing those same programs.
Really? Can your system provide a correct implementation of the FizzBuzz program starting from a specification written in English? Can it play competitively in a programming contest?
Or, even if your system is restricted to machine learning, can it beat random forests on a standard benchmark?
If it can do no such thing perhaps you should consider avoiding such claims, in particular when you are unwilling to show your work.
And yes, there are generally multiple ways it can actually accomplish that: e.g. rather than solving the problem itself or modifying itself to solve it, the AGI could instead output the source code for a narrow AI which does so efficiently. But if you draw the system boundary large enough, it’s effectively the same thing.
Which we are currently very far from accomplishing.
I’m not disagreeing with the general thrust of your comment, which I think makes a lot of sense.
But an AGI is not at all required to start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth gain the ability to interpret and model human thoughts and languages.
We consider “write FizzBuzz from a description” to be a basic task of intelligence because it is one for humans. But humans are the most complicated machines in the solar system, and we are naturally good at dealing with other humans because we instinctively understand them to some extent. An AGI may be able to accomplish quite a lot before it can comprehend human-style intelligence through raw general intelligence and massive amounts of data and study.
I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult. Think of making first contact with an undiscovered human civilization, or better, a civilization of space-faring aliens.
… raw general intelligence …
Note that it is unclear whether there is any way to achieve “general intelligence” other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence. I mean, Solomonoff induction, AIXI and the like certainly look interesting on paper, but the extent to which they can be applied to real problems (if that is even possible) without any specialization is not known.
The human brain is based on a fairly general architecture (biological neural networks), instantiated into thousands of specialized modules. You could argue that biological evolution should be included into human intelligence at a meta level, but biological evolution is not a goal-directed process, and it is unclear whether humans (or human-like intelligence) was a likely outcome or a fortunate occurrence.
Anyway, even if it turns out that “universal induction” techniques are actually applicable to a practical human-made AGI, given the economic interests of humans I think that before seeing a full AGI we should see lots of improvements in narrow AI applications.
I agree that natural language understanding is not a necessary requirement for an early AGI, but I would say that by definition an AGI would have to be good at the sort of cognitive tasks humans are good at, even if communication with humans was somehow difficult.
I think we’re now saying the same thing, but to be clear: I don’t think it follows at all that an AGI needs to be good at X, for any interesting X, in order to be considered an AGI. No, it has the meta-level condition instead: it must be able to become good at X, if doing so accomplishes its goals and it is given suitable inputs and processing power to accomplish that learning task.
Indeed, my blitz AGI design involves no natural language processing components, at all. The initial goal loading and debug interfaces would be via a custom language best described as a cross between vocabulary-limited Lojban and a strongly typed functional programming language. Having looked at the best approaches to NLP so far (Watson et al), and expert opinions on what would be required to go beyond that and build a truly human-level understanding of language, I found nothing that could not be rediscovered and developed by a less capable seed AI, if given sufficient resources and time.
Note that it is unclear whether there is any way to achieve “general intelligence” other than by combining lots of modules specialized for the various cognitive tasks we consider to be necessary for intelligence.
Ok, try this experiment: start with a high-level diagram of what you would consider to be a complete human-level AGI design, i.e. one able to do everything a human can do, as well as or better than a human. I think we’re on the same page in assuming that at least on one level it would consist of a ton of little specialized programs handling the various specialized aspects of human intelligence. Enumerate all of these, and take a guess at how they are interconnected. I doubt you’ll be able to fit it all on one sheet of paper, or even ten. Here’s a start based on OpenCog, but there are lots more details you will need to fill in:
http://goertzel.org/MonsterDiagram.jpg
Now consider each component in turn. If you cut that component out of the diagram (perhaps rearranging some of the connections as necessary), could you reliably recreate it with the remaining pieces, if tasked with doing so and given the necessary inputs and processing power? If so, get rid of it. If not, ask: what are the minimum (less-than-human-level) capabilities required that would let you recreate the rest? Replace it with those. Continue until the design can’t be simplified further.
This experiment is a form of local search, and you may have to repeat from different starting points, or employ other global search methods to be sure that you are arriving at something close to the global minimum seed AGI design, but as an exercise I hope it gets the point across.
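The elimination procedure can be written down as a tiny greedy local search. A hedged sketch follows; the component names and the “recreatable from the rest” judgement are invented placeholders, and that judgement is of course where all the real difficulty lives:

```python
def minimize(components, recreatable_from):
    """Greedily drop any component the remaining ones could, in principle, recreate."""
    design = set(components)
    changed = True
    while changed:
        changed = False
        for c in sorted(design):
            rest = design - {c}
            if recreatable_from(c, rest):
                design = rest
                changed = True
                break  # restart: the judgement depends on what is still in the design
    return design

# Made-up example: NLP, vision and planning could be re-derived from a small core.
components = {"induction", "memory", "goal-system", "nlp", "vision", "planner"}
derivable = {
    "nlp":     {"induction", "memory", "goal-system"},
    "vision":  {"induction", "memory"},
    "planner": {"induction", "memory", "goal-system"},
}

def recreatable(c, rest):
    return bool(derivable.get(c)) and derivable[c] <= rest

print(minimize(components, recreatable))  # -> induction, memory and goal-system remain
```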
The basic AGI design I arrived at involved a dozen different “universal induction” techniques with different strengths, a meta-architecture for linking them together, a generic and powerful internal language for representing really anything, and basic scaffolding to stand in for the rest. It’s damn slow and inefficient at first, but like a human infant a good portion of its time would be spent “dreaming”, where it analyzes its acquired memories and seeks improvements to its own processes… and gains there have multiplying effects. Don’t discount the importance of power-law mechanisms.
On the subject of recurrent neural networks, keep in mind that you are such a network, and training you to write code and write it well took years.
A significant piece of my own architecture is basically doing the same thing, but with the classifiers themselves composed in a nearly Turing-complete total functional language; these are then operated on by other reflective agents that are able to reason about the code thanks to its strong type system.
Hmmm… Do you have a completeness result? I mean, I can see that if you make it a total language, you can just use coinduction to reason about indefinite computing processes, but I’m wondering what sort of internal logic you’re using that would allow complete reasoning over programs in the language and decidable typing (since to have the agent rewrite its own code it will also have to type-check its own code).
Current theorem-proving systems like Coq that work in logics this advanced usually have undecidable type inference somewhere, and require humans to add type annotations sometimes.
Personal opinion: OpenCog is attempting to get as general as it can within the logic-and-discrete-maths framework of Narrow AI. They are going to hit a wall as they try to connect their current video-game like environment to the real world, and find that they failed to integrate probabilistic approaches reasonably well. Also, without probabilistic approaches, you can’t get around Rice’s Theorem to build a self-improving agent.
Wellll… the agent could make “narrow” self-improvements. It could build a formal specification for a few of its component parts and then perform the equivalent of provable compiler optimizations. But it would have a very hard time strengthening its core logic, as Rice’s Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.
But it would have a very hard time strengthening its core logic, as Rice’s Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.
This seems like the wrong conclusion to draw. Rice’s theorem (and other undecidability results) implies that there exist optimizations that are safe but cannot be proven to be safe. It doesn’t follow that most optimizations are hard to prove. One imagines that software could do what humans do—hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won’t necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.
One imagines that software could do what humans do—hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won’t necessarily enumerate the set of provable optimizations (much less the set of all safe optimizations), but it will produce some.
To do that it’s going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
Uh, what were you looking at? The basic foundation of OpenCog is a probabilistic logic called PLN (the wrong one to be using, IMHO, but a probabilistic logic nonetheless). Everything in OpenCog is expressed and reasoned about in probabilities.
Aaaaand now I have to go look at OpenCog again.
To do that it’s going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.
I don’t see why this follows. It might be that mildly smart random search, plus a theorem prover with a fixed timeout, plus a benchmark, delivers a steady stream of useful optimizations. The probabilistic reasoning and utility calculation might be implicit in the design of the “self-improvement-finding submodule”, rather than an explicit part of the overall architecture. I don’t claim this is particularly likely, but neither does undecidability seem like the fundamental limitation here.
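A rough sketch of that recipe (the candidate list stands in for “mildly smart random search”, and bounded exhaustive checking with a time budget stands in for the theorem prover with a fixed timeout):

```python
import time

def original(n):
    return sum(i for i in range(n))      # the program we want to "optimize"

# Candidate rewrites; a real system would generate these by (mildly smart) search.
candidates = [
    lambda n: n * (n - 1) // 2,          # correct closed form
    lambda n: n * (n + 1) // 2,          # plausible-looking but wrong
    lambda n: sum(range(n)),             # correct, modest win
]

def verified(candidate, budget_s=0.25):
    """Stand-in for 'try to find a proof, and give up if it takes too long'."""
    deadline = time.time() + budget_s
    for n in range(500):                 # bounded exhaustive checking, not a real prover
        if time.time() > deadline:
            return False                 # budget exhausted: treat as unproven
        if candidate(n) != original(n):
            return False                 # counterexample: definitely not safe
    return True

def bench(f, n=5000, reps=20):
    start = time.time()
    for _ in range(reps):
        f(n)
    return time.time() - start

baseline = bench(original)
accepted = [c for c in candidates if verified(c) and bench(c) < baseline]
print(f"accepted {len(accepted)} of {len(candidates)} candidate optimizations")
```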
I have trouble trusting your expert opinion because it is not clear to me that you are an expert in the field, though you claim to be. Google doesn’t point to any of your research in the area, and I can find no mention of your work beyond bitcoin by any (other) AI researchers. Feel free to link to anything corroborating your claims.
I have as much credibility as Eliezer Yudkowsky in that regard, and for the same reason. As I mention in the post you replied to, my work is private and unpublished. None of my work is accessible to the internet, as it should be. I consider it unethical to be publishing AGI research given what is at stake.
I have as much credibility as Eliezer Yudkowsky in that regard
That is, not very much. But at least Eliezer Yudkowsky and pals have made an effort to publish arguments for their position, even if they haven’t published in peer-reviewed journals or conferences (except some philosophical “special issue” volumes, IIRC).
Your “Trust me, I’m a computer scientist and I’ve fiddled with OpenCog in my basement but I can’t show you my work because humans are not ready for it” gives you even less credibility.
Eliezer published a lot of relevant work; I have seen none from you.
Eliezer has publications in the field of artificial intelligence? Where?
Yudkowsky, Eliezer (2001): Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures.
Yudkowsky, Eliezer (2007): Levels of Organization in General Intelligence. In: Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin, 389–501.
Hanson, Robin; Yudkowsky, Eliezer (2013): The Hanson-Yudkowsky AI-Foom Debate.
...
Don’t make me figure this stuff out and publish the safe bits just to embarrass you guys.
Do you have any predictions of what types of new narrow-AI we are likely to see in the next few years?
No, I wouldn’t feel qualified to make predictions on novel narrow AI developments. I stay up to date with what’s being published chiefly because my own design involves integrating a handful of narrow AI techniques, and new developments have ramifications for that. But I have no inside knowledge about what frontiers are being pushed next.
Edit: narrow AI and general AI are two very different fields, in case you didn’t know.