AIXI-shaped magic bullet?

AIXI’s contribution is more philosophical than practical. I find a depressing over-emphasis on Bayesian probability theory here as the ‘math’ of choice, versus computational complexity theory, which is the proper domain.
The most likely outcome of a math breakthrough will be some rough lower and/or upper bounds on the shape of intelligence as a function of space/time complexity. And right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse-engineer it.
EY and the math folk here reach a very different conclusion, but I have yet to find a well-considered justification from him. I suspect that the major reason the mainstream AI community doesn’t subscribe to SIAI’s math-magic-bullet theory is that they hold the same position outlined above: i.e., that when we get the math theorems, all they will show is what we already suspect: human-level intelligence requires X memory bits and Y bit-ops/second, where X and Y are roughly close to brain levels.
This, if true, kills the entirety of the software recursive self-improvement theory. The best that software can do is approach the theoretical optimum complexity class for the problem; after that point, all one can do is fix it into hardware for a further large constant gain. I explore this a little more here.
right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse-engineer it.
That seems like crazy talk to me. The brain is not optimal, in either its hardware or its software, and not by a long way! Computers have already steamrollered its memory and arithmetic units, and that happened before we even had nanotechnology computing components. The rest of the brain seems likely to follow.
Edit: removed a faulty argument at the end pointed out by wedrifid.
I am talking about optimality for AGI in particular, with respect to circuit complexity, under the typical assumption that a synapse is vaguely equivalent to a transistor, or at most around ten transistors. If you compare on that level, the brain looks extremely efficient given how slow its neurons are. Does this make sense?
The brain’s circuits have around 10^15 transistor equivalents running at a speed of roughly 10^3 cycles per second, for about 10^18 transistor-cycles per second.

A typical modern CPU has around 10^9 transistors running at 10^9 cycles per second, also about 10^18 transistor-cycles per second.
Our CPUs’ strength is not their circuit architecture or software; it’s the raw speed of CMOS, a roughly million-fold substrate advantage. The brain’s learning algorithm, the way in which the cortex rewires in response to input data, appears to be a pretty effective universal learning algorithm.
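As a quick back-of-the-envelope check of those figures, here is a sketch; the counts and clock rates below are just the order-of-magnitude assumptions stated above, not measured values:

```python
# Rough order-of-magnitude comparison using the figures assumed above.
brain_transistor_equivalents = 1e15   # assumed synapse ~ transistor-equivalent count
brain_cycles_per_second = 1e3         # assumed neural "clock" rate
cpu_transistors = 1e9                 # typical modern CPU (assumed)
cpu_cycles_per_second = 1e9           # ~1 GHz clock (assumed)

brain_throughput = brain_transistor_equivalents * brain_cycles_per_second
cpu_throughput = cpu_transistors * cpu_cycles_per_second

print(f"Brain: {brain_throughput:.0e} transistor-cycles/s")   # 1e+18
print(f"CPU:   {cpu_throughput:.0e} transistor-cycles/s")     # 1e+18
print(f"Substrate speed ratio: {cpu_cycles_per_second / brain_cycles_per_second:.0e}")  # 1e+06
```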
The brain’s architecture is a joke. It is as though a telecoms engineer decided to connect a whole city’s worth of people by running cables directly between any two people who wanted to have a chat. It hasn’t even gone fully digital yet, so things can’t easily be copied or backed up. The brain is just awful; no wonder human cognition is such a mess.

Nothing you wrote led me to this conclusion.
Then some questions: How long would Moore’s law have to continue into the future, with no success in AGI, for that to show that the brain is well optimized for AGI at the circuit level?

I’ve made some attempts to show rough bounds on the brain’s efficiency; are you aware of some other approach or estimate?
Then some questions: How long would Moore’s law have to continue into the future, with no success in AGI, for that to show that the brain is well optimized for AGI at the circuit level?
Most seem to think the problem is mostly down to software—and that supercomputer hardware is enough today—in which case more hardware would not necessarily help very much. The success or failure of adding more hardware might give an indication of how hard it is to find the target of intelligence in the search space. It would not throw much light on the issue of how optimally “designed” the brain is. So: your question is a curious one.
The success or failure of adding more hardware might give an indication of how hard it is to find the target of intelligence in the search space.
For every computational system and algorithm, there is a minimum space-time complexity at which that system can be encoded. As yet we don’t know how close the brain is to the minimum space-time-complexity design for an intelligence of similar capability.
Let’s make the question more specific: what’s the minimum bit representation of a human-equivalent mind? If you think the brain is far off that, how do you justify that?
Of course more hardware helps: it allows you to search through the phase space faster. Keep in mind the enormous training time involved.
I happen to believe the problem is ‘mostly down to software’, but I don’t see that as a majority view; the Moravec/Kurzweil view that we need brain-level hardware (within an order of magnitude or so) seems to be the majority position at this point.
We need brain-level hardware (within an order of magnitude or so) if machines are going to be cost-competitive with humans. If you just want a supercomputer mind, then no problem.
I don’t think Moravec or Kurzweil ever claimed it was mostly down to hardware. Moravec’s charts are of hardware capability—but that was mainly because you can easily measure that.
We need brain-level hardware (within an order of magnitude or so) if machines are going to be cost-competitive with humans.
I don’t see why that is. If you were talking about ems, then the threshold should be 1:1 realtime. Otherwise, for most problems that we know how to program a computer to do, the computer is much faster than humans even at existing speeds. Why do you expect that a computer that’s, say, 3x slower than a human (well within an order of magnitude) would be cost-competitive with humans, while one that’s 10^4 times slower wouldn’t?
Evidently there are domains where computers beat humans today—but if you look at what has to happen for machines to take the jobs of most human workers, they will need bigger and cheaper brains to do that. “Within an order of magnitude or so” seems like a reasonable ballpark figure to me. If you are looking for more details about why I think that, they are not available at this time.
I suspect that the controlling reason why you think that is that you assume it takes human-like hardware to accomplish human-like tasks, and greatly underestimate the advantages of a mind being designed rather than evolved.
Let’s make the question more specific: what’s the minimum bit representation of a human-equivalent mind?
Way off. Let’s see… I would bet at even odds that it is 4 or more orders of magnitude off optimal.
If you think the brain is far off that, how do you justify that?
We have approximately one hundred billion neurons each and roughly the same number of glial cells (more of the latter if we are smart!). Each of those includes a full copy of our DNA, which is itself not exactly optimally compressed.
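To put a rough scale on that observation, here is a sketch of the arithmetic; the cell count and the two-bits-per-base encoding are assumptions for illustration, and the point is only that counting every copy inflates the total enormously:

```python
# Rough illustration: information in one genome copy vs. one copy per brain cell.
# All figures are order-of-magnitude assumptions, not measurements.
base_pairs = 3.2e9            # approximate size of the human genome
bits_per_base = 2             # A/C/G/T, ignoring any further compression
cells = 2e11                  # ~10^11 neurons plus ~10^11 glia, as above

one_copy_bytes = base_pairs * bits_per_base / 8
all_copies_bytes = one_copy_bytes * cells

print(f"One genome copy:    ~{one_copy_bytes / 1e9:.1f} GB")    # ~0.8 GB
print(f"Copy in every cell: ~{all_copies_bytes / 1e18:.0f} EB") # ~160 EB
```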
Way off. Let’s see… I would bet at even odds that it is 4 or more orders of magnitude off optimal.
You didn’t answer my question: what is your guess at the minimum bit representation of a human-equivalent mind?

You didn’t use the typical methodology for measuring the brain’s storage, nor did you provide another.
I wasn’t talking about molecular-level optimization. I started with the typical assumptions: synapses represent a few bits each, the human brain holds around 100 TB to 1 PB of data/circuitry, and so on (see The Singularity Is Near).

So you say the brain’s algorithmic representation is off by 4 orders of magnitude or more: you are saying that you think a human-equivalent mind can be represented in 10 to 100 GB of data/circuitry?
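Spelling out the arithmetic behind that reading of the claim (taking the storage range and the ‘four orders of magnitude’ figure above at face value; these are the discussion’s assumptions, not established facts):

```python
# If the brain encodes ~100 TB to 1 PB, and that is 4+ orders of magnitude
# above the minimum, the implied minimum is in the tens-of-GB range.
brain_storage_bytes = (100e12, 1e15)   # 100 TB to 1 PB (assumed)
overhead_factor = 1e4                  # "4 or more orders of magnitude"

implied_minimum = tuple(b / overhead_factor for b in brain_storage_bytes)
print([f"{b / 1e9:.0f} GB" for b in implied_minimum])   # ['10 GB', '100 GB']
```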
If so, why did evolution not find that by now? It has had plenty of time to compress at the circuit level. In fact, we actually know that the brain performs provably optimal compression on its input data in a couple of domains; see V1 and its evolution toward Gabor-like edge feature detection.

Evolution has had plenty of time to find a well-optimized cellular machinery based on DNA, plenty of time to find a well-optimized electro-chemical computing machinery built on top of that, and plenty of time to find well-optimized circuits within that space.

Even insects are extremely well-optimized at the circuit level: given their neuron/synapse counts, we have no evidence whatsoever to believe that vastly simpler circuits exist that can perform the same functionality.

When we have used evolutionary exploration algorithms to design circuits natively, given enough time we see similarly complex, messy, but near-optimal designs, and this appears to be a general trend.
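For concreteness, this is roughly the kind of ‘Gabor-like’ filter referred to above: a sinusoidal grating under a Gaussian envelope, the shape that sparse-coding models trained on natural images tend to learn. The size and parameter values below are arbitrary illustrative choices, not anything from the discussion:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0, phase=0.0):
    """Gabor filter of the kind V1 simple-cell receptive fields resemble:
    a sinusoidal grating windowed by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates to the filter's preferred orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + phase)
    return envelope * carrier

# A filter tuned to diagonal edges/gratings.
kernel = gabor_kernel(theta=np.pi / 4)
print(kernel.shape)  # (21, 21)
```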
Are you saying that you are counting every copy of the DNA as information that contributes to the total amount? If so, I say that’s invalid. What if each cell were remotely controlled from a central server containing the DNA information? I can’t see that we’d count the DNA for each cell then—yet it is no different really.
I agree that the number of cells is relevant, because there will be a lot of information in the structure of an adult brain that has come from the environment, rather than just from the DNA, and more cells would seem to imply more machinery in which to put it.
Are you saying that you are counting every copy of the DNA as information that contributes to the total amount? If so, I say that’s invalid. What if each cell were remotely controlled from a central server containing the DNA information? I can’t see that we’d count the DNA for each cell then—yet it is no different really.
I thought we were talking about the efficiency of the human brain. Wasn’t that the whole point? If every cell is remotely controlled from a central server then, well, that’d be a whole different algorithm. In fact, we could probably scrap the brain and just run the central server.
Genes actually do matter in the functioning of neurons. Chemical additions (e.g., ethanol) and changes in the environment (e.g., hypoxia) can influence gene expression in cells in the brain, affecting their function.
I suggest the brain is a ridiculously inefficient contraption thrown together from the building blocks that were practical to produce from DNA representations and suitable for the kinds of environments animals tended to be exposed to. We should be shocked to find that it also manages to be anywhere near optimal for general intelligence. Among other things, it would suggest that evolution packed the wrong lunch.
Okay, I may have misunderstood you. It looks like there is some common ground between us on the issue of inefficiency. I think the brain would probably be inefficient as well, since it has to be thrown together by the very particular process of evolution, which is optimized for building things without look-ahead intelligence rather than for achieving the most efficient results.
Then some questions: How long would Moore’s law have to continue into the future, with no success in AGI, for that to show that the brain is well optimized for AGI at the circuit level?
A Sperm Whale and a bowl of Petunias.
My first impulse was to answer that Moore’s law could go on forever and never produce success in AGI, since ‘AGI’ isn’t just what you get when you put enough computronium together for it to reach critical mass. But even given no improvements in understanding, we could very well arrive at AGI just through ridiculous amounts of brute force. In fact, given enough space and time, randomised initial positions, and possibly a steady introduction of negentropy, we could produce an AGI in Conway’s Life.
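For reference, the substrate invoked there is simple to write down; here is a minimal sketch of one Conway’s Life update step (the toroidal wrap-around and the array representation are implementation choices for the example):

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One generation of Conway's Life on a toroidal (wrap-around) grid.
    grid: 2D array of 0s and 1s."""
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider, stepped a few generations.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid.sum())  # still 5 live cells: the glider has simply moved
```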
I’ve made some attempts to show rough bounds on the brain’s efficiency; are you aware of some other approach or estimate?
You could find some rough bounds by seeing how many parts of a human brain you can cut out without changing IQ. Trivial little things like, you know, the prefrontal cortex.
You are just talking around my questions, so let me make it more concrete. An important task for any AGI is higher-level sensor-data interpretation, i.e., seeing. We have an example system in the human brain: the human visual system (HVS), which is currently leaps and bounds beyond the state of the art in machine vision (although the latter is making progress towards the former through reverse engineering).

So machine vision is a subtask of AGI. What is the minimal computational complexity of human-level vision? This is a concrete computer science problem. It has a concrete answer, not “sperm whale and petunia” nonsense.

Until someone builds a system better than the HVS, or proves some complexity bounds, we don’t know how optimal the HVS is for this problem, but we also have no reason to believe that it is orders of magnitude off from the theoretical optimum.
The article linked to in the parent is entitled: “Created in the Likeness of the Human Mind: Why Strong AI will necessarily be like us”.

Good-quality general-purpose data compression would “break the back” of the task of building synthetic intelligent agents, and that’s a “simple” math problem, as I explain at: http://timtyler.org/sequence_prediction/
At least it can be stated very concisely. Solutions so far haven’t been very simple—but the brain’s architecture offers considerable hope for a relatively simple solution.
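As a toy illustration of the compression-as-prediction framing behind that link (the order-1 byte predictor and the sample text are arbitrary choices for the example; real general-purpose compressors are far more sophisticated):

```python
import math
from collections import defaultdict

def predictive_code_length(data: bytes) -> float:
    """Ideal code length (in bits) for `data` under a simple adaptive
    order-1 byte predictor with add-one smoothing.
    Better prediction means a shorter code: compression = prediction."""
    counts = defaultdict(lambda: defaultdict(int))  # context byte -> next-byte counts
    bits = 0.0
    prev = None
    for b in data:
        ctx = counts[prev]
        p = (ctx.get(b, 0) + 1) / (sum(ctx.values()) + 256)  # smoothed prediction
        bits += -math.log2(p)          # ideal arithmetic-coding cost of this byte
        ctx[b] += 1                    # update the model after "seeing" the byte
        prev = b
    return bits

text = b"the cat sat on the mat; " * 200
ratio = predictive_code_length(text) / (8 * len(text))
print(f"Predicted code size is {ratio:.2f} of the raw size")  # substantially below 1.0
```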