“Unspecified false assumptions” seems too weak to respond to. That page refers to a bunch of estimates and surveys of the time to superintelligence from a range of individuals and groups.
WBE seems more likely to me for a simple reason: we can implement a brain without understanding how it works. Achieving human-equivalent AGI requires that we know what we are doing.
If we can build a model of a worm brain, we can probably scale it up a billion times without any deep understanding of how it works. That’s just one type of shortcut to superintelligence on the path to WBE. In practice there are lots of similar shortcuts—and taking any one of them skips WBE, making it redundant.
You simply don’t need to understand how an adult brain works in order to build something with superior functionality. We did not need to understand how birds worked to make a flying machine. We did not need to understand how fish worked to build a submarine. Brains are unlikely to be very much different in that respect.
And despite six decades’ worth of research on that topic, I cannot as yet see any discernible indication that we are significantly closer to mastering general intelligence than we were when we began the effort.
So: you can’t see much progress. However there evidently has been progress—now we have Watson, Siri, W.A., Netflix, Google and other systems doing real work—which is a big deal. Machine intelligence is on the commercial slippery slope that will lead to superintelligence—while whole brain emulation simply doesn’t work and so has no commercial applications. Its flagship “products” are silly PR stunts.
The Whole Brain Emulation Roadmap is silent about the issue—but its figures generally support the contention that WBE is going to arrive too slowly to have a significant impact.
According to the information as yet available, AGI is distantly remote. WBE is not.
If we can build a model of a worm brain, we can probably scale it up a billion times without any deep understanding of how it works. That’s just one type of shortcut to superintelligence on the path to WBE.
Ten million dogs cannot contemplate what Shakespeare meant when he said that a rose, by any other name, would still smell as sweet. Even a billion dogs could not do this. Nor could 3^^^3 nematodes.
This belief is just plain unintelligent.
You simply don’t need to understand how an adult brain works in order to build something with superior functionality.
… It is incomprehensible to me how one could come to this belief.
We did not need to understand how birds worked to make a flying machine.
No, but we did need to know how flight works to build a flying machine. We don’t need to know how thought works to make a WBE happen. We only need to know how emulations work—and we already know this. What we don’t know is how cognition / intelligence works.
So: you can’t see much progress. However there evidently has been progress—now we have Watson, Siri, W.A., Netflix, Google and other systems doing real work—which is a big deal.
Do you understand the conceptual difference between narrow intelligence and general intelligence? AI researchers gave up decades ago on the notion of narrow AI yielding general AI.
So yes. I can’t see any progress at all to speak of. And yes, I know of Watson, Siri, Google, etc., etc.
And yet the difference in the ability of a collection of dogs to so contemplate it is so negligible that it could be ten million dogs or ten million-zeroes dogs. It still wouldn’t happen.
Even a billion dogs could not do this. Nor could 3^^^3 nematodes. This belief is just plain unintelligent.
That was a straw man, though. The idea was to scale up a small brain into a big brain—not to put lots of small brains together.
You simply don’t need to understand how an adult brain works in order to build something with superior functionality.
… It is incomprehensible to me how one could come to this belief.
Right—so: this has already been done in many domains—e.g. chess. Engineers will just mop up the remaining domains without bothering with the daft and unnecessary task of reverse engineering the human brain.
No, but we did need to know how flight works to build a flying machine. [...] What we don’t know is how cognition / intelligence works.
Well, we do know what the equivalent of “lift” is. It’s inductive inference. See: Inductive inference is like lift. We can already generate the equivalent of lift. We just don’t yet know how to get a lot of it in one place.
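Here is a minimal sketch of what “generating a small amount of it” can look like, assuming we count something as simple as Laplace’s rule of succession as inductive inference; the bit stream below is made-up example data, not anything from the lift analogy itself.

```python
# A tiny inductive-inference rule: Laplace's rule of succession.
# Given n observed bits of which k were 1, it predicts the next bit is 1
# with probability (k + 1) / (n + 2).

def p_next_is_one(history):
    """Laplace's rule of succession applied to a list of 0/1 observations."""
    return (sum(history) + 1) / (len(history) + 2)

observations = [1, 1, 0, 1, 1, 1]          # made-up example data
print(f"P(next bit is 1) = {p_next_is_one(observations):.3f}")  # 0.750
```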
Do you understand the conceptual difference between narrow intelligence and general intelligence? AI researchers gave up decades ago on the notion of narrow AI yielding general AI.
No, they didn’t. A few researchers did that, in an attempt to distinguish themselves from the mainstream.
So yes. I can’t see any progress at all to speak of. And yes, I know of Watson, Siri, Google, etc., etc.
So: I think progress is happening. It looks something like: this and this. Machines already make most stock market trades, can translate languages and do speech recognition, and bots have conquered manufacturing and are busy invading retail outlets, banks, offices and call centres. One wonders what it would take for you to classify something as progress towards machine intelligence.
I can “scale up” a threaded program by giving more processors for the threads to run on, but this doesn’t actually improve the program output (apart from rounding error and nondeterministic effects), it just makes the output faster. I can “scale up” an approximation algorithm that has a variable discretization size N, and that actually improves the output… but how do you adjust “N” in a worm brain?
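A minimal sketch of those two senses of “scaling up”, using trapezoid-rule integration as a stand-in computation (the integrand and the numbers are illustrative assumptions): splitting the same N across more threads only changes how fast the same answer arrives, while raising N changes the answer itself.

```python
# Two senses of "scaling up", illustrated with trapezoid-rule integration
# of f(x) = x**2 over [0, 1] (exact answer: 1/3).
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x * x

def trapezoid(a, b, n):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

# Scaling up the discretization N: the output itself improves.
for n in (10, 100, 1000):
    print(n, trapezoid(0.0, 1.0, n))

# Scaling up the hardware: the same N = 1000 split across four threads
# gives the same answer (up to rounding); only the wall-clock time can change.
chunks = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = pool.map(lambda ab: trapezoid(ab[0], ab[1], 250), chunks)
print(sum(parts))
```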
I can “scale up” a threaded program by giving more processors for the threads to run on, but this doesn’t actually improve the program output (apart from rounding error and nondeterministic effects), it just makes the output faster.
Sure. Computers don’t always behave like brains do. Indeed, they are mostly designed to compensate for brain weaknesses—to be strong where we are weak.
I can “scale up” an approximation algorithm that has a variable discretization size N, and that actually improves the output… but how do you adjust “N” in a worm brain?
We know that nature can easily scale brains up—because it did so with chimpanzees. Scaling brains up may not be trivial—but it’s probably much easier than building one in the first place. Once we can build a brain, we will probably be able to make another one that is bigger.
Once we can build a brain, we will certainly be able to make another one that is bigger, but that won’t make it better. “Given this arrangement of neurons, what firing patterns will develop” is almost a completely different task than “given this problem to solve, what arrangement of neurons will best solve it”, which itself is merely a footnote to the task of “wait, what are our definitions of ‘problem’ and ‘best’ again?”
Nature scaled chimpanzee brains up by creating billions of them and running them through millions of years of challenging environments; that’s many orders of magnitude more difficult than building a single brain, and the result is merely expected to be whatever works best in the testing environments, which may or may not resemble what the creators of those environments want or expect.
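A toy illustration of how different those two tasks are (the two-unit threshold network and the blind random search below are purely illustrative assumptions, not a proposal): running a fixed wiring forward is one cheap pass, while finding a wiring that solves even a trivial problem like XOR means scoring a great many candidate wirings, each by running it forward.

```python
# Forward problem vs. inverse problem, on a deliberately tiny scale.
import random

def run_network(w, x1, x2):
    """Forward task: given a fixed wiring (7 weights), compute what it does."""
    h = 1.0 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0.0          # hidden unit
    return 1.0 if w[3] * x1 + w[4] * x2 + w[5] * h + w[6] > 0 else 0.0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def errors(w):
    return sum(run_network(w, x1, x2) != y for (x1, x2), y in XOR.items())

# Inverse task: given the problem (XOR), search for a wiring that solves it.
# The "design method" here is blind random search, i.e. many forward runs.
random.seed(0)
best_err = len(XOR) + 1
for trial in range(20000):
    candidate = [random.uniform(-1, 1) for _ in range(7)]
    best_err = min(best_err, errors(candidate))
    if best_err == 0:
        break
print("candidate wirings tried:", trial + 1, "| remaining errors:", best_err)
```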
Once we can build a brain, we will certainly be able to make another one that is bigger, but that won’t make it better.
It is very likely that it would, IMHO.
Nature scaled chimpanzee brains up by creating billions of them and running them through millions of years of challenging environments; that’s many orders of magnitude more difficult than building a single brain [...]
Nature had already done the R+D for building a partly-resizable brain, though. Turning a chimp brain into a human brain was mostly a case of turning a few knobs relating to brain development, and a few more relating to pelvis morphology. There is no good reason for thinking that resizing brains is terribly difficult for nature to do—at least up to a point.
Could you define how we make a brain “bigger”? Do we replace every one neuron with 2, then connect them up the same way? Without a specific definition there’s nothing but handwaving here, and it’s my contention that finding the specific definition is the difficult part.
But more shockingly: do you really have evidence that the last six million years of human evolution was “turning a few knobs”? If so, then I would very much like to hear it. If not, then we seem to be operating under such divergent epistemologies that I’m not sure what else I can productively say here.
Could you define how we make a brain “bigger”? Do we replace every one neuron with 2, then connect them up the same way? Without a specific definition there’s nothing but handwaving here, and it’s my contention that finding the specific definition is the difficult part.
That would be model-specific. Since we don’t actually have the model under discussion yet it is hard to go into details—but most NN models have the number of neurons as a variable.
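For what it’s worth, here is the kind of thing meant by the neuron count being a variable. This is a generic random rate-model, purely an illustrative stand-in (the actual worm model under discussion doesn’t exist yet), and a genuinely billionfold scale-up would of course need sparse connectivity rather than a dense matrix.

```python
# A generic rate-based recurrent network in which "brain size" is just a
# constructor argument.  Illustrative stand-in only, not a worm-brain model.
import numpy as np

class RateNetwork:
    def __init__(self, n_neurons, seed=0):
        rng = np.random.default_rng(seed)
        # Random connectivity, scaled so the dynamics stay bounded.
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_neurons), (n_neurons, n_neurons))
        self.state = rng.normal(0.0, 1.0, n_neurons)

    def step(self):
        self.state = np.tanh(self.W @ self.state)
        return self.state

small = RateNetwork(n_neurons=302)       # C. elegans has about 302 neurons
bigger = RateNetwork(n_neurons=3020)     # "scaled up" tenfold: one number changed
for _ in range(10):
    small.step()
    bigger.step()
print(small.state[:3], bigger.state[:3])
```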
But more shockingly: do you really have evidence that the last six million years of human evolution was “turning a few knobs”?
I didn’t claim that the last six million years of human evolution was “turning a few knobs”. I referred explicitly to changes in the brain.
I was mostly referring to the evidence from genetics that humans are chimpanzees with a relatively small number of functional genetic changes. Plus the relatively short time involved. However, it is true that there are still some thirty-five million SNPs.
There’s other evidence that nature’s brains are relatively easy to dynamically resize. Dwarfism, giantism and other growth disorders show that nature can relatively easily make human-scale brains of variable size—at least up to a point. Even kids illustrate that point pretty well. It does not require a lot of evolutionary R+D to make a brain bigger or smaller—that R+D was done by evolution long ago—and what we now have is resizable brains.
If all we wanted was to triple human brain size just like Nature can, we’d be breeding more whales. And if tripling brain size inherently tripled intelligence, we’d be asking the whales for their opinions afterwards.
If increasing brain size instead merely provides enough raw matter for other changes to eventually mold into improved intelligence, then it doesn’t just matter how hard the size increase is, it also matters how hard the other changes are. And at least in nature’s case, I reiterate that the improvement process was several orders of magnitude harder than the one-brain process. We might be able to do better than nature, but then we’re no longer talking about “it was easy in nature, so it will be easy for us too”, we’re talking about “it was hard in nature, so it might be hard for us too”.
Nature does seem to be able to scale brains down without the millions of years of evolution it took to scale them up, but that at least makes perfect sense as a pre-evolved characteristic. Accidents and poor nutrition are ubiquitous, so there’s a clear selection pressure for brains to develop to be robust enough that restricted growth or damage can still leave a functional result. Is there any similarly strong evolutionary pressure for brains to develop in such a way that opportunities for increased growth produce a better-functional result? If so it may not have been enough pressure; supernormal growth opportunities do seem to exist but aren’t necessarily positive.
And at least in nature’s case, I reiterate that the improvement process was several orders of magnitude harder than the one-brain process.
If we take “time” as a proxy for “difficulty” we have:
Origin of life: 3500 MYA.
Origin of brains: 600 MYA.
Origin of chimp-human split: 7 MYA.
According to those figures, scaling brains up a few thousand times was much easier than making one in the first place. Scaling one up by a factor of 3 was much, much easier.
As for modern big human brains, the main examples I know of arise from giantism and head binding.
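Spelling out the intervals behind that comparison, using the rough dates quoted above:

```python
# Durations implied by the rough dates quoted above (millions of years ago).
life, brains, split = 3500, 600, 7

print("origin of life -> first brains:   ", life - brains, "MY")   # ~2900 MY
print("first brains -> chimp/human split:", brains - split, "MY")  # ~593 MY
print("chimp brain -> ~3x human brain:   ", split, "MY")           # 7 MY
```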
The Whole Brain Emulation Roadmap is silent about the issue—but its figures generally support the contention that WBE is going to arrive too slowly to have a significant impact.
That’s just a baseless fantasy, IMHO.
Ten million dogs cannot contemplate what Shakespeare meant when he said that a rose, by any other name, would still smell as sweet. Even a billion dogs could not do this. Nor could 3^^^3 nematodes.
Of all the metaphors you could have chosen, you picked the one that dogs are the closest to contemplating.
And yet the difference in the ability of a collection of dogs to so contemplate it is so negligible that it could be ten million dogs or ten million-zeroes dogs. It still wouldn’t happen.
If you have arbitrarily large computing power, you can just brute-force all problems. 3^^^3 nematodes could probably play chess.
… “An arbitrarily large population of nematodes” is not a Turing-complete computational substrate.
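The brute-force point above is easy to make concrete on a game small enough to exhaust completely. Here is a full minimax enumeration of tic-tac-toe, a stand-in of my own choosing for chess, which has the same structure but astronomically more positions.

```python
# Exhaustive minimax over tic-tac-toe: with enough raw computation you can
# simply enumerate every line of play and read off the game's value.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X (+1 win, 0 draw, -1 loss) with perfect play."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    values = [minimax(board[:i] + player + board[i + 1:],
                      "O" if player == "X" else "X")
              for i, cell in enumerate(board) if cell == "."]
    return max(values) if player == "X" else min(values)

print("Value of tic-tac-toe under perfect play:", minimax("." * 9, "X"))  # 0, a draw
```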