There are two main routes to the Singularity: brain emulation/upload and de novo AGI.
Brain emulations are a joke. Intelligence augmentation seems much more significant—though it is not really much of an alternative to machine intelligence.
Why would you think they’re a joke? We seem to be on a clear path to achieve it in the near future.
As a route to machine intelligence they don’t make sense, because they will become viable too late; they will be beaten.
How do you know that?
Multiple considerations are involved. One of them is to do with bioinspiration. To quote from my Against Whole Brain Emulation essay:
Engineers did not learn how to fly by scanning and copying birds. Nature may have provided a proof of the concept, and inspiration—but it didn’t provide the details the engineers actually used. A bird is not much like a propeller-driven aircraft, a jet aircraft or a helicopter.
The argument applies across many domains. Water filters are not scanned kidneys. The Hoover Dam is not a scan of a beaver dam. Solar panels are not much like leaves. Humans do not tunnel much like moles do. Submarines do not closely resemble fish. From this perspective, it would be very strange if machine intelligence was much like human intelligence.
The existence of non-biomimetic technology does not prove that biomimetics are inherently impractical.
There are plenty of recent examples of successful biomimetics:
Biomimetic solar: http://www.youtube.com/watch?v=sBpusZSzpyI
Anisotropic dry adhesives: http://bdml.stanford.edu/twiki/bin/view/Rise/StickyBot
Self-cleaning paints: http://www.stocorp.com/blog/?tag=lotusan
Genetic algorithms: http://gacs.sourceforge.net/
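To make the genetic-algorithms entry concrete: the technique copies natural selection's loop of random variation plus selective reproduction. Below is a minimal, generic sketch of that loop on the toy "OneMax" problem (evolving bit strings toward all ones). It is only an illustration of the idea, not code from the linked gacs project, and all the parameter values are arbitrary.

```python
import random

# Minimal genetic algorithm on the "OneMax" toy problem:
# evolve bit strings toward all ones, mimicking variation + selection in nature.

GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01


def fitness(bits):
    """Number of ones in the genome; higher is better."""
    return sum(bits)


def select(population):
    """Tournament selection: pick two genomes at random, keep the fitter one."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b


def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]


def mutate(bits):
    """Flip each bit independently with a small probability."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in bits]


population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", GENOME_LEN)
```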
The reason we didn’t have much historical success with biomimetics is that biological systems are far too complex to understand with a cursory look. We need modern bioinformatics, imaging, and molecular biology techniques to begin understanding how natural systems work, and to be able to manipulate things on a small enough scale to replicate them.
It’s just now becoming possible. Engineers didn’t look to biology before because they knew too little about it and lacked the tools to manipulate molecular systems. Bioengineering itself is a very new field, and a good portion of the academic bioengineering departments that exist now are less than five years old! Bioengineering today is in a position similar to that of physics in the 19th century.
I looked at your essay, and don’t see that you have any evidence showing that WBE is infeasible, or will take longer to develop than de novo AI. I would argue there’s no way to know how long either will take to develop, because we don’t even know what the obstacles are really. WBE could be as simple as building a sufficiently large network with neuron models like the ones we have already, or we could be missing some important details that make it far more difficult than that. It’s clear that you don’t like WBE, and you have some interesting reasons why we might not want to use WBE.
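For a sense of what “neuron models like the ones we have already” can mean at the simple end, here is a toy network of leaky integrate-and-fire neurons, one common simple model. The commenter does not specify which models they have in mind, and the network size, weights, and constants below are arbitrary toy values; whether scaling something like this up would suffice for WBE is exactly the point under dispute.

```python
import numpy as np

# Toy network of leaky integrate-and-fire neurons: each neuron's membrane
# potential decays toward rest and is driven by spikes from other neurons
# plus noisy external input; crossing threshold emits a spike and resets.

N = 100                 # number of neurons
DT = 1.0                # timestep in ms
TAU = 20.0              # membrane time constant in ms
V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0

rng = np.random.default_rng(seed=0)
weights = rng.normal(0.0, 0.05, size=(N, N))    # random synaptic weights
v = np.full(N, V_REST)                          # membrane potentials

total_spikes = 0
for step in range(1000):
    spikes = v >= V_THRESH                      # neurons that fire this step
    total_spikes += int(spikes.sum())
    v[spikes] = V_RESET                         # reset the neurons that fired
    synaptic = weights @ spikes.astype(float)   # input from spiking neighbours
    external = rng.normal(0.05, 0.02, size=N)   # noisy external drive
    # Euler step of leaky integration toward rest, plus inputs.
    v = v + (DT / TAU) * (V_REST - v) + synaptic + external

print("total spikes over 1000 steps:", total_spikes)
```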
That point about biomimetics only now becoming possible seems to be basically my argument: biomimetic approaches are challenging and lag behind engineering-based ones by many decades.
I don’t think WBE is infeasible—but I do think there’s evidence that it will take longer. We already have pretty sophisticated engineered machine intelligence—while we can’t yet create a WBE of a flatworm. Engineered machine intelligence is widely used in industry; WBE does nothing and doesn’t work. Engineered machine intelligence is in the lead, and it is much better funded.
If one is simpler than the other, absolute timescales matter little—but IMO, we do have some idea about timescales.
Polls of “expert” opinions on when we will develop a technology are not good predictors of when we will actually develop it. Their opinions could all be skewed in the same direction by missing the same piece of vital information.
For example, they could all be unaware of a particular hurdle that will be difficult to solve, or of an upcoming discovery that makes it possible to bypass problems they assumed to be difficult.
This is an important generalization, but there are also many counterexamples in our use of biotech in agriculture, medicine, chemical production, etc. We can’t design a custom cell, but Craig Venter can create a new ‘minimal’ genome from raw feedstuffs by copying from nature, and then add further enhancements to it. We produce alcohol using living organisms rather than a more efficient chemical process, and so forth. It looks like humans will be able to radically enhance human intelligence genetically through statistical study of human variation rather than mechanistic understanding of different pathways.
Creating an emulation involves a lot of further work, but one might put it in a reference class with members like the extensive work needed to get DNA synthesis, sequencing, and other biotechnologies to the point of producing Craig Venter’s ‘minimal genome’ cells.
Sure—but again, it looks as though genetic enhancement will mostly be relatively insignificant and happen too late. We should still do it. It won’t prevent a transition to engineered machine intelligence, though it might smooth the transition a little.
As I argue in my Against Whole Brain Emulation essay, the idea is more wishful thinking and marketing than anything else.
Whole brain emulation as a P.R. exercise is a pretty stomach-churning idea from my perspective—but that does seem to be what is happening.
Possibly biotechnology will result in nanotechnological computing substrates. However, that seems to be a bit different from “whole brain emulation”.
People like Kurzweil (who doesn’t think that WBE will come first) may talk about it in the context of “we will merge with the machines, they won’t be an alien outgroup” as a P.R. exercise to make AI less scary. Some people also talk about whole brain emulation as an easy-to-explain loose upper bound on AI difficulty. But people like Robin Hanson who argue that WBE will come first do not give any indications of being engaged in PR, aside from their disagreement with you on the difficulty of theoretical advances in AI and so forth.
For W.B.E. P.R. I was mostly thinking of I.B.M., though they say they have different motives (besides W.B.E., I mean).
Robin Hanson is an oddity—from my perspective. He wrote an early paper on the topic, and perhaps his views got anchored long ago.
The thing I notice about Hanson’s involvement is that he uses uploads to argue for the continued relevance of economics and marketplaces—and other material he has invested in. In the type of not-so-competitive future envisaged by others, economics will still be relevant—but not in quite the same way.
Anyway, Robin Hanson being interested in uploads-first counts in their favour—because of who Robin Hanson is. However, it isn’t so big a point in their favour that it overcomes all the uploads-first craziness and implausibility.