In view of all this, working toward stable whole-brain emulation of a trusted, highly intelligent person concerned about human well-being seems to me a more promising strategy for reducing existential risk at present than researching Friendly AI.
The current situation is that good old-fashioned software engineering has revolutionised the financial markets, saving the world billions of dollars in bad investments; revolutionised information storage and retrieval, helping people to form accurate views of the world; revolutionised forecasting; and created an international marketplace of huge proportions, vastly increasing the extent to which nations depend on each other and so minimising the chance of armed conflict.
By contrast, emulating brains has been a practically useless approach, hardly responsible for anything of interest. The high-profile flagship products in the area are literally useless: they do nothing except flash pretty lights in people’s eyes, attracting funding via hypnosis.
“Whole-brain emulation” seems likely to remain useless for a very long time. The whole approach seems based on wishful thinking to me. It looks like the kind of thinking you get when you fail to consider the likely timelines of a project.
It can simultaneously be the case that whole-brain emulation is unlikely to arrive first and that pursuing stable whole-brain emulation is more likely to give rise to a positive singularity than pursuing Friendly AI is.
The alleged facts you cite about software engineering don’t seem relevant here: as far as I know, the current state of general artificial intelligence research is still very primitive.
I would refer back to FAWS’s response to one of your comments from nine months ago.
It is true that whole-brain emulation could turn out to be safer, but humans can be psychopaths. Putting agents whose brains we don’t understand in charge would be a terrible situation; it removes one of our safeguards. Human brains are known to be deeply flawed: stupid, unreliable, and so on. That engineered machine intelligence would be worse is, of course, possible, but it doesn’t seem very likely. Engineered machine intelligence would be much more configurable and controllable.
Anyway, the safety of WBE seems likely to be irrelevant if it is sufficiently likely to be beaten to the finish line. We can imagine all kinds of time-consuming, super-safe fantasy contraptions, but there’s a clock ticking.
Large-scale bioinspiration is used infrequently by engineers: an aeroplane is not a scanned bird, a car is not a scanned horse, a submarine is not a scanned fish, girders are not scanned logs, ropes are not scanned vines, and so on. We scan when we really want a copy: photos, videos, audio. In this case, a copy is exactly what we don’t want. We already have cheap human brains. The main vacancies are for inhuman entities, things like a Google datacentre or a Roomba.
IMO, we can see from the details of the situation that scanning isn’t going to happen in this case. If we get a fruit-fly brain scanned and working, someone is bound to scale it up 10,000-fold and then hack it into a shape that is useful for something. There are innumerable short-cuts like that along the path, and some of them seem bound to be taken.
Thinking of “general artificial intelligence” as a separate field is itself an artefact. “Artificial General Intelligence” is a marketing term coined by a break-away group who were apparently irritated by the barriers to presenting at mainstream machine intelligence conferences. The rest of the field is enormous by comparison with that splinter group, and the mainstream’s efforts on machine learning seem highly relevant to overall development to me.
Machine intelligence research started to get serious in the 1950s. My projected modal “arrival time” is 2025. If correct, that makes the field about 80% “there”, timewise. Of course, it may not look close yet, but that could well be an artefact of exponential growth processes, which appear to reach their destination all at once, in a dramatic final surge.
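A quick sanity check of that “80%” figure, as a minimal sketch; the ~1950 start date and a ~2010 “now” are round-number assumptions of mine rather than precise data:

    # Back-of-the-envelope timeline arithmetic for the "80% there" claim.
    # Assumed round numbers: serious research begins ~1950, the present
    # is ~2010, and the projected modal arrival time is 2025.
    start, now, arrival = 1950, 2010, 2025

    elapsed = now - start      # 60 years elapsed so far
    total = arrival - start    # 75 years from start to projected arrival
    print(f"{elapsed / total:.0%} of the way there, timewise")  # -> 80%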
FAWS’s comment seems practically all wrong to me: every paragraph after the first one.
Tangential question:
...good old-fashioned software engineering has revolutionised the financial markets, saving the world billions of dollars in bad investments...
Is this true? I’d like it to be true for many reasons. But I have seen some plausible arguments that in recent years the financial sector has not been doing its job in the economy very well. For example: http://www.newyorker.com/reporting/2010/11/29/101129fa_fact_cassidy?printable=true