I don’t know how much we can read into the fact that the explicit aim of the project is to emulate a brain, not to make brain-inspired AI.
Unfortunately, the project seems to aim at emulating some sort of generic human brain, rather than producing a high-fidelity simulation of a specific individual (which would be likely to retain that person’s values and skills). I’m inferring this from the fact that the project site has no description of how it plans to scan an individual’s brain, nor does it list brain scanning as a major research topic. Also, the listed benefits of the project do not depend on having a hi-fi WBE:
Biologically detailed simulations of the brain will make it possible, for the first time, to identify the multi-level chain of interactions leading from genes to cognition and behaviour. Also to be researched, using supercomputer-based simulation technology, are new diagnostic tools and treatments for brain disease, new interfaces to the brain, *new types of low-energy technologies with brain-like intelligence, and a new generation of brain-enabled robots*.
It looks like they are interested in brain-inspired AI (see the part I italicized above).
It happens that Carl Shulman and I have recently been discussing some issues related to your question. Have you seen that thread?
How silly of me to not read the project website. Poking around, it looks like they aren’t exactly limiting themselves in scope that much.
Thanks for the link to that thread; I had not seen it! I e-mailed Stuart Armstrong to try to figure out what the current position of the FHI is.
The FHI position on WBE is by no means uniform. The key questions are whether WBE research will lead to neuromorphic AI (NAI), whether WBE makes FAI more or less likely, and whether a WBE transition followed by an AI transition is more survivable than the other way round (and, of course, there are the problems and solutions of WBE itself, e.g. Robin’s nightmare scenario vs. effective immortality).
My position is that successful WBE will make FAI tremendously easier (we could, for instance, tell the AI “do what this WBE program would tell you to do, if you ran it for a thousand subjective years” (similar to a suggestion of Paul Christiano’s), and the WBE would be able to keep pace with the AI’s speed, and thus make a breakout more difficult). Other people at the FHI have different opinions, consistent with their different assessments of the risk of AI and the impact of WBE, and I won’t put words in their mouths. One relevant fact, though, is that getting NAI from partial WBE is generally considered hardest by those who know the most about neurobiology (and easiest by those who know the least).
Thanks, Stuart.
In your e-mail to me, you estimated that these conflicting opinions added up to a “weak consensus towards WBE” within FHI. Since SI workshop participants’ opinions added up to a weak consensus against WBE, there doesn’t seem to be a strong case for trying to shift probabilities in either direction at this point.
Edit (2014-05-19): I just spoke with FHI academic project manager Andrew Snyder-Beattie, and he says FHI has a broad consensus in favor of slowing progress on all artificial intelligence. He also says FHI thinks ems could be less safe than mathematically constructed intelligences. So it sounds like FHI wants to slow down all research of this sort and try to raise everyone’s awareness of the potential dangers.