Thanks for the link, John.
When most of SI (and a few others) discussed the issue for 10+ hours, we tentatively concluded that WBE progress should not be accelerated. (Context: see differential technological development.)
Here is more discussion of the topic:
http://www.overcomingbias.com/2011/12/hurry-or-delay-ems.html
If we are going to try to influence this decision, we might as well try to make our estimate as high quality as possible. To do this, it seems like a good idea to discuss the issue as thoroughly as possible before coming to any firm conclusions.
If you think WBE progress negatively impacts the survival of humanity, I would think you would want to choose the lowest-quality group that’s still plausible.
(Unless that breaks your ethical injunctions against dishonesty, in which case the best course of action would seem to be to abstain from influencing the decision.)
Not all of the groups are working on WBE.
Edit: This appears to be a working link to the discussion Luke is referencing.
Some thoughts:
It doesn’t look like any of the people in your discussion were academic neuroscientists.
I don’t know how much we can read into the fact that the explicit aim of the project is to emulate a brain, not to make brain-inspired AI.
At this point, I’m pretty uncertain, and it doesn’t look like there is much interest from Less Wrong users in constructing a better estimate. And since the discussion you describe is something of a black box, it seems like it would be awkward to improve upon.
Unfortunately, the project seems to aim at emulating some sort of generic human brain rather than producing a high-fidelity simulation of a specific individual (which would be likely to retain that person’s values and skills). I’m inferring this from the fact that the project site has no description of how it plans to scan an individual’s brain and does not list brain scanning as a major research topic. Also, the listed benefits of the project do not depend on having a hi-fi WBE:

Biologically detailed simulations of the brain will make it possible, for the first time, to identify the multi-level chain of interactions leading from genes to cognition and behaviour. Also to be researched, using supercomputer-based simulation technology, are new diagnostic tools and treatments for brain disease, new interfaces to the brain, new types of low-energy technologies with brain-like intelligence, and a new generation of brain-enabled robots.
It looks like they are interested in brain-inspired AI (see the part I italicized above).
It happens that Carl Shulman and I have recently been discussing some issues related to your question. Have you seen that thread?
How silly of me to not read the project website. Poking around, it looks like they aren’t exactly limiting themselves in scope that much.
Thanks for the link to that thread; I had not seen it! I e-mailed Stuart Armstrong to try to figure out what the current position of the FHI is.
The FHI position on WBE is by no means uniform. The key questions are whether WBE research will lead to neuromorphic AI (NAI), whether WBE makes FAI more or less likely, and whether a WBE transition followed by an AI transition is more survivable than the other way round (and, of course, the problems and solutions of WBE itself, e.g. Robin’s nightmare scenario vs. effective immortality).
My position is that successful WBE will make FAI tremendously easier: we could, for instance, tell the AI “do what this WBE program would tell you to do, if you ran it for a thousand subjective years” (similar to a suggestion of Paul Christiano’s), and the WBE would be able to keep pace with the AI’s speed, making a breakout more difficult. Other people at the FHI have different opinions, consistent with their different assessments of the risk of AI and the impact of WBE, and I won’t put words in their mouths. One relevant fact, though, is that getting NAI from partial WBE is generally considered harder by those who know the most neurobiology (and easier by those who know the least).
Thanks Stuart.
In your e-mail to me, you estimated that these conflicting opinions added up to a “weak consensus towards WBE” within FHI. Since SI workshop participants’ opinions added up to a weak consensus against WBE, there doesn’t seem to be a strong case for trying to shift probabilities in either direction at this point.
Edit (2014-05-19): I just spoke with FHI academic project manager Andrew Snyder-Beattie, and he describes FHI as having a widespread consensus in favor of slowing progress on all artificial intelligence. He also says FHI thinks ems could be less safe than mathematically constructed intelligences. So it sounds like FHI wants to slow down all research of this sort and to raise awareness of the potential dangers.
I just noticed from that document that you listed Alexander Funcke as owner of “Zelta Deta.” Googling his name, I think you meant “Zeta Delta?”