The FHI position on WBE is by no means uniform. The key questions are whether WBE research will lead to neuromorphic AI (NAI), whether WBE makes FAI more or less likely, and whether a WBE transition followed by an AI transition is more survivable than the other way round (and, of course, the problems and solutions of WBE itself, e.g. Robin’s nightmare scenario vs. effective immortality).
My position is that successful WBE will make FAI tremendously easier. We could, for instance, tell the AI “do what this WBE program would tell you to do, if you ran it for a thousand subjective years” (similar to a suggestion of Paul Christiano’s), and the WBE would be able to keep pace with the AI’s speed, thus making a breakout more difficult. Other people at the FHI have different opinions, consistent with their different assessments of the risk of AI and the impact of WBE, and I won’t put words in their mouths. One relevant fact, though: getting NAI from partial WBE is generally considered hardest by those who know the most about neurobiology (and easiest by those who know the least).
Thanks Stuart.
In your e-mail to me, you estimated that these conflicting opinions added up to a “weak consensus towards WBE” within FHI. Since SI workshop participants’ opinions added up to a weak consensus against WBE, there doesn’t seem to be a strong case for trying to shift probabilities in either direction at this point.
Edit (2014-05-19): I just spoke with FHI academic project manager Andrew Snyder-Beattie, and he characterizes FHI as having a widespread consensus in favor of slowing progress on all artificial intelligence. He adds that FHI thinks ems could be less safe than mathematically constructed intelligences. So it sounds like FHI wants to slow down all research of this sort and try to raise everyone’s awareness of the potential dangers.