Thanks Stuart.
In your e-mail to me, you estimated that these conflicting opinions added up to a “weak consensus towards WBE” within FHI. Since SI workshop participants’ opinions added up to a weak consensus against WBE, there doesn’t seem to be a strong case for trying to shift probabilities in either direction at this point.
Edit (2014-05-19): I just spoke with FHI academic project manager Andrew Snyder-Beattie, and he describes FHI as having a widespread consensus in favor of slowing progress on all artificial intelligence. He adds that FHI thinks ems could be less safe than mathematically constructed intelligences. So it sounds like FHI wants to slow down all research of this sort and work to increase everyone's awareness of the potential dangers.