Why do we think the WBE is “safe”? Natural intelligence is unfriendly in exactly the same way as a naively created AI.
A human is likely to be less effective than an AI, which makes it safer. But I don’t see how you can assert that a human, given the same power, is less likely to intentionally or accidentally destroy the universe.
We think WBEs are safe in the sense that they are unlikely to be able to produce a single message that starts an optimisation process which takes over the universe.
You do have to be careful not to give it too much computation time: http://lesswrong.com/lw/qk/that_alien_message/
Indeed! That’s why I give them three subjective weeks.