I am one of those who advocate stopping all AI research, and I will explain why.
(1) Don’t stand too close to the cliff. We don’t know how AGI will emerge, and by the time we are close enough to know, it will probably be too late: either human error or malfeasance will push us over the edge.
(2) Friendly AGI might be impossible. Computer scientists cannot reliably predict the behavior of even simple programs. The halting problem, one specific kind of prediction, is provably undecidable for arbitrary programs. I doubt we will even grasp why the first AGI we build works.
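To make the halting-problem point concrete, here is a minimal sketch of the classic diagonalization argument in Python. The `halts()` oracle is hypothetical, not a real function; the sketch shows why no such decider can exist.

```python
# A minimal sketch of the diagonalization argument, assuming a
# hypothetical oracle halts(program, arg) that decides whether
# program(arg) eventually halts. No such total, correct function
# can exist, which is the point of the sketch.

def halts(program, arg) -> bool:
    """Hypothetical: returns True iff program(arg) eventually halts."""
    raise NotImplementedError("no such decider can exist")

def paradox(program):
    """Do the opposite of whatever halts() predicts about program(program)."""
    if halts(program, program):
        while True:   # loop forever if the oracle says "it halts"
            pass
    return            # halt immediately if the oracle says "it loops"

# Consider paradox(paradox):
#   if halts(paradox, paradox) is True,  paradox(paradox) loops forever;
#   if halts(paradox, paradox) is False, paradox(paradox) halts.
# Either answer contradicts the oracle, so halts() cannot be both total
# and correct: general prediction of program behavior is undecidable.
```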
Neither of these claims seems controversial, so if we are determined not to produce unfriendly AGI, the only safe approach is to stop AI research well before it becomes dangerous. It is like playing with fire in a straw cabin that is our only shelter on a deserted island. Things would be different if we someday solve the friendliness problem, build a provably secure “box”, or are well distributed across the galaxy.