But the even worse failure is the One Great Moral Principle We Don’t Even Need To Program Because Any AI Must Inevitably Conclude It. This notion exerts a terrifying, unhealthy fascination on those who spontaneously reinvent it; they dream of commands that no sufficiently advanced mind can disobey.
This is almost where I am. I think my Great Moral Principle would be adopted by any rational and sufficiently intelligent AI that isn’t given any other goals. The idea is fascinating.
But I don’t think it’s a solution to Friendly AI.