My summary: this is a case that failed AI alignment is worse than no alignment, i.e. that extrapolating human values is overwhelmingly likely to lead to an AI that, say, stretches your face into a smile for eternity, which is worse than an unaligned AI using your atoms to tile the universe with smiley faces.
More like: the AI tortures you for eternity because some religious fundamentalist told it that it should, which is far worse than an unaligned AI using your atoms to tile the universe with Bibles or Korans.
Even if only a single person’s values are extrapolated, I think things would still be basically fine. While power corrupts, it takes time to do so. Value lock-in at the moment of the AI’s creation prevents it from tracking the (would-be) power-warped values of its creator.
I’m frankly not sure how many of the respectable-looking members of our societies would like to be mind-controlling dictators if they had the chance.