While others will probably answer your question as-is, I’d just like to point out that for most people who care about AI and who support MIRI, this is not the line of reasoning that convinced them, nor is it the best reason to care. FAI is important because it would fix most of the world’s problems and secure very long, fulfilling lives for us all, and because without it, we’d probably fail to capture the stars’ potential and instead wither away from old age.
Torture mostly comes up because philosophical thought-experiments tend to need a shorthand for “very bad thing not otherwise specified”, and it’s an instance of that which won’t interact with other parts of the thought experiments or invite digressions.
If you believe my moral system (not the topic of this post) is patently absurd, please PM me the full version of your argument. I promise to review it with an open mind. Note: I am naturally afraid of torture outcomes, but that doesn’t mean I’m not excited about FAI. That would be patently absurd.
To clarify: are you saying there is no chance of torture?
Yes, I am saying that the scenario you allude to is vanishingly unlikely.
But there’s another point, which cuts close to the core of my values, and I suspect it cuts close to the core of your values, too. Rather than explain it myself, I’m going to suggest reading Scott Alexander’s Who By Very Slow Decay, which is about aging.
That’s the status quo. That’s one of the main reasons I, personally, care about AI: because if it’s done right, then the thing Scott describes won’t be a part of the world anymore.
Good piece, thank you for sharing it.
I agree with you and Scott Alexander—painful death from aging is awful.
I second this. Mac, I suggest you read “Existential Risk Prevention as a Global Priority” if you haven’t already to further understand why an AI killing all life (even painlessly) would be extremely harmful.