I agree, and I attempted to emphasize the winner-take-all aspect of AI in my original post.
The intended emphasis isn’t on which of the two outcomes is preferable, or how to comparatively allocate resources to prevent them. It’s on the fact that there is no difference between alignment and misalignment with respect to the survival expectations of the average person.
Ok, that I understand. But I do not think it follows that I should be indifferent between those two ways of dying. Both are bad, but only one of them necessarily destroys everything I value.
In any case, I think it’s much more likely that a group using an aligned-as-defined-here AGI would kill (almost) everyone by accident rather than intentionally.
You don’t value the Sun, or the other stars in the sky?
Even in the most absurdly catastrophic scenarios, it doesn’t seem plausible that they could be ‘necessarily destroyed’.
I’d say their value is instrumental, not terminal. The sun and stars are beautiful, but only when there are minds to appreciate them. They make everything else of value possible, through their light, their heat, and their production of all the various elements beyond helium.
But a dead universe full of stars, and a sun surrounded by lifeless planets, have no value as far as I’m concerned, except insofar as there is remaining potential for new life to arise that would itself have value. If you gave me a choice between a permanently dead universe of infinite extent, full of stars, and a single planet full of life (of a form I’m capable of finding value in, so a planet of nothing but bacteria doesn’t cut it) under a bland and starless sky, surviving only on artificial light and heat (assume its inhabitants have mastered controlled fusion and indoor agriculture), I’d say the latter is more valuable.
@AnthonyC I may be mistaken, but I took @M. Y. Zuo to be offering a reductio ad absurdum in response to your comment about not being indifferent between the two ways of dying. The ‘which is a worse way to die’ debate doesn’t respond to what I wrote. I said:
With respect to the survival prospects for the average human, this [whether or not the dying occurs by AGI] seems to me to be a minor detail.
I did not say that no one should care about the difference.
But the two risks are not in competition; they are complementary. If your concern about misalignment is based on caring about the continuation of the human species, and you don’t actually care how many humans other humans would kill in a successful alignment(-as-defined-here) scenario, a credible humans-kill-most-humans risk is still really helpful to your cause: you can ally yourself with the many rational humans who don’t want to be killed either way, and work to prevent both outcomes by killing AI in its cradle.