I interpreted Eliezer as writing from the assumption that the superintelligence(s) in question are in fact not aligned to secure whatever it is that humanity needs to survive, but instead pursue some other goal(s) that diverge from humanity’s interests once implemented.
He explicitly states that the essay’s point is to shoot down a clumsy counterargument (along the lines of “it wouldn’t cost the ASI much to let us live, so we should assume it would let us live”). So the context, as I read it, is that such requests, however sympathetic, have not been ingrained into the ASI’s goals. Using a different example would mean he was discussing something else entirely.
That is, “just because letting humanity thrive would make a trivial difference from the ASI’s perspective, whereas it would make an existential difference from humanity’s perspective, doesn’t mean ASIs will let humanity thrive”, assuming such considerations aren’t already baked into their decision-making.
I think Eliezer spends so much time working from these premises because he believes that 1) an unaligned ASI is the default outcome of current developments, and 2) all current attempts at alignment will necessarily fail.