I’d certainly prefer that smart people work to save me (and humanity, though I’m less sure that I care about future biological human-like entities all that much more than future electronic intelligent entities). And I very much appreciate your appeal to esthetics rather than trying to call it logically or ethically required.
But I can definitely sympathize with the choice to take $millions for the next few years, usable for a finite-but-not-irrelevant time, over dedicating one’s life to a very small chance of delaying or avoiding disastrous unaligned AI. I don’t much like the mouse who rolls over and intentionally dies, but I’m not sure I root for the mouse who sacrifices and hurts itself and then still gets eaten either. I do have some respect for the mouse that runs on its wheel and enjoys some cheese until the cat eats it.