I’m having a disconnect. I think I’m kind of selfish too. But if it came to a choice between me dying this year and humanity dying 100 years from now, I’d take my death. It’s going to happen anyway, and I’m old enough that I’ve gotten mine, or most of it. I’m confident I’d feel the same if I didn’t have children, though less intensely. What is causing the difference in these perspectives? IDK. My 90-year-old friend would snort at the question; what difference would a year or two make? The old have less to lose. But the young are usually much more willing to risk their lives. So: IDK.
I’m in the same boat. I’m not that worried about my own life, in the general scheme of things. I fully expect I’ll die, and probably earlier than I would in a world without AI development. What really cuts me up is the idea that there will be no future to speak of, that all my efforts won’t contribute to something, some small influence on other people enjoying their lives at a later time. A place people feel happy and safe and fulfilled.
If I had a credible offer to guarantee that future in exchange for my life, I think I’d take it. (I’m currently healthy, with more than half my life left to live, assuming average life expectancy.)
Sometimes I try to take comfort in many-worlds, that there exist different timelines where humanity manages to regulate AI or align it with human values (whatever those are). Given that I have no capacity to influence those timelines though, it doesn’t feel like they are meaningfully there.