Co-founder of AI-Plans and volunteer with PauseAI.
Human extinction from artificial intelligence is a near-term risk. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes.
I’m sorry for your loss. It is something no one should have to go through.
My father was diagnosed with Parkinson’s last year. I have processed and accepted the fact that he is going to die.
Under the circumstances, he is most likely going to die from artificial intelligence at about the same time that I do.
There is no temptation you could offer me that would make me risk the end of all things. Not the prevention of my father's death. Not the prevention of my own death. Not the prevention of my partner's death. I do not need AGI. Humanity as a whole does not need AGI, and most people do not even want it.
Death is horrible, which is why everyone should be strongly advocating that AGI not be built until it is safe. By default, it will kill literally everyone.
If you find yourself weighing the lives of everyone on earth and deciding for yourself whether they should be imperiled, then you have learned the wrong lesson from stories of comic book supervillains. It’s not our choice to make, and we are about to murder everyone’s mothers.