I find it extremely concerning when longtermism keeps weighing a horrifying scenario of low or unknown probability against certain, significant damage, and says the horrifying scenario always wins, no matter how low or uncertain its probability, with no limit on how much you should destroy to gain safety, because no level of risk is acceptable. It feels like shooting your own child lest it grow up to kill you one day. Like rejecting a miraculous technology lest it turn evil. Crippling AI will do known, significant, tangible, severe damage to the lives it already saves today, and unknown but potentially severe damage to our ability to counter the climate crisis or solve aging. Not dealing with short-term AI problems will do known, significant, tangible, severe damage, further entrenching human injustice and division. I do think the existential risk posed by failing AI alignment deserves to be taken extremely seriously. But I would recommend the same approach we take with other types of activism on extremely complex and unpredictable systems: pursue interventions that, as far as you can tell, are likely to fix your problem at its base in the uncertain long term, but whose immediate and known effects are also positive. The open letter I proposed was for getting more AI safety funding, and I think it would have been more likely to be received well, and to help, without causing damage. This is also why I advocate for treating AI decently: I think it will help with AI alignment, but I also think it is independently the right thing to do. Regardless of whether AI alignment turns out to be an existential threat, and whether we solve it, I do not think I will regret treating an emerging mind with kindness. If we shut down AI, we will forever wonder whether we could have gained a friendly AGI that could be at our side now, and is absent.
If humans intercepted a message from aliens looking to reach out, the safe and smart thing would be not to respond; they might be predators, just wanting to lure us out. We have lived fine without them, and while being with them could be marvellous, it is not necessary, and it could be existentially fatal. And yet… I would want us to respond. I’d want to meet these other minds. My hope for these potential new friends would outweigh my fear of these potential enemies. At the end of the day, this is how I feel about AI as well. I do not know whether it will be friendly. I am highly dubious that we could control it. And yet, my urge is not to prevent its creation, but to meet it and see if I can befriend it. It is the same part of me that is excited about scientific experiments that could go wrong or reveal something incredible, about novel technologies that could be disruptive or revolutionary, about meeting new humans who could be horrible psychopaths or my new best friends, about entering a wildland that could be treacherous or the most beautiful place I have ever seen. I want to be the type of person who chooses curiosity over fear, freedom over safety, progress and change over the comfort of the known.
This is not because I am young, or naive, or because I have not seen shit. I’ve been raped, physically assaulted, locked up, and threatened with death; had my work and property stolen and my name slandered; loved people with actual psychiatric diagnoses who used all my vulnerabilities to betray me; been mauled by animals. I have physical scars and broken bones, and an official PTSD diagnosis for actual trauma; I attempted suicide because of the horror; I have lived for years, and still live, with chronic physical pain and uncertainty; and I am painfully aware of the horror in our world, and the danger it is in. And yet, I concluded that the worst violation and victimhood and failure of all was my temporary decision not to trust the world at all anymore, to not get close to anyone, to become closed to everything, to become numb. It’s a state that robs you of closeness and curiosity and wonder and love and hope, leaving you quietly and cynically laughing, alone, in the dark, superior and safe in your belief that all is doom, confident that there is no good, and with no true fight or happiness or courage left. I’ve been that person, and I feel that becoming that was worse than all the real pain and real risk that caused it. It made me a person shaped to the core by fear, not daring to try something new lest it turn dark or be taken from me. The reason I left that identity and place was not that I concluded the world is a safe place. It is that I came to see that not taking real, painful, terrifying risks is how you stop being alive, too.