Death and Desperation
I am worried about some of the tonal shifts around the future of AI, particularly as they relate to more vulnerable members of the community. While I understand Mr. Yudkowsky's stance of "Death with Dignity," I worry that it's creating a dangerous feeling of desperation for some.
People make bad decisions when they feel threatened and out of options. Violence starts to look more appealing, because there is always some point at which violence feels warranted, even if it isn't something most of us would ever consider.
One of the big takeaways from this community is "do something": you can make a difference if you're willing to put in the work. How many people have we seen take up AI safety research? This is generally a good thing.
Once people think that they and their families are threatened, "do something" becomes a war cry. After all, bombing an OpenAI datacenter might not have a high probability of succeeding, but when you feel like you're out of options, it might seem like your only hope.
This becomes more dangerous when you're talking about people who feel disaffected and struggle with anxiety and depression. Those who feel alone and desperate. Those who don't feel they have much to lose, and who may dream of becoming humanity's savior.
I don't know what to do about this, but I do think it's an important conversation to have. What can we do to minimize the chance that someone from our community commits a violent act? How do we fight this aspect of Moloch?