In the cancer case, the human body starts with every cell aligned with the body. Evolutionarily, that alignment only has to hold until breeding age plus enough offspring to offset losses.
And yes, it is good if faulty cells self-destruct instead of continuing; there are cancer treatments that try to gene-edit in clean copies of the specific genes (p53, as I recall) that mediate this self-destruction (works in rats...).
However, the corporate and international-competition worlds have many more actors, and those actors are adversarial. OAI self-destructing leaves the world's best AI researchers unemployed and removes them from competing in the next round of model improvements; whoever builds a GPT-5 at a competitor will then have the best model outright.
Coordination is hard. Consider the consequences if an entire town decided to stop consuming fossil fuels: the residents pay the extra costs and rebuild the town to be less car-dependent.
One consequence, however, is that this lowers the market price of fossil fuels, so everyone else uses more. (Because others' demand is not perfectly elastic, the rebound doesn't fully cancel the town's reduction, so the net effect is still slightly positive.)
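To make the elasticity point concrete, here is a minimal sketch with linear supply and demand (the notation $s$, $d$, $\Delta$ is mine, not from the original argument). Let supply have price-slope $s > 0$, let everyone else's demand have price-slope $-d$ with $d > 0$, and let the town remove a fixed quantity $\Delta$ of demand. Linearizing around the old equilibrium price:

$$s\,\delta p = -d\,\delta p - \Delta \quad\Rightarrow\quad \delta p = -\frac{\Delta}{s+d} < 0.$$

Total consumption then falls by $s\,|\delta p| = \Delta\cdot\frac{s}{s+d}$, while everyone else's consumption rises by $\Delta\cdot\frac{d}{s+d}$. In elasticity terms, the fraction of the town's cut that survives is roughly $\epsilon_s/(\epsilon_s + |\epsilon_d|)$: positive, but shrinking toward zero as others' demand becomes more elastic.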
I mean, yes, a company self-destructing doesn't stop much if its knowledge isn't also actively deleted, and even then it's just a setback of a few months. But by reasoning "oh well, we need to work inside the system to fix it somehow", at some point all you get is just another company racing with all the others (and in this case, effectively setting the pace). However you put it, OpenAI is more responsible than any other company for how close we may be to AGI right now, and despite their stated mission, I suspect they did not advance safety nearly as much as capability. So in the end, from the X-risk viewpoint, they mostly made things worse.