I don’t know that there are many tech advances that are an unqualified “good” from an X-risk perspective. In this case, any advances in bioengineering might make it easier to create bioweapons, for example. Any advances in AI create more demand for AI...
Fair enough. My idea was focused on AI existential risk; from that perspective, it seems to me that this result doesn’t directly increase the existential risk from AI, in the way that GPT-3 does, for example. But the effect of pushing more people into the field is probably a real issue.