Fair enough. My comment was focused on AI existential risk; from that perspective, it seems to me that this result doesn't directly increase the existential risk from AI in the way that GPT-3 does, for example. But the effect of drawing more people into the field is probably a real issue.