Marginal taking-of-safety-seriously, as Eliezer points out, doesn't look good enough: you just delay the inevitable a little, if even that. On the other hand, establishing a widely accepted consensus that AGI is as dangerous as an A-bomb that blows up the whole universe might influence the field in more systematic ways (although it's unclear how, and achieving this goal doesn't look plausible).
If AGI is a long way away, then seeding a safety message among current and future grad students could influence the research directions they take and steer the field toward greater safety.
If AGI comes soon, then influencing people is much less useful, I agree.