“Evolution wasn’t trying to solve the robustness problem at all.”—Agreed that this weakens the analogy. And, to state the obvious, everyone doing safety work at MIRI and OpenAI agrees that there is some kind of engineering work, neglected by evolution, that gets you safe and useful AGI; they disagree about the kind and amount of work required.
The docility analogy seems to be closely connected to important underlying disagreements.
Conversation also continues here.