They even seem wary of descendants who are cell-by-cell emulations of prior human brains, "brain-inspired AIs running on human-derived 'spaghetti code', or 'opaque' AI designs … produced by evolutionary algorithms." Why? Because such descendants "may not have a clear 'slot' in which to specify desirable goals."
I think Robin is misunderstanding Anna and Luke here; they're talking about vaguely human-brain-inspired AI, not about literal human brains run on computer hardware. In general, I think Robin's critique here makes sense as a response to someone saying 'we should be terrified of how strong and fast-changing ems will be, and potentially be crazily heavy-handed about controlling ems'. I don't think AGI systems are relevantly analogous, because AGI systems have a value loading problem and ems just don't: an em inherits its values from the human it's scanned from, whereas an AGI's goals have to be specified somehow.