So AIs are dangerous because they’re blind optimization processes; evolution is cruel because it’s a blind optimization process… and still Eliezer wants to build an optimizer-based AI. Why? We human beings are not optimizers or outcome pumps. We are a layered cake of instincts, and precisely this is what allows us to be moral and kind.
I may have no idea what I’m talking about, but the “subsumption architecture” papers seem to me much more promising: a more gradual, less dangerous, more incrementally effective path to creating friendly intelligent beings. I hope something like this will be Eliezer’s next epiphany: the possibility of non-optimizer-based high intelligence, and its greater robustness compared to paperclip bombs.
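For what it’s worth, here is a toy sketch of what I mean by layered, non-optimizer control, loosely in the spirit of Brooks-style subsumption: fixed behavioral layers where a higher layer can override a lower one, with no global objective being maximized. The layer names and the little “world” dictionary are my own invented illustration, not taken from any paper.

```python
# Toy sketch of subsumption-style layering (illustrative only; layer names
# and the "world" dict are hypothetical, not from Brooks' actual papers).

from typing import Optional

class Layer:
    """One behavioral layer: reacts to the world, or defers (returns None)."""
    def act(self, world: dict) -> Optional[str]:
        raise NotImplementedError

class Wander(Layer):
    # Lowest layer: default behavior when nothing else fires.
    def act(self, world):
        return "wander"

class AvoidObstacle(Layer):
    # Higher layer: subsumes (overrides) wandering when an obstacle is near.
    def act(self, world):
        return "turn_away" if world.get("obstacle_near") else None

class SeekCharger(Layer):
    # Highest layer: overrides everything when the battery is low.
    def act(self, world):
        return "go_to_charger" if world.get("battery_low") else None

def decide(world: dict, layers: list) -> str:
    # The highest-priority layer that produces an output wins. There is no
    # single quantity being optimized, just fixed reactive priorities.
    for layer in layers:  # ordered highest priority first
        action = layer.act(world)
        if action is not None:
            return action
    return "idle"

if __name__ == "__main__":
    stack = [SeekCharger(), AvoidObstacle(), Wander()]
    print(decide({"obstacle_near": True}, stack))                      # turn_away
    print(decide({"battery_low": True, "obstacle_near": True}, stack)) # go_to_charger
    print(decide({}, stack))                                           # wander
```

The point of the sketch is that nothing in it is an outcome pump: each layer is a narrow reflex, and the overall behavior emerges from their fixed ordering rather than from maximizing any score.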