This robot is not a consequentialist—it doesn’t have a model of the world which allows it to extrapolate (models of) outcomes that follow causally from its choices. It doesn’t seem to steer the universe any particular place, across changes of context, because it explicitly doesn’t contain a future-steering engine.
Heh, it’s pretty much exactly what I said.