As far as I understand the post, a system that wouldn't contain human values but would still suffice to drastically reduce existential risk from AI wouldn't need to execute an action with a specific effect on humans. If I'm reading the context right, it refers to something like task-directed AGI that would let its owner execute a pivotal act – in other words, not yet the singleton we want to (maybe) eventually build that CEVs us out into the universe, but something that lets us think long & carefully enough to actually build CEV safely (e.g. by giving us molecular nanotechnology or uploading, which perhaps doesn't depend on human values, modeled or otherwise).
Or have I misunderstood your comment?