We imagine Shah saying: “1. Why will the AI have goals at all? And 2. If it does have goals, why will those goals be incompatible with human survival? Sure, most goals are incompatible with human survival, but we’re not selecting uniformly from the space of all goals.”
Yeah, that’s right. Adapted to the language here, it would be: 1. Why would we have a “full and complete” outcome pump, rather than domain-specific outcome pumps whose plans primarily draw on actions from a particular domain rather than from “all possible actions”? And 2. Why would the outcomes being pumped be incompatible with human survival?