The quoted section seems more like instrumental convergence than orthogonality to me?
The second part of the sentence, yes. The bolded one seems to acknowledge AIs can have different goals, and I assume that version of EY wouldn’t count “God” as a good goal.
Another more relevant part:
Obviously, if the AI is going to be capable of making choices, you need to create an exception to the rules—create a Goal object whose desirability is not calculated by summing up the goals in the justification slot.
Presumably this goal object can be anything.
But in order to accept that, one needs to accept the orthogonality thesis.
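For concreteness, here is a minimal sketch of the structure the quote seems to describe, as I read it: ordinary Goal objects get their desirability by summing the goals in their justification slot, and the exception is one Goal object whose desirability is assigned directly rather than derived. The class names, fields, and example goals below are my own illustration, not anything from EY's document.

```python
from dataclasses import dataclass, field


@dataclass
class Goal:
    """A goal whose desirability is derived from the goals justifying it."""
    name: str
    justification: list["Goal"] = field(default_factory=list)  # the "justification slot"

    def desirability(self) -> float:
        # Ordinary goals: desirability is the sum of the desirabilities
        # of the goals sitting in the justification slot.
        return sum(g.desirability() for g in self.justification)


@dataclass
class TerminalGoal(Goal):
    """The exception: a Goal object whose desirability is not calculated
    by summing the justification slot, but is assigned directly."""
    intrinsic_desirability: float = 0.0

    def desirability(self) -> float:
        return self.intrinsic_desirability


# Nothing in this structure constrains what the terminal goal is --
# the top-level Goal object "can be anything" (illustrative examples only).
meaning_of_life = TerminalGoal(name="figure out the meaning of life",
                               intrinsic_desirability=1.0)
paperclips = TerminalGoal(name="maximize paperclips", intrinsic_desirability=1.0)
subgoal = Goal(name="acquire resources", justification=[paperclips])
print(subgoal.desirability())  # 1.0, inherited from whichever terminal goal justifies it
```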
I agree that EY rejected the argument because he accepted OT. I very much disagree that this is the only way to reject the argument. In fact, all four positions seem quite possible:
1. Accept OT, accept the argument: sure, AIs can have different goals, but this (starting an AI without explicit goals) is how you get an AI which would figure out the meaning of life.
2. Reject OT, reject the argument: you can think “figure out the meaning of life” is not a possible AI goal.
3. Accept OT, reject the argument, and 4. Reject OT, accept the argument: EY’s positions at different times.
In addition, OT can itself be a reason to charge ahead with creating an AGI: since it says an AGI can have any goal, you “just” need to create an AGI which will improve the world. It says nothing about whether setting an AGI’s goal is difficult.