Yes, an agent’s goals aren’t causally or probabilistically independent of its intelligence, though perhaps a weaker claim such as “almost any combination is possible” is true.
EDIT: re philosophical skepticism: okay, so how does bringing in predictive accuracy help? That doesn’t resolve philosophical skepticism either (see: no free lunch theorems).
Even if the universal claim that complete orthogonality is impossible is true (I note in passing that it is argued for with a claim about how the world works, so you are assuming scepticism has been resolved in order to resolve scepticism), the correlation between prediction and correspondence could still be 0.0001%.
Predictive accuracy doesn’t help with philosophical scepticism. It is nonetheless worth pursuing because it has practical benefits.
So the orthogonality thesis is a priori false?
But that is exactly what I am talking about!