The strong form of the Orthogonality Thesis says that there’s no extra difficulty or complication in creating an intelligent agent to pursue a goal, above and beyond the computational tractability of that goal.
You say:
“To whatever extent you (or a superintelligent version of you) could figure out how to get a high-U outcome if aliens offered to pay you huge amounts of resources to do it, the corresponding agent that terminally prefers high-U outcomes can be at least that good at achieving U.”
Arbital is where I found this specific wording for the strong form.
Since I wrote this two weeks ago, I have been working on addressing some of the lesser forms, as presented in section 4.5 of Stuart Armstrong’s article.
Arbital says: [the strong-form statement quoted at the top]
You say: [the “high-U outcome” quote above]
I don’t see the connection between the two.
That is actually a quote from Arbital. Their article explains the connection.