The core argument that there is “no universally compelling argument” holds if we literally consider all of mind design space, but for the task of building and aligning AGIs we may be able to constrain the space enough that it is no longer clear the argument holds.
For example, in order to accomplish general tasks, AGIs can be expected to have a coherent, accurate, and compressed model of the world (as transformers do to some extent), such that they can roughly restate their input. This implies that in a world where there is a lot of evidence that the sky is blue (input / argument), AGIs will tend to believe that the sky is blue (output / fact).
So even if “[there is] no universally compelling argument” holds in general, it does not hold for the subset of minds we care about.
To be clear, I do not think this constraint will auto-magically align AGIs with human values. But they will know what values humans tend to have.
More broadly, values do not seem to be in the category of things that can be argued for, because all arguments are judged in light of the values we hold: arguing that values X are better than values Y comes out false when judged against Y (if we visualize values Y as a vector, Y is more aligned with itself than it is with X).
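As a toy illustration of that vector picture (a sketch, assuming values can be scored along a few made-up dimensions), cosine similarity makes the point that any value vector is maximally aligned with itself, so judging X by Y's lights can never favor X over Y:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two value vectors (1.0 = perfectly aligned)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical value vectors scored along arbitrary, made-up dimensions.
Y = [0.9, 0.1, 0.5]   # the values doing the judging
X = [0.2, 0.8, 0.4]   # the values being argued for

print(cosine_similarity(Y, Y))  # ~1.0: Y is perfectly aligned with itself
print(cosine_similarity(Y, X))  # < 1.0: any distinct X scores worse by Y's lights
```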
Values can be changed (soft power, reprogramming, etc.) or shown to be already aligned along some dimension, such that (temporary) cooperation makes sense.
It sometimes takes me a long time to go from “A is true”, “B is true”, “A and B implies C is true” to “C is true”.
I think this is a common issue with humans. For example, I can see a word such as “aqueduct”, and also know that “aqua” means water in Latin, yet fail to notice that “aqueduct” comes from “aqua”. This is because seeing a word does not trigger a dynamic that searches for its root.
Another case is when the rule looks a bit different, say “a and b implies c” rather than “A and B implies C”, and some effort is needed to notice that it still applies.
I think an even more common reason is that the facts are never brought into working memory at the same time, and so the inference never happens.
All this hints at a practical epistemological-fu: we can increase our knowledge simply by actively reviewing our facts, say every morning, and trying to infer new facts from them! This might even create a virtuous circle, as the more facts one infers, the more facts one can combine to generate further inferences.
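A minimal sketch of that review habit, assuming facts and simple “A and B implies C” rules are written down explicitly: repeatedly applying the rules to the known facts (forward chaining) surfaces conclusions like C that were implicitly available all along.

```python
def infer_closure(facts, rules):
    """Forward chaining: keep applying 'premises -> conclusion' rules
    until no new fact can be derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Hypothetical morning review: two facts plus one rule yield the missing "C".
facts = {"A", "B"}
rules = [({"A", "B"}, "C")]
print(infer_closure(facts, rules))  # now contains "C" as well
```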
On the other hand, there is a limit to the number of facts one can review in a given amount of time, so perhaps a healthy epistemological habit is to trigger one's inference engine every time one learns a new (significant?) fact.