Solomonoff’s universal prior assigns a probability to every individual Turing machine. Usually the interesting statements or hypotheses about which machine we are dealing with are more like “the 10th output bit is 1” than “the machine is number 643653”. The first statement describes an infinite number of different machines, and its probability is the sum of the probabilities of all the Turing machines that produce 1 as their 10th output bit (since the probabilities of mutually exclusive hypotheses can be summed). This probability is not directly related to the K-complexity of the statement “the 10th output bit is 1” in any obvious way. The second statement, on the other hand, has probability exactly equal to the probability assigned to Turing machine number 643653, and its K-complexity is essentially (that is, up to an additive constant) equal to the K-complexity of the number 643653.
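To make the contrast concrete, here is the same point in LaTeX notation; this is only a sketch, under the assumption that $P$ denotes the universal prior and $M$ ranges over Turing machines ($P$, $M$ and the subscripted index are my notation, not anything from the original discussion):

\[
P(\text{“the 10th output bit is 1”}) \;=\; \sum_{M \,:\, \text{$M$’s 10th output bit is 1}} P(M),
\qquad
P(\text{“the machine is number 643653”}) \;=\; P(M_{643653}).
\]

The sum on the left runs over infinitely many machines, so its value has no obvious connection to the K-complexity of the statement, whereas the probability on the right is tied to a single machine whose index fixes its K-complexity up to an additive constant.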
So the point is that a generic statement usually describes a huge number of different individual hypotheses, and that the complexity of the statement needed to delineate a set of Turing machines is not (necessarily) directly related to the complexities of the individual Turing machines in that set.
I have recently had the unpleasant experience of being subjected to the kind of dishonest emotional manipulation that is recommended here. A (former) friend tried to convert me to his religion by using these tricks, and I can attest that they are effective if the person on the receiving end is trusting enough and doesn’t realize that they are being manipulated. In my case the absence and avoidance of rational argument eventually led to the failure of the conversion attempt, but not before severe emotional distress had been inflicted on me by a person I used to trust.
Needless to say, I find it unpleasant that these kinds of techniques are mentioned without also mentioning that they are indeed manipulative, dishonest, and very easy to abuse.