Dropping a quick note that, depending on how it is being interpreted, I disagree with this attempted helpful restatement of logical decision theory’s central principle.
Any dispute over decision theory can be described as a dispute over which algorithm is normative. CDTers think that a CDT algorithm is normative, although of course CDT itself endorses a different algorithm, Son-of-CDT, as being most useful. EDTers think an EDT algorithm is normative, and LDTers think an LDT algorithm is normative. Outside of the smallest toy examples with very tame, fully described environments, none of them can of course exhibit the true source code of a sapient being; but a CDT proponent thinks that for this unspecified sapient being to have some CDT algorithm, rather than an EDT or LDT algorithm, would make it most rational. As for the algorithm that a CDT proponent thinks would be most useful to its own goals to possess, that is of course the different entity Son-of-CDT; or at least, it is Son-of-CDT across most ordinary cases where your payoff in decision problems depends only on your disposition toward different kinds of decisions. Suppose that instead of Omega you are about to face Alpha, who will look at your source code rather than your decisions, who will reward you with Heaven only for source code that chooses by computing the first option in alphabetical order, and who will punish other agents that arrive at the same output and choice by a different computation. Then the most useful algorithm to have in the face of Alpha is Alphabetizing Decision Theory.
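As a minimal toy sketch of the Alpha problem (my own illustration, with made-up payoff numbers; "inspecting source code" is crudely modeled as a tag saying how the agent computed its answer):

```python
# Alpha rewards only agents whose *computation* was alphabetization,
# and punishes agents that reach the same choice by any other route.

def alpha_payoff(agent):
    choice, method = agent(["take_box", "leave_box"])
    if method == "alphabetize":
        return 10**9   # Heaven: right choice, reached by the right computation
    return -1          # same choice, different computation: punished anyway

def adt_agent(options):
    # Alphabetizing Decision Theory: literally sort and take the first option.
    return sorted(options)[0], "alphabetize"

def ldt_agent(options):
    # An LDT-style agent can output the very same choice ("leave_box"
    # sorts first), but it got there by reasoning about payoffs, and
    # Alpha judges the computation, not the output.
    return "leave_box", "expected_payoff_reasoning"

print(alpha_payoff(adt_agent))  # 1000000000
print(alpha_payoff(ldt_agent))  # -1
```

The point of the sketch is only that Alpha's payoffs are a function of the algorithm itself, not of its input-output behavior, which is what takes the problem outside the class where any disposition-based theory can be self-endorsing.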
LDT thinks that the algorithm which would make you most rational to have is an LDT one. As for the most LDT-useful algorithm to have, it will itself be LDT across a much wider range of problems, namely all those that treat you entirely on the basis of your disposition. Still, when facing Alpha rather than Omega, LDT will agree that ADT is the useful but irrational algorithm to have. LDT is self-endorsing on the class of problems where your payoffs depend on which choices you are disposed to make, and not on which exact algorithm makes them.
LDT does not, in the moment, choose which algorithm to have. It chooses which choices are the output of its own fixed algorithm. It potentially views this as controlling, but not changing or physically causing, events that have already happened. For example, suppose Omega surprises you with the information that it will reward you with $1M if the state of the universe one minute ago had the property of logically implying that you raise your hand right now; separately, somebody will be physically caused to pay you $1000 if you keep your hand still. LDT thinks it possesses the power to make it be the case that the state of the universe one minute ago logically implied, by way of the regular and reliable outputs of the algorithm it is now running, that it will raise its hand; so it chooses to raise its hand, thereby controlling and determining, but not changing, a complicated logical fact that was physically true about the universe one minute earlier. CDT keeps its hand low for the $1000, unless of course it was given a chance, before that one-minute time horizon, to change its algorithm to Son-of-CDT; afterwards, CDT thinks it is too late.
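The hand-raising problem can be sketched as toy code (again my own illustration, not from the original post; the assumption is that "what the past state logically implied" is modeled by simply running the agent's fixed decision function):

```python
# Omega pays $1M iff the past state of the world logically implied that
# the agent raises its hand now; a separate party physically causes a
# $1000 payment for keeping the hand still.

def payoff(algorithm):
    """Total payout for an agent whose fixed algorithm is `algorithm`."""
    action = algorithm()   # what the past state logically implies
    omega_prize = 1_000_000 if action == "raise" else 0
    side_payment = 1_000 if action == "still" else 0
    return omega_prize + side_payment

def ldt_algorithm():
    # LDT-style reasoning: among the outputs this fixed algorithm could
    # have, pick the one whose implied logical fact pays the most.
    return max(["raise", "still"],
               key=lambda a: 1_000_000 if a == "raise" else 1_000)

def cdt_algorithm():
    # CDT-style reasoning: the past is fixed, so only the $1000 side
    # payment is causally reachable from the present act.
    return "still"

print(payoff(ldt_algorithm))  # 1000000
print(payoff(cdt_algorithm))  # 1000
```

Note that nothing in the sketch changes the past: `payoff` just evaluates whichever output the fixed function was always going to produce, which is the sense in which the logical fact is controlled rather than altered.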
LDT does not see itself as changing its algorithm. Its source code remains the same. Its decision rule remains the same. It determines this logical fact about the fixed physical past by way of its algorithm running to determine what its own output will be: a cognitive process that takes the promised payoffs into account, computes for each choice what payoffs would result from which logical facts ending up true, and so finally determines that it will raise its hand, as was then, of course, true all along. But it is true logically-because that is what gives the LDT agent the highest payoff, among all the logical facts that could have been. LDT does not change what is physically true about the past, and it does not change logical facts between one time and another; it is just that logical facts with a dependence on the outputs of the LDT algorithm end up being determined by what is best for LDT, under a fixed and unchanging LDT algorithm.