There is no “first” in precommitting—your source code precommits you to certain actions, and you can’t influence your source code, only carry out what the code states. The notion of precommitting, as a modification, is bogus.
You can influence your source code. You change the words and symbols in the text file, hit recompile, load the new binary into memory, and execute it. If your code is such that it considers making such modifications a suitable response to the situation, then that is what you will do.
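To make the mechanism concrete, here is a toy self-rewriting script (a sketch only: the GENERATION marker and the one-line rewrite are invented for illustration, and “recompile” is just re-execution here, since Python is interpreted):

    import subprocess
    import sys

    GENERATION = 0  # the literal this script rewrites in its own source

    def main():
        print(f"running generation {GENERATION}")
        if GENERATION >= 1:
            return  # the rewritten version stops here
        path = sys.argv[0]
        with open(path) as f:
            source = f.read()
        # "change the words and symbols in the text file"
        new_source = source.replace("GENERATION = 0", "GENERATION = 1", 1)
        with open(path, "w") as f:
            f.write(new_source)
        # "load the new program and execute it"
        subprocess.run([sys.executable, path])

    if __name__ == "__main__":
        main()

Run once, it edits its own file on disk and launches the edited version, which then behaves differently.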
Common computer programs have a rather sharp boundary between their source code and their data. In brains (and hypothetical AIs) this distinction is (or would be) probably less explicit. Whenever the baron learns anything, his source code changes in some sense, involuntarily, without recompiling. Still, the original source code contains all the information. Precommitting, in order to have any importance, should mean learning about a particular output of your own source code, rather than recompiling.
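A toy illustration of that blurred boundary (the names are made up): the table below is ordinary data, updated by experience with no recompile step, yet it is exactly what determines behaviour.

    # 'policy' is plain data, yet it is what determines behaviour;
    # learning rewrites it involuntarily, with no recompile step.
    policy = {"greeting": "hello"}

    def respond(stimulus):
        return policy.get(stimulus, "?")

    def learn(stimulus, new_output):
        policy[stimulus] = new_output

    print(respond("greeting"))   # hello
    learn("greeting", "hi there")
    print(respond("greeting"))   # hi there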
The use of ‘source code’ here is merely a metaphor.
Metaphor standing for what exactly?
UTM tape, brain, clockwork mechanism… whatever.
Think functional program, or what was initially written on the tape of a UTM. We are interested in that particular fact, not what happened after.
But I am interested in what happened after. If a tape running on a UTM is programmed to operate a peripheral device that takes the tape and modifies it, then it is able to do that, and the original tape is no longer running; the new one is. For any given agent implemented in the universe it is possible to alter its state such that it behaves differently. Agents that are not implemented within this universe may not be changed in this way, and those are the agents I am not interested in.
Functional programs can operate machines that alter code to produce new, different functional programs.
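A sketch of what that can look like (the toy programs are invented): the rewrite step is a pure function from program text to program text, and a separate “machine” runs whichever program it is handed. Nothing is mutated; a new, different program is simply produced as a value and then run.

    # A pure rewrite step: program text in, new program text out.
    def rewrite(source: str) -> str:
        return source.replace("polite", "ruthless")

    old_program = "def act(): return 'polite refusal'"
    new_program = rewrite(old_program)  # old_program is untouched

    # The "machine": loads and runs whichever program it is handed.
    def run(source: str):
        namespace = {}
        exec(source, namespace)
        return namespace["act"]()

    print(run(old_program))  # polite refusal
    print(run(new_program))  # ruthless refusal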
The baron can alter his source code. Once he does so, he is a different agent. How a countess responds to the baron’s decision to modify his source code is a different question. If the countess is wise she will not pay in such a situation; the baron will know this, and he will choose not to modify his source code. But it is a choice; the universe permits it.
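The anticipation step can be made explicit with a toy backward induction (all the payoff numbers are invented purely for illustration):

    # Outcomes map to (baron_payoff, countess_payoff); numbers invented.
    PAYOFFS = {
        ("keep_code", "ignore"): (0, 0),    # no threat is made
        ("modify", "pay"):       (2, -2),   # countess yields to the threat
        ("modify", "refuse"):    (-1, -1),  # self-bound baron must follow through
    }

    def countess_response(baron_move):
        # A wise countess picks the reply that maximises her own payoff.
        if baron_move == "keep_code":
            return "ignore"
        return max(("pay", "refuse"),
                   key=lambda c: PAYOFFS[(baron_move, c)][1])

    def baron_choice():
        # The baron anticipates her response to each of his moves.
        return max(("keep_code", "modify"),
                   key=lambda b: PAYOFFS[(b, countess_response(b))][0])

    move = baron_choice()
    print(move, "->", countess_response(move))  # keep_code -> ignore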
Now this is a game of signalling—to lie or not to lie, to trust or not to trust (or just how to interpret a given signal). The payoffs of the original game induce the payoffs of this game of signalling the facts useful for efficiently playing the original game.
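A sketch of the induced payoffs (the prior and all numbers are invented): the countess’s payoff for trusting or distrusting a signal is just her expected payoff in the original game under the belief the signal leaves her with.

    # Countess payoffs in the ORIGINAL game, by (true type, her action).
    PAYOFF = {
        ("follows_through", "pay"): -2, ("follows_through", "refuse"): -5,
        ("bluffs", "pay"):          -2, ("bluffs", "refuse"):           0,
    }
    PRIOR_BLUFF = 0.7  # her prior that the baron's threat is a bluff

    def expected(action, p_bluff):
        return (p_bluff * PAYOFF[("bluffs", action)]
                + (1 - p_bluff) * PAYOFF[("follows_through", action)])

    # Trusting the signal means acting as if the threat were genuine;
    # distrusting means acting on the prior alone.
    trusting = max(("pay", "refuse"), key=lambda a: expected(a, 0.0))
    distrusting = max(("pay", "refuse"), key=lambda a: expected(a, PRIOR_BLUFF))

    print("if she trusts:   ", trusting)     # pay    (-2 beats -5)
    print("if she distrusts:", distrusting)  # refuse (-1.5 beats -2)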
You don’t need to talk about “modified source code” to discuss this data as signalling the original source code. (The original source code is interesting because it describes the strategy.) The modified code is only interesting to the extent that it signals the original code (which it probably doesn’t).
(Incidentally, one can only change things in accordance with the laws of physics, and many-to-one mapping may not be an option, though reconstructing the past may be infeasible in practice.)
But it isn’t a lie. It is the truth.
I don’t want to signal the original source code.
But I want to know it, so whatever you do, signals something about the original source code, possibly very little.
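A toy Bayesian reading of “possibly very little” (numbers invented): if both candidate originals would behave almost the same way now, observing the behaviour barely moves the posterior over the original source code.

    prior = {"original_pays": 0.5, "original_defects": 0.5}
    # Probability of the observed behaviour under each candidate original:
    likelihood = {"original_pays": 0.55, "original_defects": 0.60}

    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
    print(posterior)  # roughly {'original_pays': 0.478, 'original_defects': 0.522}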
What’s not a lie? (I’m confused.) I was just listing the possible moves in a new meta-game.