When you say “precommitted”, you mean “effectively signalled precommitment”. When you say “can’t precommit” (that is, can precommit only to certain other things), you mean “there is no way of effectively signalling this precommitment”. Thus, you state that you can’t signal that you’d uphold a counterfactual precommitment. But if it’s possible to give your source code, you can.
(Or the game might have a notion of rational strategy, and so you won’t need either source code or signalling of precommitment.)
Please don’t correct me on what I think. My use of precommitting has absolutely nothing to do with signaling. I first thought about these things (this explicitly) in the context of time travel, and you can’t fool the universe with signaling, no matter how good your acting skills.
I don’t propose fooling anyone; signaling is most effective when it’s truthful.
What could it mean to “make a precommitment”, if not to signal the fact that your strategy is a certain way? Your strategy either is, or isn’t, a certain way; this is a fixed fact about yourself, and facts don’t change. This being apparently the only resolution, I was not so much correcting as elucidating what you were saying (but assuming you didn’t think of this elucidation explicitly), in order to make the conclusion easier to see (that the problem is the inability to signal counterfactual aspects of the strategy).
Signaling is about perceptions, not the truth by necessity. That means that fooling is at least a hypothetical possibility. Which is not the case for my use of precommitment.
Taking the decision not to change your mind later in a way you will stick to. If, as you seem to suggest, the question of whether the agent later acts a certain way or not is already implicit in its original source code, then this agent already comes into existence precommitted (or not, as the case may be).
That you’ve taken this decision is a fact about your strategy (as such, it’s timeless: looking at it from ten years ago doesn’t change it). There is a similar fact about what you’d do if the situation were different.
Did you read about counterfactual mugging, and do you agree that one should give up the money? No precommitment in this sense could help you there: there is no explicit decision in advance; it has to be a “passive” property of your strategy (the distinction between a decision that was “made” and one that wasn’t is a superficial one—that’s my point).
How could it be otherwise? And if so, “deciding to precommit” (in the sense of fixing this fact at a certain moment) is impossible in principle. All you can do is tell the other player about this fact, maybe only after you yourself discovered it (as being the way to win, and so the thing to do, etc.)
Yes, it’s a fact about your strategy, but this particular strategy would not have been your strategy before making that decision (it may have been a strategy you were considering, though). Unless you want to argue that there is no such thing as a decision, which would be a curious position in the context of a thought experiment about decision theory.
Yes, I considered myself precommitted to hand over the money when reading that. I would not have considered myself precommitted before my speculations about time travel a couple of years ago, and if I had read the scenario of the counterfactual mugging and nothing else here, and if I had been forced to say whether I would hand over the money without time to think it through, I would have said that I would not (I can’t tell what I would have said given unlimited time).
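As an aside, the pull of the “hand over the money” disposition can be made concrete with a toy expected-value comparison. The sketch below assumes the usual counterfactual-mugging stakes (pay $100 on heads; receive $10,000 on tails, but only if Omega predicts you would have paid on heads); the function name and numbers are illustrative, not anything stated in this thread.

```python
# Toy expected-value comparison for counterfactual mugging (a sketch, with
# assumed stakes: pay $100 on heads; receive $10,000 on tails iff Omega
# predicts you would have paid on heads).

def expected_value(pays_on_heads: bool, cost: float = 100.0,
                   reward: float = 10_000.0, p_heads: float = 0.5) -> float:
    heads_branch = -cost if pays_on_heads else 0.0
    tails_branch = reward if pays_on_heads else 0.0  # Omega rewards only predicted payers
    return p_heads * heads_branch + (1 - p_heads) * tails_branch

print(expected_value(True))   # 4950.0 -- the "hand over the money" disposition
print(expected_value(False))  # 0.0    -- the "refuse" disposition
```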
Would it make a difference if Omega told you that it tossed the coin a thousand years ago (before you’ve “precommitted”), but only came for the money now?
That would make no difference whatsoever of course. Only the time I learn about the mugging matters.
But the coin precommitted to demand the money from you first. How do you reconcile this with your position about the order of precommitments?
Are you trying to make fun of me?
No, a serious question. I was referring to the discussion starting from the top-level comment here (it’s more of praise’s position—my mistake for confusing this—it’s unclear whether you agree).
“Who precommits first wins” means that if one party can make the other party learn about its precommitment before the other party can commit, the first party wins. Not because commitment has magical powers that vary with time, but because learning about the precommitment makes making an exception in just this one case “rational” (if it’s not “rational” to you, you already had implicitly precommitted).
Yes, this (general spin of your argument, not particular point) was my position at one time as well, until I realized that all rational decision-making has to consist of such “implicit precommitments”, which robs the word of nontriviality.
Using the word precommitment makes it easier to talk about these things (unless you find yourself in an argument like this). And finding a reason to treat just this one case as an exception can genuinely be in the best interest of a particular instance of you that already finds itself in that exception (this is more obvious with time travel scenarios than with game-theoretic scenarios), even though eliminating such exceptions is in your overall interest (since it will reduce the probability of the circumstances of these would-be exceptions arising in the first place).
I don’t agree. Not because I think you are believing anything crazy. I disagree with what is rational for the second person to do. I say that anything an agent can do by precommitting to an action it can also do just because it is the rational thing to do. Basically, any time you are in a situation where you think “I wish I could go back in time and change my source code so that right now I would be precommitted to doing X”, just do X. It’s a bit counter-intuitive but it seems to give you the right answer. In this case the Baron will just not choose to precommit to defection, because he knows that will not work due to the “if I could time travel...” policy that he reads in your source code. It’s kind of like ‘free precommitment’!
ETA: The word ‘rational’ was quoted, distancing FAWS’s own belief from a possible belief that some other people may call “rational”. So I do agree. :)
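For concreteness, here is a minimal sketch of the “free precommitment” policy described in the comment above: the agent simply does X whenever it would have wished to be precommitted to doing X, and a blackmailer who can read that decision procedure sees that blackmail cannot pay. The agent, the Baron’s reasoning, and the string values are all invented for illustration.

```python
# A sketch of "free precommitment": the agent acts as it would have
# precommitted to act, and the Baron, inspecting that policy, abstains.

def agent_policy(threat_received: bool) -> str:
    # "If I could go back in time, I'd precommit to refusing" -- so just refuse now.
    return "refuse" if threat_received else "carry on"

def baron_chooses(policy) -> str:
    # The Baron reads the agent's source (here: simply runs it) before acting.
    predicted_reaction = policy(threat_received=True)
    # Blackmailing an agent that will refuse only costs the Baron, so he abstains.
    return "blackmail" if predicted_reaction == "comply" else "no blackmail"

print(baron_chooses(agent_policy))  # -> "no blackmail"
```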
I thought it was obvious that I have exactly the same opinion you voice in this post? After all, I used quotes for “rational” and mentioned that considering this not rational is equivalent to an implicit precommitment. And I thought it was obvious from my other posts that I already have an implicit precommitment through the sort of mechanism you describe.
Pardon me, I did not notice the quotes you placed around “rational”. I was surprised by what seemed to me to be a false claim because your other posts did suggest to me that you ‘get it’. Oversights like that are my cue to sleep!
If you allow precommitments that are strategies that react to what you learn (e.g. about other precommitments), you won’t need any exceptions. You’d only have “blank” areas where you haven’t yet decided your strategy.
Have I ever said anything else? I believe I mentioned agents that come into existence precommitted, and my very first post in this thread mentioned such a fully general, indistinguishable-from-strategy precommitment. The case I described is the one where “precommitted first” makes sense. Which is also the sort of case in the original post. Obviously the precise timing of a fully general precommitment before the actors even learn about each other doesn’t matter.
Agreed. (I assume by non-general precommitments—the timing of which matters—you refer to specific nonconditional strategies that don’t take anything into account—obviously you won’t want to make such a precommitment too early, or too late. I still think it’s a misleading concept, as it suggests that precommitment imposes an additional limitation on one’s actions, while, as you agree, it doesn’t when it isn’t rational—that is, when you’ve made a “general precommitment” to avoid that.)
I meant things like “I commit to one-box in Newcomb’s problem” or “I commit not to respond to Baron Chastity’s blackmail”, specific precommitments you can only make after anticipating that situation. As a human it seems to be a good idea to make such a specific precommitment in addition to the general precommitment for the psychological effect (this is also more obvious in time travel scenarios), so I disagree that this is a misleading concept.
For humans, certainly it’s a useful concept. For rational agents, exceptions overwhelm.
Why should rational agents deliberately sabotage their ability to understand humans? Merely having a concept of something doesn’t imply applying it to yourself. Not that I even see any noticeable harm in a rational agent applying the concept of a specific precommitment to itself. It might be useful for e.g. modeling itself in hypothesis testing.
Obviously.
Determinism doesn’t allow such magic. You need to read up on free will.
Are you being deliberately obtuse?
I consider a strategy that involves killing myself in certain circumstances, but have not yet committed to it.
Before I can do so, these circumstances suddenly arise. I chicken out and don’t kill myself, because I haven’t committed yet (or psyched myself up, if you want to call it that). That strategy wasn’t really my strategy yet.
Five minutes later I have committed myself to that strategy. The circumstances I would kill myself under arise, and I actually do it (or so I hope; I’m not completely sure I can make precommitments that strong). The strategy I previously considered is now my strategy.
How is any of that free will magic?
Thanks, this explains the “would not have been your strategy” thing.
So, when you talk about “X is not my strategy”, you refer to a particular time: X is not the algorithm you implement at 10AM, but X is the algorithm you implement at 11AM. When you said “before I decided at 10:30AM, X wasn’t my strategy”, I heard “before I decided at 10:30AM, at 11AM there was no fact about which strategy I implement, but after that, there appeared a fact that at 11AM I implement X”, while it now seems that you meant “at 10AM I wasn’t implementing X; I decided to implement X at 10:30AM; at 11AM I implemented X”. Is the disagreement resolved? (Not the original one though, of the top-level comment—that was about facts.)
Yes. I can’t see why you would interpret my position in a way that is both needlessly complicated (taking “before” to be a statement about some sort of meta-time rather than just plain normal time?) and doesn’t make any sense whatsoever, though.
Well, it’s a common failure mode; you should figure out some way of signalling that you don’t fall into it (and I should learn to ask the right questions). Since you can change your mind about what to do at 11AM, it’s appealing to think that you can also change the fact of the matter of what happens at 11AM. To avoid such confusion, it’s natural enough to think about “the algorithm you implement at 10AM” and “the algorithm you implement at 11AM” as unrelated facts that don’t change (but depend on, and are controlled by, particular systems, such as your source code at a given time, or are even “acausally”, or “logically”, controlled by the algorithms in terms of which they are defined).
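One way to picture this “fixed facts” framing is as a single time-indexed mapping: what you do at each moment, including the moment of deciding, is part of one mapping rather than an edit applied to it from outside. The sketch below uses made-up times and actions and is only meant to illustrate the framing, not anyone’s actual formalism.

```python
# A sketch of the "timeless" framing: the agent's behaviour at every time is
# one fixed mapping; "deciding at 10:30" is part of that mapping, not an
# edit made to it from outside. (Times and actions here are invented.)

def action_at(hour: float) -> str:
    if hour < 10.5:
        return "still deliberating"   # the algorithm implemented at 10AM
    return "implement X"              # the algorithm implemented at 11AM

history = {hour: action_at(hour) for hour in (10.0, 10.5, 11.0)}
print(history)  # the whole history is a single fact; nothing about 11AM "changes"
```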
Any evidence, that is, any way in which you may know facts about the world, is up to interpretation, and you may err in interpreting it. But it’s also the only way to observe the truth.
You are talking about the relation between truth and your own perceptions. None of this is relevant for the relation between truth and what you want other people’s perceptions to be, which is the context in which those words are used in the post you reply to. Are you deliberately trying to misinterpret me? Do I need to make all of my posts lawyer-proof?
No.
The other people will interpret your words depending on whether they expect them to be in accordance with reality. Thus, I’m talking about the relation between the way your words will be interpreted by the people you talk to, and the truth of your words. If signaling (communication) bore no relation to the truth, it would be as useless as listening to white noise.
You’re doing it again. I never said that signaling bore no relationship to the truth whatsoever; I said it was about perceptions and not by necessity about the truth. What I (obviously, it seemed to me) meant was that signaling means attempting to manipulate the perceptions of others in a certain way, and that this does not necessarily mean changing the reality of the thing those perceptions are about.
You can’t change reality… You can only make something change in time, but every instant, as well as the whole shape of the process of change, are fixed facts.
By signalling I mean, for example, speaking (though the term fits better in the original game). Of course, you are trying to manipulate the world (in particular, perceptions of other people) in a certain way by your actions, but it’s a general property shared by all actions.
You can’t change reality in this meta-time sort of sense you seem to be eager to assign me. If I take a book out of the bookcase and put it on my desk, I have changed the reality of where that book is. I haven’t changed the reality of where that book will be in 2 minutes in your meta-time sense, through my magical free-will powers at the meta-time of making the decision to do that, but I have changed the reality of where that book is in the plain English sense.
EDIT: You edited your post while I was replying. I only saw the first sentence.
Agreed.
What I was 10 years ago is a fixed fact about what I was 10 years ago. That doesn’t change. But I have.
So? (Not a rhetorical question.)
The point is that it is not a fixed fact about yourself unless you have an esoteric definition of self that is “what I was, am or will be at one particular instant in time”. Under the conventional meaning of ‘yourself’, you can change and do so constantly. Essentially the ‘So?’ is a fundamental rejection of the core premise of your comment.
(We disagree about a fundamental fact here. It is a fact that appears trivial and obvious to me and I assume your view appears trivial and obvious to you. It doesn’t seem likely that we will resolve this disagreement. Do you agree that it would be best for us if we just leave it at that? You can, of course, continue the discussion with FAWS who on this point at least seems to have the same belief as I.)
Also, you shouldn’t agree with the statement I cited here. (At least, it seems to be more clear-cut than the rest of the discussion.) Do you?
I agree with the statement of FAWS’ that you quoted there. Although I do note that FAWS’ statement is ambiguous. I only agree with it to the extent that the meaning is this:
Yes, it’s a fact about your strategy, but this particular strategy would not have been your strategy before making [the decision to precommit, which involved some change in the particles in the universe such that your new state is one that will take a certain action in a particular circumstance].
Still ambiguous, and hints at non-lawful changes, though likely not at all intended. It’s better to merge in this thread (see the disambiguation attempt).
What is the fact about which you see us disagreeing? I don’t understand this discussion as having a point of disagreement. From my point of view, we are arguing relevance, not facts. (For example, I don’t see why it’s interesting to talk of “Who this fact is about?”, and I’ve lost the connection of this point to the original discussion.)
You can modify your source code.
You can make precommitments.
The answer to “What could it mean to ‘make a precommitment’?” is ‘make a precommitment’. That is a distinct thing from ‘signalling that you have made a precommitment’. (If you make a precommitment and do not signal it effectively then it sucks for you.)
More simply—on the point on which you were disagreeing with FAWS, I assert that:
FAWS’ position does have meaning.
FAWS’ meaning is a different meaning to what you corrected it to.
FAWS is right.
It is probably true that we would make the same predictions about what would happen in given interactions between agents.
Sure, why not?
Not helping!
Of course, having a strategy that behaves in a certain way and signaling this fact are different things. It isn’t necessarily a bad thing to hide something (especially from a jumble of wires that distinguishes your thoughts, and not just your actions, as terminal value).
Not helping!
No, it is not. You asked (with some implied doubt) where we disagree. I answered as best I could. As I stated, we are probably not going to resolve our disagreement, so I will leave it at that, with no disrespect intended beyond, as Robin often describes, the inevitable disrespect implicit in the actual fact of disagreement.
The “Not helping!” parts didn’t explain where we disagree (what are the facts I believe are one way and you believe are the other way), they just asserted that we do disagree.
But the last sentence suggests that we disagree about the definition of disagreement, because how could we disagree if you concede that “It is probably true that we would make the same predictions about what would happen in given interactions between agents”?
FAWS clearly does not mean that. He means what he says he means and you disagree with him.
Since the game stipulates that one of the two acts before the other, editing their source code is a viable option. If you happen to know that the other party is vulnerable to this kind of tactic, then this is the right decision to make.
On this I agree.
I don’t disagree with him, because I don’t see what else it could mean.
See the other reply—the edited code is not an interesting fact. The communicated code must be the original one—if it’s impossible to verify, this just means it can’t be effectively communicated (signalled), which implies that you can’t signal your counterfactual precommitment.
No, it need not be the original code. In fact, if the Baron really wants to, he can destroy all copies of the original code. This is a counterfactual actual universe. The agent that is the Baron is made up of quarks, which can be moved about using the normal laws of physics.
It need not be the original code, but if we are interested in the original code, then we read the communicated data as evidence about the original code—for what it’s worth. It may well be in the Baron’s interest to give info about his code—since otherwise, what distinguishes him from a random jumble of wires? In that case the outcome may not be appropriate for his skills.