This entire thing is super confused. A lot of complexity and assumptions are hidden inside your words, seemingly without you even realizing it.
The whole point of using a formal language is that IF your premises / axioms are correct, AND you only use logically allowed operations, THEN what comes out at the tail end should equal truth. However, you are really just talking with letters acting as placeholders for what could just as well be simply more words:
Committing on A’s part causes B to commit to defect (and vice versa). Committing leads to outcomes ranked 3rd and 4th in their preferences. As A and B are rational, they do not commit.
What does “commit” mean?
As A is not committing, A’s strategy is either predict(B) or !predict(B).
What could it even mean to not commit and !predict(B)? How is not predicting B just another way of committing to defect/collaborate?
If A predicts B will defect, A can cooperate or defect.
If A predicts B will cooperate, A can cooperate or defect, and vice versa.
They can cooperate or defect whether or not they predict each other, because that is all they can do anyway, so this statement has zero information content.
The above assignment is circular and self-referential. If A and/or B tried to simulate it, it would lead to a non-terminating recursion of the simulations.
What makes you possibly say that with such confidence at this point in all this confusion?
You should google “cargo cult science” by Feynman, because it seems like there is also such a thing as cargo cult rationality, and frankly I think you’re doing it. I’m not trying to be mean, and it’s not like I could do this kind of mental gymnastics better than you, but you can’t do it either yet, and the sooner you realize it the better for you.
To commit to a choice means to decide to adopt that choice irrespective of all other information. Committing means not taking into account any other information in deciding your choice.
What could it even mean to not commit and !predict(B)? How is not predicting B just another way of committing to defect/collaborate?
To not commit is to decide to base your strategy on your prediction of the choice the opponent adopts. You can either choose the same choice as what you predicted, or choose the opposite of what you predicted (any other choice is adopting an invariant strategy).
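If it helps, the taxonomy above can be sketched in code (a toy sketch; the function names and the “C”/“D” labels are mine, not from the discussion): the two committal strategies are constant functions of the prediction, while the two non-committal ones actually depend on it.

```python
def commit_cooperate(prediction):
    # Committal: the choice ignores the prediction entirely.
    return "C"

def commit_defect(prediction):
    # Committal: likewise invariant in the prediction.
    return "D"

def copy_prediction(prediction):
    # Non-committal: choose the same as what you predicted.
    return prediction

def invert_prediction(prediction):
    # Non-committal: choose the opposite of what you predicted.
    return "D" if prediction == "C" else "C"

# Only the last two are genuine functions of the prediction; anything
# constant in `prediction` is, by the definition above, a commitment.
for strategy in (commit_cooperate, commit_defect, copy_prediction, invert_prediction):
    print(strategy.__name__, strategy("C"), strategy("D"))
```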
They can cooperate or defect whether or not they predict each other, because that is all they can do anyway, so this statement has zero information content.
Yes, I was merely outlining A’s options once they reach the point in their decision-making at which they predict the opponent’s strategy.
What makes you possibly say that with such confidence at this point in all this confusion?
It is obvious. Self-referential assignments do not compute. If the above assignment were implemented as a program, it would not terminate: trying to implement a self-referential assignment leads to an infinite recursion.
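As a concrete illustration of the claim (a toy sketch of my own, not anything from the original argument; the function names are mine), the self-referential assignment can be written out as two mutually recursive functions, each agent deciding by simulating the other:

```python
def choice_a():
    # A decides by predicting (simulating) B...
    return "defect" if choice_b() == "defect" else "cooperate"

def choice_b():
    # ...while B decides by predicting (simulating) A.
    return "defect" if choice_a() == "defect" else "cooperate"

def terminates():
    # Evaluating either choice spawns a simulation of the other,
    # which spawns another, and so on: the recursion never bottoms out.
    try:
        choice_a()
        return True
    except RecursionError:
        return False

print(terminates())  # the mutual simulation blows the stack instead of returning
```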
You should google “cargo cult science” by Feynman, because it seems like there is also such a thing as cargo cult rationality, and frankly I think you’re doing it. I’m not trying to be mean, and it’s not like I could do this kind of mental gymnastics better than you, but you can’t do it either yet, and the sooner you realize it the better for you.
I am not. If you do not understand what I said, you can ask for clarification rather than assuming poor epistemic hygiene on my part; that is not being charitable.
I am not. If you do not understand what I said, you can ask for clarification rather than assuming poor epistemic hygiene on my part; that is not being charitable.
The problem with this strategy is that one can end up wasting arbitrary amounts of one’s time on idiots and suicide rocks.
I’m pretty sure that charity is a good principle to have in general.