1) The notion of a “perfectly selfish rational agent” presupposes the concept of a utility function. So does the idea that agent A’s strategy must depend on agent B’s, which must depend on agent A’s. It doesn’t need to depend on anything; you can literally just do something. And that is what people do in real life. It seems silly to call that “irrational” when the “rational” action is a computation that doesn’t converge.
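A toy sketch of that non-converging computation, with hypothetical `strategy_A`/`strategy_B` functions (the payoffs don’t matter; the point is that each agent’s choice is defined in terms of the other’s, so naive mutual simulation never bottoms out):

```python
import sys

def strategy_A():
    # A's "rational" choice is defined as the best response to B's choice...
    return best_response(strategy_B())

def strategy_B():
    # ...and B's as the best response to A's: there is no base case anywhere.
    return best_response(strategy_A())

def best_response(opponent_move):
    # The regress happens before this ever runs, because Python must
    # evaluate the argument (the opponent's whole computation) first.
    return "defect"

sys.setrecursionlimit(100)  # keep the failure fast
try:
    strategy_A()
    result = "converged"
except RecursionError:
    result = "diverged"
print(result)  # → diverged
```

Meanwhile “literally just do something” — picking a move by habit, convention, or whim — terminates immediately, which is at least one sense in which it isn’t obviously the irrational option.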
2) I think humanity as a whole can be thought of as a single agent. Sure, maybe you can have a person who is “approximately that selfish”, but if they are playing a game against human values, there is nothing symmetrical about that. Even if you have two selfish people playing against each other, they are playing in the context of a world infused with human values, and this context necessarily informs their interactions.
I realize that simple games are only a proxy for complicated games. I am attacking the idea of simple games as a proxy for attacking the idea of complicated games.
3) Eliezer definitely says that when your decision is “logically correlated” with your opponent’s decision, you should cooperate regardless of whether there is anything causal about the correlation. This is the essential idea of TDT/UDT. Although UDT does have some valuable insights, I think there is also an element of motivated reasoning, of the form “it would be nice if rational agents played (C,C) against each other in certain circumstances rather than (D,D); how can we argue that this is the case?”
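For concreteness, here is the logical-correlation argument in miniature. The payoff numbers are illustrative (standard Prisoner’s Dilemma values, not from the comment above): once you assume your opponent runs the same decision procedure you do, the off-diagonal outcomes become unreachable, and maximizing over the diagonal picks cooperation.

```python
# Illustrative Prisoner's Dilemma payoffs: (my_move, their_move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def correlated_payoff(move):
    # The "logical correlation" assumption: whatever I output,
    # my (identical) opponent outputs too, so only (C,C) and (D,D)
    # are on the table.
    return PAYOFF[(move, move)]

best = max(["C", "D"], key=correlated_payoff)
print(best)  # → C
```

Whether real opponents are ever correlated with you in this strong a sense is exactly the part the argument has to establish, which is where the motivated-reasoning worry comes in.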