Basically, TDT is about the fact that we don't evaluate individual decisions for rationality but agents' strategies.
For every strategy that an agent has, other agents can punish the agent for having that strategy if they specifically choose to do so.
If there is, for example, a God/Matrix overlord who rewards people for believing in him even when they have no rational reason to believe in him, that would invalidate a lot of decision theories.
Basically, decisions are entangled with the agent that makes them and are not independent, as you assume in your model. That entanglement rules out perfect decision-making strategies that work independently of the world and of whether other agents decide to specifically discriminate against certain strategies.
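The "any strategy can be punished" point can be made concrete with a toy model. The sketch below is my own illustration, not anything from TDT or Eliezer; the strategy labels and the punishing-world construction are assumptions chosen for the example. It just shows that for whatever strategy an agent commits to, one can construct a world whose other agents pay that strategy worst, so no strategy is optimal across all worlds.

```python
# Toy sketch (assumptions: arbitrary strategy labels, adversary that can
# inspect which strategy the agent runs and pay it out accordingly).
STRATEGIES = ["one-box", "two-box", "cooperate", "defect"]

def world_that_punishes(target):
    """Return a payoff function for a world that discriminates against `target`."""
    return lambda strategy: 0 if strategy == target else 1

for target in STRATEGIES:
    payoff = world_that_punishes(target)
    scores = {s: payoff(s) for s in STRATEGIES}
    print(f"world punishing {target!r}: {scores}")

# Every strategy gets the worst score in exactly the world built to punish it,
# so no strategy is optimal independently of what the other agents choose to do.
```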
Do you have a quote where Eliezer says rationality doesn't exist? I don't believe that he'd say that. I think he'd argue that it is okay for rationality to fail when rationality is specifically being punished.
Regardless, I disagree with TDT, but that’s a future article.
Do you have a quote where Eliezer says rationality doesn't exist?
No, but I don't think that he shares your idea of perfect theoretical rationality anyway. From that standpoint there's no need to say that it doesn't exist.
I think it's Eliezer's position that it's good enough that TDT provides the right answer when the other agent doesn't specifically choose to punish TDT.
But I don’t want to dig up a specific quote at the moment.
Regardless, I disagree with TDT, but that’s a future article.
You are still putting assumptions about nonentanglement into your models. Questions about the actions of nonentangled actors are like asking how many angels can stand on the head of a pin.
They are far removed from the kind of rationality of the Sequences. You reason far away from reality. That's a typical problem with thinking too much in terms of hypotheticals instead of basing your reasoning on real-world references.
In order to prove that it is a problem, you would have to find a real-world situation that I've called incorrectly because of my decision to start reasoning from a theoretical model.
In general that's hard without knowing many real-world situations that you have judged.
But there is a recent example: the claim that different people won't come up with different definitions of "knowledge" if you ask them is a case of concentrating too much on a definition and too little on actually engaging with what different people think.
My understanding was that he argued for a different definition of rationality, not that it didn’t exist.