It’s not the antagonistic tone of your comments that puts me off, it’s the way in which you seem to deliberately not understand things. For example, my definition of “analogous” — what else could you possibly have expected in this context? No, don’t answer that.
I genuinely don’t understand what question you’re asking
I believe I have said everything already, but I’ll put it in a slightly different way:
Given a problem A, find an analogous problem B with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions, or prove that such a problem B cannot exist.
For instance, how can we find a problem that is analogous to Newcomb, but without Omega? I have described such an analogous problem in my top-level post and demonstrated how CDT agents, in the initial state, will not make the analogous decision. What we’re looking for is a problem in which any imaginable agent would make it, and we can prove it. If we believe that such a problem cannot exist without Omega, how can we prove that?
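To make the CDT failure mentioned above concrete, here is a minimal sketch of the standard CDT calculation in Newcomb’s problem. The payoff numbers ($1,000,000 / $1,000) are the usual illustrative ones, not taken from the top-level post, and the function names are mine:

```python
# Hedged sketch: a CDT agent facing Newcomb's problem with the
# standard illustrative payoffs. The opaque box holds $1,000,000 iff
# one-boxing was predicted; the transparent box always holds $1,000.

PAYOFF = {
    # (action, predicted_action) -> payoff in dollars
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

def cdt_choice(p_predicted_one_box: float) -> str:
    """CDT treats the prediction as causally fixed: it averages over the
    (already-made) prediction and picks the act with higher expected payoff."""
    def eu(action):
        return (p_predicted_one_box * PAYOFF[(action, "one-box")]
                + (1 - p_predicted_one_box) * PAYOFF[(action, "two-box")])
    return max(["one-box", "two-box"], key=eu)

# Two-boxing gains $1,000 for every fixed prediction, so CDT two-boxes
# regardless of how accurate the predictor is:
print(cdt_choice(0.99))  # -> two-box
```

Because two-boxing dominates once the prediction is held fixed, no accuracy value changes the CDT answer — which is exactly the behavior the analogous problem would need to reproduce.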
The meaning of analogous should be very clear by now. Screw practical and impractical.
As a side note, I don’t know what kind of stuff they teach at US grad schools, but what helps here is familiarity with methods of proof and a mathematical mindset rather than mathematical knowledge, apart from some basic game theory and decision theory. As far as I know, what I’m trying to do here is uncharted territory.
For example, my definition of “analogous” — what else could you possibly have expected in this context? No, don’t answer that.
The question is how close you wanted the analogy to be.
For instance, how can we find a problem that is analogous to Newcomb, but without Omega? I have described such an analogous problem in my top-level post and demonstrated how CDT agents, in the initial state, will not make the analogous decision. What we’re looking for is a problem in which any imaginable agent would make it, and we can prove it. If we believe that such a problem cannot exist without Omega, how can we prove that?
Okay, this is clearer.
As a side note, I don’t know what kind of stuff they teach at US grad schools, but what helps here is familiarity with methods of proof and a mathematical mindset rather than mathematical knowledge
I can point you to a large body of evidence that I have all of these things.
The question is how close you wanted the analogy to be.
Close enough that anything we can infer from the analogous problem must apply to the original problem as well, especially concerning the decisions agents make. I thought I said that a few times.
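One hedged way to make that condition precise (the notation is mine, not the original poster’s): call problems A and B analogous when a payoff-preserving bijection of strategies carries every agent’s decision in A to its decision in B.

```latex
% Sketch of a formal reading of "analogous". S_A, S_B are the strategy
% sets, u_A, u_B the payoff functions, and d_X(P) the decision that
% agent X makes in problem P.
\[
  B \sim A \iff \exists\, \varphi : S_A \to S_B \text{ bijective such that}
\]
\[
  \forall s \in S_A:\; u_B(\varphi(s)) = u_A(s)
  \quad\text{and}\quad
  \forall \text{ agents } X:\; d_X(B) = \varphi\bigl(d_X(A)\bigr).
\]
```

Under this reading, the second condition — quantifying over all possible agents X — is exactly the part the thread identifies as the hard, possibly unprovable step.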
Okay, this is clearer.
Does that imply it is actually clear? Do you have an approach for this? A way to divide the problem into smaller chunks? An idea how to tackle the issue of “any possible agent”?
I’ll give you a second data point to consider. I am a soon-to-be-graduated pure math undergraduate. I have no idea what you are asking, beyond very vague guesses. Nothing in your post or the preceding discussion is of a “rather mathematical nature”, let alone a precise specification of a mathematical problem.
If you think that you are communicating clearly, then you are wrong. Try again.
Nothing in your post or the preceding discussion is of a “rather mathematical nature”, let alone a precise specification of a mathematical problem.
Given a problem A, find an analogous problem B with the same payoff matrix for which it can be proven that any possible agent will make analogous decisions, or prove that such a problem B cannot exist.
You do realize that game theory is a branch of mathematics, as is decision theory? That we are trying to prove something here, not by empirical evidence, but by logic and reason alone? What do you think this is, social economics?
Your question is not stated in anything like the standard terminology of game theory and decision theory. It’s also not clear what you are asking on an informal level. What do you mean by “analogous”?
What you have stated is unclear enough that I can’t recognize it as a problem in either game theory or decision theory, and meanwhile you are being very rude. Disincentivizing people who try to help you is not a good way to convince people to help you.
That’s because it’s not, strictly speaking, a problem in GT/DT; it’s a problem (or meta-problem, if you want to call it that) about GT/DT. It’s not “Which decision should agent X make?” but “How can we prove that problems A and B are identical?”
Concerning the matter of rudeness: suppose you write a post and any number of comments about a mathematical issue, only for someone who doesn’t even read what you write, and who says he has no idea what you’re talking about, to conclude that you’re not talking about mathematics. I find that rude.
I’m not surprised you don’t understand what I’m asking when you don’t read what I write.
I did read that. It either doesn’t say anything at all, or else it trivializes the problem when you unpack it.
Also, this is not worth my time. I’m out.