I still want to figure out games (like PD) in the oracle setting first. After the abortive attempt on the list, I haven’t yet gotten around to rethinking the problem. Care to take a stab?
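For concreteness, by the oracle setting I mean the model where the agent can decide provability (the halting oracle settles whether a proof search halts). Roughly, the one-player agent then looks like this (my sketch; the exact definition in the writeup may differ): for each action a, use the oracle to find the best u such that

\[
\mathrm{PA} \vdash \big( A() = a \rightarrow U() = u \big),
\]

and return the action with the best provable consequence. The question is what replaces this picture when U() contains another agent.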
The symmetric case (identical payoffs and identical algorithms) is trivial in the oracle setting. The case of non-identical algorithms seems moderately difficult: our candidate solutions in the non-oracle setting, like Löbian cooperation, only work because they privilege one of the outcomes a priori. The case of non-identical payoffs seems very difficult; we have no foothold at all.
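To spell out the asymmetry: in the usual modal formulation (notation mine), each agent cooperates iff it can prove its opponent cooperates against it:

\[
\mathrm{FB}(X) =
\begin{cases}
C & \text{if } \mathrm{PA} \vdash X(\mathrm{FB}) = C,\\
D & \text{otherwise.}
\end{cases}
\]

For two such agents, PA proves \(\Box(\mathrm{FB}(\mathrm{FB}) = C) \rightarrow \mathrm{FB}(\mathrm{FB}) = C\) by construction, so Löb’s theorem yields \(\mathrm{PA} \vdash \mathrm{FB}(\mathrm{FB}) = C\), i.e. mutual cooperation. The argument goes through only because C was singled out in the source code in advance; with non-identical payoffs there is no distinguished outcome to hard-code.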
I think we have a nice enough story for “fair” problems (where easy proofs of moral arguments exist), and no good story for even slightly “unfair” problems (like ASP, i.e. Agent Simulates Predictor, or the non-symmetric PD). Maybe the writeup should emphasize the line between these two kinds of problems. It’s clear enough in my mind.
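To make the line explicit (my gloss): by moral arguments I mean implications of the form

\[
A() = a \;\rightarrow\; U() = u.
\]

A problem counts as “fair” when, for the intended action, such an implication has a short proof that the agent’s proof search reliably finds; in unfair problems, no easily provable implication of this shape singles out the good outcome.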
Part of the motivation was to avoid specifying agents as algorithms, specifying them instead as (more general) propositions about their actions. It’s unclear to me how to combine this with the possibility of other agents reasoning about such agents.
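One way to cash this out (very rough): instead of fixing A as a particular program, take the agent to be anything satisfying an axiom constraining its action, say

\[
A() = C \;\leftrightarrow\; \varphi
\]

for some formula \(\varphi\). Any algorithm meeting the axiom counts as the agent; but then other agents have no quoted source code to reason from, only the axiom, which is exactly where the difficulty comes in.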
That’s very speculative; I don’t remember any nontrivial results in this vein so far. Maybe the writeup needn’t wait until this gets cleared up.
Is this the first time an advanced decision theory has been given a mathematical expression rather than just a verbal-philosophical one?
This totally deserves to be polished a bit and published in a mainstream journal.
That’s a question of degree. Some past posts of mine are similar to this one in formality.
Nesov also said in an email on Jan 4 that we can now write this stuff up. I think Wei and Gary should be listed as coauthors too.
(It’s not “advanced”; it’s not even in its infancy yet. On the other hand, there is a lot of decision theory that’s actually advanced, but it solves different problems.)
I think Luke meant “advanced” as in superrationality, not “advanced” as in highly developed.
BTW, nice work.