I think it would be helpful to have a worked example here—say, the twin PD in which both players are close but not identical copies, and they are initially unsure about whether or not they are correlated (one thinks they probably are, the other thinks they probably aren’t), but they want to think and reflect more before making up their minds. (Case 2: as above, except that they both begin thinking that they probably are.) Is this the sort of thing you are imagining?
One alternative (instead of running through a lot of irrelevant mathematical observations every time we need a new decision) is to run our heuristics continuously (also in decisions we care about), and keep track of which work better.
Put in terms of Logical Inductors, this amounts to taking all the traders from two Inductors, selecting those that have done best (each tested on its own Inductor), and computing their aggregate bet.
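A minimal sketch of that selection-and-aggregation step, assuming toy Trader objects. The wealth-as-track-record scoring and the wealth-weighted average below are one plausible choice for illustration, not the construction from the Logical Induction paper:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trader:
    name: str
    wealth: float                # track record earned on its *own* Inductor
    bet: Callable[[str], float]  # probability it assigns to a sentence

def merge_best(pool_a: List[Trader], pool_b: List[Trader],
               sentence: str, k: int = 1) -> float:
    """Take the k wealthiest traders from each pool (each scored only on
    its home market) and return their wealth-weighted aggregate bet."""
    best = (sorted(pool_a, key=lambda t: t.wealth, reverse=True)[:k]
            + sorted(pool_b, key=lambda t: t.wealth, reverse=True)[:k])
    total = sum(t.wealth for t in best)
    return sum(t.wealth * t.bet(sentence) for t in best) / total

# Two tiny trader pools with invented wealths and constant betting policies.
pool_a = [Trader("a1", 5.0, lambda s: 0.9), Trader("a2", 1.0, lambda s: 0.2)]
pool_b = [Trader("b1", 3.0, lambda s: 0.7), Trader("b2", 0.5, lambda s: 0.4)]
print(merge_best(pool_a, pool_b, "A and B act alike"))  # -> 0.825
```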
Uh oh, this is starting to sound like Oesterheld’s Decision Markets stuff.
I think it would be helpful to have a worked example here—say, the twin PD […]
As in my A B C example, I was thinking of the simpler case in which two agents disagree about their joint correlation to a third. If the disagreement happens between the two sides of a twin PD, then they care about slightly different questions (how likely A is to Cooperate if B Cooperates, and how likely B is to Cooperate if A Cooperates) instead of the same question, which complicates the exposition. Although, if they wanted to, they could still share their heuristics, etc.
To be clear, I didn’t provide a complete specification of “what action a and action c are” (which game they are playing) simply because it seemed to distract from the topic: the relevant part is that they have different beliefs about the correlation, not what its contents are.
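To make the A B C case concrete anyway, here is a toy numeric sketch. Everything in it is invented for illustration (the payoffs, the credences, and the assumption that an uncorrelated C defects), since, as noted above, the actual game is deliberately left unspecified:

```python
# Toy sketch of the A-B-C disagreement: A and B each hold a different
# credence that a third agent C's action is correlated with (mirrors)
# their own. All payoffs and credences below are invented.
COOPERATE_BOTH, DEFECT_BOTH = 3.0, 1.0   # PD-style payoffs
SUCKER, TEMPTATION = 0.0, 5.0

def ev_cooperate(p_corr: float) -> float:
    # If correlated, C mirrors us and cooperates; else assume C defects.
    return p_corr * COOPERATE_BOTH + (1 - p_corr) * SUCKER

def ev_defect(p_corr: float) -> float:
    return p_corr * DEFECT_BOTH + (1 - p_corr) * TEMPTATION

# A thinks the correlation is probably there; B thinks it probably isn't.
for agent, p_corr in [("A", 0.8), ("B", 0.3)]:
    act = "cooperates" if ev_cooperate(p_corr) > ev_defect(p_corr) else "defects"
    print(agent, act)
```

With these numbers A cooperates and B defects, so it is the disagreement about the correlation, not the contents of the game, that drives the different choices.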
Uh oh, this is starting to sound like Oesterheld’s Decision Markets stuff.
Yes! But only because I’m directly thinking of Logical Inductors, which are the analogous construction for epistemics. Better said: Caspar throws everything (epistemics and decision-making) into the traders, whereas here I am still using Inductors, which throw only epistemics into the traders.
My point is:

“In our heads, we do logical learning by a process similar to Inductors. To resolve disagreements about correlations, we can merge our Inductors in different ways. Some merges are lower-bandwidth and frugal, while others are higher-bandwidth and expensive.”

Exactly analogous points could be made about our decision-making (instead of our beliefs), and then the analogy would be to Decision Markets instead of Logical Inductors.
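As a hedged sketch of that bandwidth spectrum (the numbers and aggregation rules are invented, not from any paper): a frugal merge exchanges a single number per Inductor, while an expensive merge exchanges every trader’s track record and bet and reweights them jointly:

```python
# Two ways to merge Inductors' views on one sentence, differing in bandwidth.
# Traders are (track_record, bet) pairs; all numbers are invented.

def frugal_merge(price_a: float, price_b: float) -> float:
    """Low-bandwidth: each Inductor ships only its final price; average them."""
    return (price_a + price_b) / 2

def expensive_merge(traders_a, traders_b) -> float:
    """High-bandwidth: ship all traders and reweight by track record jointly."""
    pooled = traders_a + traders_b
    total = sum(w for w, _ in pooled)
    return sum(w * p for w, p in pooled) / total

traders_a = [(5.0, 0.9), (1.0, 0.2)]
traders_b = [(3.0, 0.7), (0.5, 0.4)]
price_a = expensive_merge(traders_a, [])  # each Inductor's own aggregate price
price_b = expensive_merge([], traders_b)
print(frugal_merge(price_a, price_b))         # ~0.720: cheap, loses detail
print(expensive_merge(traders_a, traders_b))  # ~0.737: costly, keeps weights
```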