Multiple fascinating ideas here. Two thoughts:
1. Solo formulation → open to market mechanism?
Jumping to your point on Recursion — I imagine you could ask participants to (1) specify their premises, (2) specify their evidence for each premise, (3) put confidence numbers on given facts, and (4) put something like a “strength of causality” or “strength of inference” on causal mechanisms, which collectively would output their certainty.
In this case, you wouldn't need two people who each want to wager against the other; anyone with a difference in confidence about a given fact, or about the (admittedly vague) "strength of causality" for how much a true-but-not-the-only-variable input affects a system, could participate.
Something along these lines might let you use the mechanism more as a market than an arbiter.
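For concreteness, here's a toy sketch of how those four inputs might mechanically output a certainty number. The multiplicative rule and the function name are my own assumptions (not anything you proposed), and real dependence between premises would break it:

```python
# Toy sketch (my assumptions, not a spec): naive multiplicative aggregation
# of per-premise confidences with a single "strength of inference" factor.
def output_certainty(premise_confidences, strength_of_inference):
    """Combine per-premise confidences (each 0-1) with a causal-strength
    factor (0-1) into an overall certainty for the conclusion.
    Assumes independent premises, all of which the conclusion requires."""
    certainty = strength_of_inference
    for c in premise_confidences:
        certainty *= c
    return certainty

# Two premises held at 95% and 90%, with an 80% strength-of-inference:
print(output_certainty([0.95, 0.90], 0.80))  # -> 0.684
```

Two participants could then disagree on any one input rather than on the whole conclusion, which is what makes the market framing possible.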
2. Discount rate?
After that, I imagine most people would want some discount rate to participate in this. I'm trying to figure out what odds I'd accept to wager against someone if I were 99% sure of a proposition… I don't think I'd lay 80:1 odds, even though it's in theory a good bet, simply because the fact that someone was willing to bet against me at such odds would itself be evidence that I might well be wrong!
That anyone participating in a thoughtful process along these lines would lay real money (or another valuable commodity, like computing power) against me suggests there's probably a greater than 1-in-50 chance I made an error somewhere.
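To make that update explicit (the arithmetic is mine, the odds are from above): laying 80:1 means risking 80 units to win 1, which is +EV only while my true error probability stays below 1/81 ≈ 1.2%, so updating to a 1-in-50 error rate flips the bet negative:

```python
# Sketch of the adverse-selection arithmetic; stakes in units of the win amount.
def ev_laying(odds, p_error):
    """Expected profit from laying `odds`:1 (risk `odds` to win 1)
    when your probability of being wrong is `p_error`."""
    return (1 - p_error) * 1 - p_error * odds

print(ev_laying(80, 0.01))    # naive 99% confidence: +0.19, "in theory a good bet"
print(ev_laying(80, 1 / 81))  # breakeven error rate: ~0.0
print(ev_laying(80, 0.02))    # after the 1-in-50 update: -0.62, now a bad bet
```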
Of course, if the time for Alice and Bob to prepare arguments were sufficiently low, if the resource pool were sufficiently large relative to each stake (Kelly criterion style), and if there were enough liquidity to get regression to the mean on reasonable timeframes and so reduce variance, then you'd be happy to play with small discounts as long as you were more right than not and reasonably well calibrated.
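One way to cash out "Kelly criterion style" (the formula is standard; applying it to laying odds here is my framing): the Kelly fraction tells you how much of the pool to stake, and it goes negative at exactly the point where the adverse-selection update above makes you stay out:

```python
# Kelly sizing sketch: f* = p - q/b for win probability p, loss
# probability q = 1 - p, and net odds b (profit per unit staked).
def kelly_fraction(p_win, net_odds):
    """Fraction of bankroll to stake; negative means don't bet."""
    return p_win - (1 - p_win) / net_odds

# Laying 80:1 means staking 80 to win 1, i.e. net odds of 1/80:
print(kelly_fraction(0.99, 1 / 80))  # ~0.19: stake ~19% of the pool
print(kelly_fraction(0.98, 1 / 80))  # -0.62: after the update, stay out
```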
Anyway — this is fascinating, lots of ideas here. Salut.