I think the best way to measure it meaningfully would be to consider the same scenario with millions of people doing it instead of just one, but even then it doesn't look like it makes much of a difference.
This is a good point. What happens in this individual case would be dominated by random facts about the individuals directly involved. If you imagine the same situation repeated many times (100 should be plenty), the randomness cancels out.
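A minimal sketch of that intuition, assuming the outcome of each repetition really is an i.i.d. draw with finite mean and variance (the Gaussian stand-in below is purely hypothetical):

```python
import random

def one_outcome():
    # Hypothetical stand-in for one repetition of the scenario:
    # an i.i.d. draw with finite mean (0) and finite variance.
    return random.gauss(0.0, 1.0)

def average_of(n):
    return sum(one_outcome() for _ in range(n)) / n

random.seed(0)
# Under these assumptions the law of large numbers applies: by ~100
# repetitions the average is already near the true mean, and it barely
# moves as n grows -- the individual-level randomness cancels out.
for n in (10, 100, 1_000, 10_000):
    print(n, average_of(n))
```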
So you might think. Sensitivity to initial conditions!
Care to explain why we should expect sensitivity to initial conditions to matter in the particular example being discussed here?
I am struggling to convey this, so I’ll have to think about it more.
For now, though: I do think that differences in the initial conditions would be propagated by adaptive individuals and institutions (rather than smoothed away). That should lead to bifurcations and path dependencies that generate drastically different outcomes, enough so that averaging across them would be meaningless.
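A hedged illustration of what I mean, using a Pólya urn as a generic stand-in for an adaptive, self-reinforcing process (a textbook model of path dependence, not a model of the specific scenario): every run converges, but each to its own limit, and the cross-run average describes almost no individual run.

```python
import random

def polya_run(steps=10_000):
    # Polya urn: draw a ball, return it plus one more of the same color.
    # Early random draws get amplified, so each run locks into its own
    # limiting fraction of red balls.
    red, black = 1, 1
    for _ in range(steps):
        if random.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)

random.seed(0)
finals = [polya_run() for _ in range(100)]
# The limiting fractions are spread roughly uniformly over (0, 1)...
print(min(finals), max(finals))
# ...so the average (~0.5) is near-meaningless as a summary of any run.
print(sum(finals) / len(finals))
```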
Why do you think repeating it many times would converge? Are the limit-theorem conditions really met here: independent draws from a fixed distribution with finite variance? I don't think so.
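To make that worry concrete, here is a hedged sketch of what happens when the limit-theorem conditions fail. Standard Cauchy draws have no finite mean, so sample averages never settle down, no matter how many repetitions you add (the Cauchy choice is illustrative, not a claim about the scenario):

```python
import math
import random

def cauchy():
    # Standard Cauchy draw via the inverse CDF: tan(pi * (U - 1/2)).
    return math.tan(math.pi * (random.random() - 0.5))

random.seed(0)
# The Cauchy distribution has no finite mean, so the law of large
# numbers does not apply: sample means keep jumping around as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, sum(cauchy() for _ in range(n)) / n)
```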
None of this explicitly rules out being able to figure out at least the sign of the change. It might be computationally intractable but still qualitatively determinable in special cases.
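One hedged sketch of such a special case: if the dynamics happen to be monotone (a strong assumption, chosen purely for illustration), an ordered pair of initial conditions stays ordered forever, so the sign of the change is determined even when its magnitude is hard to compute:

```python
import math

def f(x):
    # A strictly increasing map, chosen purely for illustration:
    # f'(x) = 1 - 0.5 / cosh(1 - x)**2 >= 0.5 > 0 everywhere.
    return x + 0.5 * math.tanh(1.0 - x)

# Because f is increasing, b0 > a0 implies b_t > a_t at every step:
# the SIGN of the perturbation's effect is known in advance, even if
# the size of the gap at step t is hard to predict analytically.
a, b = 0.30, 0.31  # baseline vs. slightly perturbed initial condition
for _ in range(20):
    a, b = f(a), f(b)
print(b > a, b - a)  # True; the gap shrinks here but never flips sign
```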