“Even if we could not, by physical law, possibly know the fact, this still does not equate to the fact having inherent unknowability.”
I think the sentence above nicely pinpoints where I part ways from you and Eliezer. To put it bluntly, if a fact is impossible for any physical agent to learn, according to the laws of physics, then that’s “inherently unknowable” enough for me! :-) Or to say it even more strongly: I don’t actually care much whether someone chooses to regard the unknowability of such a fact as “part of the map” or “part of the territory”—any more than, if a bear were chasing me, I’d worry about whether aggression was an intrinsic attribute of the bear, or an attribute of my human understanding of the bear. In the latter case, I mostly just want to know what the bear will do. Likewise in the former case, I mostly just want to know whether the fact is knowable—and if it isn’t, then why! I find it strange that, in the free-will discussion, so many commentators seem to pass over the empirical question (in what senses can human decisions actually be predicted?) without even evincing curiosity about it, in their rush to argue over the definitions of words. (In AI, the analogue would be the people who argued for centuries about whether a machine could be conscious, without—until Turing—ever cleanly separating out the “simpler” question, of whether a machine could be built that couldn’t be empirically distinguished from entities we regard as conscious.) A central reason why I wrote the essay was to try to provide a corrective to this (by my lights) anti-empirical tendency.
“you would be mistaken if you tried to draw on that fundamental “randomness” in any way that was not exactly equivalent to any other uncertainty, because on the map it looks exactly the same.”
Actually, the randomness that arises from quantum measurement is empirically distinguishable from other types of randomness. For while we can measure a state |psi> in a basis not containing |psi>, and thereby get a random outcome, we also could’ve measured |psi> in a basis containing |psi>—in which case, we would’ve confirmed that a measurement in the first basis must give a random outcome, whose probability distribution is exactly calculable by the Born rule, and which can’t be explained in terms of subjective ignorance of any pre-existing degrees of freedom unless we give up on locality.
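To make the Born-rule calculation concrete, here is a minimal numpy sketch (my illustration, not part of the original comment): it takes |psi> = |0> and computes the outcome probabilities for a measurement in a basis containing |psi> versus one that doesn’t.

```python
import numpy as np

# State |psi> = |0> in the computational basis.
psi = np.array([1.0, 0.0])

# A basis containing |psi>: the computational basis {|0>, |1>}.
# Measuring here confirms the state: outcome |0> with certainty.
computational = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# A basis NOT containing |psi>: the Hadamard basis {|+>, |->}.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
hadamard = [plus, minus]

def born_probabilities(state, basis):
    """Born rule: P(outcome i) = |<basis_i|state>|^2."""
    return [abs(np.vdot(b, state)) ** 2 for b in basis]

print(born_probabilities(psi, computational))  # [1.0, 0.0] -- deterministic
print(born_probabilities(psi, hadamard))       # ~[0.5, 0.5] -- irreducibly random
```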
But the more basic point is that, if freebits existed, then they wouldn’t be “random,” as I use the term “random”: instead they’d be subject to Knightian uncertainty. So they couldn’t be collapsed with the randomness arising from (e.g.) the Born rule or statistical coarse-graining, for that reason even if not also for others.
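One way to see why the two can’t be collapsed: ordinary randomness comes with a single probability distribution, while Knightian uncertainty gives you only a set of admissible distributions, so expected values turn into intervals. A toy sketch of the distinction (my own framing, with made-up numbers):

```python
# Toy contrast: "random" = one known distribution;
# "Knightian" = a set of possible distributions with no weights over them.

# Ordinary randomness: a Born-rule coin, P(heads) = 0.5 exactly.
p_quantum = 0.5
expected_payoff = p_quantum * 1 + (1 - p_quantum) * (-1)
print(expected_payoff)  # 0.0 -- a single well-defined number

# Knightian uncertainty: all we can say is P(heads) lies in [0.2, 0.8].
# The expected payoff is then only an interval, not a number.
p_low, p_high = 0.2, 0.8
payoff_interval = (p_low * 1 + (1 - p_low) * (-1),
                   p_high * 1 + (1 - p_high) * (-1))
print(payoff_interval)  # (-0.6, 0.6) -- no single expectation to report
```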
“Refusing to bet is, itself, just making a different bet.”
Well, I’d regard that statement as the defining axiom of a certain limiting case of economic thinking. In practice, however, most economic agents exhibit some degree of risk-aversion, which could be defined as “that which means you’re no longer in the limiting case where everything is a bet, and the only question is which bet maximizes your expected utility.”
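For reference, the textbook economic treatment models risk-aversion over money inside expected-utility theory, via a concave utility function (the reply below leans on this view). A minimal sketch with illustrative numbers, showing an agent who rationally declines an actuarially fair bet:

```python
import math

# A concave utility function (diminishing marginal utility of wealth).
def utility(wealth):
    return math.log(wealth)

wealth = 100.0
# Fair coin flip: win or lose $50. Expected monetary value is zero,
# but expected *utility* of gambling is below the utility of declining.
eu_gamble = 0.5 * utility(wealth + 50) + 0.5 * utility(wealth - 50)
u_decline = utility(wealth)

print(eu_gamble)   # ~4.46
print(u_decline)   # ~4.61 -- declining the fair bet maximizes utility
```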
With regard to “inherent randomness” I think we essentially agree. I tend to use the map/territory construct to talk about it, and you don’t, but in the end the only thing that matters is what predictions we can make (predictions correspond to what’s in the “map”). The main point is to avoid the mind-projection fallacy of assuming that concepts which help you think about a thing must necessarily reflect how the thing really is. You don’t actually appear to be committing any such fallacy, even though it almost sounded like you were, mainly because of our different uses of terminology. “Can I predict this fact?” is a perfectly legitimate question, so long as you don’t accidentally confuse the answer to that question with the fact itself having some mysterious quality. (I know this sounds like a pointless distinction, since you aren’t actually making the error in question. From that error it isn’t much of a leap to outright nonsense, but it’s hard to explain why when it’s so far from what either of us is actually saying.)
I am definitely not curious about the empirical question of how accurately humans can really be predicted, except in the sense that the less predictable we are, the less I get the feeling of having free will. I already know with high confidence that my internal narrative is consistent, though, so I’m not too concerned about it. The reasoning behind this is the same as the reason I feel the question of free will has already been entirely resolved. From the inside, I feel like I make decisions, and then I carry out those decisions. So long as my internal narrative leading up to a decision matches the decision itself, I feel like I have free will. I don’t see the need for any further explanation of free will, or for it to “really” exist beyond my feeling that I have it. I feel like I have it, and I know why I feel like I have it, and that’s all I need to know.
I recognize that quantum randomness is of a somewhat different nature than other uncertainty, since we cannot simply gain more facts to make it go away. However, when we make predictions, it doesn’t really matter whether our uncertainty comes from QM or from classical subjective ignorance: we have to follow the same laws of probability either way. The source matters for how we generate the probabilities, but not for how we use them. I think the core disagreement, though, is that I don’t see Knightian uncertainty as belonging to its own special class of uncertainty. We can, and indeed must, account for it in our models or they will be wrong. We can assign a probability distribution to the effect of freebits on our models, even if that distribution is extremely imprecise; those models, assuming they are generated correctly, will then give correct probabilistic predictions of human behavior. There’s also the issue of whether such effects would play anything close to a significant role in our computations; I’m extremely skeptical that they would be large enough to so much as flip a single neuron, though I am open to being proven wrong (I’m certainly not a physicist).
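To illustrate the “same laws either way” point: once each source of uncertainty is expressed as a probability distribution, the two compose by the ordinary law of total probability, whatever their origin. A toy sketch with hypothetical numbers:

```python
# Law of total probability: quantum and classical uncertainty compose
# identically once both are expressed as probabilities.

# Classical subjective uncertainty about which state was prepared.
p_prepared_plus = 0.7     # hypothetical credence that |+> was prepared

# Quantum (Born-rule) uncertainty about the outcome of a measurement
# in the computational basis, conditioned on each preparation.
p_zero_given_plus = 0.5   # |<0|+>|^2
p_zero_given_zero = 1.0   # |<0|0>|^2

# The combined prediction uses the same rule regardless of source:
p_zero = (p_prepared_plus * p_zero_given_plus
          + (1 - p_prepared_plus) * p_zero_given_zero)
print(p_zero)  # 0.65
```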
Risk-aversion is just a modifier on how the agent computes expected utility. You can’t avoid the game just by claiming you aren’t playing: maximizing expected utility is, by definition, getting the outcome you want, and decision theory is all about how to maximize it. If you’re offered a 50/50 bet at 1000:1 odds (in utils) and you refuse it, you’re not being risk-averse, you’re being stupid. Real agents are often stupid, but it doesn’t follow that being stupid is rational. Rational agents maximize utility, using decision theory. All agents always take some side of a bet, once you frame it right (and if any isomorphic rephrasing makes the assumptions used to derive probability theory hold, then they hold for the original scenario too). There is no way around the necessity of finding betting odds if you want to be right.
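Spelling out the 50/50-at-1000:1 example: since the stakes are denominated directly in utils, diminishing marginal utility is already priced in, so risk-aversion can’t rescue the refusal, and declining forgoes a large positive expectation. A one-line check:

```python
# 50/50 bet at 1000:1 odds, stakes denominated directly in utils:
# win 1000 utils with probability 0.5, lose 1 util with probability 0.5.
ev_take = 0.5 * 1000 + 0.5 * (-1)
ev_refuse = 0.0
print(ev_take, ev_refuse)  # 499.5 vs 0.0 -- refusing leaves 499.5 utils unclaimed
```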