With regard to “inherent randomness,” I think we essentially agree. I tend to use the map/territory construct to talk about it and you don’t, but in the end the only thing that matters is what predictions we can make (the predictions we make correspond to what’s in the “map”). The main point is to avoid the mind-projection fallacy of assuming that concepts which help you think about a thing must say something about how the thing really is. You don’t actually appear to be committing any such fallacy, even though it almost sounded like you were, mostly because of differing terminology. “Can I predict this fact?” is a perfectly legitimate question, so long as you don’t accidentally confuse the answer with the fact itself having some mysterious quality. (I know this sounds like a pointless distinction, because you aren’t actually making the error in question. It’s not much of a leap from there to spouting nonsense, but it’s hard to explain why the leap is small when the nonsense is so far from anything either of us is actually saying.)
I am pretty definitely not curious about the empirical question of how accurately humans can actually be predicted, except in the sense that the less predictable we are, the less I get the feeling of having free will. I’m already highly confident that my internal narrative is consistent, though, so I’m not too concerned about it. The reasoning behind this is the same as the reason I feel the question of free will has already been entirely resolved: from the inside, I feel like I make decisions, and then I carry out those decisions. So long as my internal narrative leading up to a decision matches the decision itself, I feel like I have free will. I don’t see the need for any further explanation of free will, or for it to really, truly exist beyond my feeling that I have it. I feel like I have it, and I know why I feel like I have it, and that’s all I need to know.
I recognize that quantum randomness is of a somewhat different nature than other uncertainty, since we cannot simply gather more facts to make it go away. When we make predictions, however, it doesn’t really matter whether our uncertainty comes from QM or from classical subjective uncertainty: we have to follow the same laws of probability either way. The source matters for how we generate the probabilities, but not for how we use them. I think the core disagreement, though, is that I don’t see Knightian uncertainty as belonging to its own special class of uncertainty. We can, and indeed must, account for it in our models, or they will be wrong. We can put a probability distribution on the effect of freebits on our models, even if that distribution is extremely imprecise, and those models, assuming they are built correctly, will then give correct probabilistic predictions of human behavior. There’s also the question of whether such effects would play anything close to a significant role in our computations; I’m extremely skeptical that they would be large enough to so much as flip a single neuron, though I’m open to being proven wrong (I’m certainly not a physicist).
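To make the “same laws of probability either way” point concrete, here is a minimal sketch of what I mean by folding freebit-style uncertainty into an ordinary model: you just treat it as one more variable to marginalize over via the law of total probability. Every name and number below is an illustrative assumption of mine, not physics and not anyone’s actual model.

```python
# Minimal sketch: "freebit" uncertainty handled as an ordinary random variable.
# All probabilities here are made-up illustrative assumptions.

# Hypothetical, very imprecise prior on whether freebit effects flip a given neuron.
p_flip_low, p_flip_high = 1e-9, 1e-3  # wide bounds reflecting our ignorance

def p_behavior(p_flip):
    """P(behavior) by the law of total probability, conditioning on a flip."""
    p_behavior_given_flip = 0.60     # assumed conditional, for illustration
    p_behavior_given_no_flip = 0.59  # assumed conditional, for illustration
    return p_flip * p_behavior_given_flip + (1 - p_flip) * p_behavior_given_no_flip

# Even with the prior known only to within six orders of magnitude,
# the prediction barely moves; no special "Knightian" machinery is needed.
print(p_behavior(p_flip_low), p_behavior(p_flip_high))
```

The point of the sketch is just that an extremely imprecise prior is still a prior: the model spits out a probabilistic prediction either way, and the imprecision shows up as a (here, tiny) spread in that prediction.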
Risk aversion is just a modifier on how the agent computes expected utility. You can’t avoid the game by claiming you aren’t playing: maximizing expected utility is, by definition, getting the outcome you want, and decision theory is all about how to maximize it. If you’re offered a 50/50 bet at 1000:1 odds (in utils) and you refuse it, you’re not being risk-averse, you’re being stupid. Real agents are often stupid, but it doesn’t follow that being stupid is rational; rational agents maximize utility, using decision theory. Every agent ends up taking some side of a bet once you frame the situation correctly (and if any isomorphic phrasing makes the assumptions used to derive probability theory true, then they should hold for the original scenario too). There is no way to avoid the need to find betting odds if you want to be right.
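Just to spell out the arithmetic on that bet, here is a tiny sketch, assuming “1000:1 odds (in utils)” means winning gains 1000 utils and losing costs 1, and that refusing leaves you at 0:

```python
# Expected utility of the 50/50 bet at 1000:1 odds (in utils); purely illustrative.
p_win = 0.5
eu_take = p_win * 1000 + (1 - p_win) * (-1)   # = 499.5 utils
eu_refuse = 0.0                               # declining the bet changes nothing

print(eu_take > eu_refuse)  # True: refusing forfeits ~499.5 expected utils
```

Since the bet is already stated in utils, any risk adjustment has already been priced in, so there is no version of “but I’m risk-averse” that rescues refusing it.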