As y’all know, I agree with Hume (by way of Jaynes) that the error of projecting internal states of the mind onto the external world is an incredibly common and fundamental hazard of philosophy.
Probability is in the mind to start with; if I think that 103,993 has a 20% chance of being prime (I haven’t tried it, but the Prime Number Theorem, plus its not being divisible by 2, 3, or 5, gives a wild ballpark estimate), then this uncertainty is a fact about my state of mind, not a fact about the number 103,993. Even if there are many-worlds whose frequencies correspond to some uncertainties, that itself is just a fact; probability is in the map, not in the territory.
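[Editor's note: purely as illustration, here is a minimal sketch of that back-of-envelope estimate. The 3.75 coprimality correction factor is an addition of mine, not Eliezer's, and the code (assuming Python 3.8+) is a sketch, not anything from the original comment.]

```python
import math

n = 103993

# Prime Number Theorem heuristic: a random integer near n is prime
# with probability roughly 1/ln(n).
base_rate = 1 / math.log(n)  # ~8.7%

# Knowing n is not divisible by 2, 3, or 5 boosts the odds by
# (2/1)*(3/2)*(5/4) = 3.75, since those candidate factors are ruled out.
boost = (2 / 1) * (3 / 2) * (5 / 4)
print(f"heuristic P(prime) ~ {base_rate * boost:.0%}")  # ~32%, same ballpark

# The uncertainty is in the map, not in the number: trial division
# settles the question outright.
print(all(n % d for d in range(2, math.isqrt(n) + 1)))
```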
Then we have Knightian uncertainty, which is how I feel when I try to estimate AI timelines: when I query my brain on different occasions it returns different probability estimates, and I know there are going to be some effects which aren’t on my causal map. This is a kind of doubly-subjective double-uncertainty. Of course you still have to turn it into betting odds, on pain of violating von Neumann-Morgenstern; see also the Ellsberg paradox for the inconsistent decision-making that results when ambiguity is given special treatment.
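[Editor's note: a minimal worked version of the Ellsberg paradox just mentioned, using the classic urn of 30 red balls plus 60 black-or-yellow balls in unknown proportion; the toy scan below is mine. The common preference pattern, red over black but black-or-yellow over red-or-yellow, cannot be rationalized by any single prior.]

```python
# Ellsberg urn: 30 red balls, 60 black-or-yellow in unknown proportion.
# Typical subjects prefer betting on red over black, yet also prefer
# black-or-yellow over red-or-yellow. Scan every candidate P(black):
p_red = 30 / 90
priors = [i / 300 for i in range(201)]  # P(black) ranges over [0, 2/3]

consistent = [
    p_black for p_black in priors
    if p_red > p_black                                       # red preferred to black
    and p_black + (2/3 - p_black) > p_red + (2/3 - p_black)  # b-or-y preferred to r-or-y
]
print(consistent)  # [] -- no single probability assignment fits both choices
```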
Taking this doubly-map-level property of Knightian uncertainty (a sort of confusion about probabilities) and trying to reify it in the territory as a kind of stuff (encoded in hidden interstices of QM) which somehow plays an irreplaceable functional role in cognition is...
...probably not going to be the best-received philosophical speculation ever posted to LW. I mean, as a species we should know by now that this kind of idea just basically never turns out to be correct. If X is confusing and Y is confusing, that does not make X a good explanation for Y when X makes no new experimental predictions about Y even in retrospect; thou shalt not answer confusing questions by postulating new mysterious opaque substances; etc.
Hi Eliezer,

(1) One of the conclusions I came to from my own study of QM was that we can’t always draw as sharp a line as we’d like between “map” and “territory.” Yes, there are some things, like Stegosauruses, that seem clearly part of the “territory”; and others, like the idea of Stegosauruses, that seem clearly part of the “map.” But what about (say) a quantum mixed state? Well, the probability distribution aspect of a mixed state seems pretty “map-like,” while the quantum superposition aspect seems pretty “territory-like” … but oops! we can decompose the same mixed state into a probability distribution over superpositions in infinitely many nonequivalent ways, and get exactly the same experimental predictions regardless of what choice we make.
(Since you approvingly mentioned Jaynes, I should quote the famous passage where he makes the same point: “But our present QM formalism is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble.”)
Indeed, this strikes me as an example where—to put it in terms LW readers will understand—the exact demarcation line between “map” and “territory” is empirically sterile; it doesn’t help at all in constraining our anticipated future experiences. (Which, again, is not to deny that certain aspects of our experience are “definitely map-like,” while others are “definitely territory-like.”)
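[Editor's note: a minimal numpy sketch of the decomposition point above, using the maximally mixed qubit state as the example (the example is mine): an equal mixture of |0> and |1>, and an equal mixture of |+> and |−>, yield the identical density matrix, hence identical experimental predictions.]

```python
import numpy as np

def density_matrix(ensemble):
    # rho = sum_i p_i |psi_i><psi_i| over an ensemble of (p, psi) pairs
    return sum(p * np.outer(psi, psi.conj()) for p, psi in ensemble)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

# Two different "probability distributions over superpositions"...
rho_a = density_matrix([(0.5, ket0), (0.5, ket1)])
rho_b = density_matrix([(0.5, plus), (0.5, minus)])

# ...are experimentally indistinguishable: both equal I/2.
print(np.allclose(rho_a, rho_b))  # True
```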
(2) It’s not entirely true that the ideas I’m playing with “make no new experimental predictions”—see Section 9.
(3) I don’t agree that Knightian uncertainty must always be turned into betting odds, on pain of violating this or that result in decision theory. As I said in this essay, if you look at the standard derivations of probability theory, they typically make a crucial non-obvious assumption, like “given any bet, a rational agent will always be willing to take either one side or the other.” If that assumption is dropped, then the path is open to probability intervals, Dempster-Shafer, and other versions of Knightian uncertainty.
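[Editor's note: for readers unfamiliar with Dempster-Shafer, here is a minimal sketch, with toy numbers of my own, of how dropping that assumption yields probability intervals: mass left on the whole frame represents unresolved ambiguity, and belief/plausibility bracket the ordinary point probability.]

```python
# A toy Dempster-Shafer mass assignment: mass committed to hypotheses,
# with 0.5 left on the whole frame as unresolved (Knightian) ambiguity.
frame = frozenset({"prime", "composite"})
masses = {
    frozenset({"prime"}): 0.2,
    frozenset({"composite"}): 0.3,
    frame: 0.5,
}

def belief(h):        # total mass that *entails* h
    return sum(m for s, m in masses.items() if s <= h)

def plausibility(h):  # total mass *consistent* with h
    return sum(m for s, m in masses.items() if s & h)

h = frozenset({"prime"})
print(belief(h), plausibility(h))  # 0.2 0.7 -- an interval, not a point probability
```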
Well, the probability distribution aspect of a mixed state seems pretty “map-like,” while the quantum superposition aspect seems pretty “territory-like” … but oops! we can decompose the same mixed state into a probability distribution over superpositions in infinitely many nonequivalent ways, and get exactly the same experimental predictions regardless of what choice we make.
I think the underlying problem here is that we’re using the word “probability” to denote at least two different things, where those things are causally related in ways that keep them almost consistent with each other but not quite. Any system which obeys the axioms of Cox’s theorem can potentially be called probability. The numbers representing subjective judgements of an idealized reasoner satisfy those axioms; call these reasoner subjective probabilities, P_r(event,reasoner). The numbers representing a quantum mixed state do too; call these quantum probabilities, P_q(event,observer).
For an idealized reasoner who knows everything about quantum physics, has unlimited computational power, and has some numbers from the quantum system to start with, these two sets of numbers can be kept consistent: if reasoner = observer, then P_r(x, reasoner) = P_q(x, observer). In other words, there is a bit of map and a bit of territory, and these contain the exact same numbers, within the intersection of their domains. The numeric equivalence makes it tempting to merge them into one entity, but that entity can’t be localized to the map or the territory, because it contains one part from each. And the equivalence breaks down when you step outside of P_q’s domain: if a portion of a quantum system is causally isolated from an observer, then P_q becomes undefined, while P_r still has a value and still obeys Cox’s axioms.
If the domain of P_r failed to cover all possible events, that would be a huge deal, philosophically. But P_q being undefined in some places isn’t nearly as interesting.
1) I’m not so clear that the map/territory distinction in QM is entirely relevant to the map/territory distinction with regard to uncertainty. The fact that we do not know a fact does not describe in any way the fact itself. Even if we could not, by physical law, possibly know the fact, this still does not equate to the fact having inherent unknowability. There is no functional difference between uncertainty caused by the proposed QM effects and uncertainty caused by any other factor. So long as we are uncertain for any reason, we are uncertain. We can map that uncertainty with a probability distribution, which will exist in the map; in the territory everything is determined (at least according to the Many-Worlds interpretation Eliezer subscribes to, which is the best explanation I’ve seen to date, although I’ve not studied the subject extensively), even if our experiences are probabilistic. Even if it turns out that reality really does just throw dice sometimes, that won’t change our ability to map probability over our uncertainty. The proposed source of randomness is not any more or less “really random” than other QM effects, and we can still map a probability distribution over it. The point of drawing the map/territory distinction is to avoid the error of projecting special qualities onto parts of the territory that should exist only on the map. “Here there be randomness” is fine on the map to mark your uncertainty, but don’t read that as ascribing a special “random” property to the territory itself. “Randomness” is not a fundamental feature of reality, as a concept, even if sometimes reality is literally picking numbers at random; you would be mistaken if you tried to draw on that fundamental “randomness” in any way that was not exactly equivalent to any other uncertainty, because on the map it looks exactly the same.
2) I’m not really qualified to evaluate such claims.
3) Refusing to bet is, itself, just making a different bet. Refusing a 50/50 bet with a 1000:1 payoff is just as stupid as picking the wrong side. Any such refusal is just selecting the alternative option of “100%: no loss, no gain,” and decision theories are certainly able to handle options of that nature. Plus, often in reality there is no way to avoid a bet entirely; usually “do nothing” is just one side of the bet. You can’t ever escape the results of decision theory; you are guaranteed to get worse payoffs on average by refusing to take the recommended actions, even in the real world. Computational limitations blunt this somewhat, in that you usually wouldn’t be implementing a decision theory exactly anyway, or would only be approximating one, but you will still lose out on average.
“Even if we could not, by physical law, possibly know the fact, this still does not equate to the fact having inherent unknowability.”
I think the sentence above nicely pinpoints where I part ways from you and Eliezer. To put it bluntly, if a fact is impossible for any physical agent to learn, according to the laws of physics, then that’s “inherently unknowable” enough for me! :-) Or to say it even more strongly: I don’t actually care much whether someone chooses to regard the unknowability of such a fact as “part of the map” or “part of the territory”—any more than, if a bear were chasing me, I’d worry about whether aggression was an intrinsic attribute of the bear, or an attribute of my human understanding of the bear. In the latter case, I mostly just want to know what the bear will do. Likewise in the former case, I mostly just want to know whether the fact is knowable—and if it isn’t, then why! I find it strange that, in the free-will discussion, so many commentators seem to pass over the empirical question (in what senses can human decisions actually be predicted?) without even evincing curiosity about it, in their rush to argue over the definitions of words. (In AI, the analogue would be the people who argued for centuries about whether a machine could be conscious, without—until Turing—ever cleanly separating out the “simpler” question, of whether a machine could be built that couldn’t be empirically distinguished from entities we regard as conscious.) A central reason why I wrote the essay was to try to provide a corrective to this (by my lights) anti-empirical tendency.
“you would be mistaken if you tried to draw on that fundamental “randomness” in any way that was not exactly equivalent to any other uncertainty, because on the map it looks exactly the same.”
Actually, the randomness that arises from quantum measurement is empirically distinguishable from other types of randomness. For while we can measure a state |psi> in a basis not containing |psi>, and thereby get a random outcome, we also could’ve measured |psi> in a basis containing |psi>—in which case, we would’ve confirmed that a measurement in the first basis must give a random outcome, whose probability distribution is exactly calculable by the Born rule, and which can’t be explained in terms of subjective ignorance of any pre-existing degrees of freedom unless we give up on locality.
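[Editor's note: a minimal numerical sketch of that distinction, taking |psi> = |+> as the example (the example is mine): one measurement basis confirms the state deterministically, while the other yields Born-rule randomness with exactly calculable probabilities.]

```python
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)  # |psi> = |+>

# Measuring in a basis containing |psi> (the X basis) is deterministic:
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
print([round(abs(b @ psi) ** 2, 3) for b in x_basis])  # [1.0, 0.0]

# Measuring in a basis NOT containing |psi> (the Z basis) gives a
# random outcome whose distribution the Born rule pins down exactly:
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print([round(abs(b @ psi) ** 2, 3) for b in z_basis])  # [0.5, 0.5]
```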
But the more basic point is that, if freebits existed, then they wouldn’t be “random,” as I use the term “random”: instead they’d be subject to Knightian uncertainty. So they couldn’t be collapsed with the randomness arising from (e.g.) the Born rule or statistical coarse-graining, for that reason even if not also for others.
“Refusing to bet is, itself, just making a different bet.”
Well, I’d regard that statement as the defining axiom of a certain limiting case of economic thinking. In practice, however, most economic agents exhibit some degree of risk-aversion, which could be defined as “that which means you’re no longer in the limiting case where everything is a bet, and the only question is which bet maximizes your expected utility.”
With regard to “inherent randomness” I think we essentially agree. I tend to use the map/territory construct to talk about it, and you don’t, but in the end the only thing that matters is what predictions we can make (predictions made correspond to what’s in the “map”). The main point there is to avoid the mind-projection fallacy of purporting that concepts which help you think about a thing must necessarily relate to how the thing really is. You appear to not actually be committing any such fallacy, even though it almost sounded like you were, due primarily to different uses of terminology. “Can I predict this fact?” is a perfectly legitimate question, so long as you don’t accidentally mix up the answer to that question with something about the fact having some mysterious quality. (I know that this sounds like a pointless distinction, because you aren’t actually making the error in question. It’s not much of a leap to start spouting nonsense, but it’s hard to explain why it isn’t much of a leap when it’s so far from what either of us is actually saying.)
I am pretty definitely not curious about the empirical question of how accurately humans can really be predicted, except in some sense that the less predictable we are the less I get the feeling of having free will. I already know with high confidence that my internal narrative is consistent though, so I’m not too concerned about it. The main reasoning behind this is the same as why I feel like the question of free will has already been entirely resolved. From the inside, I feel like I make decisions, and then I carry out those decisions. So long as my internal narrative leading up to a decision matches up with my decision, I feel like I have free will. I don’t really see the need for any further explanation of free will, or how it really truly exists outside of simply me feeling like I have it. I feel like I have it, and I know why I feel like I have it, and that’s all I need to know.
I recognize that quantum randomness is of a somewhat different nature than other uncertainty, since we cannot simply gain more facts to make it go away. However, when we make predictions, it doesn’t really matter whether our uncertainty comes from QM or from classical subjective uncertainty. We have to follow the same laws of probability either way. It matters with regard to how we generate the probabilities, but not with regard to how we use them. I think the core disagreement, though, is that I don’t see Knightian uncertainty as being in its own special class of uncertainty. We can, and indeed have to, account for it in our models, or they will be wrong. We can generate a probability distribution for the effect of freebits upon our models, even if that distribution is extremely imprecise. Those models, assuming they are generated correctly, will then give correct probabilistic predictions of human behavior. There’s also the issue of whether such effects would actually have anything close to a significant role in our computations; I’m extremely skeptical that such effects would be large enough to so much as flip a single neuron, though I am open to being proven wrong (I’m certainly not a physicist).
Risk-aversion is just a modifier on how the agent computes expected utility. You can’t avoid the game just by claiming you aren’t playing; maximizing expected utility, by definition, is the outcome you want, and decision theory is all about how to maximize utility. If you’re offered a 50/50 bet at 1000:1 odds (in utils) and you refuse it, you’re not being risk-averse, you’re being stupid. Real agents are often stupid, but it doesn’t follow that being stupid is rational. Rational agents maximize utility, using decision theory. All agents always take some side of a bet, once you frame it right (and if any isomorphic phrasing makes the assumptions used to derive probability theory true, then those assumptions should hold for the original scenario too). There is no way to avoid the necessity of finding betting odds if you want to be right.
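[Editor's note: a minimal sketch of "risk-aversion as a modifier on expected utility," with toy numbers of my own and log utility over wealth: the same expected-utility maximizer declines a merely fair bet but accepts the 1000:1 bet, consistent with both comments above.]

```python
import math

def eu_of_bet(wealth, stake, p_win, payout_ratio, u=math.log):
    # expected utility of risking `stake` to win `stake * payout_ratio`
    return p_win * u(wealth + stake * payout_ratio) + (1 - p_win) * u(wealth - stake)

wealth, stake = 100.0, 90.0

# A log-utility agent declines a fair 50/50 double-or-nothing on most of
# its wealth: "risk-averse," yet still maximizing expected utility.
print(eu_of_bet(wealth, stake, 0.5, 1.0) > math.log(wealth))     # False: decline

# The same agent takes the 50/50 bet at 1000:1 odds.
print(eu_of_bet(wealth, stake, 0.5, 1000.0) > math.log(wealth))  # True: accept
```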
Eliezer, with due respect, your comment consisted of re-iterating a bunch of basic arguments that Scott has seen many times before, without even attempting to engage with any of Scott’s actual arguments. This seems a bit uncharitable...
The entanglement(s) of hot-noisy-evolved biological cognition with abstract ideals of cognition that Eliezer Yudkowsky vividly describes in Harry Potter and the Methods of Rationality, and the quantum entanglement(s) of dynamical flow with the physical processes of cognition that Scott Aaronson vividly describes in The Ghost in the Quantum Turing Machine, both find further mathematical/social/philosophical echoes in Joseph Landsberg’s Tensors: Geometry and Applications (2012), specifically in its thought-provoking introductory section 0.3, “Clash of Cultures” (the introduction is available as a PDF online).
E.g., the above discussions relating to “map versus object” distinctions can be summarized by Landsberg’s remark:
“These conversations [are] very stressful to all involved … there are language and even philosophical barriers to be overcome.”
The Yudkowsky/Aaronson philosophical divide is vividly mirrored in the various divides that Landsberg describes between geometers and algebraists, and mathematicians and engineers.
Question: Has it happened before that philosophical conundrums have arisen in the course of STEM investigation, and then been largely or even entirely resolved by further STEM progress?
Answer: Yes, of course (beginning for example with Isaac Newton’s obvious-yet-wrong notion that “absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external”).
Conclusion: It may be that, in coming decades, the philosophical debate(s) between Yudkowsky and Aaronson will be largely or even entirely resolved by mathematical discourse following the roadmap laid down by Landsberg’s outstanding text.
An elaboration of the above argument now appears on Shtetl-Optimized, essentially as a meditation on the question: what strictly mathematical proposition would comprise rationally convincing evidence that the key linear-quantum postulates of “The Ghost in the Quantum Turing Machine” amount to “an unredeemed claim [that has] become a roadblock rather than an inspiration” (to borrow an apt phrase from Jaffe and Quinn’s arXiv:math/9307227)?
Readers of Not Even Wrong seeking further (strictly mathematical) illumination in regard to these issues may wish to consult Arnold Neumaier and Dennis Westra’s textbook-in-progress Classical and Quantum Mechanics via Lie Algebras (arXiv:0810.1019, 2011), whose introduction states:
“The book should serve as an appetizer, inviting the reader to go more deeply into these fascinating, interdisciplinary fields of science. … [We] focus attention on the simplicity and beauty of theoretical physics, which is often hidden in a jungle of techniques for estimating or calculating quantities of interest.”
That the Neumaier/Westra textbook is an unfinished work-in-progress constitutes prima facie proof that the final tractatus upon these much-discussed logico-physico-philosophicus issues has yet to be written! :)
Once more with feeling:
If there is a kind of probability in the mind, that doesn’t mean there isn’t another kind in reality.
You can’t decide the nature of reality with armchair arguments.