I don’t think I understand the relevance of your example, but I agree on the bullet-swallowing point, especially as I am an inveterate bullet-dodger.
I think I should phrase this properly by dropping into the language of the Lord High Prophet of Bayes, E.T. Jaynes: it is often optimal to believe in some model with some probability based on the fixed, finite quantity of evidence we have available on which to condition, but this is suboptimal compared to something like Solomonoff Induction that can dovetail over all possible theories. We are allocating probability based on fixed evidence to a fixed set of hypotheses (those we understand well enough to evaluate them).
For instance, given all available evidence, if you haven’t heard of sub-quantum physics even at the mind-porn level, believing quantum physics to be the real physics is completely rational, except in one respect. I don’t understand algorithmic information theory well enough to quantify how much probability should be allocated to “sub-Solomonoff loss”: the possibility that we have failed to consider some explanation superior to the ones we have, even though our current best explanations adequately soak up the available evidence as narrowed, built-up probability mass. But plainly some probability should be allocated there.
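A toy sketch of what I mean, in Python (my own illustration, not anything from Jaynes): update over a fixed set of named hypotheses, but reserve a slice of prior mass for “some explanation we haven’t thought of.” The reserved slice never conditions on evidence, since we cannot compute likelihoods for hypotheses we haven’t formulated; all the names and numbers here are invented.

```python
# Bayesian updating over a fixed hypothesis set, with a reserved slice
# of probability for unconsidered explanations. Holding that slice
# fixed is a crude stand-in for "sub-Solomonoff loss": we have no
# likelihood function for hypotheses we never wrote down.
def update(priors, likelihoods, reserved_key="something else"):
    """Return the posterior, keeping the reserved slice untouched and
    renormalizing the named hypotheses around it."""
    reserved = priors[reserved_key]
    named = {h: p for h, p in priors.items() if h != reserved_key}
    unnorm = {h: p * likelihoods[h] for h, p in named.items()}
    scale = (1.0 - reserved) / sum(unnorm.values())
    posterior = {h: u * scale for h, u in unnorm.items()}
    posterior[reserved_key] = reserved
    return posterior

priors = {"quantum mechanics": 0.6, "pilot waves": 0.3,
          "something else": 0.1}
likelihoods = {"quantum mechanics": 0.9, "pilot waves": 0.05}
posterior = update(priors, likelihoods)
```

The design choice being illustrated: evidence can shuffle mass among the hypotheses we can evaluate, but the reserved slice sits outside the competition entirely.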
Why, particularly in the case of quantum physics? Because we’ve known damn well for decades that it’s an incomplete theory! If it cannot be unified with the other best-supported theory in the same domain (General Relativity), then it is incomplete. Period. Reality does not contradict itself: the river of evidence flowing into General Relativity and the river of evidence flowing into quantum mechanics cannot collide and run against each other unless we idiot humans have approximated two different perspectives (cosmic scale and micro-scale) on one underlying reality using incompatible theories. This is always and only our fault, and if we want to deal with that fault, we need to be able to quantify it.
Coincidentally, I was actually heading out to meet my dad (a physics Ph.D.), and I mentioned the paper and blog post to him to get his reaction. He asked me to send him a link, but he also pointed me at Feynman’s lecture on electrostatic analogs, which is based on one of those simple ideas that invites bullet-swallowing: The same equations have the same solutions.
This is one of those ideas that I get irrationally excited about, honestly. The first thing I thought of when you described these hydrodynamic experiments was the use of similitude in experimental modeling, which is a special case of the same idea: after you work out the equations that you would need to solve to calculate (for example) the flow of air around a wing, instead of doing a lot of intractable mathematics, you rewrite the equations in terms of dimensionless parameters like the Reynolds number and put a scale model of the wing in a wind tunnel. If you adjust the velocity, pressure, &c. correctly in your scale model, you can make the equations that you would need to solve for the scale model exactly the same as the equations for the full-sized wing … and so, when you measure a number on the scale model, you can use that number the same way that you would use the solution to your equations, and get the number for the real wing. You can do this because the same equations have the same solutions.
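A quick numerical sketch of the similitude idea, with invented numbers. Keep the Reynolds number Re = ρvL/μ equal between the full-scale wing and the model, and the nondimensionalized flow equations become identical:

```python
# Reynolds-number similitude: hold Re = rho * v * L / mu fixed while
# the geometry shrinks, and the scale-model equations match the
# full-scale ones. (Numbers invented for illustration.)
def reynolds(rho, v, length, mu):
    """Reynolds number: ratio of inertial to viscous forces."""
    return rho * v * length / mu

def matched_model_speed(v_full, chord_full, chord_model):
    """Model airspeed that keeps Re fixed when the same fluid (same
    rho and mu) flows over a geometrically scaled model."""
    return v_full * (chord_full / chord_model)

rho_air, mu_air = 1.225, 1.81e-5          # sea-level air
v_full, chord_full = 60.0, 2.0            # full-sized wing, m/s and m
chord_model = 0.2                         # 1:10 scale model
v_model = matched_model_speed(v_full, chord_full, chord_model)

re_full = reynolds(rho_air, v_full, chord_full, mu_air)
re_model = reynolds(rho_air, v_model, chord_model, mu_air)
# Equal Reynolds numbers: the same equations, hence the same solutions.
```

(Of course, 600 m/s of air is no longer anywhere near incompressible flow, which is exactly why real tunnels adjust pressure and density as well as velocity, as noted above.)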
For that matter, one of the stories my dad wrote on his blog about his Ph.D. research mentions a conversation in which another physicist pointed out a possible source of interesting complexity in gravitational waves by metaphor to electromagnetic waves—a metaphor whose validity came from the same equations having the same solutions.
I have to say, though, that my dad does not get excited about this kind of thing, and he explained to me why in a way which parallels Feynman’s remark at the end of the lecture: these physical models, these analog computations, are approximate. Feynman talks about these similarities being used to design photomultiplier tubes, but explains—in a lecture delivered before 1964, mind—that “[f]or the most accurate work, it is better to determine the fields by numerical methods, using the large electronic computing machines.” And at the end of section 4.7 of the paper you linked to:
From the value of alpha, it seems that the electrostatic force is about two orders of magnitude weaker than the mechanical force between resonant bubbles. This suggests one limitation of the bouncing-droplet experiment as a model of quantum mechanics, namely that spherically-symmetric resonant solutions are not a good model for the electron.
On the basis of these factors, I think I would fully endorse Brady and Anderson’s conclusions in the paper: that these experiments have potential as pedagogical tools, illuminating some of the confusing aspects of quantum mechanics—such as the way multiple particles interacting produce a waveform that is nevertheless defined by a single amplitude and phase at every point. By contrast, when the blogger you link to says:
What are the quantum parallels for the effective external forces in these hydrodynamic quantum analogs, i.e. gravity and the vibrations of the table? Not all particles carry electric charge, or weak or color charge. But they are all affected by gravity. Is there a connection here to gravity? Quantum gravity?
...all I can think is, “does this person understand what the word ‘analogue’ means?” There is no earthly reason to imagine that the force of gravity on the droplet and liquid surface should have anything to do with gravity acting on particles in quantum waveforms. Actually, it’s worse than that: we can know that it does not, in the same way that, among simple harmonic oscillators, the gravity force on pendulums has nothing to do with the gravity force on a mass on a spring. They are the same equations, and the equations in the latter case don’t have gravity in them … so whatever work gravity does in the solution of the first equation is work it doesn’t do in the solution of the second.
I may be doing the man a gross injustice, but this ain’t no way to run a railroad.
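The pendulum/spring point fits in a few lines (my numbers; the point is only where g does and doesn’t appear):

```python
import math

# Both systems obey x'' = -omega**2 * x; gravity enters only the
# pendulum's omega. Choose the spring stiffness so the two omegas
# coincide: the solutions are then identical, even though g does
# "work" in one derivation and is absent from the other.
g = 9.81                          # m/s^2 -- only the pendulum sees this
L = 0.5                           # m, pendulum length
m = 2.0                           # kg, mass on the spring

omega_pendulum = math.sqrt(g / L)
k = m * g / L                     # stiffness chosen to match frequencies
omega_spring = math.sqrt(k / m)   # no g anywhere in this expression
```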
It is a neat toy, and I’m glad you posted the link to it.
The reason I got so mad is that Warren Huelsnitz’s attempt to draw inferences from these—even weak, probabilistic, Bayesian inferences—was appallingly ignorant for someone who claims to be a high-energy physicist. What he was doing would be like my dad, in the story from his blog post, trying to prove that gravity was created by electromagnetic forces because Roger Blandford alluded to an electromagnetic case in a conversation about gravity waves. My dad knew that wasn’t a true lesson to learn from the metaphor, and Richard Feynman agrees with him:
However, a question surely suggests itself at the end of such a discussion: Why are the equations from different phenomena so similar? We might say: “It is the underlying unity of nature.” But what does that mean? What could such a statement mean? It could mean simply that the equations are similar for different phenomena; but then, of course, we have given no explanation. The “underlying unity” might mean that everything is made out of the same stuff, and therefore obeys the same equations. That sounds like a good explanation, but let us think. The electrostatic potential, the diffusion of neutrons, heat flow—are we really dealing with the same stuff? Can we really imagine that the electrostatic potential is physically identical to the temperature, or to the density of particles? Certainly ϕ is not exactly the same as the thermal energy of particles. The displacement of a membrane is certainly not like a temperature. Why, then, is there “an underlying unity”?
Feynman goes on to explain that many of the analogues are approximations of some kind, and so the similarity of equations is probably better understood as being a side effect of this. (I would add: much in the same way that everything is linear when plotted log-log with a fat magic marker.) Huelsnitz, on the other hand, seems to behave as if he expects to learn something about the evolutionary history of the Corvidae family by examining crowbars … which is simply asinine.
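Feynman’s question can be prodded at with a toy computation of my own (not from the lecture): one relaxation solver for Laplace’s equation, fed electrostatic boundary values or thermal ones, returns literally the same numbers.

```python
# One Jacobi-relaxation solver for the 1-D Laplace equation u'' = 0.
# Feed it boundary voltages and call the answer "potential"; feed it
# boundary temperatures and call it "steady-state temperature". The
# solver neither knows nor cares: same equations, same solutions.
def solve_laplace_1d(left, right, n=50, iters=20000):
    """Fixed-endpoint relaxation on n interior grid points."""
    u = [0.0] * (n + 2)
    u[0], u[-1] = left, right
    for _ in range(iters):
        # Jacobi step: comprehension reads old values, then assigns.
        u[1:-1] = [(u[i - 1] + u[i + 1]) / 2.0 for i in range(1, n + 1)]
    return u

phi = solve_laplace_1d(0.0, 100.0)    # read as volts
temp = solve_laplace_1d(0.0, 100.0)   # read as degrees Celsius
```

The physics differs; the mathematics, and therefore the computation, does not.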
Because we’ve known damn well for decades that it’s an incomplete theory! If it cannot be unified with the other best-supported theory in the same domain (General Relativity), then it is incomplete. Period.
We don’t actually know that. Weinberg has suggested that GR might be asymptotically safe. Most people seem to think this isn’t the case, but no one has been able to show that he is wrong. We can rephrase your argument, and instead of putting weight on theories for which we have no evidence, dump the “built up” probability mass on the idea that the two theories don’t actually disagree.
Certainly the amount of “contradiction” between GR and quantum field theories is often overblown. You can, for instance, treat GR as an effective field theory and compute quantum corrections to various things. They are just too small to matter/measure.
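To put a rough number on “too small to measure”: in the effective-field-theory treatment, the leading quantum correction to the Newtonian potential is a fractional shift of order Għ/(r²c³). The precise coefficient, 41/(10π), is from Bjerrum-Bohr, Donoghue and Holstein as I recall it and should be treated as my addition, not something from this thread; the scaling is the robust part.

```python
import math

# Back-of-the-envelope for the EFT quantum correction to Newton's
# potential: fractional shift ~ (41/(10*pi)) * G * hbar / (r**2 * c**3).
# Coefficient from the literature as I recall it (an assumption here).
G = 6.674e-11       # m^3 kg^-1 s^-2
hbar = 1.055e-34    # J s
c = 2.998e8         # m/s

def quantum_fraction(r):
    """Dimensionless quantum correction to -G*m1*m2/r at separation r (m)."""
    return (41.0 / (10.0 * math.pi)) * G * hbar / (r**2 * c**3)

frac_1m = quantum_fraction(1.0)   # ~1e-70 at one metre: hopeless
```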
(That said, the experiments sound awesome! Any particular place you’d recommend to start reading?)
There don’t seem to be many popularizations. This looks fun and as far as I can tell is neither lying nor bullshitting us. This is an actual published paper, for those with the maths to really check.
Why draw strong conclusions? Let papers be published and conferences held. It’s a neat toy to look at, though.
...huh.
I have to go, but downvote this comment if I don’t reply again in the next five hours. I’ll be back.
Edit: Function completed; withdrawing comment.