Maybe so, but this also assumes that you’re good at determining who’s an idiot. Many people are not, but think they are. So you need to consider that if you make a policy of “don’t argue with idiots” widespread, it will be adopted by people with imperfect idiot-detectors. (And I’m pretty sure that many common LW positions would be considered idiocy in the larger world.)
Consider also that “don’t argue with idiots” has much of the same superficial appeal as “allow the government to censor idiots”. The ACLU defends Nazis for a reason, even though they’re pretty obviously idiots: any measures taken against idiots will be taken against everyone else, too.
(And I’m pretty sure that many common LW positions would be considered idiocy in the larger world.)
Having come from there, I can say the general perception is not that LW-ers are idiots, but that our positions are the kind of deluded crackpot nonsense smart people make up to believe in. Of course, that’s largely for the more abstruse stuff, as people in the outside world will either grudgingly admit the uses of Bayesian reasoning and debiasing or just fail to understand what they are.
A large part of the problem is that all the lessons of Traditional Rationality teach us to guard against actually arriving at conclusions before amassing what I think one Sequence post called “mountains of evidence”. The strength and stridency with which LW believes and believes in certain things fail a “smell test” for overconfidence, even though the really smelly things (like, for example, cryonics) are usually actively debated on LW itself (I recall reading in this year’s survey that the mean LW-er believes cryonics has a 14% chance of working, which is lower than people with less rationality training estimate).
So in contradistinction to Traditional Rationality (as practiced by almost everyone with a remotely scientific education), we are largely defined (as was noted in the survey) by our dedication to Bayesian reasoning, and our willingness to take ideas seriously, and thus come to probabilistic-but-confident conclusions while the rest of the world sits on its hands waiting for further information. Well, that and our rabid naturalism on philosophical topics.
A large part of the problem is that all the lessons of Traditional Rationality teach us to guard against actually arriving at conclusions before amassing what I think one Sequence post called “mountains of evidence”.
Except for scientific research, which will happily accept p < 0.05 to publish the most improbable claims.

No, “real science” requires more evidence than that: 5 sigma in HEP. p < 0.05 is the preserve of “soft science”.
And even with more than 5 sigma people will be like ‘we probably screwed up somewhere’ when the claim is sufficiently improbable, see e.g. the last paragraph before the acknowledgements in arXiv:1109.4897v1.
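To put rough numbers on that gap, here is a minimal sketch in Python. The million-to-one prior and the use of 1/p as a cap on the strength of the evidence are illustrative assumptions for a toy two-hypothesis setup, not how HEP significance is actually assessed:

```python
from scipy.stats import norm

prior_odds = 1e-6            # assumed: a million-to-one-against claim
for sigma in (1.96, 5.0):    # roughly the p < 0.05 threshold vs. the HEP convention
    p = norm.sf(sigma)       # one-sided p-value for an n-sigma excess
    # Crude cap on the likelihood ratio: pretend the alternative predicted
    # exactly what was seen, so the evidence is worth at most about 1/p.
    posterior_odds = prior_odds / p
    print(f"{sigma:>4} sigma: p = {p:.2e}, posterior odds <= {posterior_odds:.2e}")
```

On those deliberately generous assumptions, p < 0.05 still leaves the claim at tens of thousands to one against, while 5 sigma is enough to pull a million-to-one prior up to better-than-even odds.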
Having come from there, I can say the general perception is not that LW-ers are idiots, but that our positions are the kind of deluded crackpot nonsense smart people make up to believe in. Of course, that’s largely for the more abstruse stuff, as people in the outside world will either grudgingly admit the uses of Bayesian reasoning and debiasing or just fail to understand what they are.
There’s also a tendency to be doctrinaire among LW-ers that people may be reacting to—an obvious manifestation of this is our use of local jargon and reverential capitalization of “the Sequences” as if these words and posts have significance beyond the way they illuminate some good ideas. Those are social markers of deluded crackpots, I think.
Yes, very definitely so. The other thing that makes LW seem… a little bit silly sometimes is the degree of bullet swallowing in the LW canon.
For instance, just today I spent a short while on the internet reading some good old-fashioned “mind porn” in the form of Yves Couder’s experiments with hydrodynamics that replicate many aspects of quantum mechanics. This is really developing into quite a nice little subfield: direct physical experiments can be and are done, and it has everything you could want as a reductive explanation of quantum mechanics. Plus, it’s actually classical: it yields a full explanation of the real, physical, deterministic phenomena underlying apparently quantum ones.
But if you swallowed your bullet, you’ll never discover it yourself. In fact, if you swallow bullets in general, I find it kind of difficult to imagine how you could function as a researcher, given that a large component of research consists of inventing new models to absorb probability mass that currently has nowhere better to go than a known-wrong model.
Yves Couder’s experiments are neat, but the underlying ‘quantum’ interpretation is basically just Bohm’s interpretation. The water acts as a pilot wave, and the silicone oil drops act as Bohmian particles. It’s very cool that we can find a classical pilot-wave system, but it’s not pointing in a new interpretational direction.

Personally, I would love Bohm, but for the problem that it generalizes so poorly to quantum field theories. It’s a beautiful, real-feeling interpretation.

Edit: Also neat: the best physical analogue to a black hole that I know of is water emptying down a bathtub drain faster than the speed of sound in the fluid. Many years ago, Unruh was doing some neat experiments with some poor grad student, but I don’t know if they ever published anything.
Plus, it’s actually classical: it yields a full explanation of the real, physical, deterministic phenomena underlying apparently quantum ones.
Note that because of Bell’s theorem, any classical system is going to have real trouble emulating all of quantum mechanics; entanglement is going to trip it up. I know you said “replicate many aspects of quantum mechanics,” but it’s probably important to emphasize that this sort of thing is not going to lead to a classical model underlying all of QM.
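To make the Bell's-theorem point concrete, here is a minimal CHSH sketch: brute-forcing every deterministic local strategy gives the classical bound of 2, while the singlet-state correlation E(a,b) = -cos(a-b) at the standard angles reaches 2√2. This is only an illustration of the obstruction, not a claim about the droplet experiments themselves:

```python
import itertools, math

def chsh(E):
    # S = E(a,b) - E(a,b') + E(a',b) + E(a',b') for two settings per side
    return E(0, 0) - E(0, 1) + E(1, 0) + E(1, 1)

# Local hidden variables: every outcome is fixed in advance (+1 or -1).
classical_max = max(
    abs(chsh(lambda i, j, A=A, B=B: A[i] * B[j]))
    for A in itertools.product((-1, 1), repeat=2)
    for B in itertools.product((-1, 1), repeat=2)
)

# Quantum singlet correlations at the standard CHSH angles.
a = (0.0, math.pi / 2)
b = (math.pi / 4, 3 * math.pi / 4)
quantum = abs(chsh(lambda i, j: -math.cos(a[i] - b[j])))

print(classical_max)  # 2.0 -- the most any local classical model can manage
print(quantum)        # 2.828... = 2*sqrt(2)
```

A local classical substrate tops out at 2, which is exactly the trouble with entanglement that the comment above is pointing at.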
How could you function? Well, a quote from last year put it nicely:

“Within the philosophy of science, the view that new discoveries constitute a break with tradition was challenged by Polanyi, who argued that discoveries may be made by the sheer power of believing more strongly than anyone else in current theories, rather than going beyond the paradigm. For example, the theory of Brownian motion which Einstein produced in 1905, may be seen as a literal articulation of the kinetic theory of gases at the time. As Polanyi said:
‘Discoveries made by the surprising configuration of existing theories might in fact be likened to the feat of a Columbus whose genius lay in taking literally and as a guide to action that the earth was round, which his contemporaries held vaguely and as a mere matter for speculation.’”
I think that a ‘reductive’ explanation of quantum mechanics might not be as appealing as it seems to you.
Those fluid mechanics experiments are brilliant, and I’m deeply impressed that they came up with them, let alone put them into practice! However, I don’t find them especially convincing as a model of subatomic reality. Just like the case with early 20th-century analog computers, with a little ingenuity it’s almost always possible to build a (classical) mechanism that will obey the same math as almost any desired system.
Definitely: to the extent that it can replicate all observed features of quantum mechanics, the fluid dynamics model can’t be discarded as a hypothesis. But it has a very, very large Occam’s Razor penalty to pay. In order to explain the same evidence as current QM, it has to postulate a pseudo-classical physics layer underneath that is actually substantially more complicated than QM itself, which postulates basically just a couple of equations and some fields.
Remember that classical mechanics, and most especially fluid dynamics, are themselves derived from the laws of QM acting over billions of particles. The fact that those ‘emergent’ laws can, in turn, emulate QM does imply that QM could, at heart, resemble the behaviour of a fluid-mechanical system… but that requires postulating a new set of fundamental fields and particles, which in turn form the basis of QM, and give exactly the same predictions as the current simple model that assumes QM is fundamental. Being classical is neither a point in its favour nor against it, unless you think that there is a causal reason why the reductive layer below QM should resemble the approximate emergent behaviour of many particles acting together within QM.
If we’re going to assume that QM is not fundamental, then there is actually an infinite spectrum of reductive systems that could make up the lower layer. The fluid mechanics model is one that you are highlighting here, but there is no reason to privilege it over any other hypothesis (such as a computer simulation) since they all provide the same predictions (the same ones that quantum mechanics does). The only difference between each hypothesis is the Occam penalty they pay as an explanation.
I agree that, as a general best practice, we should assign a small probability to the hypothesis that QM is not fundamental, and that probability can be divided up among all the possible theories we could invent that would predict the same behaviour. However, to be practical and efficient with my brain matter, I will choose to believe the one theory that has vastly more probability mass, and I don’t think that should be put down as bullet swallowing.
Is QM not simple enough for you, that it needs to be reduced further? If so, the reduction had better be much simpler than QM itself.
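Writing H_f for a fluid-like substrate beneath QM and H_q for QM-as-fundamental (notation introduced here just for illustration), the Occam-penalty point is simply that identical predictions leave the odds wherever the priors put them:

\[
\frac{P(H_f \mid D)}{P(H_q \mid D)}
= \frac{P(D \mid H_f)}{P(D \mid H_q)} \cdot \frac{P(H_f)}{P(H_q)}
= \frac{P(H_f)}{P(H_q)}
\quad \text{whenever } P(D \mid H_f) = P(D \mid H_q),
\]

so no amount of data that both theories explain equally well can rescue the more complex layer from its complexity prior.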
I don’t think I understand the relevance of your example, but I agree on the bullet-swallowing point, especially as I am an inveterate bullet-dodger.

(That said, the experiments sound awesome! Any particular place you’d recommend to start reading?)

There don’t seem to be many popularizations. This looks fun and as far as I can tell is neither lying nor bullshitting us. This is an actual published paper, for those with the maths to really check.
I think I should phrase this properly by dropping into the language of the Lord High Prophet of Bayes, E.T. Jaynes: it is often optimal to believe in some model with some probability based on the fixed, finite quantity of evidence we have available on which to condition, but this is suboptimal compared to something like Solomonoff Induction that can dovetail over all possible theories. We are allocating probability based on fixed evidence to a fixed set of hypotheses (those we understand well enough to evaluate).
For instance, given all available evidence, if you haven’t heard of sub-quantum physics even at the mind-porn level, believing quantum physics to be the real physics is completely rational, except in one respect. I don’t understand algorithmic information theory well enough to quantify how much probability should be allocated to “sub-Solomonoff loss”: the possibility that we have failed to consider some explanation superior to the one we have, even though our current best explanations adequately soak up the available evidence as narrowed, built-up probability mass. But plainly some probability should be allocated there.
Why, particularly in the case of quantum physics? Because we’ve known damn well for decades that it’s an incomplete theory! If it cannot be unified with the other best-supported theory in the same domain (General Relativity), then it is incomplete. Period. Reality does not contradict itself: the river of evidence flowing into General Relativity and the river of evidence flowing into quantum mechanics cannot collide and run against each other unless we idiot humans have approximated two different perspectives (cosmic scale and micro-scale) on one underlying reality using incompatible theories. This is always and only our fault, and if we want to deal with that fault, we need to be able to quantify it.
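As a toy illustration of explicitly reserving mass for explanations nobody has written down yet (the bucket name and all the numbers below are made up for the example, not a principled estimate):

```python
# Toy sketch: keep an explicit bucket for hypotheses we have not thought of yet.
beliefs = {"QM fundamental": 0.94, "fluid substrate": 0.05, "not yet considered": 0.01}

def update(beliefs, likelihood, unknown_likelihood=1.0):
    # Bayes update over a fixed menu; the unknown bucket gets a neutral likelihood,
    # so evidence the known theories explain equally well never squeezes it to zero.
    post = {h: p * likelihood.get(h, unknown_likelihood) for h, p in beliefs.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# An observation both known theories predicted equally well:
print(update(beliefs, {"QM fundamental": 0.9, "fluid substrate": 0.9}))
```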
Coincidentally, I was actually heading out to meet my dad (a physics Ph.D.), and I mentioned the paper and blog post to him to get his reaction. He asked me to send him a link, but he also pointed me at Feynman’s lecture on electrostatic analogs, which is based on one of those simple ideas that invites bullet-swallowing: The same equations have the same solutions.
This is one of those ideas that I get irrationally excited about, honestly. The first thing I thought of when you described these hydrodynamic experiments was the use of similitude in experimental modeling, which is a special case of the same idea: after you work out the equations that you would need to solve to calculate (for example) the flow of air around a wing, instead of doing a lot of intractable mathematics, you rewrite the equations in terms of dimensionless parameters like the Reynolds number and put a scale model of the wing in a wind tunnel. If you adjust the velocity, pressure, &c. correctly in your scale model, you can make the equations that you would need to solve for the scale model exactly the same as the equations for the full-sized wing … and so, when you measure a number on the scale model, you can use that number the same way that you would use the solution to your equations, and get the number for the real wing. You can do this because the same equations have the same solutions.
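A minimal version of the similitude calculation, with made-up numbers (a hypothetical 1:10 model of a 2 m chord wing, tested in the same air):

```python
def reynolds(rho, v, length, mu):
    # Dimensionless group rho*v*L/mu that has to match between model and full scale.
    return rho * v * length / mu

rho, mu = 1.225, 1.81e-5             # sea-level air: density (kg/m^3), viscosity (Pa s)
full_chord, full_speed = 2.0, 50.0   # hypothetical full-size wing: 2 m chord at 50 m/s
model_chord = full_chord / 10        # 1:10 scale model

# Same fluid, so matching Re means the tunnel speed scales up by the length ratio.
model_speed = full_speed * (full_chord / model_chord)

print(reynolds(rho, full_speed, full_chord, mu))    # ~6.8e6
print(reynolds(rho, model_speed, model_chord, mu))  # the same ~6.8e6
```

(Ten times the speed quickly runs into compressibility in practice, which is part of why real tunnels also vary pressure and temperature; the numbers are only there to show the matching rule.)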
For that matter, one of the stories my dad wrote on his blog about his Ph.D. research mentions a conversation in which another physicist pointed out a possible source of interesting complexity in gravitational waves by metaphor to electromagnetic waves—a metaphor whose validity came from the same equations having the same solutions.
I have to say, though, that my dad does not get excited about this kind of thing, and he explained to me why in a way which parallels Feynman’s remark at the end of the lecture: these physical models, these analog computations, are approximate. Feynman talks about these similarities being used to design photomultiplier tubes, but explains—in a lecture delivered before 1964, mind—that “[f]or the most accurate work, it is better to determine the fields by numerical methods, using the large electronic computing machines.” And at the end of section 4.7 of the paper you linked to:
From the value of alpha, it seems that the electrostatic force is about two orders of magnitude weaker than the mechanical force between resonant bubbles. This suggests one limitation of the bouncing-droplet experiment as a model of quantum mechanics, namely that spherically-symmetric resonant solutions are not a good model for the electron.
On the basis of these factors, I think I would fully endorse Brady and Anderson’s conclusions in the paper: that these experiments have potential as pedagogical tools, illuminating some of the confusing aspects of quantum mechanics—such as the way multiple particles interacting produce a waveform that is nevertheless defined by a single amplitude and phase at every point. By contrast, when the blogger you link to says:
What are the quantum parallels for the effective external forces in these hydrodynamic quantum analogs, i.e. gravity and the vibrations of the table? Not all particles carry electric charge, or weak or color charge. But they are all effected by gravity. Is their a connection here to gravity? Quantum gravity?
...all I can think is, “does this person understand what the word ‘analogue’ means?” There is no earthly reason to imagine that the force of gravity on the droplet and liquid surface should have anything to do with gravity acting on particles in quantum waveforms. Actually, it’s worse than that: we can know that it does not, in the same way that, among simple harmonic oscillators, the gravity force on pendulums has nothing to do with the gravity force on a mass on a spring. They are the same equations, and the equations in the latter case don’t have gravity in them … so whatever work gravity does in the solution of the first equation is work it doesn’t do in the solution of the second.
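Spelling that out: for small angles the pendulum and the mass on a spring obey literally the same equation, and g only shows up inside one of the two frequencies:

\[
\ddot{\theta} + \frac{g}{\ell}\,\theta = 0, \qquad \ddot{x} + \frac{k}{m}\,x = 0,
\qquad \text{both of the form } \ddot{u} + \omega^2 u = 0 \text{ with } \omega^2 = \frac{g}{\ell} \text{ or } \frac{k}{m}.
\]

Whatever role g/ℓ plays in the first solution is played by k/m in the second, so nothing about the spring tells you anything about gravity.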
I may be doing the man a gross injustice, but this ain’t no way to run a railroad.
Why draw strong conclusions? Let papers be published and conferences held. It’s a neat toy to look at, though.

It is a neat toy, and I’m glad you posted the link to it.
The reason I got so mad is that Warren Huelsnitz’s attempt to draw inferences from these—even weak, probabilistic, Bayesian inferences—was appallingly ignorant for someone who claims to be a high-energy physicist. What he was doing would be like my dad, in the story from his blog post, trying to prove that gravity was created by electromagnetic forces because Roger Blandford alluded to an electromagnetic case in a conversation about gravity waves. My dad knew that wasn’t a true lesson to learn from the metaphor, and Richard Feynman agrees with him:
However, a question surely suggests itself at the end of such a discussion: Why are the equations from different phenomena so similar? We might say: “It is the underlying unity of nature.” But what does that mean? What could such a statement mean? It could mean simply that the equations are similar for different phenomena; but then, of course, we have given no explanation. The “underlying unity” might mean that everything is made out of the same stuff, and therefore obeys the same equations. That sounds like a good explanation, but let us think. The electrostatic potential, the diffusion of neutrons, heat flow—are we really dealing with the same stuff? Can we really imagine that the electrostatic potential is physically identical to the temperature, or to the density of particles? Certainly ϕ is not exactly the same as the thermal energy of particles. The displacement of a membrane is certainly not like a temperature. Why, then, is there “an underlying unity”?
Feynman goes on to explain that many of the analogues are approximations of some kind, and so the similarity of equations is probably better understood as being a side effect of this. (I would add: much in the same way that everything is linear when plotted log-log with a fat magic marker.) Huelsnitz, on the other hand, seems to behave as if he expects to learn something about the evolutionary history of the Corvidae family by examining crowbars … which is simply asinine.
Because we’ve known damn well for decades that it’s an incomplete theory! If it cannot be unified with the other best-supported theory in the same domain (General Relativity), then it is incomplete. Period.
We don’t actually know that. Weinberg has suggested that GR might be asymptotically safe. Most people seem to think this isn’t the case, but no one has been able to show that he is wrong. We can rephrase your argument, and instead of putting weight on theories for which we have no evidence, dump the “built up” probability mass on the idea that the two theories don’t actually disagree.
Certainly the amount of “contradiction” between GR and quantum field theories is often overblown. You can, for instance, treat GR as an effective field theory and compute quantum corrections to various things. They are just too small to matter/measure.

...huh.

I have to go, but downvote this comment if I don’t reply again in the next five hours. I’ll be back.

Edit: Function completed; withdrawing comment.
Consider also that “don’t argue with idiots” has much of the same superficial appeal as “allow the government to censor idiots”.
The former has a fair amount of appeal for me and the latter I would find appalling and consider to be descent into totalitarianism. I don’t think this comparison works.
Jiro didn’t say appeal to you. Besides, substitute “blog host” for “government” and I think it becomes a bit clearer: both are much easier ways to deal with the problem of someone who persistently disagrees with you than talking to them. Obviously that doesn’t make “don’t argue with idiots” wrong, but given how much power trivial inconveniences have to shape your behavior, I think an admonition to hold the proposed heuristic to a higher standard of evidence is appropriate.
Besides, substitute “blog host” for “government” and I think it becomes a bit clearer
Speaking for myself, I’ve got a fair bit of sympathy for the concept with that substitution and a fair bit of antipathy without it. It’s a lot easier to find a blog you like and that likes you than to find a government with the same qualities.

Hence the substitution. :)

You are atypical in this respect.

Really? I feel the same way as Lumifer and assumed that this was the obvious, default reaction. Damned typical-mind fallacy.

I also feel the same way, but in my experience most people don’t.

Also, as RobinZ pointed out here, things get fuzzy in the limit where one has to taboo “government”.

No, I don’t think they do. The basic distinction is between making the choices for yourself and forcing choices on others.

That’s typical for me :-D
Being an idiot is less about positions and more about how one argues. The easiest way to identify an idiot is when debating gets someone angry to the point of violence. Beyond that, idiots can be identified by the use of fallacies, ad hominems, non-sequiturs, etc.
This rule fails for RationalWiki in particular, so I don’t think it’s sufficiently expressive. RationalWiki will never get violent, they’ll never use basic rhetorical fallacies, but are they not idiots?
I think a better rule for idiocy is the inability to update. An idiot will never change their mind, and will never learn. More intelligent idiots can change their mind about minor things related to things they already deeply believe, but never try to understand anything that’s a level or two of inference away from their existing core.
Nonidiocy requires the intelligence to think correctly, the wisdom to know when you’re wrong, and the charisma to tolerate the social failing of being wrong. It takes all three to avoid being an idiot.
This rule fails for RationalWiki in particular, so I don’t think it’s sufficiently expressive. RationalWiki will never get violent, they’ll never use basic rhetorical fallacies, but are they not idiots?
They won’t threaten physical violence, but when discussing certain political topics (libertarianism, social justice and feminism) they do use basic rhetorical fallacies in addition to generally abusive behaviour even from the admins (trolling, name calling and swinging the banhammer). Surprisingly, when discussing other topics, such as science, pseudosciences and paranormal beliefs, they look like perfectly sane and rational folks. (I’ve never engaged them; my experience comes from browsing the wiki and lurking a little bit on the 4ch-...Facebook group)
I think they aren’t idiots but just political fanatics.
Well, if Rossi’s free energy generators worked and were replacing power stations or gasoline in cars or the like, we all would change our mind about Rossi. I guess that means we’re probably idiots, because that’s highly unlikely.
Cranks constantly demand that we change our minds in response to Andrea Rossi plain as day rigging up another experiment, Randell L. Mills releasing some incoherent formula salad, Chris Langan taking an IQ test, or the like.
Many people are not, but think they are. So you need to consider that if you make a policy of “don’t argue with idiots” widespread
I posted this quote on a site with an average IQ above the 99th percentile for a reason. Also, please read the original comment for context; I think you’ll interpret it a bit differently.
Having a high IQ does not equate to having a good idiot detector.
Also, policies which treat people differently based on a self-serving distinction need more justification than normal, because of the increased prior that the person making the policy is affected by an ulterior motive.

I’m of the opinion that the process of detecting idiots is heavily g-loaded.