[SEQ RERUN] Tolerate Tolerance
Today’s post, Tolerate Tolerance, was originally published on 21 March 2009. A summary (taken from the LW wiki):
One of the likely characteristics of someone who sets out to be a “rationalist” is a lower-than-usual tolerance for flawed thinking. This makes it very important to tolerate other people’s tolerance—to avoid rejecting them because they tolerate people you wouldn’t—since otherwise we must all have exactly the same standards of tolerance in order to work together, which is unlikely. Even if someone has a nice word to say about complete lunatics and crackpots—so long as they don’t literally believe the same ideas themselves—try to be nice to them? Intolerance of tolerance corresponds to punishment of non-punishers, a very dangerous game-theoretic idiom that can lock completely arbitrary systems in place even when they benefit no one at all.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Why Our Kind Can’t Cooperate, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Lunatics and crackpots don’t necessarily have seriously flawed thinking. Different priors, different information, and small biases can lead to entirely different conclusions. Most people avoid crackpot status by never venturing to attempt an original thought, or any thought, really, and just going along with conventional wisdom.
Most of the canonical sources here, like Korzybski, Jaynes, and Everett, were seen as crackpots at one time, and some still are in some quarters.
In my experience, one becomes a crackpot when one’s ego significantly exceeds one’s intelligence. Now, if only I had a way to measure both on the same scale...
“Crackpot” has been leveled as an epithet against just about every major thinker. Had the epithet existed then, Galileo surely would have qualified even by the most conservative use of the term.
I regard the term as indicating that somebody is going very far afield of established thinking. 99.9% of all crackpots will be wrong. But it may be worth listening to them anyways.
I guess Galileo had read and understood Aristotle; he just disagreed with him. Conversely, the cranks who infest Usenet and similar places usually have at most a very vague idea of what Einstein said. So I don’t think the two are analogous.
I understand quantum theory (the specific theory of quantized energy, not the catch-all term commonly used to encapsulate modern physics), I just disagree with it. Yes, its predictions have been right pretty much a million times out of a million. I still disagree with it.
Am I or am I not a crackpot?
Why do you disagree with it? That is very much a relevant question...
Because it privileges our -scale- in much the same way we used to privilege our planet, and then our sun, as having some special place in the universe. That’s one answer, and the reason I started looking for alternatives. (Hey, I noticed a trend.)
It’s hard to communicate the why. Essentially, however, I found a brilliant physicist, contemporary with the development of the theory, who had been working on an alternative explanation that drew on his subject expertise: Johannes Rydberg. I found him because I had independently come to the same alternative explanation and was looking for the mathematics to prove it out, and so found his work on hydrogen atoms. (His work is what the mathematics for predicting which light spectra are emitted by given atomic configurations at given energy levels is based upon.)
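(For concreteness, since this is the work in question: Rydberg’s hydrogen formula relates spectral-line wavelengths to pairs of integer levels. A minimal sketch, using the textbook constant:)

```python
# Rydberg's formula for hydrogen: 1/lambda = R_H * (1/n1^2 - 1/n2^2).
R_H = 1.0973731568e7  # Rydberg constant, per metre (textbook value)

def emission_wavelength_nm(n1, n2):
    """Wavelength (nm) of the photon emitted when an electron drops from n2 to n1."""
    inv_wavelength = R_H * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_wavelength

# The Balmer series (drops to n=2) lands in the visible range:
for n2 in range(3, 7):
    print(f"n={n2} -> n=2: {emission_wavelength_nm(2, n2):.1f} nm")
# n=3 -> n=2 gives ~656.3 nm (the red H-alpha line); n=4 -> n=2 gives ~486 nm.
```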
I haven’t sat down to crunch it out, but I’m pretty sure that if you use modern atomic models with a couple of minor tweaks, his theory would make the same predictions as quantum physics for scales above the Planck scale, and would have explanatory power for light-frequency emissions, without privileging scale. (Essentially, the difference is that Planck-scale amounts of energy are those necessary to shift electrons between shells, if you’ll pardon my use of the simpler non-Standard-Model picture for descriptive purposes. Sub-Planck amounts of energy appear as quantum-random Planck-scale events when enough energy gathers, or dissipates through sub-Planck emissions, to cause a Planck-scale event, such as an electron rising or falling a shell. In effect, particles “hide” sub-Planck energy levels.)
Or, in short, I have the same answer as most crackpots: I find the standard theory inelegant, and have an alternative I prefer.
That’s rather better considered than most crackpot theories. What different predictions do you have, and how much data, if any, are you currently defying?
Biggest prediction would be that “spontaneous emissions” should be predictable. Unfortunately, quantum field theory already says they’re predictable, just not to the same extent.
The data shouldn’t be defied at all. I’m not going to say it -isn’t-, since, as previously mentioned, I haven’t actually crunched it out; but if the theory does defy the data, the theory is wrong. It would mostly be an update to the mechanisms: replacing the second quantization of quantum field theory with something a little less abstract. (I’ll add, somewhat sardonically, that there are already two different mechanisms for calculating quantum field theory which both make the same predictions, and there used to be something like nine before most of them were generalized into a single approach. So quantum field theory already has a long history of such substitutions without impacting predictions.)
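(To make the “predictable, just not to the same extent” point concrete: in the standard treatment, spontaneous emission times are exponentially distributed with a rate given by the Einstein A coefficient, so only the statistics are predicted, never individual events. A minimal sketch, using the textbook A value for hydrogen’s 2p-to-1s transition:)

```python
import random

# Standard picture: emission times from an excited atom follow an exponential
# distribution with rate A; the theory predicts the distribution, not the events.
A = 6.27e8     # Einstein A coefficient for hydrogen 2p -> 1s, per second (textbook value)
tau = 1.0 / A  # mean lifetime, roughly 1.6 nanoseconds

emission_times = [random.expovariate(A) for _ in range(100_000)]
mean_time = sum(emission_times) / len(emission_times)
print(f"simulated mean emission time: {mean_time:.3e} s (theory: {tau:.3e} s)")
```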
You are not a crackpot, unless there are major factual errors in your explanation of your theory.
It does render the two-slit experiment invalid for its intended purpose, even if it doesn’t affect the data, if that helps my case.
The two-slit experiment which acts exactly as wave physics suggests it should, even where it is used to demonstrate entanglement?
Yep. It wouldn’t impact entanglement experiments, though, and wouldn’t impact the wave-physics characteristics of the experiment, but rather its particle-physics characteristics.
The two-slit experiment depends upon the assumption that photons (wave or particle) are above the Planck threshold. If they’re beneath it, they wouldn’t have sufficient energy to reliably induce a reaction in the screen behind the plate, meaning a strict wave interpretation could be valid (the intermittent reactions could be the product of sufficient energy build-up in the receiving electrons, rather than of photons intermittently striking different parts of the screen). In regard to the wave characteristics of photons, statistically this would be nearly identical to particle emissions: we should expect “blips” in a distribution roughly equal to the distribution we should expect from particle emissions. I say nearly identical because I assume some underlying mechanism by which electrons lose energy over time, causing the least-heavily radiated areas to lose energy at a rate rapid enough to prevent valence-shell shifting, and hence to show fewer blips.
...which might be evidence for my theory, actually, since we do indeed see fewer reactions than we might expect in the least-radiated portions of the screen, per that open problem/unexplained phenomenon whose name I can’t recall that Eliezer goes on about a bit in one of the sequences. (The observed reaction rate goes as the square of the amplitude, rather than the amplitude itself, for a particle hitting a given section of the screen. I’m mangling terminology, I know.) Laziness is now competing with curiosity over whether I go and actually pull out one of my mathematics textbooks. If I were in therapy for crackpottery, this would set me back months.
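(A minimal sketch of the standard statistics being argued about here, with entirely hypothetical geometry: the detection probability at each screen position is the squared magnitude of the summed slit amplitudes, and individual blips are samples from that distribution.)

```python
import cmath
import math
import random

WAVELENGTH = 500e-9  # hypothetical: 500 nm light
SLIT_SEP = 50e-6     # hypothetical: 50 micron slit separation
SCREEN_DIST = 1.0    # hypothetical: 1 m from slits to screen

def intensity(x):
    """Relative detection probability at screen position x (idealized point slits)."""
    k = 2 * math.pi / WAVELENGTH
    r1 = math.hypot(x - SLIT_SEP / 2, SCREEN_DIST)
    r2 = math.hypot(x + SLIT_SEP / 2, SCREEN_DIST)
    amplitude = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)  # sum amplitudes...
    return abs(amplitude) ** 2                                   # ...then square

def sample_blip():
    """Draw one detection event by rejection sampling against the pattern."""
    while True:
        x = random.uniform(-0.05, 0.05)            # a 10 cm wide screen
        if random.uniform(0, 4.0) < intensity(x):  # 4.0 = peak of two unit amplitudes
            return x

blips = [sample_blip() for _ in range(1000)]  # these cluster at the bright fringes
```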
(Note: Having looked up the experiment to try to get the proper name for the screen behind the plate (without any success), it appears I was mistaken in my initial claim. The -original- intended purpose of the experiment, demonstrating the wave characteristics of light, remains intact; it’s merely wave-particle duality, a later adaptation of the experiment, that loses evidence. Retracting that comment as invalid.)
I have a question. My meta-question is whether the question makes sense in light of what you said. (I like working in low-information conditions, the downside being dumb questions.)
Wouldn’t this still be a testable difference? If electrons can briefly store energy, you could send a steady stream of below-Planck photons. Standard QM predicts no spots on the photoplate, but you predict spots, right?
The answer to that is a firm “Maybe.”
The question becomes—how do you create a steady stream of below-Planck photons? In the current model, photons are only emitted when electrons shift valence shells—these photons start, at least, as above-Planck.
Rydberg’s model (assuming I understand where he was going with it correctly) asserts that photons are -also- emitted when the electrons are merely energetic: black-body radiation, essentially. However, if your electrons are energetic, and at least 50% of all photons are being shared by the emitting medium, you’re going to get above-Planck photons anyways. (If you’re emitting enough radiation to create spots in the receiving medium, you’re dealing with energy that is at least occasionally above Planck scales, and this energy is already in the emitting medium.)
An important thing to remember is that the existing model was devised to explain black-body radiation. The Planck threshold is really, really low; low enough that the bar can be cleared by (AFAIK) any material with an energy level meaningfully above absolute zero. (And maybe even there; I’ve never looked into the blackbody radiation of Bose-Einstein condensates.)
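(Some rough numbers behind that claim, using standard constants, and using “threshold” in this thread’s per-photon sense rather than the quantum-gravity Planck energy:)

```python
K_B = 8.617e-5    # Boltzmann constant, eV per kelvin
H_C = 1239.84     # h*c in eV*nm, so photon energy = H_C / wavelength_nm
WIEN_B = 2.898e6  # Wien displacement constant, nm*K

# Thermal energy scale versus the photons a blackbody at that temperature emits:
for T in (3, 77, 300, 5800):  # deep space, liquid nitrogen, room temp, Sun's surface
    peak_nm = WIEN_B / T      # Wien's law: peak emission wavelength
    print(f"T = {T:5d} K: k_B*T = {K_B * T:.4f} eV, "
          f"peak at {peak_nm:.0f} nm (~{H_C / peak_nm:.4f} eV per photon)")
```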
So in principle, given a sensitive enough photoplate (current plates are nonreactive to blackbody radiation) and a dark enough room (so as not to set the photoplate off constantly), yes.
However, that ties us into another problem, which you may have sensed coming—the photoplate would be setting -itself- off constantly.
I assume light is a wave, not a particle, which gives a little more wiggle-room in the experiment: sections of the plate experience distributed energy build-up, which is released all at once in a cascade reaction when a sufficiently large (still quite small) region of the photoplate has amassed sufficient energy to react given only a small additional push (say, a nearby atom reacting).
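(A toy simulation of that build-up picture, with entirely made-up parameters, just to show the qualitative behaviour: regions absorb energy in proportion to the local wave intensity, slowly leak it, and fire a blip on crossing a threshold, so dim fringes blip disproportionately rarely.)

```python
import math
import random

N_REGIONS = 200
THRESHOLD = 1.0  # hypothetical activation energy per region (arbitrary units)
LEAK = 0.005     # hypothetical fractional energy loss per step
STEPS = 20_000

# Stand-in interference pattern across the plate (bright and dark fringes):
pattern = [1.0 + math.cos(2 * math.pi * i / 20) for i in range(N_REGIONS)]

energy = [0.0] * N_REGIONS
blips = [0] * N_REGIONS
for _ in range(STEPS):
    for i in range(N_REGIONS):
        energy[i] += random.uniform(0, 0.02) * pattern[i]  # absorb local wave energy
        energy[i] *= 1 - LEAK                              # slow dissipation
        if energy[i] >= THRESHOLD:                         # cascade: register a blip
            blips[i] += 1
            energy[i] = 0.0

print("blips at a bright fringe:", blips[0])  # pattern[0] = 2.0
print("blips at a dark fringe:", blips[10])   # pattern[10] = 0.0
```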
Cyclotron radiation wavelengths can be tuned, as they aren’t tied to valence shells.
The number of spots per second from thermal statistics plus harmonics on the cyclotron radiation can be calculated. If the electrons are also absorbing photons classically, you should get extra spots when they happen to add up.
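(The standard non-relativistic relation, to show what “tuned” means here; the field strengths are arbitrary examples:)

```python
import math

Q_E = 1.602e-19  # electron charge, coulombs
M_E = 9.109e-31  # electron mass, kg
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s

def cyclotron(b_tesla):
    """Cyclotron frequency (Hz), wavelength (m), and photon energy (eV) at field B."""
    f = Q_E * b_tesla / (2 * math.pi * M_E)  # depends only on B, not on shell spacing
    return f, C / f, H * f / Q_E

for b in (0.01, 0.1, 1.0, 10.0):
    f, wavelength, ev = cyclotron(b)
    print(f"B = {b:5.2f} T: f = {f:.3e} Hz, lambda = {wavelength:.3e} m, E = {ev:.3e} eV")
# At B = 1 T the photons are ~28 GHz microwaves at roughly 1.2e-4 eV each,
# far below typical photoplate activation energies.
```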
I think you’re going to see Rydberg-OrphanWilde-interpretation blackbody radiation anyway. When an electron bounces off another, that counts as acceleration and produces cyclotron radiation. It might be different in magnitude, though.
I think photoplates can be tuned too. A plate should have to be hit by a single particle carrying more than the activation energy for the light-sensitive reaction (neglecting tunneling). Therefore, it should be possible to pick a compound with a suitably high activation energy.
But it will look statistically different. From what I understand, photons below the necessary energy will just bounce off or get absorbed by some other process. That’s how the photoelectric effect is supposed to work, anyway.
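(The standard bookkeeping for that, as a minimal sketch; cesium’s textbook work function of about 2.1 eV is used as the example:)

```python
H_C = 1239.84  # h*c in eV*nm, so photon energy = H_C / wavelength_nm

def photoelectron_energy_ev(wavelength_nm, work_function_ev):
    """Kinetic energy of the ejected electron, or None if below threshold."""
    photon_ev = H_C / wavelength_nm
    if photon_ev < work_function_ev:
        return None  # no emission, no matter how intense the beam
    return photon_ev - work_function_ev  # Einstein's photoelectric equation

for wavelength in (400, 550, 700):  # violet, green, red light
    print(wavelength, "nm ->", photoelectron_energy_ev(wavelength, 2.1))
# 400 nm clears cesium's threshold; 700 nm photons just bounce off or are
# absorbed by some other process, which is the statistical difference above.
```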
I’m not familiar enough with cyclotron radiation (read: I’m not familiar with it at all; my understanding of cyclotrons is limited to “They’re the things hospitals and labs use to produce small amounts of radioactive isotopes”) to be able to contribute to this discussion, so I’m afraid I’ll have to tap out due to ignorance I currently don’t have time to rectify.
Also, keep in mind that the only way an experiment can fail is if it provides no new information; the only way to render an experiment invalid or less useful is to show that it doesn’t tell you as much as you thought it did.
But I thought you were referring to the modification of the two-slit experiment where electrons were the wave being measured, not photons.
And an experiment can’t fail to provide new information: if you thought it would provide information and it didn’t, that itself teaches you something about experiment design. Unless you’re proposing that an experiment that goes exactly as expected is a waste of time?
That said, I think what Wilde means by “invalid” is that a strong conclusion drawn from the experiment is invalid in light of the fact that an entirely different model is consistent with the evidence.
An experiment that fails would be “I was trying to measure the speed of neutrinos, but I measured lab errors instead,” or “I tried to titrate a solution, but accidentally used an excess of phenolphthalein.”