This is incredibly cool and it makes me sad that I’ve never seen this experiment done in a science museum, physics instructional lab, or anywhere else.
This is actually a really good example of what I wanted.
I think I have a lot of reason to believe v = fλ; it follows pretty much from the definition of “wave” and “wavelength”. And I think I can check the frequency of my microwave without any direct assumptions about the speed of light, using an oscilloscope or some such.
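To make the arithmetic concrete, here is a minimal sketch of the microwave version, assuming a measured hot-spot spacing of about 6 cm and a magnetron frequency of 2.45 GHz (typical illustrative values, not measurements of mine):

```python
# Kitchen-microwave estimate of c, assuming a measured hot-spot spacing and an
# independently checked magnetron frequency. Values below are illustrative.

hot_spot_spacing_m = 0.061   # distance between melted spots (half a wavelength)
frequency_hz = 2.45e9        # magnetron frequency, e.g. checked with a counter/oscilloscope

wavelength_m = 2 * hot_spot_spacing_m     # standing wave: hot spots sit half a wavelength apart
c_estimate = frequency_hz * wavelength_m  # v = f * lambda

print(f"estimated speed of light: {c_estimate:.3g} m/s")  # ~3.0e8 m/s
```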
But yes, you are correct, as long as your main criterion is something like “compelling at an emotional level”, you should expect that different people understand it very differently.
This actually brings out something I had never thought about before. When I am reading or reviewing papers professionally, the dispute between reviewers is mostly about how interesting the topic is, not about whether the evidence is convincing. Likewise, my impression of the history of physics is that the professionals were mostly in agreement about what would constitute evidence.
So it’s striking that when I put aside my “working computer scientist” hat and put on my “amateur natural scientist” hat, suddenly that consensus goes away and everybody disagrees about what’s convincing.
Well, you could use your smartphone’s accelerometer to verify the equations for centrifugal force, or its GPS to verify parts of special and general relativity, or the fact that its chip functions to verify parts of quantum mechanics.
These don’t feel like they are quite comparable to each other. I do really trust the accelerometer to measure acceleration. If I take my phone on the merry-go-round and it says “1.2 G”, I believe it. I trust my GPS to measure position. But I only take on faith that the GPS had to account for time dilation to work right; I don’t really know anything about the internals of the GPS, so “trust us, it works via relativity” isn’t really compelling at an emotional level. For somebody who worked with GPS and really knew about the internals of the receiver, this might be a more compelling example.
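For the accelerometer case the check really is direct; a minimal sketch, assuming a made-up merry-go-round radius and rotation period rather than actual measurements:

```python
import math

# Checking a phone accelerometer against a = omega^2 * r on a merry-go-round.
# The radius and rotation period are illustrative assumptions, not measurements.

radius_m = 2.0   # distance of the phone from the axis
period_s = 4.0   # time per revolution, timed with a stopwatch

omega = 2 * math.pi / period_s             # angular speed (rad/s)
a_centripetal = omega ** 2 * radius_m      # predicted centripetal acceleration
a_total = math.hypot(a_centripetal, 9.81)  # what a phone lying flat reports, gravity included

print(f"predicted reading: {a_total / 9.81:.2f} g")  # compare with the accelerometer app (~1.1 g here)
```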
But I’m not sure how you can legitimately claim to be verifying anything; if you don’t trust those laws, how can you trust the phone? It would be like using a laser rangefinder to verify the speed of light. For this sort of thing, the fact that your equipment functions is better evidence that the people who made it know the laws of physics than any test you could do with it.
Yes of course. In real life I’m perfectly happy to take on faith that everything in my undergraduate physics textbooks was true. But I want to experience it, not just read about it. And I think “my laser rangefinder works correctly” doesn’t feel like experiencing the speed of light. In contrast, building my own rangefinder with a laser and a timing circuit would count as experiencing the speed of light.
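To see why the home-built version feels different, here is a minimal sketch of the timing involved, assuming an arbitrary 30 m target distance:

```python
# Back-of-the-envelope timing for a home-built laser rangefinder.
# The target distance is an assumed example.

c = 299_792_458.0   # speed of light, m/s
distance_m = 30.0   # assumed distance to a reflective target

round_trip_s = 2 * distance_m / c
print(f"round-trip time: {round_trip_s * 1e9:.0f} ns")  # ~200 ns, resolvable with a fast photodiode and scope
```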
I am starting to worry that my criteria for “experience” are idiosyncratic and that different people would find very different science demonstrations compelling.
Another advantage of replicating the original discovery is that you don’t accidentally use unverified equipment or discoveries (i.e., equipment dependent on laws that were unknown at the time).
I don’t consider this an advantage. My goal is to find vivid and direct demonstrations of scientific truths, and so I am happy to use things that are commonplace today, like telephones, computers, cameras, or what-have-you.
That said, I certainly would be interested in hearing about cases where there’s something easy to see today that used to be hard—is there something you have in mind?
There are various ways to measure the speed of light, and many require few modern implements. As for measuring the constancy of the speed of light, the original experiment does not require any complicated or mysterious equipment, only careful design.
The early measurements of the speed of light don’t require “modern implements.” They do require quite sophisticated engineering or measurement. In particular, the astronomical measurements are not easy at all. Playing the “how would I prove X to myself” game brought home to me just how hard science is. Already by the 18th century, and certainly by the 19th, professional astronomers were sophisticated enough to do measurements I couldn’t easily match without extensive practice and a real equipment budget.
Suppose you were going to measure the speed of light by astronomy. Stellar aberration seems like the easiest approach, and that’s a shift of 20 arcseconds across a time interval of six months. This is probably within my capacities to measure, but it’s the sort of thing you would have to work at. It would be a year-long or years-long observation program requiring close attention to detail. In particular, if I wanted a measurement of the speed of light accurate to within 10% I would need my measurement to have error bars of about 2 arcseconds. I suspect an amateur who knew what they were doing could manage it, but it’s not something you would just stumble onto as a casual observation.
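For reference, here is where the 20-arcsecond figure comes from; a minimal sketch using the standard textbook value for the Earth’s orbital speed:

```python
import math

# Aberration of starlight: alpha ~ v/c, with v the Earth's orbital speed.
# Standard textbook values, not measurements.

v_earth_m_s = 29.8e3   # Earth's mean orbital speed
c = 299_792_458.0      # speed of light

alpha_rad = v_earth_m_s / c                    # small-angle aberration
alpha_arcsec = math.degrees(alpha_rad) * 3600  # radians -> arcseconds

print(f"aberration constant: {alpha_arcsec:.1f} arcsec")  # ~20.5 arcsec
```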
Is there an easily visible consequence of special relativity that you can see without specialized equipment?
A working GPS receiver.
In general, things like a smartphone “verify” a great deal of modern science.
Yah. Though the immediacy of the verification will vary. When I use my cell phone, I really feel that information is being carried by radio waves that don’t penetrate metal. But I never found the GPS example quite compelling; people assure me “oh yes, we needed relativity to get it to work right” and of course I believe them, but I’ve never seen the details presented, so this doesn’t impress me at an emotional level.
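For what it’s worth, the back-of-the-envelope version of the GPS claim is short; a minimal sketch using textbook constants (and ignoring the Earth’s rotation and orbital eccentricity):

```python
import math

# Back-of-the-envelope GPS relativity check with textbook constants.
# Simplified: ignores Earth's rotation and the satellites' orbital eccentricity.

GM = 3.986e14       # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0   # speed of light, m/s
R_earth = 6.371e6   # Earth's radius, m
r_sat = 2.656e7     # GPS orbital radius, m (~20,200 km altitude)

v_sat = math.sqrt(GM / r_sat)                 # orbital speed, ~3.9 km/s

grav = GM / c**2 * (1 / R_earth - 1 / r_sat)  # clocks run faster higher in the potential
vel = -v_sat**2 / (2 * c**2)                  # moving clocks run slower

drift_per_day_us = (grav + vel) * 86400 * 1e6
print(f"net satellite clock drift: about {drift_per_day_us:.0f} microseconds per day")  # ~38
```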
I don’t know how much my feelings here are idiosyncratic; how similar are different people in what sorts of observations make a big impression on them?
Just direct observation, by the way, gives you little. Yes, you can observe discontinuous spectra of fluorescent lights. So what? This does not prove quantum mechanics in any way, this is merely consistent with quantum mechanics, just as it is consistent with a large variety of other explanations.
I’m not so sure about “consistent with a large variety of other explanations”: my impression is that nobody was able to come up with a believable theory of spectroscopy before Bohr. Can you point to a non-quantum explanation that ever seemed plausible? Furthermore, once you say “okay, spectral lines are due to electron energy-level transitions”, you wind up intellectually committed to a whole lot of other things, notably the Pauli exclusion principle.
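To make the point concrete, here is a minimal sketch of the kind of quantitative success involved: the hydrogen Balmer lines from the Rydberg formula, using the standard constant. The predicted wavelengths match what a cheap diffraction grating shows for a hydrogen discharge tube.

```python
# Hydrogen Balmer lines from the Rydberg formula: 1/lambda = R * (1/2^2 - 1/n^2).
# Standard constant; the predicted lines match what a simple grating spectroscope shows.

R_H = 1.0967758e7   # Rydberg constant for hydrogen, 1/m

for n in range(3, 7):
    inv_lambda = R_H * (1 / 2**2 - 1 / n**2)
    print(f"n={n} -> 2: {1e9 / inv_lambda:.0f} nm")  # 656, 486, 434, 410 nm
```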
I would read this if written well.
Rebutting radical scientific skepticism
Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not “enable goal-accomplishing actions” for him, even in the Bayes world. Is the Cassandra world defined by being powerless?
Powerlessness seems like a good way to conceptualize the Cassandra alternative. Perhaps power and well-being are largely random, and the best possible predictions only give you a marginal improvement over the baseline. Or perhaps the real limit is willpower, and the ability to take decisive action based on prediction is innate and cannot be easily altered. Put another way, “the world is divided into players and NPCs, and your beliefs are irrelevant to which of those categories you are in.”
I don’t particularly think either of these is likely but if you believed the world worked in either of those ways, it would follow that optimizing your beliefs was wasted effort for “Cassandra World” reasons.
It’s a big deal. In particular, I was startled to see Russell signing it. I don’t put much weight on the physicists, who are well outside their area of expertise. But Russell is a totally respectable guy and this is exactly his nominal area. I interacted with him a few times as a student and he impressed me as a smart thoughtful guy who wasn’t given to pet theories.
Has he ever stopped by MIRI to chat? The Berkeley CS faculty are famously busy, but I’d think if he bothers to name-check MIRI in a prominent article, he’d be willing to come by and have a real technical discussion.
I have updated my respect for MIRI significantly based on Stu Russell signing that article. (Russell is a prominent mainstream computer scientist working on related issues; as a result, I think his opinion has substantially more credibility here than the physicists’.)
Maybe it’s the allure of alarmism, but aren’t we mostly concerned with predicting catastrophe? This is kind of like saying you can predict the weather except for typhoons and floods.
I think the analogy goes the other way. A weather forecast that didn’t cover catastrophes would still be useful. I like knowing if it’s going to be rainy or sunny, wet or dry.
Similarly, I find it useful to know in a general sense which way short-term interest rates are going, how much inflation to expect over the next few years, and whether the job market is getting better or worse from quarter to quarter.
Yes, sometimes there are external shocks or surprising internal developments, but an imperfect prediction is still better than none.
It goes back much, much further. Newton was appointed to the Royal Mint, Leibniz worked for several rulers, Galileo was directly funded by the rulers of Florence, etc. (I specifically named people from what I consider to be the beginning of science.) The tradition in democratic governments dates back to WW2, but the tradition itself is much older.
All those people had government funding, but with the possible exception of Galileo, it wasn’t funding “for basic science.”
The one I know most about is Newton, and the example seems clearly misleading here. When he went to the mint he was already well-established and had done much of his important scientific work. (The Principia came out in 1687, and Newton went to the mint in 1696.) Moreover, this wasn’t a funding source for scientific pursuits. He devoted a huge amount of time and energy to running the mint, including personally investigating and prosecuting counterfeiters. (See Thomas Levenson’s entertaining Newton and the Counterfeiter.)
Leibniz, as near as I can gather from Wikipedia, was there to be an ornament to the court of Hannover, but it’s not at all clear that they cared about his scientific or mathematical achievements. Can you point me to something specific?
Galileo was a professor, so I suppose that counts.
I grant that governments and rulers have funded philosophers and professors for a long while. But big-money research, with billion-dollar budgets and massive labs with thousands of researchers, is much newer.
The difference between “applied” and “basic” is the difference between medical research and biology. While biomed is booming, it’s incredibly hard to get a job as a biologist (Douglas Prasher is the norm, not the exception).
I don’t think this is strong evidence for “insufficient funding.” In the US, and to some extent elsewhere, research money is channeled towards graduate student assistantships and fellowships, and away from full-time mid-career researchers. As a result, regardless of the total degree of funding, the population of researchers is skewed towards young people. In consequence, there is a fierce competition for permanent jobs.
Traditionally, the government helped to fill the gap in profitable intermediate milestones by massively subsidizing basic research, but the push within universities for profitable patents (and a few other factors) has shifted the system towards more and more applied research.
It’s a relatively recent tradition. Serious government funding of basic research didn’t start until after WW2, as near as I can tell, anywhere in the world.
Also I am skeptical that “basic” and “applied” research is a useful distinction these days. A large fraction of science funding goes to biology and medical research. Understanding the mechanisms underlying biological processes gets you pretty far towards coming up with treatments that alter those mechanisms. But if “understanding how cells work, exactly” isn’t basic research, what is?
Yes. The NSA isn’t a threat I worry about, since I figure they could get my stuff via a demand to Google, if they wanted it. I am primarily worried about non-government-aided criminals. See Steve Bellovin’s analysis for why this isn’t so suitable an attack for that class of adversary.
Context: I want to give some insight as to why I (and others) voted for “not changing password, not very worried” and as to why the company is not telling everybody to change password immediately.
I agree that the fact that patches were needed does imply that they were running the bad OpenSSL versions. The company is saying, on the record, that people do not need to change passwords. And this matches what I am hearing informally from friends who work there.
Is it good hygiene to change passwords? Yes. Given two-factor authentication and perfect forward secrecy, it might not be super critical though.
I have friends who do security at Google, and they explicitly told me “we don’t think the company was vulnerable and you don’t need to change your Gmail password.” So as near as I can tell, the third-party sites and Google, Inc., disagree about whether Google is vulnerable here.
This does sometimes happen. A recent very impressive example was the collective effort to improve the bound on gaps between primes.
I think not so naive as all that. The effectiveness of a security measure depends on the threat. If your worry is “employers searching for my name or email address” then a pseudonym works fine. If your worry is “law enforcement checking whether a particular forum post was written by a particular suspect,” then it’s not so good. And if your worry is “they are wiretapping me or will search my computer”, then the pseudonym is totally unhelpful.
I think in most LW contexts—including drug discussions—the former model is a better match. My impression is that security clearance investigations in the United States involve a lot of interviews with friends and family, but, at the present time, don’t involve highly sophisticated computer analysis.