What was your biggest recent surprise?
I recently flipped through the “Cartoon Guide to Physics”, expecting an easy-to-understand rehash of ideas I was long familiar with—and that’s what I got, right up to the last few pages, where I was presented with a fairly fundamental concept that’s been absent from the popular science media I’ve enjoyed over the years. (Specifically, that the uncertainty principle, when expressed as linking energy and time, explains what electromagnetic fields actually /are/: the propensity for virtual photons of various energies to appear.) I find myself happy to try to integrate this new understanding—and at least mildly disturbed that I’d been missing it for so long, and newly curious about how I might find any other such gaps in my understanding of how the universe works.
So: what’s the biggest, or most surprising, or most interesting concept /you/ have learned of, after you’d already gotten a handle on the basics?
I was surprised that it is possible to apply simple(?) signal processing techniques to extract subtle signals from a video, e.g. somebody’s heartbeat.
Surprise levels:
1) I never thought of that (that there could be useful hidden signals in standard video). Their paper references a few other attempts at this.
2) If I had thought of it, or someone had mentioned the idea, I would have guessed that those signals are not strong enough to be extracted by any method.
3) And, even if there were a signal, I would have thought it would take very powerful techniques and many assumptions (like manually annotating where you expect to see the heartbeat, etc.) to make it work. Far less of that is required than I’d have expected, judging from the paper.
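For intuition, here’s a toy sketch of the core idea in Python (synthetic data standing in for per-frame pixel intensities; the frame rate, band edges, and pulse frequency are made up, and this is nothing like the paper’s full pipeline):

```python
# Toy sketch: treat each frame's average skin-pixel intensity as a 1-D time
# series, band-pass it around plausible heart rates, and read off the peak
# frequency. (Synthetic stand-in data; not the paper's actual method.)
import numpy as np
from scipy.signal import butter, filtfilt

fps = 30.0                      # assumed camera frame rate
t = np.arange(0, 30, 1 / fps)   # 30 seconds of "video"

# Stand-in for the per-frame mean green-channel intensity of a face region:
# a faint 1.2 Hz (72 bpm) pulse buried in noise.
signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.1, t.size)

# Band-pass 0.7-3.0 Hz (42-180 bpm), the physiologically plausible range.
b, a = butter(3, [0.7, 3.0], btype="bandpass", fs=fps)
filtered = filtfilt(b, a, signal)

# Dominant frequency of the filtered series gives the heart-rate estimate.
freqs = np.fft.rfftfreq(filtered.size, 1 / fps)
peak = freqs[np.argmax(np.abs(np.fft.rfft(filtered)))]
print(f"Estimated heart rate: {peak * 60:.0f} bpm")   # ~72 bpm
```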
In the same way, we could extract extra detail from an astronomy video. One hour of video of a distant planet might be worth a much bigger telescope. Long exposure times were the first step in this direction, long ago.
Current exoplanet detection is another, bigger step.
We simply don’t yet use all the information we have.
Astronomy is an interesting connection to think about with respect to this work. In astronomy, we’re integrating the light received. In some sense this is dynamic, because there are small variations due to the atmosphere, but the underlying signal is assumed to be static. I guess there are pulsars where we don’t expect that; maybe there people have to apply similar techniques (filtering out dynamics, e.g. from the atmosphere, at frequencies far from those expected from pulsars?)
You, an astronomer, should always ask yourself: given this light pattern over time, what is the most probable source that would produce it? Be it static or dynamic, whichever fits best.
The standard approach is to simulate multiple possible sources and use Bayesian or maximum-likelihood techniques to evaluate which ones match the data best, and whether the best is a good enough fit. The waveform matching in LIGO is one of the extremes, given how weak the potential signal is.
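To make “simulate sources and see which fits best” concrete, here’s a toy sketch (made-up chirp templates and noise levels; real pipelines like LIGO’s are enormously more careful):

```python
# Toy template matching: generate noisy data containing one of several
# candidate waveforms, then score each template by its correlation with
# the data (a crude stand-in for a matched filter / likelihood ratio).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)

def chirp(f0, f1):
    """Template: sinusoid sweeping linearly from f0 to f1 Hz over the interval."""
    return np.sin(2 * np.pi * (f0 + (f1 - f0) * t / 2) * t)

templates = {"slow chirp": chirp(5, 20), "fast chirp": chirp(5, 60)}
data = 0.3 * templates["fast chirp"] + rng.normal(0, 1, t.size)  # weak buried signal

# Higher correlation = better fit (for white noise this tracks the likelihood).
for name, h in templates.items():
    score = np.dot(data, h) / np.sqrt(np.dot(h, h))
    print(name, f"score = {score:.1f}")
```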
A very salient moment of surprise was when I realized that my mental model of a simple three-quark proton was deeply (or simply) wrong:
http://profmattstrassler.com/articles-and-posts/largehadroncolliderfaq/whats-a-proton-anyway/
What still surprises me, whenever I think of it, is how we live in such a big world, even on the smallest scales we are able to probe. And also that things like nuclei happen to be stable over long enough timescales for things like chemistry and life to occur.
All of those gluons and quark-antiquark pairs are every bit as stable as the Earth’s gravitational field. They’re elements of the ground state for a quark.
The process of finding the ground state for a particle from its interactions, including dragging in virtual pairs to screen high field intensities around the singularity, is called renormalization.
For an explanation using more showing and less telling: Checking what’s inside a proton
(The author is writing a book on high-energy physics in blog form.)
Not really related to any explicit field of study, but...
Most recently, I was surprised by the extent to which the Japanese still use faxes.
Before that, I was really surprised by the whole Planetary Resources thing. My model of the world claimed that aside from some relatively minor stuff like space tourism and such, plausible pushes to actually do something new and non-trivial in space simply do not happen, and that there would be essentially no real progress in any kind of space exploration before the Singularity. At best, there would be a new private space station in orbit, or NASA would announce a manned Mars mission that would get quietly killed by budget cuts a few years later. Having a bunch of billionaires announce a real effort to actually mine asteroids was something that made it slightly easier for me to alieve in the Singularity happening some day. Before, both asteroid mining and the Singularity used to belong to the mental category of “things that I intellectually acknowledge as possible, but which would be such huge changes to the current paradigm that on a gut level, I don’t really grasp either of them happening”.
Was it just something which made it easier to “alieve” (in contrast to just believing) in a Singularity, or do you think this information was good evidence for updating towards a Singularity being more likely? (E.g., because it shows that billionaires might invest in such crazy projects.)
I don’t think it changed my beliefs about the probability of a Singularity, only my aliefs about whether “science fiction-like” events could happen.
I learned that I’m not crazy for having been confused by the double-speak I was taught in college about “observation” in Quantum Mechanics and that maybe there’s a community where I can get straight answers to things.
According to RolfAndreassen:
According to Douglas_Knight:
Thanks Less Wrong.
That there is reason to believe it is “relatively easy” (say, if we survive x-risk and get a good singleton within a million years) to colonize billions of galaxies. That makes the expected hedonic utility of x-risk reduction (ignoring the possibility of discovering new useful physics, creating universes, etc.) up to some nine orders of magnitude greater than I had previously thought.
I think my biggest recent surprise was the notion that AI didn’t have to be sentient to be worth taking seriously. Seemed like such a simple idea to overlook but it totally changed my worldview.
I find it striking that Google and Wolfram Alpha are more useful at helping people answer questions than greater-than-human-intelligence AI was imagined to be in most older sci-fi novels.
Not very recent, but...
I was surprised way back when I learned that we had already located some neurons which seem to encode the expected utility of possible actions. (‘Utility’ here isn’t meant in the philosophical sense but in the neuroeconomic sense.)
I also remember being amused 1+ years ago when I did some more studying in AI and decision theory and learned that all currently described AI agents are Cartesian dualists. (This is old news ’round these parts, I know.)
I don’t quite understand what you mean by that, can you elaborate?
Some AIs have a limited understanding of their own bodies; they can learn kinematic models of the actuators in the robots they control, or form “affordances”, ideas about what kinds of interactions with their environments they can effect. But very few (apparently no?) cognitive architectures or AI designs model their minds as algorithms executing on their computing hardware, so whatever metacognitive representation and processing they have, it’s “disembodied”, like old ideas of the mind being made of spooky stuff. The combination of physical bodies and spooky minds is called Cartesian dualism, after the philosopher René Descartes.
Even though the composition of two rotations is a rotation on a standard sphere, the same is not true for higher dimensional spheres. Possibly even weirder, on a sphere the composition of two periodic rotations is not necessarily periodic.
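A quick numerical sanity check of the first claim for n = 4 (a sketch with arbitrary angles):

```python
# Compose a rotation in the (x1,x2)-plane with one in the (x3,x4)-plane and
# check whether the product could be a single rotation about a 2-subspace.
import numpy as np

def plane_rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R1 = np.eye(4); R1[:2, :2] = plane_rotation(0.7)   # rotate in span(e1, e2)
R2 = np.eye(4); R2[2:, 2:] = plane_rotation(1.3)   # rotate in span(e3, e4)
P = R1 @ R2

# A simple rotation of R^4 fixes a 2-plane pointwise, so it would have
# eigenvalue 1 (twice). Here the eigenvalues are e^{±0.7i} and e^{±1.3i}:
print(np.round(np.linalg.eigvals(P), 3))   # no eigenvalue 1 -> not a simple rotation
```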
This seems really unsurprising to me given the infinite dihedral group being generated by two reflections.
I don’t get the first thing. Isn’t SO(n) the group of rotations of the unit n-sphere?
In this context rotations are rotations about some (n-2)-subspace by some angle, rather than all orientation-preserving isometries.
Wow, I didn’t know that. It makes sense now I think about it though; SO(n) must be something like an n(n-1)/2 dimensional space, but the space of rotations about an (n-2)-subspace must be … err … something smaller—maybe 2n-3 dimensional? I may be abusing the idea of dimension here...
First of all, terminology. SO(n) is orientation-preserving orthogonal transformations on n-space, or equivalently the orientation-preserving symmetries of an (n-1)-sphere in n-space. So Joshua’s statement is about SO(n) for n>3.
OK. So the obvious way to interpret “rotation about an axis” in many dimensions is: you choose a 2-dimensional subspace V, then represent an arbitrary vector as v+w with v in V and w in its orthogonal complement, and then you rotate v. The dimension of the set of these things is (n-1)+(n-2) from choosing V—you can pick one unit vector to be in V, and then another unit vector orthogonal to it—plus 1 from choosing how far to rotate. So, 2n-2.
And yes, the dimension of SO(n) is n(n-1)/2. One way to see this: you’ve got matrices with n^2 elements, and n(n+1)/2 constraints on those elements because all the pairwise inner products of the columns (including each column with itself) are specified.
These dimensions are all topological dimensions rather than vector-space dimensions, since the sets we’re looking at aren’t vector subspaces of R^(n^2), but there’s nothing abusive about that :-).
It can’t be 2n-2 because it’s 3 when n=3. I get 2n-3 because the first vector is chosen with n-1 degrees of freedom, then the second with n-2, then subtract one because of the equivalence class of rotations, then add one for choosing how far to rotate.
EDIT: More generally, I think that the dimension of the space of k-dimensional subspaces of an n-dimensional space is k(n-k), so for k=2 you get 2n-4, then add one for choosing how far to rotate. I’d feel better if I knew what I meant by “dimension” here, though; it’s not a vector space.
These are the best references I know:
Spivak, Calculus on Manifolds
Boothby, An Introduction to Differentiable Manifolds and Riemannian Geometry
As for topological dimension: roughly, consider a neighborhood of a point in the space; what does the space look like from there? Locally it’s Euclidean if you’re “on” a manifold. The rigorous definition involves charts. See also Lebesgue covering dimension.
Meh, you’re right: the dimension of the space of 2-dimensional subspaces of n-space is 2n-4, not 2n-3. The reason why my handwavy dimension-counting above was wrong is (“of course”) that I failed to “subtract one because of the equivalence class of rotations”. And yes, you’re right that in general it’s k(n-k).
“Dimension” here means: locally the set looks like a that-many-dimensional vector space. That is, e.g., any element of SO(n) has a neighbourhood that’s topologically the same as a neighbourhood in R^(n(n-1)/2).
This is correct.
The number of parameters you need to label each element (provided the labelling is a continuous function; otherwise you could label points of R^2 with a single parameter by interleaving digits, e.g. (3.1415..., 2.7182...) → 32.174118...)
To make this precise, you need the idea of “charts” and “atlases” that witzvo references.
I don’t recall encountering this usage before. Is it widespread?
It is certainly in use; I don’t know how widespread it is. Generally in high dimensions one is just interested in SO(n) anyway, so there’s not much need to make the distinction in most contexts.
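Collecting the dimension count from this subthread in one place (restating what was established above, with “dimension” in the manifold sense discussed):

```latex
% Dimensions established in the thread:
\dim SO(n) = \frac{n(n-1)}{2},
\qquad
\dim \{\text{rotations about an }(n-2)\text{-subspace}\}
  = \underbrace{2(n-2)}_{\text{choice of }2\text{-plane}}
    + \underbrace{1}_{\text{angle}}
  = 2n - 3.
% For n >= 4 we have 2n-3 < n(n-1)/2 (e.g. 5 < 6 at n=4), so these "simple"
% rotations form a lower-dimensional subset of SO(n) and cannot exhaust it,
% consistent with their compositions generally falling outside the set.
% At n=3 the two dimensions agree (3 = 3).
```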
Just figured something new out, based on my original post here.
The energy/time version of the uncertainty principle says that virtual particles of any given energy can spontaneously appear—but the bigger the energy, the shorter they last. This explains why the strength of electromagnetism falls off at a distance—virtual photons with high energies last for short times and thus travel short distances, while virtual photons with low energies can last for longer times and travel longer distances. All straight from the book.
But I just recalled that other forces, the strong and weak, are described as having a range limitation. I’ve always read about that range limit existing—but since no reason was given for it, and I couldn’t figure it out, I just shrugged my shoulders with an assumption of ‘quantum weirdness’. But now I have an idea /why/ that range limit exists: since any virtual particle of those forces carries a minimum energy, in the form of its rest mass, the uncertainty principle thus also imposes a maximum lifespan, and thus a maximum range.
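A back-of-the-envelope version of that argument (standard textbook reasoning; the W-boson figure is from memory, not from the book):

```latex
% Range estimate for a force carried by particles of rest mass m:
\Delta E \,\Delta t \gtrsim \hbar,
\qquad
\Delta E \ge m c^2
\;\;\Rightarrow\;\;
\Delta t \lesssim \frac{\hbar}{m c^2},
\qquad
r \sim c\,\Delta t \lesssim \frac{\hbar}{m c}.
% For the weak force's W boson, m c^2 \approx 80\,\mathrm{GeV}, so
% r \sim \hbar c / (m c^2) \approx 197\,\mathrm{MeV\,fm} / 80000\,\mathrm{MeV}
%   \approx 2.5 \times 10^{-3}\,\mathrm{fm};
% massless carriers (m \to 0) give infinite range.
```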
It’s been such a long time since I’ve had a chance to figure out something about physics that I wasn’t simply directly told; it’s a surprisingly pleasant experience. :)
(Now I’m wondering: since gravity’s range is infinite, does this idea imply that if gravity is transmitted by force-particles rather than space-curvature (assuming that’s a distinction with meaning), then the virtual gravity force-carrying particles have to be able to have arbitrarily small energies, and thus no significant rest mass?)
Your insight about forces carried by massless vs. massive particles and their respective ranges is absolutely correct. Congratulations!
It is generally agreed that the still-to-be-constructed theory of quantum gravity will have gravitons, particles carrying the gravitational force analogous to photons for the EM field, and yes, gravitons should be massless as you argue. This is not however in conflict with the description of gravity as space-time geometry. Though the full details will have to wait till we understand quantum gravity completely, provisionally we can make unambiguous sense of gravitons at the perturbative level: think of a gravitational wave as a small ripple in spacetime; then one can quantize this perturbation, and gravitons are to the wave as photons are to classical EM waves.
I just had a big “update”.
EDIT: I’m a little less sure now. See the end.
I found something that teaches programming at an immediate level to non-programmers, without their knowing they are programming, and without any cruft. I always wished this were possible, and now I think we’re really close.
If you want to get into programming, and are a visual thinker, but never could get over some sort of inhibition, I think you should try this. You won’t even know you’re programming. It may not be “quite” programming, but it’s closer than anything else I’ve seen at this level of simplicity. And anyway, it’s fun and pretty.
The important thing about this “programming” environment is that it is completely concrete. There are no formal “abstractions”, and yet it’s all about concrete representations of the ideas formerly known as abstractions.
Enough words. Take a look: http://recursivedrawing.com/
[I was excited because to me this seems awfully close to the untyped lambda-calculus, made magically concrete. The “normal forms” are the “fixed points” are the fractals. It’s all too much and requires more thought. It only makes pictures, though, for now. However, I can’t see anything in it like “application” so… the issue of how close it is seems actually quite subtle. Somehow application’s being bypassed in a static way. Curious. I’m sure there’s a better way to see it I just haven’t gotten yet.]
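For the curious, here is a minimal text-only sketch of the “shape defined in terms of a smaller copy of itself” idea (hypothetical toy code; not how recursivedrawing.com is implemented):

```python
# A shape defined in terms of a scaled, rotated copy of itself, evaluated
# until the copies become negligibly small -- roughly, the "fixed point"
# the picture settles into. (Toy sketch only.)
import math

def spiral(x, y, size, angle, segments=None):
    """Return line segments for a shape containing a smaller copy of itself."""
    if segments is None:
        segments = []
    if size < 1.0:              # base case: copies too small to see
        return segments
    dx, dy = size * math.cos(angle), size * math.sin(angle)
    segments.append(((x, y), (x + dx, y + dy)))
    # The self-reference: the same shape, scaled down and rotated.
    return spiral(x + dx, y + dy, size * 0.8, angle + 0.5, segments)

print(len(spiral(0.0, 0.0, 100.0, 0.0)), "segments")   # 21 segments
```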
PS: Blue! Blue! Blue! (**)
** This is a joke that will only make sense if you’ve read The Name of the Wind by Patrick Rothfuss. If you prefer to spoil yourself, here, but buy the book afterward if you like it.
cross-posted here [I’m not sure about the etiquette, but I think this idea deserves not to be lost in an old thread.]
A flash game along the same lines:
http://pleasingfungus.com/Manufactoria/