Stupid Questions, December 2015
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don’t be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people’s admitting ignorance and don’t mock them for it, as they’re doing a noble thing.
I was into dinosaurs when I was a kid, and now I’m teaching my kids about dinosaurs. In light of how much our understanding of dinosaurs has drifted in 30 years, and after learning that certain dinosaur species are basically extrapolations based on, like, a single bone, I’m trying to get a sense of what we actually know about dinosaurs versus what is just being made up to fill in the gaps.
As an example of what I’m talking about, the wingspan estimate for Quetzalcoatlus keeps being revised downward. They seem to have a few skeleton fragments, and then they extrapolated what the rest of the skeleton looked like based on other species which they assume to be similar. I find myself wondering if at some point they’re just going to come out and say, “Yeah, this thing never flew at all, the assumption that it was just a bigger version of other azhdarchid species was completely wrong, sorry.” Sometimes I read passages that suggest that pterodactyls may not have been fliers at all.
We’ve seen many similar revisions, such as the famous “T. Rex was a scavenger” and “all these dinosaurs had feathers”. So if I had a pithy version of this “stupid question” it would be “What do we actually know about dinosaurs?”
I think that the basic answer is that they were really big, and looked kind of like dinosaurs. We certainly know more than that, but most of what we know is either highly technical, or just deductions that you might not want to consider “really knowing”. There is also a third category, with things like “laid eggs” and “underwent a mass extinction event” that you already know and are probably not interested in hearing about again.
It is worth noting that although they were the dominant land animal for about 135 million years, we only have about 1,000 clearly identified species (to help put this in perspective, we have two clearly defined species identified in the genus Triceratops, one of the most recent of the dinosaur genera… as opposed to, for example, about 30 species for the genus Homo {not a very prolific genus}. These two genera, Triceratops and Homo, spent approximately the same amount of time on planet Earth.) It is possible that the less than 1% of dinosaur species that we happen to have stumbled across are a good representative sample of the clade, but it is also possible that they are not.
But… We do know that many dinosaurs had feather-like structures. We do know that T. Rex could eat you, even if you were hiding in your car, and there is some evidence of them attacking live prey (prey that escaped with wounds that were able to start healing before they died). Pterodactyls could fly, although some early scientists in the 17-1800s doubted this, and for a while they were believed to be swimmers. (But pterodactyls are not technically dinosaurs.)
Studying dinosaurs is still just peeking at the very edge of a very large and mysterious world, and that is probably more useful for kids to learn about than hunting down the few facts we know for sure. But I do agree that the information out there is not presented very well; it should all be in the form “we think X because of Y”, but it is generally boiled down for popular audiences to something like “MAYBE X!”
Monty Python—Theory on Brontosauruses
Once in a while I read somewhere online an article that tells people not to worry about sexually transmitted diseases, because they are rare, and most of them can be easily cured by antibiotics anyway, so the dangers of having a lot of sex with random people are exaggerated. (And then the article often becomes political and starts explaining why the bad guys—the conservatives—want to scare you into having less happiness in your life. Because they are stupid and evil, duh.)
How realistic is this? The argument about frequency of diseases in population ignores the fact that the risk is not distributed evenly. For reasons similar to “why your average Facebook friend has more friends than you (because having a lot of friends makes them also more likely to become your friend)”, having a lot of sex with random people will make you more likely to have sex with partners who also have a lot of sex with random people, therefore the risk is higher than the statistics calculated for people with average behavior would suggest. (Seems to me that the usual hypocrisy could actually be a good strategy here: if you decide to have sex with many partners, it still makes sense to avoid people known to have sex with many partners.)
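A minimal simulation sketch of that sampling effect, for anyone who wants to see it numerically; the geometric distribution and every number in it are invented purely for illustration, not taken from any real data:

```python
# Toy illustration of "your partners have more partners than you".
# All numbers are made up; only the qualitative effect matters.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: lifetime partner counts, heavily skewed.
partners = rng.geometric(p=0.3, size=100_000)   # mean ~3.3

print("mean partner count of a random person: ", partners.mean())

# The person you actually end up with is sampled in proportion to how many
# partners they have (size-biased sampling), so their average is higher.
weights = partners / partners.sum()
sampled = rng.choice(partners, size=100_000, p=weights)
print("mean partner count of a random *partner*:", sampled.mean())
# Naive population-level STD prevalence therefore understates the risk
# faced by someone who sleeps with many randomly encountered partners.
```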
However the part “can be cured by antibiotics” also deserves some attention. The words “can be” do not necessarily imply ~100% success, although the article can create such an impression. If I understand it correctly, using antibiotics is like carpet bombing the microorganisms in your body: you will do a lot of damage to your gut flora, but the intended target is likely to survive. There is also the evolutionary arms race against the diseases: the more people rely on the “antibiotics can cure anything” strategy, the greater evolutionary pressure there is on bacteria to mutate into variants resistant to the known antibiotics. And the bacteria can mutate much faster than we can invent new antibiotics. This seems like a “tragedy of the commons” scenario.
I’m interested in your opinions in general, but especially whether my reasoning about the antibiotics is more or less correct.
(I am not any kind of medical expert, so the value of this comment is very limited. But you did ask for general opinions.)
I think you are clearly correct about the likelihood of encountering a partner with an STD being higher than naive calculations would suggest. It isn’t clear to me how much higher. If you are at least taking typical safe-sex precautions then the probability of transmission can be quite low even if your partner does have an STD; I fear there’s no good substitute for actually doing the calculations, for someone who actually wants to make a sensible decision on this and would if possible like to have a lot of sex with a lot of people.
My impression (based on almost exactly zero information) is that antibiotic treatment for most STDs is very close to 100% successful at present, but I would be concerned about hastening the development of antibiotic resistance in STDs, on account of the implications for other people. (If you[1] can guarantee that if diagnosed with an STD and treated with antibiotics you will have absolutely no sex with anyone until you’re definitely cured, maybe that’s not a factor.)
Also, of course, antibiotics will do you no good at all if you get, say, genital herpes or HIV.
[1] Meaning a hypothetical person wondering about this stuff for practical reasons, not necessarily you.
https://srconstantin.wordpress.com/2015/04/30/std-statistics/ has more information.
As far as sexually transmitted diseases go, a lot of it is a “tragedy of the commons”. At the population level, everybody profits from reducing the spread of sexually transmitted diseases.
This is one of those times where I cheer for Theranos. Having cheaper blood tests that only take a drop of blood for those diseases will allow our society to act very differently when it comes to sexually transmitted disease.
This is one of those “bravery debate” things. The statement is not precise enough to be true or false. Some people definitely overestimate the risks. Are they the audience for this? Are you among them? I don’t know.
My possibly stupid question is: “Are some/all of LessWrong’s values manufactured?”
Robin Hanson brings up the plasticity of values. Humans exposed to spicy food and social conformity pressures rewire their brain to make the pain pleasurable. The jump from plastic qualia to plastic values is a big one, but it seems plausible. It seems likely that cultural prestige causes people to rewire things like research, studying, etc. as interesting/pleasurable. Perhaps intellectual values and highbrow culture are entirely manufactured values. This seems mildly troubling to me, but it would explain why rationality and logic are so hard to come by. Perhaps the geek to nerd metamorphosis involves a more substantial utility function modification than merely acquiring a taste for something new.
Define manufactured? There isn’t really any “default” culture to compare current human values to, for which you could say that “these values are manufactured because they don’t manifest in the default culture”.
By “manufactured values” I meant artificial values coming from nurture rather than innate human nature. Obviously there are things we give terminal value, and things we give instrumental value. I meant to refer to a subset of our terminal values which we were not born with. That may be a null set, if it is impossible to manufacture artificial values from scratch or from acquired tastes. Even if this is the case, that wouldn’t imply that instrumental values could not be constructed from terminal values as we learn about the world. There are 4 possible categories, and I meant only to refer to the last one:
Innate terminal values: “Being generous is innately good, and those who share are good people.” (Note: “generosity admired” is on the list of human universals, so it’s likely to be an innate value we are born with.)
Innate instrumental values: N/A (I don’t think there is anything in this category, because innate human values in babies precede the capacity to reason and develop instrumental values. Maybe certain aesthetic values don’t express themselves until a baby first opens its eyes, and so there could be reasoned instrumental values which are more “innate” than aesthetic values.)
Learned instrumental values: “eating spicy food is good to do because it clears your sinuses”
Learned terminal values (that is, “manufactured” values): “Bacteria suffering matters, even though I have no emotional connection to them, because of these abstract notions of fairness.” Or, alternatively “Eating spicy food is a pure, virtuous activity in its own right rather than for some other reason. Those who partake are thus good people, and those who don’t are unclean and subhuman.” The former is merely extrapolated from existing values and dubbed a terminal value, while the latter arises from an artificially conditioned aesthetic.
To use a more LW-central example, those of us who favor epistemic rationality over instrumental rationality do so because true knowledge is a terminal value for us. If this value is a human universal, then that would be strong evidence that every neurotypical baby is born valuing truth, and therefore that truth-seeking is a terminal value. If only a few cultures value truth, then it would seem more plausible that truth-seeking was a manufactured terminal value or an instrumental value.
To test ideas like this, we can look at the terms on the list related to epistemic rationality: abstraction in speech & thought, classification, conjectural reasoning, interpolation, logical notions [there are several examples on the list], measuring, numerals (counting), overestimating objectivity of thought, semantics [several semantic categories are also on the list], true and false distinguished. So, either all cultures get a lot of value out of instrumental truth-seeking, or truth-seeking is an innate human value. Judging by the curiosity of children, I’m strongly inclined toward the latter. Perhaps LW users have refined and accentuated their innate human curiosity, but it certainly doesn’t seem like a manufactured value.
But it looks like you guys forced me to make my question specific enough that I could answer it empirically. I could just take each item on the list of the twelve virtues of rationality, or any other list I thought gave a good representation of LW values or intellectual values. Just cross-reference them against a couple lists of human universals and lists of traits of small children. If very small children display a value, it’s probably innate, but may be learned very early. If no infants have it but some/all adults do, it’s probably a learned value developed later in life. If it seems like it is probably a learned value, and seems subjectively to be a terminal value, then it is manufactured.
Also, to be clear, just because something is manufactured doesn’t make it a bad thing. To say so is to commit the naturalistic fallacy. However, altering one’s utility function is scary. If we are going to replace our natural impulses with more refined values, we should do so carefully. Things like the trolley problem arguably segregate people who have replaced their default values with more abstract utilitarian notions (value all lives equally, regardless of in-group or a sense of duty). Extrapolating new values from existing ones doesn’t seem as dangerous as deriving them from acquired tastes.
I don’t think that this distinction really cuts reality at the joints. In general, it’s my impression that researchers have been moving towards rejecting the whole nature/nurture distinction, as e.g. hinted at in the last paragraph of the Wikipedia article that you linked.
More specifically, as the Hanson article you linked to notes, the human mind seems pretty much built for a very large degree of value plasticity, and for being capable of adopting a wide range of values depending on its environment. That by itself starts to make the distinction suspect—if it’s easy for us to acquire new terminal values via nurture because our nature is one that easily adopts new kinds of values that come from nurture… then how do you tell whether some value came more from nurture or nature? If both were integral in the acquisition of this value, then it’s unclear whether the distinction makes any sense.
One way of looking at it: an artificial neural network can in principle learn any computable function. So you take an untrained network, and teach it to classify things based on which side of the line drawn by the function 2X + 6 they fall on. Does the property of classifying things based on the function 2X + 6 come from nature or nurture? Arguably from nurture, since without that particular training data, the neural net wouldn’t have learned to classify things according to that specific function. But on the other hand “learning any function” is in the untrained neural network’s nature, so just because something came from nurture, doesn’t mean that the intervention from nurture would have shifted the neural network away from some function that it would have learned to compute in the absence of any intervention. In the absence of any intervention from nurture, the neural network wouldn’t have learned to discriminate anything.
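A toy version of that analogy, just to make it concrete (a single logistic unit rather than a real network, and every number below is invented):

```python
# An untrained unit has no particular boundary "in its nature"; trained on
# points labelled by whether y > 2x + 6, it ends up computing roughly that
# specific function. Nature supplies the capacity, nurture picks the function.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(1000, 2))             # random (x, y) points
labels = (X[:, 1] > 2 * X[:, 0] + 6).astype(float)   # the "training data from nurture"

w, b = np.zeros(2), 0.0
for _ in range(5000):                                 # plain gradient descent on logistic loss
    z = np.clip(X @ w + b, -30, 30)
    p = 1 / (1 + np.exp(-z))
    w -= 0.1 * (X.T @ (p - labels)) / len(X)
    b -= 0.1 * np.mean(p - labels)

# The learned boundary w[0]*x + w[1]*y + b = 0 should roughly match y = 2x + 6.
print("slope     ~", -w[0] / w[1])   # close to 2
print("intercept ~", -b / w[1])      # close to 6
```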
Similarly, without a culture surrounding us we’ll just end up as feral children (though arguably even feral children grow up in some culture, like an animal one). We’re clearly born with tendencies towards manifesting some values more likely than others, but in order for those tendencies to manifest, we also need a culture that manufactures things on top of those tendencies. Similar to how different neural net architectures will make the net more predisposed towards learning a specific function more easily, but they still need the environmental training data to determine which function is actually learned.
Similar to the neural net analogy—where the NN has the potential to learn an infinite number of different functions, and training data selects some part of that potential to teach it specific functions—Jonathan Haidt has argued that different cultures take part of the pre-existing potential for morality and then select parts of it, so that the latent “potential morality” becomes an actual concrete morality:
To take your proposed test, of taking a value and trying to find out how cross-cultural it is: consider appreciation of novels, movies, and video games. On one hand, you could argue that an appreciation of these things is clearly not a human universal, because cultures that haven’t yet invented them don’t value them. And there are cultures such as the Amish that reject at least some of these values. On the other hand, you could argue that an appreciation of these things comes naturally to humans, because these are all art forms that tap into our pre-existing value of appreciating stories and storytelling. But then, that still doesn’t prevent some cultures from rejecting these things...
First example that comes to my mind: Most cultures value killing their enemies (or something like that). However, LW culture prefers to find a way to make everyone happy (by inventing Friendly AI, donating to effective charity, etc.).
An uncharitable explanation would be that nerds are usually physically weak, and even if they happen to be strong individually, they would still be weak as a group (because most of them are weak as individuals, most people are not nerds, you cannot easily “convert” people into nerds, etc.)… so we have this “learned helplessness” about the basic human value of exterminating your enemies, and we deny having this value.
But if you changed the laws of the universe so that understanding equations allowed you to shoot fireballs directly from your fingers (with Bayes’ rule as the most powerful fireball), LessWrong local groups would quickly turn into some kind of mage-Nazi militant groups, and we would all laugh diabolically at the pain of our enemies.
I think many LW values are manufactured. I think you detect which are most manufactured by looking at the ones not widely held by other humans. Values like “you should get your head frozen when you die” are probably at the most manufactured end, as they are nearly unique to LW and fellow travelers. Values like polyamory are pretty manufactured but do show up in a larger minority of non-LW types than head freezing. The value that a world with 3**3 created AIs in it that are a little happy is better than a world with 1 billion humans in it who are all living quite well is manufactured. It is certainly held beyond LW, but plenty of people hold the opposite value, that a better world would have a sustainable biological human population.
In my opinion, the human values that are not manufactured are the ones you are born with. They feel more like moral sentiments than coherently stated values, because, I think, you aren’t born with ideas, you are born with tendencies to feel certain ways. From your moral sentiments, and discussions with other people, you build ideas that, in my opinion, you think help you explain why you have your moral sentiments. In my opinion you have your moral sentiments, and thinking your value ideas account for them is like being attracted to another person and “thinking” that means they are attracted to you: it is a form of projection, a human bias, very helpful in propagating the species but not particularly well suited to accurately explaining how the world works.
It isn’t just LW values that are manufactured, in my opinion, all values expressed as ideas are manufactured. This is why they have to be taught to be propagated, no particular set of values expressed as ideas arise spontaneously in a large number of humans.
I don’t think that manufactured is a useful word here. If I were to try to use it, I would say that any LessWrong value that you gained from LessWrong was “manufactured in you”. I would also say that any value commonly expressed on LessWrong has been shaped beyond the form in which it was originally conceived, and ‘manufactured’ in this sense.
There is no real sense in which you can say any value you hold is not manufactured, unless you are talking about values like eating and breathing.
P.S. As far as a universal human culture goes, we can say with some certainty that religion, for example, is part of human nature—but no specific god, church, or belief is. So any religious/spiritual views you hold are clearly manufactured; the extent to which you hold them or do not was ‘shaped’ (which you may call manufactured or not).
Why can I hear noise (white noise / pink noise / brown noise), but not hear temperatures?
EDIT FOR CLARIFICATION: Air temperature is caused by air molecules moving randomly at high speed; white noise is caused by air molecules moving randomly at high speed; so what’s the difference? Why does white noise fill the room with sound instead of just raising the temperature slightly?
My hand-wavy-sounds-like-science-technobabble guess is that temperature does fill the air with sound, but most of the energy of that sound is at frequencies far too high for my eardrums to detect (in part because my eardrums are themselves emitting noise at those frequencies). Maybe the average wavelength of thermal noise is roughly the mean free path length of the air molecules, so the average frequency of the noise is roughly 5 GHz. But I am just making stuff up and really don’t know.
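For what it’s worth, that guess as two lines of arithmetic; the mean free path and speed figures are round textbook-ish values, and the result is still just the guess above, not a derivation:

```python
# If the typical wavelength of thermal agitation were about one mean free
# path, the corresponding frequency would be speed / wavelength.
mean_free_path = 68e-9    # m, rough mean free path of air molecules at room conditions
speed_of_sound = 343.0    # m/s
print(speed_of_sound / mean_free_path / 1e9, "GHz")   # ~5 GHz
```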
You can’t hear temperatures because if the temperatures of air were high enough to make enough noise for you to hear, you would be incinerated.
http://physics.stackexchange.com/questions/110540/how-loud-is-the-thermal-motion-of-air-molecules goes over this. There is a lot of error in that thread, but the parts that are right show up a few times and calculate the white noise sound level of room temperature air at about −20 dB SPL. SPL of 0 dB is the approximate threshold of human hearing. dB is a logarithmic scale such that every 10 dB increase is a 10X higher power. So −20 dB SPL is about 1/100 the average sound power level that would just barely be audible by a human. This is calculated at something close to room temperature, about 23 C which is about 300 K.
How hot would air have to get for its thermal fluctuations to be audible as sound to humans? Thermal power (at sufficiently low frequencies, which is the situation here) is proportional to the temperature. So to increase the thermal sound level from −20 dB to 0 dB, the sound power needs to be increased by a factor of 100. So this would happen at an absolute air temperature of 30000 K, or about 29700 C. For us Americans, that is 53500 F. Super crazy hot, hotter than the surface of the sun.
So wait a minute, am I saying that a white noise generator generating 0 dB (barely audible) white noise is heating the air to super-solar temperatures? That doesn’t pass the smell test: if it was true my ears would be burning off when exposed to any white noise loud enough for them to hear. But the answer is, we are only generating white noise over a very small frequency range in order to hear it. Even a high fidelity white noise generator will have a bandwidth covering about 50 Hz to 20,000 Hz. But the “natural” bandwidth of thermal fluctuations is found from quantum mechanical considerations: BW = T * kb/h, or bandwidth is temperature (in kelvin) times Boltzmann’s constant divided by Planck’s constant. That ratio kb/h turns out to be about 20 GHz per degree K. So thermal noise loud enough to hear would have a bandwidth of 600,000 GHz or 6e14 Hz. To an approximation, thermal power is proportional to bandwidth, so a 20 kHz white noise generator putting out 0 dB SPL is putting out only 20000⁄600000000000000 = 1⁄30000000000 the power level associated with a 30000 K source. So in terms of TOTAL energy, a band-limited white noise source is delivering way less than 1 K of extra temperature to your ears, even though in terms of energy density (power per bandwidth), it sounds hotter than the surface of the sun.
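Here is the same arithmetic as a small sketch, if anyone wants to poke at it; the −20 dB SPL starting point is taken from the linked stackexchange thread rather than re-derived:

```python
# Re-running the arithmetic above. Thermal power scales with temperature,
# and +10 dB means 10x power, so the rest is bookkeeping.
kb = 1.380649e-23    # J/K, Boltzmann's constant
h  = 6.62607015e-34  # J*s, Planck's constant

room_T = 300.0             # K, roughly room temperature
thermal_spl_room = -20.0   # dB SPL for room-temperature air, from the linked thread

# To go from -20 dB SPL to 0 dB SPL we need 100x the power, hence 100x the temperature.
T_audible = room_T * 10 ** ((0 - thermal_spl_room) / 10)
print("air temperature for 0 dB SPL thermal noise:", T_audible, "K")   # 30000 K

# Natural bandwidth of thermal fluctuations: BW ~ kb*T/h, i.e. ~20 GHz per kelvin.
print("kb/h =", kb / h / 1e9, "GHz per K")
bw_thermal = kb / h * T_audible          # ~6e14 Hz at 30000 K
bw_generator = 20e3                      # Hz, a hi-fi white noise machine

# A 20 kHz band-limited source at 0 dB SPL carries only a tiny fraction of the
# total power of a 30000 K source, even though its spectral density matches.
print("power fraction:", bw_generator / bw_thermal)   # ~3e-11
```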
Much of the thread below covers some of this, but perhaps I can add a little detail with what I write. As to blackbody radiation, yes, that is appropriate to use here, and its upper frequency limit has nothing to do with electromagnetics, or not fundamentally. It is a quantum mechanical limit. At a high enough frequency, the quantum of energy becomes comparable to the thermal energy, and frequencies higher than that can’t be effectively generated by thermal sources. This is true for both photons (electromagnetic energy quantized) and phonons (sound or vibration energy quantized).
Hope this is clear enough to add more light than heat to the discussion. Or in this case, more sound than heat :)
The peak frequency of thermal noise at room temperature is far higher than 5 GHz, it’s actually closer to 30 THz. I’m not exactly sure about the biology here and whether Brownian motion of air molecules excites the hair cells in your cochlea. I’m guessing that it does, but even so, the range of frequencies you can hear (20-20,000 Hz) carries only a very, very tiny fraction of the thermal energy. Someone should do the calculations; my guess is that it’s far below the detection threshold.
Another thing to keep in mind is that at equilibrium, you have thermal excitation everywhere. You might as well ask why you don’t hear or see or smell the thermal excitation in your own brain.
As far as I remember, you need to hit the resonant frequency of a particular hair to trigger a “sound” response, so frequencies higher than 20KHz might excite them, but if you’re not getting resonance, nothing triggers.
No, this is wrong. Each hair is excited by the amount of its particular resonant frequency in the sound hitting it. If a violin note is heard, that note only has a few discrete frequencies in it and so a few hairs are very excited about it and the brain (of the trained violinist with perfect pitch anyway) goes “oh, A 440.” If white noise loud enough to hear is hitting the ear, then essentially all the hairs are excited because all frequencies are present in white noise, and the brain goes “sounds like the ocean.”
As to excitement by sound above 20 kHz, a very high frequency ultrasound, say at 100 kHz, can be modulated with the vibrations associated with a violin string, much as sound can be modulated on radio carriers. Such ultrasound hitting a human ear can actually cause the appropriate hairs to be excited so that the brain goes “oh, A 440.” The phenomenon relies on the non-linear response of cochlear hairs and highly directional speakers based on this effect have been built and demonstrated. See for example http://www.holosonics.com/
That’s a somewhat crude way of putting it; when studying a resonator it’s better to look at the q factor: https://en.wikipedia.org/wiki/Q_factor
A lower Q factor means a wider spread of frequencies can trigger them. Mammalian hair cells have Q factors of 5-10. Q=10 is pretty high for a biological resonator, but pretty low compared to, say, even crude electronic equipment. A typical LC oscillator has a Q of 100 or more.
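To make that concrete, the half-power bandwidth of a resonator is roughly f0/Q; the 1 kHz tuning below is just an example value for a hair cell:

```python
# Lower Q means a wider band of frequencies will excite the resonator.
f0 = 1000.0   # Hz, example hair-cell tuning
for q in (5, 10, 100):
    print(f"Q = {q:3d}: responds over a band roughly {f0 / q:.0f} Hz wide around {f0:.0f} Hz")
```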
I think you are suggesting something like: if I were detecting thermal vibration via the vibration of a membrane due to thermally induced air pressure, I wouldn’t detect anything, because the temperature is the same in the air on both sides of the membrane, and therefore the thermal air pressure on each side of the membrane is the same and so fails to move the membrane. If this is what you are suggesting, it is wrong, and in a basic enough way to merit explanation.
Sound is pressure changing in time. Thermal vibration follows a random distribution. The air on each side of a membrane at the same temperature will have the same statistics of pressure change on each side, but not the same instantaneous pressure. If the random pressure exceeds p1 25% of the time and is less than p0 25% of the time (where p1 is chosen to be higher than p0), then 6.25% of the time there will be a pressure difference of at least p1 − p0 across the membrane, and a different 6.25% of the time there will be a pressure difference of the opposite sign and at least the same magnitude. So thermal vibrations will absolutely cause a membrane to vibrate randomly. Further, the magnitudes of p1 and p0 rise as temperature rises, so we expect the membrane to be moved more when surrounded by hotter air than when surrounded by cooler air.
So it is the case that heating air in a constrained volume generally makes its average pressure rise, and a membrane will certainly not be displaced on average if it has air at the same average pressure on each side; but it is the temporal variations that produce sound, and for most conditions you can create in the lab the time variations on each side of the membrane are uncorrelated, so the membrane vibrates randomly, with an amplitude that rises as the temperature of the air rises.
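A toy Monte Carlo of that argument, with Gaussian fluctuations and arbitrary made-up scales, purely to show the qualitative behavior:

```python
# Same temperature (same pressure statistics) on both sides of a membrane
# still gives a fluctuating net pressure difference, and it grows with
# temperature. The Gaussian model and the scale factors are invented.
import numpy as np

rng = np.random.default_rng(0)

def net_pressure_samples(temperature, n=100_000):
    # Fluctuation amplitude rises with temperature (~sqrt(T) here; the exact
    # scaling doesn't matter for the qualitative point).
    sigma = np.sqrt(temperature)
    left = rng.normal(0.0, sigma, n)
    right = rng.normal(0.0, sigma, n)
    return left - right   # instantaneous pressure difference across the membrane

for T in (300, 3000, 30000):
    diff = net_pressure_samples(T)
    print(f"T={T:6d}: mean difference ~ {diff.mean():+.2f}, RMS difference ~ {diff.std():.1f}")
# The mean stays ~0 (no net displacement on average), but the RMS difference,
# which is what actually vibrates the membrane, keeps growing with temperature.
```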
I don’t make that suggestion at all. I’m pointing out that sound receptors are just neurons, and if the thermal vibrations in your ear can excite some set of neurons, then the thermal vibrations impinging on the dendrites of any neuron in your body—including inside your brain—should also elicit a response.
That is a little like suggesting that a sound recorder is just electronics and shouting at any electronics should elicit a response. Bringing it back to the neurons,
loud enough sound on any neuron will probably excite it
However, the sensitivity to sound of the neurons connected to the ear is thousands or millions or billions of times (not bothering to calculate it) higher than the sensitivity of a random neuron in the brain to sound
A random neuron responding to sound won’t feel like sound. If a pain neuron is activated by sound, it will appear as pain; if a heat neuron is activated by sound, it will appear as heat; etc.
So as hot as the air has to be to excite your cochlear apparatus, and thus the neurons connected to it, it probably has to be thousands or millions of times hotter to excite the neurons directly in your brain. And long before it gets to that temperature your brain has been cooked, then desiccated, then burned, and finally decomposed into a plasma of atoms and electrons flying about separately; and probably at the temperatures we are talking about, the protons and neutrons are smashed apart into a cloud of subatomic particles.
Some minor side notes:
Your cochlea is filled with a liquid called endolymph, not air.
A hair cell that was triggered by Brownian motion would be useless. All inner hair cells are tuned to certain vibrations in the endolymph that are greater than those caused by Brownian motion.
Brownian motion is motion of air that, considered as vibrations, has a broad range of frequencies in it—which means that an ear exposed to air experiencing a sufficiently high level of Brownian motion will have many or all of its inner hair cells excited. If your statement were correct, humans would not be able to hear white noise, whereas obviously (to any hearing person who has ever been exposed to white noise) we can.
White noise requires that we hear a number of frequencies, but also requires that the frequencies are of sufficient amplitude to move the ear drum.
But that is just the TLDR. I am trying to keep this simple, but it is not simple, so here is the next level of complexity.
The issue is not only frequency, but also amplitude and duration.
Since Brownian motion is not sufficient to significantly affect the ear drums (in any real life situation), instead of worrying about the air, you need to be worrying about the liquid in the inner ear.
This liquid is in a precisely shaped reservoir (the cochlea) that will amplify certain sound waves at certain points (it is more complicated than this, but this is a generally accurate simplification); hair cells at each point respond (fire) in response to the amplified waves. Brownian motion cannot and will not set up a standing wave at any frequency for a time period or with an intensity that you would be able to perceive.
It may be helpful to picture the difference in intensity produced by a particle of water versus a wave; one you will not feel (it cannot push you or the hair cell with enough force to be detected), but the other certainly can. We are talking a difference of multiple orders of magnitude.
I’m not certain that I understand your argument, so I may have responded incorrectly. Let me know if you need any clarification.
Edit: removed a redundant sentence.
On re-reading, I actually misunderstood your original point and my argument has nothing to do with your original point.
I would still want to point out a few things that may make what is going on clearer.
First, Brownian motion amplitude rises as temperature rises. So while the Brownian motion at temperatures typically found in the ear, or in the air near the ear, is small enough that the ear can’t detect it, as you say, if you were to raise the temperature, the Brownian motion would be higher amplitude and would eventually rise to a point where it was detectable. This is a pretty academic point: the temperatures required to hear the Brownian motion would harm the ear, so in practical terms your statements are right enough.
If vibrations in the air cause the endolymph to have pressure waves in it which then cause cochlear hairs to move, it is still quite reasonable to describe that as air vibrations making cochlear hairs move. Introducing the endolymph is a clarification at best, not a correction.
Do you happen to know a back-of-the-envelope way to get that 30 THz figure?
https://en.wikipedia.org/wiki/Wien%27s_displacement_law
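Back-of-envelope, using the wavelength form of Wien’s law, which is presumably where the ~30 THz figure comes from:

```python
# Peak of blackbody emission at room temperature, converted to a frequency.
b = 2.898e-3   # m*K, Wien's displacement constant
c = 3.0e8      # m/s, speed of light
T = 300.0      # K, roughly room temperature

peak_wavelength = b / T                    # ~9.7 micrometres
print(c / peak_wavelength / 1e12, "THz")   # ~31 THz
```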
Oh! So you’re saying the spectrum of the acoustic noise at a given temperature will be the spectrum of black body radiation! Yes, I could definitely believe that. That is high-frequency indeed.
Sort of. Blackbody radiation is electromagnetic in nature, however under some ideal assumptions you can assume that the molecules emitting that radiation are also vibrating at roughly the same spectrum. ‘vibrating’, though, can mean a lot of different things; this is related to the microscopic properties of the substance and its degrees of freedom. In an ideal gas, it’s taken to mean the particle collision frequency spread (but not necessarily the frequency of particle collisions). If you consider heat to be composed of a disordered collection of phonons, then you could definitely say that this is ‘sound’, but it’s probably neater to draw a distinction between thermal phonons (high-entropy, low free energy) and acoustic phonons.
The reasoning behind blackbody electromagnetic radiation applies equally well to thermal vibrations in solids and gases. Meaning the spectral limits derived from a quantum consideration of the quantization of electromagnetic radiation (into photons) applies equally well to the quantum considerations of vibrational radiation (into phonons).
“Thermal” photons are indistinguishable individually from photons from other sources. The thing that makes a thing thermal is the distribution and prevalence of photons in time and frequency, those from a thermal source follow a well understood set of statistics, while photons from other sources clearly deviate from that. So a photon arising from a cell phone tower’s radio transmitter reacts similarly with a cell phone’s radio receiver as a photon at a similar frequency arising from thermal emission from the air. Physics can’t distinguish between these two photons which is why it is a major effort in building radio communications to get enough signal-sourced photons compared to the thermal-sourced photons so that the signal-sourced photons dominate, and therefore the signal can be accurately derived from their detection.
Similarly with phonons. Vibrations because something is hot are indistinguishable from vibrations from a vocal cord. It is the statistical distribution of the vibrations in time and frequency that defines a thermal set of vibrations. And again, to hear what someone is saying, it is important to get enough phonons from their vocal cords into your ears compared to the phonons from other sources in order to accurately enough derive the intended information.
Thermal noise or other white noise, and a symphony, have the same kind of phonons and both can be heard by the same kinds of ears. They carry different kinds of information (they sound different) because of their different time and frequency statistics.
Black-body radiation is electromagnetic radiation, so I’m a bit confused how that’s connected with acoustic noise. As to molecule collisions, I’m not sure vibrations at sufficiently high frequency can be called “acoustic” at all.
Your reasoning here carries useful information. For example, when you are dealing with vibrations whose frequency is so high that the wavelength of the vibration is less than the average spacing between molecules in a gas, or in a solid lattice, then a lot of what you calculate about the detection and interactions with lower frequency vibrations no longer applies.
However, the same limitations apply to electromagnetic radiation. For example, we think of vacuum or empty space as transparent to EM radiation, and it is, as long as the EM frequency is low enough. But to EM radiation of high enough frequency, empty space is opaque! For example, at high enough frequencies, a single photon has enough energy to create a positron-electron pair in free space. Photons at that frequency don’t travel very far before they are destroyed by such a spontaneous generation of particles.
So in principle, EM radiation and acoustic vibrations are the same in this respect: as long as you are considering frequencies “low enough” that they don’t rip apart the medium in which the wave exists, they behave in the ways we usually think of for sound and light. But above those frequencies, they rip apart the media they are traveling through, even if that medium is so-called empty space.
So what kind of energies are we talking about here, and what distances?
Photons with over 1 million electron volts of energy can create a positron-electron pair, but only when near another massive particle (like the nucleus of an atom). The other massive particle is moved in the interaction but is otherwise not necessarily changed. https://en.wikipedia.org/wiki/Pair_production. This process has been demonstrated experimentally. The mean free path of the energetic photon near an atomic nucleus is something down on the atomic scale; the experiment I read about used a piece of gold foil and generated lots of positron-electron pairs.
A single photon in otherwise empty space cannot create a pair of particles; I was wrong when stating that. However, space with nothing but two photons in it can create matter. Two photons, each with a bit over 511 thousand electron volts (511 keV) of energy, can collide and result in the creation of a positron and an electron. https://en.wikipedia.org/wiki/Two-photon_physics Alternatively, a single 80 TeV (tera-electron-volt) photon can collide with a very low energy photon to create an electron-positron pair. This effect actually makes our existing universe opaque to photons above 80 TeV, because our universe is filled with approximately 0.0003 eV photons known as the Cosmic Microwave Background radiation. This background radiation is left-over radiation from the big bang, which by now has cooled down to about 3 kelvin in temperature. I don’t know any of the actual mean free paths associated with this, just that they are much shorter than interstellar distances.
This is wrong. What you hear is sound waves, that is, rarefaction/compression zones in the air, pressure differentials. They are a phenomenon at a different scale than molecules. In particular, the energy involved is different. “White noise” means the power is spread uniformly across frequencies.
Essentially, an air molecule doesn’t have enough energy to register at your hearing sensors, that is, to move your eardrum (or cochlear hairs).
Though, now that I’m thinking about it, if the white noise generator I bought to help me sleep is really good at producing white noise with uniform power at high enough frequencies, an air molecule would have enough energy to move my eardrums. I would also be on fire.
And if my white noise generator is really really good at producing white noise with power uniform across all frequencies, the noise’s mass-energy will cause my bedroom to collapse into a black hole and I will be unable to leave a 5 star review on Amazon.
Yes white noise is an ideal that can never be realized in reality, like a perfectly rigid object, or a frictionless wheel, or an absolute zero freezer. White noise would carry infinite power.
To clarify what I believe is the question: Why can’t solipsist hear brownian motion?
The question is pretty good; brown noise derives its name from Brownian motion, or rather from the discoverer of such, as it is the frequency (or set of frequencies) that Brownian motion produces.
I’d -guess- the answer is that the motion all cancels out on the average, approximately, and the remaining statistical noise isn’t energetic enough to be perceived.
The way a sense organ interacts with temperature follows a different mechanism from perception of sound.
What does “hear temperature” mean?
See my edit
Repeating my question from late in the previous thread:
It seems to me that if you buy a stock, you could come out arbitrarily well-off, but your losses are limited to the amount you put in. But if you short, your payoffs are limited to the current price, and your losses could be arbitrarily big, until you run out of money.
Is this accurate? If so, it feels like an important asymmetry that I haven’t absorbed from the “stock markets 101” type things that I’ve occasionally read. What effects does it have on markets, if any? (Running my mouth off, I’d speculate that it makes people less inclined to bet on a bubble popping, which in turn would prolong bubbles.) Are there symmetrical ways to bet a stock will rise/fall?
It gets very interesting if there actually are no stocks to buy back in the market. For details on how it gets interesting google “short squeeze”.
Other than that exceptional situation it’s not that asymmetrical:
-Typically you have to post some collateral for shorting and there will be a well-understood maximum loss before your broker buys back the stock and seizes your collateral to cover that loss. So short (haha) of a short squeeze there actually is a maximum loss in short selling.
-You can take similar risks on the long side by buying stocks on credit (“on margin” in financial slang) with collateral, which the bank will use to close your position if the stock drops too far. So basically long risks also can be made as big as your borrowing ability.
This is accurate.
This asymmetry comes from the fact that prices are non-negative: they cannot dip below zero.
Effects on the market? Off the top of my head, here are a couple: long-term shorts are more risky than they seem; and shorting penny stocks (stocks with a low price, typically below $5) is also “extra” risky because your upside is small, but your downside is not.
The fact that shorting penny stocks is dangerous isn’t because of their price per se, it’s because they are typically much smaller companies than normal stocks. That means their profits and prices are much more unstable, so are much more likely to double in value in a short period of time than, say, British Petroleum. Also, because they are so small, much smaller amounts of money can change the price of the stock, which makes them more prone to market manipulation or investor exuberance (this is a non-linear effect, some stocks are so thinly traded that just a couple of trades might happen per month, and whichever is last sets the price for it). Even if a penny stock did a reverse split that made their shares $100 each, they would still be just as risky for these reasons. Also, because penny stocks are thinly traded, stops are much less effective for short sellers.
You are talking about why penny stocks are volatile and yes, that’s all true. However it’s also true that the upside/downside asymmetry is especially pronounced for them.
I’m confused. Do you believe that if you took a penny stock and divided its share count by 100, thus multiplying its price by 100, it would be less risky for short sellers simply because of the price? Let’s assume for the sake of argument that its current price is in the $3 range, so the fact that the minimum quote is in pennies isn’t a large effect.
Hm, I think you’re right. The high(er) risk for shorts is a function of volatility (or, more generally, distribution shape) and not of the price level.
The price level has its consequences but these tend to be beneficial for shorts.
You usually avoid unlimited liability by placing a stop order to cover your position as soon as the price goes sufficiently high. Or for instance you can bound your losses by including a term in the contract which says that instead of giving back the stock you borrowed and sold, you can pay a certain price.
Note that for volatile assets (the very ones where you feel uncomfortable about unbounded risk), stop orders are not guaranteed to help. Remember, prices are not continuous—there is a discrete sequence of bids. Price can go from below your stop to MASSIVELY above it before your stop order can be executed. Most often this happens on news when a market is closed, but it can occur intraday as well.
The stop order feels hackish, to me. I was thinking along the lines of short squeezes even before I learned their name. But also, if I’m expecting a bubble to burst, I won’t necessarily be surprised if the price rises massively before it does. I’d be looking for limited exposure without having to chicken out.
The contract term sounds like the sort of thing I was looking for.
You can always play with options to construct whatever payoff structure you desire.
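For example, here is a rough sketch of the payoff difference between a naked short and buying a put; the strike and premium are invented numbers, and real option pricing is of course more involved than this:

```python
# Compare a naked short position with a long put, for a stock currently at $100.
def short_stock_pnl(final_price, entry_price=100.0):
    return entry_price - final_price                   # loss is unbounded as the price rises

def long_put_pnl(final_price, strike=100.0, premium=5.0):
    return max(strike - final_price, 0.0) - premium    # loss is capped at the premium paid

for final in (0, 50, 100, 150, 300, 1000):
    print(f"final price {final:5d}: short {short_stock_pnl(final):+8.1f}, put {long_put_pnl(final):+8.1f}")
# Both profit if the stock falls, but the short's loss grows without bound
# while the put can never lose more than the $5 premium.
```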
Accurate, but not asymmetrical. It’s perfectly symmetrical: purchase of an asset for resale has a loss floor and no gain ceiling, sale of an asset (including short sales) has a gain floor and no loss ceiling. For actual transactions in either direction, there is a practical maximum gain/loss, even when there’s not a theoretical one: if a value goes too far out of modeled range, one of the parties will abrogate when not able to pay the ludicrous amount.
For smaller investors making short-term trades (which is illegal if one has inside info, and unwise if not), generally Call or Put options are used. The constraints of payout/loss can get quite complicated fairly quickly by mixing different strike and maturity options.
You never only buy; at the same time you have traded your dollars, euros, or whatever currency for that stock.
There is nothing like “buying” and “shorting”—it’s always trading. Swapping two “currencies”.
That’s not actually wrong, but I think it’s highly misleading.
The failure mode in the long case that corresponds to “stock price suddenly skyrockets” in the short case is that the value of whatever currency you bought the stock with suddenly skyrockets relative to other assets. This (1) is extremely rare, corresponding to a very large negative inflation rate, and (2) is generally something you would be happy about overall because you surely have a lot more dollars (or whatever) than you are spending on the stock.
On the other hand, it’s not nearly so unusual for the price of a stock to increase abruptly, and if you’re shorting it you probably don’t have a lot more of it to be happy about the increasing value of. (If you did, you’d just be selling rather than shorting.)
Um, does this ever happen? Ever? It looks like an imaginary situation.
Besides, your description implies that you don’t want to measure your wealth in money. What do you want to measure it in?
Value of USD (and many other currencies) against Zimbabwe Dollar skyrocketed pretty spectacularly in July 2008. I have a Z$10^12 note at home.
This is a rare, but absolutely possible, outcome.
I don’t see how it fits the case.
If your domestic currency is USD, you bought an asset (foreign currency) and that asset dropped in price to near zero.
If your domestic currency is the Zimbabwe dollar (Z$), then you had to short the USD to suffer huge losses.
As gjm notes, you need a huge negative inflation rate (aka deflation) and I don’t think that ever happened, at least during the fiat money era.
Don’t privilege any given currency—you don’t buy or sell things, you trade commodities. Sometimes that commodity is a currency, sometimes it’s a stock, sometimes it’s an actual thing.
For the trade that is currency exchange, one currency’s hyperinflation is the other’s hyperdeflation. If you traded away USD for Zw$, your loss (as measured by the amount of Zw$ you could have had later for that USD) was near-infinite.
You are now talking, basically, opportunity costs. I don’t think your approach makes sense.
(note: I honestly believe this, but I am presenting it more forcefully than I believe it, for Socratic and exploratory reasons).
Interesting. What other approach makes sense? When you stop treating currency as special, all costs are opportunity costs. The only actual loss you experience from spending now is that you can’t spend it later.
Well, to start with the Z$ example, you say
but if everything is just a tradeable good, why do you choose to measure your loss in Z$? Your loss in McDonald’s hamburgers is zero, your loss in some now-out-of-fashion accessory is actually a gain, etc. etc. If you don’t have money, you have no baseline but just a huge matrix of barter ratios. Whether you have a gain or loss (and its magnitude) solely depends on which pair you pick and there is no pair that’s privileged, is there?
Speaking more generally, not all costs are opportunity costs, some are just actual losses. If you want to think of spending your resources (=money=commodities) in terms of consumption and investment then sure, any consumption incurs opportunity costs because it’s not investment and investment can be seen as risky delayed consumption. But that’s just Econ 101 and it works perfectly well with money as well.
Within the investment world yes, cash is just another asset. But you still need a baseline way to measure things and measuring investment returns in bananas or Swiss watches is kinda inconvenient and an excellent way to screw yourself up. What’s the point?
I see. I think you’re treating your varied anticipated future consumption as your “base currency”, which adds a fair bit of complexity over the simpler two-commodity model. (but it matches common intuitions better, I’ll admit).
I’ve never heard of its doing so. That was approximately half of my point (#1 in the above). If you think I was suggesting it’s a thing anyone should be worrying about, then I respectfully advise you to read what I wrote again. If you merely think I should have been more forceful about how unlikely such an event is, you may be right.
Ability to procure things I value. If my bank account stays exactly as it is and the prices of food and books and computers and other things I spend money on halve, then the portion of my wealth embodied in my bank account has effectively doubled. If the prices of those other things double instead, then the portion of my wealth embodied in my bank account has effectively halved.
Of course in practice different things’ prices change in different ways. And in practice the relationship between money and those other things I care about stays pretty stable, which is one reason why Thomas’s analysis is highly misleading. And in practice I care about future prices at least as much as about present prices (but present prices are pretty much our best estimates of future prices, at least for well traded assets). So measuring wealth in money works very well in principle. But Thomas was (in effect) envisaging a weird situation in which the value of money relative to everything else increases abruptly, and although it’s very unlikely ever to happen it seemed worth pointing out some actual likely consequences.
[EDITED to add: This is currently at −1. I honestly have no idea why that might be. Anyone—preferably whoever actually downvoted me—want to explain?]
The closest direct analog is a crash—if I go from being able to buy one share for one dollar to being able to sell my one share for one penny, one can see this as the value of cash going up 100X.
(This is somewhat contrived when dealing with cash, but it does seem that the foundational level of wealth is food and ammunition. It could happen that the exchange rate between those and cash and stocks skyrockets, and that would be Bad News for a lot of reasons.)
Indirect analogs rely on opportunity cost—because you invested in A and got a 2X return, you missed out on investing in B, where you would have gotten a 2000X return. This is a profoundly unhealthy way to view markets.
For this to work you need basically all financial assets to crash, not just some particular stocks. Besides, we still have the problem of the unit of measurement. If you want to measure your wealth in consumables (say, cans of beans) then for “unlimited” losses from long positions you need not only a financial crash, but also cans of beans becoming really, really cheap. This is… unlikely.
All in all, there is a real asymmetry between going long and shorting. Trying to construct imaginary situations in which you could lose a lot from being long isn’t terribly helpful.
I think it is the correct way to view the markets once you add risk management. If the probabilities of getting those returns for A and B were the same (and the distributions were shaped the same), you indeed missed out greatly.
Yeah, basically the only scenario I see is cans of beans becoming very cheap in terms of ammunition for unethical reasons.
Agreed—I’m making the assumption that such comparisons are made retrospectively instead of prospectively, and thus are implicitly ignoring risk.
Unethical even in the Zombie Apocalypse scenario? X-)
But sure, if the entire financial system {im|ex}plodes, your shorts aren’t going to do you any good and so we finally achieve symmetry—everyone is fucked.
It is still the right way even retrospectively if you think in probability distributions. And, of course, anything “ignoring risk” is automatically the wrong way to think about the markets :-)
So I can trade one currency for another, and then trade back, and the amount I now have in the first currency can be arbitrarily high. This doesn’t feel like it particularly changes anything.
You are welcome!
I’m not a biologist, but am I right in thinking that Crispr could be the most important human innovation ever? This Wired article claims that a knowledgeable scientist thinks that the “off-target mutations are already a solved problem.” Within a decade we should know a lot about the genetic basis of intelligence. Wouldn’t it then probably be easy to create embryos that develop into extremely smart people, far smarter than have ever existed?
Bit late but Aubrey de Grey in his latest reddit AMA estimates that Crispr/CAS9 cuts off about 20 (!) years of the SENS/immortality timeline.
As a man approaching 50, I desperately hope this is true.
Unfortunately I have to retract my above statement, I checked https://www.reddit.com/r/Futurology/comments/3fri9a/ask_aubrey_de_grey_anything/ .
No concrete timeframe, but he also gives estimates:
https://www.reddit.com/r/Futurology/comments/3fri9a/ask_aubrey_de_grey_anything/ctr90ru
Seems as if he gives a 50-year-old about a 50% chance of being around when SENS comes.
Emphasis on probably—intelligence is not a simple matter, and it is unclear that our genome, even if we clearly identify all relevant factors, would be “open ended”—that is to say, there may be a difference between “making you as smart as you can be” and “making you smarter than any human ever”. As a poor analogy, we will certainly soon be able to make humans taller, but there may be limits to how tall a human can be without important system failures; we have already had very tall people, and even if we do want to breed for tall, we might choose to top out at 7′0″ for health reasons. Likewise, when you think of smart people, it may be that you are thinking of people with skills maximized for specific functions at the cost of other functions, and a balanced intelligence might top out at some level… at least until we get past mapping what we have and into the much harder task of designing new types of genomes.
There are several competing techniques. People who use the other techniques think that CRISPR mostly has better PR, and is a fairly minor technical innovation. Gene editing, regardless of the technique involved, will be tremendously important for the next few decades.
When they say “a solved problem” they mean that the cost of off-target mutations is worth it for a single high-value edit. It is unlikely that most genomes have the option of a high-value edit to improve intelligence. It’s probably more like 1 IQ point per edit. Of course, accuracy in 10 years will be better than accuracy now. In fact, if we truly had no off-target mutations, we could act now, without knowing the structure of intelligence, just by “spell-checking”—correcting rare variants.
Yes, that’s Greg Cochran’s theory. I wonder by how much this could increase IQ? If I were a young billionaire I would be planning to create a clone of myself that didn’t have rare variants.
We don’t know. Cochran’s theory is not well backed by evidence at this point. Most of it is quite indirect like the attempts at quantifying paternal age effects. Emil didn’t turn up anything when I asked the other day. Some of the studies which come to mind which don’t support the idea that mutation load matters much:
“The total burden of rare, non-synonymous exome genetic variants is not associated with childhood or late-life cognitive ability”, Marioni et al 2014 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3953855/
“A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence”, Spain et al 2015, http://www.nature.com/mp/journal/vaop/ncurrent/full/mp2015108a.html
“Thinking positively: The genetics of high intelligence”, Shakeshaft et al 2015
Spain and Shakeshaft aren’t relevant. Marioni is interesting, but I think 1% is way too high a cutoff.
Sure they are. The mutations involved in mutation load are, almost by definition, rare; if they had large effects, singly or in aggregate, that should show up in surveys of the high end. Instead, we only see such effects at the low end, which is consistent with there being occasional very rare or de novo mutations that can drag intelligence far below average, but not with mutations whose correction would push the above-average (who already dodged the retardation bullets) up by multiple SDs.
If there were aggregate effects, how would they show up in Spain and Shakeshaft? Just going by the abstract, Spain is looking for genes where the rare variant has a positive effect. That is the opposite of the mutational load theory, and they don’t find any. I think Shakeshaft reaches the same conclusion by pedigree analysis.
Say that there are 10k genes, MAF=0.01, each worth 1.5 IQ points. What would Spain detect? If the TIP population is 10/3σ,* then these 10k genes each appear as the mutant about 2⁄3 as often: roughly 20 hits per gene rather than the expected 30. That’s a 2-sigma event (a rough numerical check is sketched below). So if an oracle gave you this list of 10k genes, you could use Spain to confirm it. But if you have to find the list, it’s harder. They should expect 5k false positives among the 200k variants that they tested. If all of the true genes were among the 200k, there would be 15k hits rather than the expected 5k, confirming the theory. But with poor coverage, the true hits might be lost in the noise. And even if they have good coverage, they have restricted to non-synonymous protein coding mutations.
Moreover that model is what Steve Hsu believes, not the mutational load hypothesis. Spain et al can’t test the mutational load hypothesis: if the relevant genes are rarer or have smaller effect, they wouldn’t notice them at all. On the other hand, if the TIP population really is 5σ, it would be possible to detect more.
* The TIP population is usually described as 0.03% of the population, which is 3.4σ under a normal distribution, but I chose 10⁄3 for simplicity of calculation. They score about 5σ in raw SAT. Self-selection probably means that they’re actually rarer than 0.03%, but probably not much.
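A rough numerical check of that calculation, under the additive assumptions above. The ~1,500-person sample size is my own guess, chosen so that the expected unselected count is about 30 copies per gene; everything else follows the figures in the comment.

```python
# Rough check: one copy of a deleterious allele shifts IQ down by 1.5 points
# (0.1 SD), and the TIP sample is selected above 10/3 SD.  The sample size of
# ~1,500 people is an assumption made for illustration.
import math

def upper_tail(z):
    """P(Z > z) for a standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

threshold = 10 / 3       # selection cutoff, in SD
effect = 1.5 / 15        # 1.5 IQ points = 0.1 SD
maf = 0.01               # minor allele frequency
n_people = 1500          # hypothetical sample size -> 3,000 chromosomes

# Carriers' IQ distribution sits 0.1 SD lower, so fewer of them clear the cutoff:
enrichment = upper_tail(threshold + effect) / upper_tail(threshold)

expected_unselected = 2 * n_people * maf              # ~30 copies per gene
expected_selected = expected_unselected * enrichment  # ~21 copies
deficit_sigma = (expected_unselected - expected_selected) / math.sqrt(expected_unselected)

print(round(enrichment, 3))         # ~0.695, i.e. roughly 2/3
print(round(expected_selected, 1))  # ~20.9 copies per gene
print(round(deficit_sigma, 1))      # ~1.7 sigma, in the ballpark of the 2-sigma figure above
```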
My lower bound guess, if rare variants turn out to be only a small portion of IQ, is 5 standard deviations.
Your answer confuses me. Why so much if “rare variants turn out to be only a small portion of IQ”?
My lower bound is that mutational load contributes 10% of the variance in IQ. I call that small. Independently, I propose that there should be room for 50 standard deviations in improvement. Although it’s not clear what more would even mean. Surely linearity would break down. What I mean by the possibility of “50 standard deviations” is 20 disjoint sets of changes, each of which would accomplish 2.5 standard deviations.
If the typical gene is deleterious and contributes 1/N of a standard deviation, then there is room for N standard deviations of improvement above the mean. Of course there is a mixture of genes of different effect sizes. I expect genes of both effect sizes 1⁄10 and 1⁄100. Say, half of each. That gives room for 55 standard deviations of improvement (a rough check is sketched at the end of this comment).
If variation came from positive genes, an additive model would suggest much more room for improvement, but such genes would be much less likely to combine well than correcting mutations to the wild type.
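A quick check of that arithmetic, assuming a purely additive model in which an effect-size class contributing a share V of the total IQ variance, with per-allele effect e (in SD), leaves roughly V / e standard deviations of headroom above the current mean. The 50/50 split between 0.1-SD and 0.01-SD alleles is the assumption stated above; the 10% variance share is the lower-bound figure from earlier in the thread.

```python
# Rough check of the "room for improvement" arithmetic under an additive model.

def headroom(classes):
    """classes: list of (variance_share, per_allele_effect_in_SD)."""
    return sum(share / effect for share, effect in classes)

# If these alleles accounted for all of the variance, half in each effect class:
print(headroom([(0.5, 0.1), (0.5, 0.01)]))    # 55.0 SD of improvement

# If mutational load explains only 10% of the variance (the stated lower bound):
print(headroom([(0.05, 0.1), (0.05, 0.01)]))  # 5.5 SD, close to the "5 SD" guess upthread
```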
It is hard to tell in advance what is important. Quite a few innovations that were promised to change everything turned out to have much more limited value.
I don’t see any reason for it. So far, all knowledge in this area is just correlation between some genes and IQ, with no understanding of how it works. Judging from the history of other technologies, with such a theoretical base any major improvements take centuries of trial and error.
Even if the Crispr protein itself doesn’t cause mutations, you will likely have to duplicate the DNA a few times via PCR, which produces additional errors.
According to Wikipedia:
I think we are very far off from reaching exactly the same level of mutations, or a lower one. The difficult question will be what level of mutations is acceptable.
If gene A raises IQ and gene B also raises IQ, that doesn’t mean that both genes together will raise IQ even more. They might cancel each other out. A few people will grow up to be extremely smart, but I don’t think that will be the case for every embryo in the project.
When people get dizzy from spinning around, they sometimes spin the other way to get less dizzy. My default reaction is, no way that can actually work.
So, does it work?
(I remember trying inconclusively to research this.)
https://www.reddit.com/r/askscience/comments/198ld3/does_spinning_in_the_opposite_direction_really/
When you spin, your endolymph moves against hair cells in your semicircular canals. This tells your brain that you are spinning. But the hair cells respond strongly to changes in movement, and when you continue to spin, there is no change in movement—your vestibular system is telling your body that you aren’t really moving much; rather, the world is spinning and you are still. This leads to sensory integration issues, and you start to feel dizzy and nauseous. Stopping spinning doesn’t help much—your hair cells suddenly get a jerk from the stop, and say that you are moving when you are not, and you get more conflicting sensory information.
When you start spinning again, you get consistent sensory inputs—your vestibular system tells you that you are spinning, and you are. This is a good thing for maybe a spin or two, but then you will start getting conflicting input again, and get dizzier. I’m not sure that it matters which direction you spin. It is possible that the continuing motion of your endolymph (in the direction you were spinning) might be enough to lessen the triggering of hair cells when you start to spin in the same direction, while spinning in the opposite direction (against the flow) gives a stronger trigger. But that is speculation on my part.
Footnote: When dancers turn their heads to focus on a single spot, this works both because they are consistently orienting themselves visually, and because the sudden movements of their head tell their vestibular system that they are spinning—no conflict in sensory input.
can you explain this more? I am interested in what you mean and how the state of this belief feels...
“No way” might be overstating my past self. (Or maybe I’m misremembering my past self.)
I think it just seems like a really silly thing to do, like trying to heal a would by sewing it together. Like I can kind of see why someone might, in a cargo-culty way, think these things might help, but surely they’re just going to make the problem worse?
Clearly this is not a very reliable instinct, on my part.
I am still confused; can you maybe describe the mechanism by which it would cause the “not helpful” things to happen?
Suppose someone said: “my clothes are cold and wet, so I’m dipping them in warm water so they dry faster”. I think I’d have a similar reaction to that. Like sure, warm water evaporates faster, but… really? You think that’s going to help?
(If that turns out to work, I will be very annoyed.)
I’m not sure I had any explicit mechanisms in mind. These were gut reactions. But I can try to elaborate on the reactions.
With dizziness: you’ve spun to get dizzy and now you’re spinning more and that’s obviously just going to get you more dizzy.
With stitches: the problem is a hole in your skin and you’re trying to fix it by adding more holes and running some thread through them? Thread does not belong inside your body!
Assuming you mean “heal a wound”, sewing a wound together is a rather common treatment for wounds.
Yes, I was poking fun at myself.
(I discovered that when reading Order of the Phoenix. “It sounds as though you’ve been trying to sew your skin back together, but even you, Arthur, wouldn’t be that stupid.” Wait, that actually is what stitches are?)
Oops, sorry about missing what should have been obvious!
In my non-RCT experience :-) yes, this does work.
Placebo effect should be enough to make it work if you believe strongly that it works.
Nope. It’s actually related to how one’s sense of balance works.
The standard tactic dancers use to prevent getting dizzy is about having a visual fix point towards which they orient themselves. It’s not about using the ear (the organ responsible for balance) differently.
It’s true that dancers use a visual fix point, and that also works, probably better. I have no idea why. Do you?
However, turning in the opposite direction also works, and has a basis in the balance mechanism.
Movements having goals is essential for the way our brain coordinates them.
It can help, but when dancing salsa I couldn’t simply remove dizziness completely from a dance partner by turning her in the other direction. It helped a bit, but not fully.
When, if ever, is playing computer games good for me?
Valuable things that can be gotten out of computer games:
community
feelings of victory
feelings of adventure
feelings of learning (for learning games)
feelings of relaxation
feelings of pain
Computer games can be good for getting things like the sense of community or victory, but on the downside they occupy time and sometimes do other things, e.g. become addictive.
For simulating victory-feels; yes.
Ask yourself:
What are you getting out of games?
At what cost?
What do you want to be getting out of games?
And what do you want those costs to be buying you?
Weigh up the costs and the benefits. Do you need some of those benefits? Are you willing to incur some of those costs? Then decide where the optimal amount of game time is. (Also check again at a different time, and re-evaluate regularly.)
There’s been some actual research into that; this talk is a good summary.
Why don’t ordinary photons spontaneously collapse into black holes? You should get a singularity if the energy density in any region of space is high enough. But you can pick an inertial reference frame such that any given photon has arbitrarily high frequency (and thus energy) due to blueshift. Since any inertial reference frame is as valid as any other due to relativity, why don’t all photons collapse under their own weight?
That applies to anything, not just photons. In any event, I’m not an expert in general relativity, but I think what matters is the energy of an object in its own center-of-mass frame (a.k.a. its mass). (And a single photon, or a collection of photons traveling in the same direction, doesn’t even have a center-of-mass frame.) Anyway, elementary particles (including photons) already are point-like so far as we know, so they couldn’t possibly collapse any further.
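To make that concrete, here is a small sketch using the invariant mass of a system, m² = E² − |p|² in units where c = 1 (the photon energies below are arbitrary illustrative numbers): a single photon, or two photons travelling the same way, have zero invariant mass no matter how much you blueshift them, while two photons travelling in opposite directions form a system that does have mass.

```python
# Invariant mass of a system of photons, in units where c = 1:
#   m^2 = (sum of E)^2 - |sum of p|^2
# For a photon, |p| = E.  Energies are arbitrary illustrative numbers.

def invariant_mass(photons):
    """photons: list of (E, px, py, pz) four-momenta with E = |p| for each photon."""
    E = sum(p[0] for p in photons)
    px = sum(p[1] for p in photons)
    py = sum(p[2] for p in photons)
    pz = sum(p[3] for p in photons)
    return (E**2 - (px**2 + py**2 + pz**2)) ** 0.5

one_photon         = [(5.0, 5.0, 0.0, 0.0)]
same_direction     = [(5.0, 5.0, 0.0, 0.0), (3.0, 3.0, 0.0, 0.0)]
opposite_direction = [(5.0, 5.0, 0.0, 0.0), (3.0, -3.0, 0.0, 0.0)]

print(invariant_mass(one_photon))          # 0.0  -- no rest frame, no mass
print(invariant_mass(same_direction))      # 0.0  -- still massless as a system
print(invariant_mass(opposite_direction))  # ~7.75 -- the *system* has mass
```

Boosting to another frame rescales a single photon’s E and |p| together, so its invariant mass stays exactly zero; that is why the “pick a frame where it’s blueshifted” argument doesn’t produce a black hole.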
Quantum mechanics and general relativity are not compatible. That’s one of the big problems (probably even the biggest) of modern physics. From each one’s point of view the other is nonsense; the math breaks down.
How do you know that ordinary photons aren’t black holes?
I think I see where you’re going with this. If I can answer the question of how the universe would be different if photons did collapse, that might help explain why they don’t.
What would the world look like if, say, electrons were actually black holes with an electric charge? I do think black holes can be electrically charged, that is, charge is still conserved even if charged particles fall into a black hole. Same with angular momentum etc. We would expect black holes of electron mass to spontaneously decay via Hawking radiation. Into a shower of particles that in sum obey the conservation laws… in other words, into another electron. Hmm. That didn’t really change anything did it? It might help to explain quantum tunneling though.
I would also expect electrons to have an event horizon of finite radius, rather than behaving as an infinitesimal point. I don’t know enough general relativity to calculate how big this should be for a black hole of electron mass, but perhaps it’s too small for us to have observed yet. (Edit: asking Wolfram Alpha yields 1.353E-57 meters; a quick check is sketched at the end of this comment. The Planck length is only 1.616E-35 meters, far too small to observe.) An event horizon means that light can be trapped by the gravity of the electron. Which would give the black hole enough extra mass to spontaneously decay into more than just an electron. In the case of a low-energy photon, into another electron and photon (explains photon scattering), or if high-enough energy, into heavier particles that add up to zero charge and spin, plus the electron again. Like positron/electron pair production. Which has also been observed. Hmm. That still didn’t change anything, did it?
Maybe electrons really are black holes?
Oh, I know! Neutrinos are as massive as electrons (edit: not really, but they do have positive rest mass), but lack charge. If electrons are black holes, then neutrinos are also. The effect of gravitationally scattering light as described above should work for neutrinos too, but to my knowledge, they don’t. (Do they?)
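For what it’s worth, the 1.353E-57 m figure above is just the Schwarzschild radius r_s = 2Gm/c² evaluated at the electron mass. A quick check with rounded SI constants, plus the standard evaporation time for an uncharged, non-rotating hole (t = 5120πG²M³/ħc⁴), which shows why such a black hole would decay essentially instantly:

```python
# Schwarzschild radius and Hawking evaporation time of an electron-mass black
# hole, compared with the Planck length.  SI constants, rounded.
import math

G = 6.674e-11              # m^3 kg^-1 s^-2
c = 2.998e8                # m/s
hbar = 1.055e-34           # J s
m_electron = 9.109e-31     # kg
planck_length = 1.616e-35  # m

r_s = 2 * G * m_electron / c**2
t_evap = 5120 * math.pi * G**2 * m_electron**3 / (hbar * c**4)

print(r_s)                  # ~1.35e-57 m
print(r_s / planck_length)  # ~8e-23 -- over 22 orders of magnitude below the Planck length
print(t_evap)               # ~6e-107 s -- evaporation would be effectively instantaneous
```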
Energy density is a function of mass, and photons have zero mass.
Zero rest mass. Photons certainly have energy.
I gave my book to my dad and I noticed him licking his finger to turn pages. I exploded and took the book away from him. I apologized later and explained this annoys me to no end.
Can anyone explain why people do it? FWIW, I’ve occasionally had two pages turn at once, and very rarely three at once. I’m guessing it has something to do with the ink, just to make my head work a bit. Or perhaps something to do with older books. I ran a search and only got even more questions, like whether it spreads germs.
(My dad said he always did it, so he’s rather unhelpful too)
It is very hard to turn pages if your fingers are too dry (or lack oil). Most people turn pages by using the friction of their fingers against the face of the page, rather than hunting for the edge of the page. Very dry skin doesn’t present much friction, and your fingers just slide along the page.
I usually blow on my fingers, on the theory that it is slightly more sanitary, and provides enough moisture to ‘grip’ the paper.
I have never found that. I always turn pages by the edge, and find the idea of licking my fingers to turn them revolting.
It did not become a problem for me until I started washing my hands and using hand sanitizer multiple times a day. Aging may also be a factor. Glossy pages are also worse than ‘normal’ paper. (e.g., I do not tend to have this problem when reading normal books, but do when reading children’s picture books.)
I don’t know if any of that was useful information :-)
EDIT: Typo
A dry finger slides easily on paper, a wet finger sticks to it (and moves it).
Cashiers often have a wet sponge on hand to moisten their finger when counting banknotes.
My grandfather saw me do this to a book once and told me the story of an Eastern mage who avenged himself by leaving the king who beheaded him a wondrous book with pages stuck together—but they were poisoned.
Never did this again...
I see no reason why bacteria can’t survive like this.
I’ve found that breathing on my fingertips is enough to get a grip on paper. I don’t actually have to lick them. I’m not sure if breath is much more sanitary than spit though.
I commissioned a 99designs logo and bought the corresponding domain name. The logo and domain name are pretty bad. It was a stupid decision I made months ago; it’s a sunk cost. What happens now? Should I just sit on these? Sell the domain and just forget about the logo? The logo I chose is such that I probably could have made a better one myself in half an hour, and the domain name isn’t inherently valuable.
Keep the domain; start a blog. If you don’t like the logo, let it go.
Or get rid of the domain. Up to you!
Nothing bad happens whether you keep them or get rid of them. I assume you paid for a year or two of the domain.
The best thing you could do is probably to make use of the resource now that you have it; the worst is probably to not make use of it (actively doing yourself harm aside...).
I have NEGATIVE karma on LessWrong. My writing is evidently unappealing. Why would I start a blog when it would generate negative publicity about myself?
Many people with opinions that are generally popular would not achieve high karma on LessWrong, so I don’t think that is necessarily a good barometer of whether your ideas would be successful in other venues.
Evaporative Cooling in that situation should lead to readers and commenters being only people who like you and people who hate you and want to verbalize it on a regular basis.
On a separate note, your karma history is 44% positive. I think people both often agree with your comments and often disagree with your comments. They just disagree slightly more often. I think this is a good reason for you to keep posting for the time being.
True. In this sense you have an understanding of yourself. That’s a good thing.
I don’t think having a domain name is a reason to start or not start a blog. It shouldn’t factor into the decision, as domain names are cheap.
Why shouldn’t I just publish all my identity documents on Facebook?
My immediate thought is: ‘identity theft!’ and ‘that’s illegal’.
But now that I’m trying to evaluate the evidence for the first and the second, it’s hard to find any hard indications of either.
Are there records of anyone publishing all their identity documents online?
Are there any stories attached to these, or accounts of the consequences?
What for?
Proof of concept, conceptual/experimental art?
Let me see if I got this right: you are planning to expose yourself to the utmost degree of vulnerability and give all possible tools to anyone who may want to ruin your life, all as part of an art project.
I hope you have already thought of a way to explain how this isn’t the most flabbergastingly stupid idea ever.
Probably nothing in particular will happen to you if you do that. But it is also probable that nothing will happen to you if you never wear a seat belt. But in both cases there are very bad potential consequences, even if they have a probability of less than 50%.
Identity theft does happen in the real world. There’s no reason to make it easier for other people.
There are literally millions of cases of personal information being published online in places where people could access it and steal it (if they were smart enough). Basically, you are just lowering the intelligence bar for identity theft—or perhaps more accurately, the entry cost.