My partner has requested that I learn to give a good massage. I don’t enjoy massages myself, and the online resources I find seem to be mostly steeped in woo to some degree. Does anybody have some good non-woo resources for learning it?
Doctors being dumbfounded is a hallmark of irrationalist stories. Not saying this one is—I don’t even know the story here—but as someone who grew up around a lot of people who basically believed in magic, I can conjure so many anecdotes of people thinking their doctors were blown away by sudden recoveries and miraculous healings. I mostly figure doctors go “oh cool, it’s going pretty well” and add a bit of color for the patient’s benefit.
Reactions: Hacker News, Metafilter.
I have this half-baked idea that trying to be rational by oneself is a slightly pathological condition. Humans are naturally social, and it would make sense to distribute cognition over several processors, so to speak. It would explain the tendency I notice in relationships to polarize behavior—if my partner adopts the position that we should go on vacations as much as possible, I almost automatically tend to assume the role of worrying about money, for example, and we then work out a balanced solution together. If each of us were to decide on our own, our opinions would be much less polarized.
I could totally see how it would make sense in groups that some members adopt some low probability beliefs, and that it would benefit the group overall.
Is there any merit to this idea? Considering the well known failures in group rationality, I wonder if this is something that has long been disproved.
What are the practical benefits of having an intuitive understanding of Bayes’ Theorem? If it helps, please name an example of how it impacted your day today.
I work in tech support (pretty advanced, i.e. I’m routinely dragged into conference calls on 5 minutes’ notice with 10 people in panic mode because some database cluster is down). Here’s a standard situation: “All queries are slow. There are some errors in the log saying something about packets dropped.” So, do I go and investigate all the network cards on these 50 machines to see if the firmware is up to date, or do I look for something else? I see people picking the first option all the time. There are error messages, so we have evidence, and that must be it, right? But I have prior knowledge: it’s almost never the damn network, so I just ignore that outright, and only come back to it if more plausible causes can be excluded.
Bayes gives me a formal assurance that I’m right to reason this way. I don’t really need it quantitatively—just repeating “Base rate fallacy, base rate fallacy” to myself gets me in the right direction—but it’s nice to know that there’s an exact justification for what I’m doing. Another way would be to learn tons of little heuristics (“No. It’s not a compiler bug.”, “No. There’s not a mistake in this statewide math exam you’re taking”), but it’s great to look at the underlying principle.
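The base-rate reasoning above can be sketched as a toy Bayes update. All the numbers here are made-up illustrations, not measurements—the point is only that when the prior for “it’s the network” is tiny and the error messages are common anyway, the posterior stays tiny:

```python
# Toy Bayes update for the tech-support example.
# All probabilities are invented for illustration.

p_network = 0.02                 # prior: it's almost never the network
p_other = 1 - p_network

# "Packets dropped" messages show up in the logs either way:
p_errors_given_network = 0.95    # likely if the network really is broken
p_errors_given_other = 0.60      # but noisy logs contain them regardless

# Bayes' theorem: P(network | errors) = P(errors | network) P(network) / P(errors)
p_errors = (p_errors_given_network * p_network
            + p_errors_given_other * p_other)
posterior = p_errors_given_network * p_network / p_errors

print(f"P(network is the culprit | error messages) = {posterior:.3f}")
# With these numbers the posterior comes out around 0.03 -- the "evidence"
# barely moves the needle, so checking more plausible causes first is justified.
```

The exact numbers don’t matter; what matters is that the likelihood ratio between “network broken” and “noisy logs” isn’t large enough to overcome the base rate.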
Drinking has surprisingly little impact on those parts of mathematics where you just mechanically apply a couple of rules, btw. Just mentioning this in case others didn’t try to solve integrals as teenagers as a sort of self-check—it totally doesn’t work. Your ability to walk is a better indicator of drunkenness.
On topic: don’t wear these shirts if you aim at anything more than signalling affiliation with a particular tribe. It’s also inefficient if you accept the existence of interesting people outside this very small tribe.
I started running more seriously a couple of months ago, and it’s just fantastic. Once I got to the point where 30 minutes became easy, it really started to be its own reward. I get to explore my town—Strava lets me see the routes of other runners, and if you pick the more experienced ones, they tend to run in beautiful places I’d never see otherwise. I get to see the seasons change. I get out of my head and away from the keyboard. I’ve lost 10 kg since summer. Can’t recommend it enough.
The Schizophrenia Classification Challenge. I haven’t done anything difficult, which is the biggest surprise; when I read the description I doubted I’d even be able to produce anything useful.
I’d be interested. More in awesome stuff than R itself. I’m currently at #22 out of 99 in a Kaggle contest and am doing it in R, but don’t really know what I’m doing. I do find that participating there is not a bad way to practice.
Finally decided to enter a Kaggle contest. Apparently my bits and pieces of self-taught stats, paired with good intuition, are enough for (currently) position 14 out of 81 participants.
I’m sure there are moral systems where living off your children is an acceptable moral choice, but I can’t say I’m very motivated to check them out.
Their actions were rational from their point of view, however. They just radically overestimated the probabilities of total societal collapse. If that’s what you expect, moving out of the city and trying to live from your garden and some goats might not be the worst course of action.
As someone spending a pretty solid part of my earnings on maintaining my aging former-hippie parents, I’d like to point out that it’s a radically egoistic choice to make, even if it doesn’t appear that way at the time.
They dropped off the grid and managed many years with very little money, just living and appreciating nature and stuff. Great, right? But you don’t accumulate any pension benefits in those years, and even if you move back to a more conventional life later, your earning potential is severely impacted.
I do, but it’s mostly because doing it helps me focus. I rarely go back to read my notes. Here’s an example, for a book about SQL query tuning.
If this is something that can be looked up in your PhD dissertation, where can I get a copy?
Edit: here (pdf)
But is there a rational argument for that? Because on a gut level, I just don’t like humans all that much.
Bringing life to the stars seems a worthy goal, but if we could achieve it by building an AI that wipes out humanity as step 0 (they’re too resource intensive), shouldn’t we do that? Say the AI awakes, figures out that the probability of intelligence given life is very high, but that the probability of life staying around given the destructive tendencies of human intelligence is not so good. Call it an ecofascist AI if you want. Wouldn’t that be desirable iff the probabilities are as stated?
What’s so great about rationality anyway? I care a lot about life and would find it a pity if it went extinct, but I don’t care so much about rationality, and specifically I don’t really see why having the human-style half-assed implementation of it around is considered a good idea.
I started participating, but got turned off by the ridiculously detailed questions outside my area of expertise. Do I think a sack of rice will fall over when the Ethiopian delegation visits Ecuador in March? How sure am I about my prediction? It doesn’t seem to help me achieve better calibration. I’m curious whether people who are participating are getting value out of it, and what kind of value.
I’m interested in potential future meetups, but probably won’t make this one (flying back from San Francisco on the 23rd).
I think the article is mostly correct in seeing a connection. This community does not have a particularly good immune system against modes of thought that appear like contrarian cold realism, and is easily tempted to reach for a repugnant conclusion if it feels like you earn rationality brownie points for doing so.