Open Thread, September 1-15, 2012
If it’s worth saying, but not worth its own post, even in Discussion, it goes here.
I believe the correct term is “straw individual”.

This article by Yvain is well worth reading.
A dramatic understatement—I found this to be far superior to WAitW, as it’s more concrete and offers reasonable advice to its readers. By being more systematic, it strikes me as a better illustration of WAitW than the actual WAitW article.
WAitW = “Worst Argument in the World”, yes? The acronym is unclear.
Yes.
Thank you for linking to this article; I enjoy seeing other people make my point better than I could. Here are some additional thoughts:
My first thought after reading the linked article was “pick your battles”, especially as expressed in Paul Graham’s “What You Can’t Say”. It sounds like the exact opposite of “Atheism+”, and yet they both seem to make a lot of sense… well, how is that possible?
More generally: You hold opinions X and Y, and they are both important to you. Another person agrees with X and disagrees with Y. When should you treat this person as an ally, and when should you treat them as an enemy? Let’s suppose that both X and Y are your core values, so you can’t decide by “which one is more important to you”.
Seems to me that when X is endangered, each proponent of X is a gift, and you don’t look the gift horse in the mouth. On the other hand, when X is safe—it may still be a minority belief, but it gains momentum irreversibly—it is strategic to associate X with Y as much as possible, to transfer some momentum to Y; declaring “X but not Y” people to be the “enemies of the true X (which includes Y)” is the obvious way to do it. You can afford to alienate the few “X but not Y” people if X will win without them anyway.
This was a strategic analysis in general, but now let’s look at these specific X and Y; namely: what could possibly be wrong with asking people to be compassionate and reasonable, and not be a bully???
Well, it depends on your specific definitions of “compassionate”, “reasonable”, and “bully”. Yes, the devil is in the details. As long as the vocal people are allowed to redefine these words to mean exactly what they need them to mean in a given moment, and especially if “surely, you said A, but we all know that’s just a code for B” arguments are accepted, it becomes possible to relabel any dissent as bullying, and to ostracize people not because they disagree, but simply because their behavior is interpreted as ethically unacceptable. (Even pointing out this mechanism will be problematic, because we all know that only a bully would expend their energy to defend bullies.)
Also this comment by Kaj_Sotala:
Has it occurred to anyone that worst-argument-in-the-world type thinking is probably a result of the affect heuristic?
After reading David Burns’s “Feeling Good” and scoring in the severe-depression range on its test, I tried the exercises in the book. Though I still struggle with them, they have helped me tremendously and lowered my score on the test after only a week. I cannot attribute the change solely to the exercises, seeing as I have also been more strict in my meditation regimen (15 min in the evening). I think the exercises would be very interesting to this community, and maybe I will write a dedicated discussion post.
With my newfound optimism/hope/energy I am much more motivated to start exercising again in the coming days, maybe take on a programming project, and take up quantifying myself again.
Feeling Good also helped me a lot; I think I self-diagnosed with moderate depression using its test and then got much better after reading its first chapters.
Write a main post! Summarizing a widely acclaimed book about a rationality-related topic of interest to many LessWrongers surely constitutes worthy subject matter.
I am going to write it in discussion. If the moderators feel it belongs in main they can move it.
Is there any precedent for the moderators doing such a thing?
Some, but not much.
Very interesting, is it available online anywhere?
It’s available on Library Genesis
No, you will have to buy it or go to your local library. Of course, those are only the legal options. Alternatively, wait until I publish my post, then you will be willing to buy the book. Yes, it is that good.
I’d be very interested in reading it!
I look forward to it.
Awesome! I look forward to reading any post you make about it.
I’m thinking about a fantasy setting in which I expect to set stories in the future, and I have a cryptography problem.
Specifically, there are no computers in this setting (ruling out things like supercomplicated RSA). And all the adults share bodies (generally, one body has two people in it). One’s asleep (insensate, not forming memories about what’s going on, and not in any sort of control over the body) and one’s awake (in control, forming memories, experiencing what’s going on) at any given time. There is not necessarily any visible sign when one party falls asleep and the other wakes, although there are fakeable correlates (basically, acting like you just appeared wherever you are). It does not follow a rigid schedule, although there is an approximate maximum period of time someone can stay awake for, and there are (also fakeable) symptoms of tiredness. Persons who share bodies still have distinct legal and social existences, so if one commits a crime, the other is entitled to walk free while awake as long as they come back before sleeping—but how do they prove it?
There are likely to be three levels of security, with one being “asking”, the second being a sort of “oh yeah? prove it” (“tell me something only my wife would know / exhibit a skill your cohabitor hasn’t mastered / etc.”), and the third being… something. Because you don’t want to turn loose someone who could be a dangerous criminal just because they were collaborating with a third party to learn information, or broke into the National Database of Secret Person-Distinguishing Passphrases, or didn’t disclose all their skills to some central skill registry—but you don’t want to lock up innocent people who made bad choices about who to move in with when they were eight, either.
Is there something that doesn’t require computers, or human-atypical levels of memorization/computation, or rely critically on a potentially-break-into-able National Database of Secret Person-Distinguishing Passphrases, which will let someone have a permanently private bit of information they can use to verify to arbitrary others who they are? (There is magic, but it is not math-doing magic.)
All personalities are given a pair of esoteric stimuli. Through reinforcement/punishment, one personality is conditioned to have a positive physiological reaction to Stimulus A and a negative physiological reaction to Stimulus B. The other personality is conditioned the other way around.
The stimuli are all drawn from a common pool of images like “bear”, “hat”, or “bicycle”, so one half of a stimulus pair may be “a bear in a hat on a bicycle”. There’s a canonical set of stimuli, like a huge deck of cards, with all possible combinations, all of which are numbered. The numbers for my stimulus pair are tattooed on my body in some obscure location, like the sole of my foot.
If I need to prove my identity, I show my tattoo to the authority figure. It will read something like “1184/0346”. They pick out either image 1184 (bear in a hat on a bicycle) or image 0346 (a sword in a hill being struck by lightning), and show it to me. My immediate response will be either arousal or disgust, and they will know which personality I am.
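A minimal sketch of how an authority might run this check; the deck fragment, function names, and the observe_reaction judgment are all invented for illustration:

```python
import random

# Invented fragment of the canonical numbered deck.
CANONICAL_DECK = {
    1184: "a bear in a hat on a bicycle",
    346: "a sword in a hill being struck by lightning",
}

def verify(tattoo_pair, claimed_personality, observe_reaction):
    """Show one image from the tattooed pair; read the involuntary reaction.

    tattoo_pair: (positive-for-A, positive-for-B) stimulus numbers.
    observe_reaction: the examiner's judgment of the subject's response,
    either "arousal" or "disgust".
    """
    positive_for = {"A": tattoo_pair[0], "B": tattoo_pair[1]}
    shown = random.choice(tattoo_pair)
    reaction = observe_reaction(CANONICAL_DECK[shown])
    expected = ("arousal" if shown == positive_for[claimed_personality]
                else "disgust")
    return reaction == expected
```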
Is this a realistic cultural adaptation? In most human societies, if you are stuck working or living with someone, your social existence is somewhat shared. A person from your clan doing something bad is also bad for your own reputation. If someone from your family committed a crime, some legal traditions would hold you responsible. It seems much more plausible that society would treat the two people living in the same body legally at least like a married couple, or like brothers were treated in some past legal traditions.
Given your constraints, and assuming no cheap and easy test for distinguishing them, of all the historical examples I can think of, only modern Western culture with its hyper-individual liberalism would bother with the impracticality of treating the two people as fully distinct individuals. And even then they would have to be given a family-like, if not legal-guardian-like, relationship for the purpose of making medical decisions. Not sharing your place of residence and ownership over it would be impractical, though perhaps there would be a strong norm against going into the other guy’s part of the house.
Also, as a minor note, the culture would probably develop a norm of some sort of marker (perhaps clothes, jewellery, or face paint) to show which of the two persons is currently in control. The distinction would be more or less universal, not simply individualized, so even strangers could tell these were two different persons. Think “Ah, I see your patron god is the first twin Jahu. Your cohabitor was here yesterday.” rather than “Aha, James always wears his leather jacket, you must be Harry!”. Using the wrong marker would probably be at least as taboo as cross-dressing was in some past cultures.
I’m not trying to get too much into the cultural details here—certainly cultures vary in the setting. Some of them do treat cohabiting like it’s on par with marriage, and even arrange it through families (which makes sense: if we want to share grandchildren, we arrange for our kids to get married if they’re the opposite sex, but if they’re the same sex nonfantasy humans are out of grandchildren-sharing luck. In comes cohabitation!) But importantly, cohabitors cannot talk to each other. There is no way for them to socially pressure each other outside of self-destructive attacks or sternly written letters. You could hold someone responsible for what their cohabitor did, but this would only deter people who were compassionate enough to care about the fate of someone they cannot ever interact with—and, if they picked each other instead of being arranged, chose on the basis of not particularly desiring to ever interact with them again. (You don’t pick your friends as cohabitors: you pick people whose company you don’t care for with comparable danger tolerances and cosmetic features you want to include when you have your bodies conglomerated.)
Also, they don’t sleep, so “place of residence” dissolves for most people. They have typical hangouts, storage lockers, clubhouses and favorite restaurants and rental kitchens—but why bother maintaining an entire house? You don’t need a secure place in which to sleep; your cohabitor will look after your body while you’re unconscious. Medical decisions are also made a lot simpler by the magic system, although they don’t completely go away and there’s probably some plot to be had there.
Most people would probably adopt cosmetic markers, but how required these would be would certainly vary; I think your expectation here would be a reasonable way for one society to operate but too sweeping for all. This isn’t how we treat identical twins, who, while uncommon, are still a known feature of the real world. I look a lot like my sister to the point where one time I walked into her school and six of her friends mistook me for her; we were not then obliged to choose distinct ritual scarves and wear them at all times.
Cohabitors could also pressure each other with rewards, and with threatening to withhold rewards.
I’m not sure about the lack of residences. A storage locker isn’t the same thing as having your stuff conveniently arranged for use.
Well, houses are at least a great deal more optional. I’m imagining them as something of a status symbol.
How much of a status symbol would a home be? Only the poorest don’t have a home? A home is a middle-class sort of thing? Only the rich? Only the very rich?
Again, would vary from culture to culture within the setting.
IIRC, in some cultures (e.g. mid-20th-century Italy) they did the opposite, i.e. they dressed their twin children identically.
Each personality owns a bracelet with a combination lock. To prove you’re you, you unlock your bracelet. This is basically the password system, but localized, and now you just have to worry about making combination locks tamper-proof.
Unfortunately, physical locks interact very badly with the magic system. (In brief: “Lockedness” is a thing. If you are about average at magic, it’s a thing you can move from one thing you’re touching that is locked to another thing you are touching that can be locked but isn’t.)
Since it’s the only thing I know about the magic system, I suggest looking closely into what it means that X can be Y. (By “looking closely” I mean “exercise your authorial authority”.) Then tie the procedure to something that can’t be moved to anything that prisoners have around, other than the actual testing thing.
But the thing that keeps returning to my mind is that in our world we do quarantine innocent people if they carry dangerous enough diseases. I think you’d need a pretty high rate of evil-twinniness for a society not to take the easy way out and do the same. Even a very trustworthy person can fail to return to prison (?) by accident.
Anyway, I think pen-and-paper cryptography is your best guess, unless “encryptedness” and related properties are things that can be moved. Neal Stephenson’s Cryptonomicon has an example of a protocol that uses a deck of cards. (Which is imaginary but possible AFAIK.)
It’s not imaginary; the protocol is described in one of the appendices, and I’ve implemented it once.
Cool! Do you remember the “performance” of the protocol? (That is, how much work it takes to exchange how much information, in approximate human-scale terms, and its approximate security in usual cryptographic language.)
Sadly, Bruce Schneier’s “Solitaire” is broken. That break was one of the things that got me into crypto!
Can you explain how broken it is to this layperson?
Warning: What follows likely has major technical errors—basically all I know about cryptography I learned from Cryptonomicon.
From the description, the random numbers are not evenly generated, so that what should have a 1/26 chance of happening instead has a 1/22.5 chance. And the output is heavily biased.
How much does that matter? We can easily decrypt Enigma with brute force right now. Is the difference in the amount of computing power to brute force Solitaire all that much different from what is expected?
In other words, encryptions with 256-bit keys are harder to crack than 128-bit keys. But is the problem with Solitaire 20-years-safe vs. 10-years-safe, or is it 20-years-safe vs. 12-months-safe?
Yeah… I guess as long as I’m postulating accomplices, I might as well postulate accomplices who’d kidnap their jailed friend’s cohabitor and wait until they are forced to sleep by sheer exhaustion.
Is there a risk that any authentication scheme could be bypassed by transferring the “Authenticatedness” from someone else, or does the magic system forbid that somehow?
In any case, some kind of magical version of the bracelet lock sounds like a good idea, if you can think of one.
Transferring authenticatedness doesn’t work, so that’s not going to be an issue.
I can’t think of a way to magic up the bracelet to work like this, unfortunately.
Couldn’t they just each memorise a six digit number and recite it on demand?
The first thing that occurs to me is to decentralise the database, which incidentally is rather a computer-ish concept. Each person designates two or more Keyphrase Holders, with a separate password for each. For low-security situations, they have to give their passphrase to one KH; for maximum security, they have to convince all of them. Ten or a dozen passwords should not be beyond anyone’s memorisation capabilities in a world without shiny Internet distractions, and the KH can write them down—this gives you a lot of different DSP-DPs instead of one big one. Any given KH may be suborned or have his database broken into, but by the time you get up to a dozen or so that is unlikely.
Obviously this works best if you don’t have to physically drag the KH to the prison cell, or whatever, before you let the innocent one out.
To make this easier to memorize and more secure, you could have a much larger number of KHs. Their job is to be KHs; their identities are kept secret even from each other. Each KH learns a certain property of the person’s password (e.g. its length, the number of vowels, the number of times the letter “a” appears minus the number of times some other letter appears, etc.). However, they don’t know the password itself; they only know the person’s answer to their question. When a person wants to be released, a certain number of KHs (randomly selected, large enough that correct guessing or collusion is unlikely, and all wearing hoods) are summoned to the person’s cell to figure out their identity.
You’d need to ensure that, following an incorrect guess, the same KH isn’t used again, or that the innocent person picks a new password. (Propagating password changes would be terrible; it would make sense to have very severe punishments for claiming to be another person. The first time would be standard jail processing: everybody innocent would need to go down a line of KHs and tell them their name and their answers. This also highlights the main weakness of any possible system: the need to have verified who is who when the initial passwords were set, since criminals would presumably go to sleep immediately after committing crimes, or claim to have just woken up.)
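A rough sketch of the property-check mechanics, with invented example properties; the point is that no single KH ever holds the passphrase itself:

```python
# Each Keyphrase Holder (KH) records one derived property of the
# passphrase at enrollment, never the passphrase itself.
# These property functions are invented examples.
PROPERTIES = [
    lambda p: len(p),                        # its length
    lambda p: sum(c in "aeiou" for c in p),  # number of vowels
    lambda p: p.count("a") - p.count("b"),   # 'a' count minus 'b' count
]

def enroll(passphrase):
    """At jail processing, each KH records their answer for this person."""
    return [prop(passphrase) for prop in PROPERTIES]

def release_check(claimed, stored_answers, sampled_khs):
    """A random sample of hooded KHs compare the claimant's passphrase
    against the answers they recorded at enrollment."""
    return all(PROPERTIES[i](claimed) == stored_answers[i]
               for i in sampled_khs)

answers = enroll("correct horse")
assert release_check("correct horse", answers, [0, 2])
assert not release_check("wrong guess", answers, [0, 1, 2])
```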
Give everybody training in a particular skill during their childhood: juggling, acrobatics, calligraphy, drawing, playing a particular instrument, or even something more esoteric like doing figures with a yoyo or a diabolo, pool tricks, or tricks with a soccer ball; anyway, something requiring a good amount of motor skill and training. Also make sure that no cohabitor pair has skills that are too similar (like calligraphy and drawing, or acrobatics and soccer tricks, or the violin and the bass).
Then have a taboo against learning those skills outside the “official” (or religious) context in childhood (for example: being seen training for them is a crime, the props can’t be found outside special temples, etc.).
Physiological correlates to anxiety in response to known personality-specific trauma?
Can they use quill and parchment?
If so, the usual public key algorithms could be encoded into something like a tax form, i.e. something like ”...51. Subtract the number on line 50 from the number on line 49 and write the result in here:__ …500. The warden should also have calculated the number on line 499. Burn this parchment.”
Of course there would have to be lots of error checks. (“If line 60 doesn’t match line 50 you screwed up. If so, redo everything from line 50 on.”)
To make it practical, each warden/non-prisoner-pair would do a Diffie-Hellman exchange only once. That part would take a day or two. After establishing a shared secret the daily authentication would be done by a hash, which probably could be done in half an hour or less.
Of course most people would have no clue why those forms work, they would just blindly follow the instructions, which for each line would be doable with primary school math.
The wardens would probably spend large parts of their shifts precalculating hashes for prisoners still asleep, so that several prisoners could do their get-out work at the same time. Or maybe they would do the crypto only once a month or so and normally just tell the non-prisoners their passwords for the next day every time they come in.
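For anyone curious, the exchange being encoded is just textbook Diffie-Hellman. Here it is with the deliberately tiny parameters of the usual textbook example; nothing here is specific to the setting, and the daily-hash step at the end is only a sketch:

```python
import hashlib

# Toy Diffie-Hellman with small textbook parameters. Each arithmetic
# step below would be one "line" of the tax form.
p, g = 23, 5

a = 6                        # warden's secret number
b = 15                       # non-prisoner's secret number

A = pow(g, a, p)             # warden publishes 5^6 mod 23 = 8
B = pow(g, b, p)             # non-prisoner publishes 5^15 mod 23 = 19

shared_w = pow(B, a, p)      # warden computes 19^6 mod 23 = 2
shared_v = pow(A, b, p)      # non-prisoner computes 8^15 mod 23 = 2
assert shared_w == shared_v  # the shared secret, never spoken aloud

# Daily authentication after the one-time exchange: derive a short code
# from the shared secret and the day (the hash choice is illustrative).
code = hashlib.sha256(f"{shared_w}:day-17".encode()).hexdigest()[:6]
```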
I don’t think that I understand how this works, which has a meta-level drawback...
You might have better expository skills than Salutator, and people love learning esoteric things about mysterious professions in the midst of fiction. Diffie-Hellman relies on certain properties of math in prime modulus groups, but understanding those properties isn’t necessary just to do DH. It only takes primary-school-level math abilities to follow the example on Wikipedia (and note that, if nobody has computers, you don’t need a 2048-bit modulus).
Everyone is born with a true name that they intuitively know but can’t say, and they also have a unique soul-color. And there are special glow-stones that you can think your true-name at, which will then glow the same as the soul-color of the person with that name.
I’d rather not solve the problem by adding magic that doesn’t fit into the existing system. Especially suspiciously convenient magic.
You need to think about one-way functions (hashes) and trapdoor one-way functions (public key algorithms). There are some additional issues that arise like nonces to thwart replay attacks and the level of protection individuals can be expected to give to secret keys.
Also, even without explicit mathematics the universe will presumably have a concept of entropy and conservation of something, even if it’s just conservation of magical energy. If you can come up with a plausible problem that magic can solve given a lot of expended magical energy but that can be solved much more easily with the knowledge of a secret, then you can build a challenge-response identity proof, so long as it’s not easy to steal the secret by watching the demonstration. If, additionally, it’s very hard to derive the secret from the demonstration of its knowledge, you probably have the power of a public-key system.
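A minimal model of that challenge-response shape, with a hash standing in for whatever one-way process (magical or otherwise) the setting supplies; all names here are invented:

```python
import hashlib
import secrets

def respond(secret: bytes, challenge: bytes) -> str:
    """One-way combination of the prover's secret with a fresh challenge."""
    return hashlib.sha256(secret + challenge).hexdigest()

# Enrollment, done once under trusted conditions: the verifier learns
# the prover's secret (a public-key variant would store only a commitment).
secret = b"known-only-to-the-day-personality"

# Each session uses a fresh random challenge (a nonce), so an eavesdropper
# who overhears one response cannot replay it later.
challenge = secrets.token_bytes(16)
expected = respond(secret, challenge)      # verifier's own computation
offered = respond(secret, challenge)       # genuine prover's answer
assert offered == expected
assert respond(b"wrong-secret", challenge) != expected
```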
Not all of the following schemes require magic to implement, and many of them actually benefit from a general ignorance of mathematics and algorithms, since most of these are not cryptographically secure.
Have each person construct an elaborate puzzle out of oddly shaped objects that can be packed into a small finite volume in only one way (the knapsack problem)
Each person constructs a (large) set of sticks (or metal rods, or whatever) of varying lengths, of which a subset adds up to a standard length like a meter (the subset sum problem; see the sketch after this list)
Society forms a hierarchical tree of secret handshakes so that each person only has to remember, say, 100 secret handshakes and the tree only has to be log_100 (N) tall so the courts can just subpoena a logarithmic number of individuals to verify handshakes between any two arbitrary people. Obviously any one of your 100 acquaintances can impersonate you, so two or more distinct trees would at least require collusion.
Any magical item that only functions for its “owner”.
A magical “hash function”, like a patronus or an aura, that is unique to every individual (not body) and can’t be faked. Producing it would be an effective identifier.
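The stick-lengths scheme above is easy to model: verification is just laying sticks end to end, while an impostor faces an exponentially growing search. Ten sticks, as here, is far too few to be secure, and all lengths are invented:

```python
from itertools import combinations

# Invented stick lengths in millimetres; the owner memorizes which
# sticks make up the metre.
sticks = [317, 88, 412, 95, 273, 140, 251, 66, 183, 534]
TARGET = 1000

def verify(claimed):
    """Checking a claimed subset is easy: lay the sticks end to end."""
    return sum(claimed) == TARGET and all(s in sticks for s in claimed)

assert verify([317, 412, 88, 183])

# The impostor's problem: exhaustive search over all 2^n subsets.
matches = [c for r in range(1, len(sticks) + 1)
           for c in combinations(sticks, r) if sum(c) == TARGET]
print(len(matches), "subset(s) sum to the target")
```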
Lastly, I should point out that very few “normal” people in the situation you describe would be able to achieve cryptographic security anyway. I can (barely) memorize a passphrase with 128-bit entropy (using diceware, so I’m certain it actually has 128 bits), and even that’s not quite enough to choose a secure secret key for Elliptic Curve DSA. And it would have to be only memorized and never written down anywhere, and only computed on trusted hardware (which the sleep-twin could modify to their heart’s content while I slept). So, yeah, Magic.
Maybe you could adapt this implicit memory-based authentication scheme into a board game format similar to Mastermind.
Recognition memory is actually even cooler than implicit memory, I thought, and can contain quite a bit of information (as far as I could tell, working through Shannon’s theorem): http://www.gwern.net/Spaced%20repetition#fn63
Dunno how it would work in this setting, though, unless the personalities share visual recognition.
If I do something in this approximate neighborhood, I think I’ll go with the hypnotism idea, since it’s easier both to understand and to handwave about.
A few possibilities:
A clockwork Analytical engine / Enigma machine, that does something equivalent to public key verification (though I assume you don’t want that kind of machine either).
In each city is a temple of the Sigils, in which are stored the Sigils of people, in public view. The Sigils are like intricate signatures drawn on clay tablets; but they are made on a special clay, Sigil Clay, that dries in about a minute, and changes color depending on the pressure you apply to it, the heat (depending on whether you’re touching it with a stylus or with your fingers), and how dry it is. Sigil Priests know hundreds of drawing techniques, and when an alternate pair is created, each person will be taught a few techniques to apply to his drawing, with no overlap between the alternates (so it should be quite hard for someone to reproduce his alternate’s Sigil). Being able to draw one’s Sigil is generally considered a proof of identity, and since only the Sigil Priests know how to make Sigil Clay, one has little opportunity to practice drawing someone else’s Sigil (not to mention that it’s of course considered a grave crime).
For the prisoner’s case, why not have the “day” persona return to prison to sleep and give a new short passphrase (randomly generated with a special set of dice) to the guard; when he wakes up and wants to get out, he must give the same passphrase (if he gets it wrong, he is lightly punished and must wait at least 30 minutes before trying again).
This is a weird and interesting premise!
The passphrase idea you describe is probably fine for minimum and even medium security, it’s just vulnerable to eavesdropping and message-passing by third parties if the prisoner has friends.
So basically the Cherubs in Homestuck.
I barely got ten pages into Homestuck, so I wouldn’t know.
Calliope/Caliborn share the same body. Each is “asleep” while the other is “awake”, and they have a pair of ankle-shackles, of which, magically, only one can open. They also have disjoint skillsets; due to some kind of brain trauma, Caliborn is incapable of drawing, while Calliope is pretty good: example
Caliborn circumvents this latter restriction by biting off his own leg.
Why use cryptography? If I understand the problem statement correctly, there’s a simpler solution. When a prisoner wants to go to sleep, they signal and a guard walks over and renders them unconscious, presumably using drugs. Since we know that nobody would go to sleep outside of jail, you can figure out who is who by counting the number of times they’ve been sedated.
(This is vulnerable to troubles telling who is who at the start, but so is any knowledge-based method. This is also vulnerable to people falling asleep outside, but so is any knowledge-based method. It’s also fairly dangerous, given that most drugs capable of rendering somebody unconscious are dangerous; however, giving guards some training and then handwaving it away, or saying the society isn’t concerned by the (minimal) danger, sounds reasonable. It assumes certain things about going to sleep and drugs that may not be true in this universe, but it at least sounds reasonable, and this is fiction.)
Sedatives would cause physical sleep, and the reason people share bodies in this world is because having your body be asleep will cause your soul to be eaten by insubstantial demons. Sleeping-while-someone-else-pilots-your-body is safe in large part because it cuts off interventions regarding your soul from outside sources—demons, drugs, magic, etc.
Also, this method relies on cooperative criminals, not just cooperative cohabitors-with-criminals. The criminal has an incentive to make being in jail really inconvenient for their cohabitor—by, for instance, not notifying anyone before going to sleep. They’re already in jail, so making their cohabitor mad at them has limited power to make their situation worse, but if the guards wind up having to imprison the cohabitor too to be safe, the cohabitor might work on ways to get out.
I suppose reallocating cohabitors (say, criminals with criminals) is out of the question?
Moving one person in with another person is already very magically challenging; this might not be strictly impossible but your average community would not have access to even one person who could do it. Perhaps this would be a good last resort on a national level for anyone with a demonstrated propensity to actually escape, or whose escape would be particularly dreadful.
Is handwriting style per person or per body?
Per person, but most people in ordinary day-to-day life will have plenty of opportunity to observe and practice mimicking their cohabitor’s handwriting if they feel like working on that—they can’t talk to each other directly, so they leave notes (“watch out for our left foot, it’s still tender, I dropped something on it”, “so how are you doing, what are you up to”, “we’re pregnant”).
So handwriting is secure between a pair; then all you need is some sort of authentication. Why not use a very simple random number generator? Each member of a pair knows it, of course, and they occasionally set up fresh seeds. Each day is one iteration. To ‘sign’ a message, one simply writes down today’s random number afterwards. (You said handwriting is secure, so you don’t worry about someone tampering with the message and making an authentic number testify to a faked message.)
What RNG? Dunno. Blum Blum Shub has a hilarious name, but the multiplying is a bit painful. Depending on how much accuracy you want, you could make up your own simple recurrence (imagine a list of 5 integers, which shift each day, and the first is defined by the sum of the last two modulo 5). But it turns out geeks have already discussed PRNGs you can do with mental arithmetic:
http://ask.metafilter.com/191135/Help-me-get-random-numbers-by-mental-arithmetic
http://blog.yunwilliamyu.net/2011/08/14/mindhack-mental-math-pseudo-random-number-generators/
http://stackoverflow.com/questions/3919597/is-there-a-pseudo-random-number-generator-simple-enough-to-do-in-your-head
http://ask.metafilter.com/20334/Random-sequences-in-your-head
From the looks of them, at least one suggestion should work for you.
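Here’s the made-up recurrence from the parenthetical above, written out. It is nowhere near cryptographically strong; it just shows the shape of a daily iteration:

```python
# Toy recurrence: a list of 5 integers that shifts each day, the new
# first element being the sum of the last two modulo 5. Illustrative
# only; do not mistake this for a secure generator.
def step(state):
    return [(state[-2] + state[-1]) % 5] + state[:-1]

state = [3, 1, 4, 1, 2]      # the shared seed both cohabitors memorize
for day in range(1, 4):
    state = step(state)
    print("day", day, "-> sign today's notes with", state[0])
```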
This allows pair members to authenticate themselves to each other, but not third parties to tell members apart.
Set up another pair of RNGs; both write down on a piece of paper and show the paper simultaneously, something like that. With third parties, you lose the time-delay aspect which makes things hard in the case of temporally separate pair members trying to authenticate to each other.
OMG!
Well, first, handwriting is extremely hard to mimic perfectly, but maybe it’s easier if you are using the same hand (and brain). Think of other individual traits that are harder to observe in your other half. Maybe speech patterns, or mannerisms, or some other subconscious manifestations. Maybe have a separate hypnotic induction for each person when they come of age. Judging by your writings, you don’t suffer from a lack of imagination. The goal is to have a cheap version of the same feature, and “There are likely to be three levels of security” sounds pretty complicated already.
Oh, come on, it’s an obvious consequence of the premise.
Hypnosis has some promise. Speech patterns/mannerisms seem like they’d rely on the testimony of people who know both of the cohabitors really well and who probably aren’t cops, which has the problem of those people being corruptible in various ways.
I don’t suffer from lack of imagination, but I’m just one person. An entire civilization which has had this problem for a long time should be able to come up with a solution that’s more robust than what I’ve been coming up with, so I solicit help—I’d feel especially silly if there were some trivially implementable noncomputerized version of RSA that someone could tell me about. Also, the entire setting does this thing where people share bodies, and there are multiple cultures in the setting—ideally they’d have different approaches, so if I can come up with more than one workable idea, so much the better.
Without introducing more magic and without there being at least some kind of database, this is an unsolvable problem. I would say use a one-time pad, but the key would have to be stored in a database.
If the technology of the time is at least that of, say, the 1940s, you could use quantum key distribution to at least be alerted if the crypto is broken (more useful than any other solution), but it would still require a database.
Maybe it would be obvious, were I female.
Good point. RSA in a nutshell is “I’m the only one who knows a certain secret, and I’m the only one who can unconditionally and repeatedly verify this fact without divulging the secret itself”. Well, this is one half of it, the authentication part, not the encryption part.
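To make the nutshell concrete, here is the authentication half with absurdly small textbook numbers; real keys need numbers far beyond hand arithmetic, which is exactly the problem in this setting:

```python
# Textbook RSA with toy parameters: p=61, q=53 give n=3233 and
# phi=3120; e=17 is the public exponent, d=2753 its private inverse.
n, e = 3233, 17                  # public: anyone can verify with these
d = 2753                         # private: only the keyholder can sign

challenge = 1234                     # verifier picks a fresh number
signature = pow(challenge, d, n)     # only the secret-holder can do this
assert pow(signature, e, n) == challenge   # anyone can check the result
```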
So you need a way for a person to produce some output from a given input that is unique both to the person and to the input, but easily verifiable. What kind of non-technical output is available? Visual? Aural? Motor functions?
For example, maybe the way one’s eyes follow a complicated pattern is unpredictable, yet unique enough and easy to check. Or a rhythm one drums in response to something. Or one’s interpretation of the Rorschach test.
By the way, if you find something that works in real life, you will be famous and set for life, as this is an open problem with multiple applications.
These people are humans, although there is much more potential for magical alteration of the base plan than real humans have. They have human capacities to memorize and transmit information.
I’m reminded of this. Although the technique in the article was taught using a computer game, one could plausibly develop an analog equivalent. Give someone a musical instrument and teach them to play specific sequences in response to the sequences somebody else plays, or something.
But the teaching would be really time-consuming, and of course you’d have to make sure that the right person was in charge of the body while they were being taught.
If it’s something you can teach children, then wealthy societies (which can afford to wait longer before having people move into each other’s bodies) can be sure to teach only the correct people, but indeed time consumption remains an issue.
Well, there is visual cryptography in various forms, and if one databank is not secure enough, make it two or three (parole officer + National Databank, or something); that’s called secret sharing. It is possible to combine both, and even to keep them at a simple enough level not to require PCs. Of course, for visual cryptography you need a fast way to recreate the visual secrets; computing and graphing polynomials for thirty minutes every twelve-ish hours is a serious waste of time…
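A sketch of the secret-sharing half, as a two-of-three Shamir split over a small prime field (all numbers invented); the parole officer and the National Databank each hold a share that is useless alone:

```python
# 2-of-3 Shamir secret sharing over GF(257): the secret is the constant
# term of a line; any two shares recover it, one share reveals nothing.
P = 257
secret, slope = 123, 166          # slope would be chosen at random

shares = {x: (secret + slope * x) % P for x in (1, 2, 3)}

def reconstruct(x1, y1, x2, y2):
    """Lagrange interpolation at x = 0 from two points on the line."""
    inv = pow(x2 - x1, -1, P)     # modular inverse (Python 3.8+)
    return (y1 * x2 - y2 * x1) * inv % P

assert reconstruct(1, shares[1], 2, shares[2]) == secret
assert reconstruct(2, shares[2], 3, shares[3]) == secret
```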
Does the protocol need to be robust against cohabitors in league with each other? That is, is “permanently private” built in, or could someone share their key with a cohabitor who agrees to take the fall?
I think under the circumstances they’re going to have to consider cohabitors who aid and abet their cohabitor’s crimes to be accessories deserving of the same punishment (at least insofar as that punishment is restriction of movement); otherwise you let the accessory go, they travel to a safe place, they nap, and boom, the criminal is free.
I just ran across this in Wikipedia:
“Our “real will” (in Bosanquet’s terms) or “rational will” (in Blanshard’s) is simply that which we would want, all things considered, if our reflections upon what we presently desire were pursued to their ideal limit.”
This is remarkably similar to the informal descriptions of CEV and moral “renormalization” that exist. Someone should look into the literature on Bosanquet and Blanshard’s rational will, and see if there’s anything else of use.
Thanks for the reference. It’s a shame that the informal description wasn’t attached to a more distinctive label. If it had been, it would be worth adopting for the sake of conformity.
If I had a dollar for every time a philosopher talked informally about something potentially very cool...
...then you’d have a dollar for every post in the Sequences.
The waning of the nuclear family by Razib Khan
This is a good example of why I think it makes much more sense to frame history in terms of “value change” rather than holding notions of “moral progress”.
That’s a pretty good example of that, yeah. It’s also interesting to note how values, or at least the potential for them, may be conserved across long-term shifts: American culture is notably fixated on genealogy compared to societies where the extended family is a socioeconomic norm; the motivation to have a wider familial context is there, even in families and individuals who are quite comfy with the nuclear pattern. I’m not suggesting it’s a causal influence that trumps the economics driving the push for extended families, but I can’t help seeing it as influential. The demographic transition and decline of extended families in the US wasn’t that long ago...
I own a personal server running Debian Squeeze with a 1 Gb/s symmetric connection and 15 TB of bandwidth per month.
I am offering free shell accounts to lesswrongers, with one contingency:
1) You’ll be placed in a usergroup, ‘lw’, as opposed to the members of the various other communities I belong to, who will be in other usergroups. Anything that ends up in /var/log is fair game. I intend to make lots of graphs and post them to all the communities I belong to. There won’t be any personally identifying data in anything that ends up public.
Your shell account will start out with a disk quota of 5 GB, and if you need more you can ask me. I’m totally cool with you seeding your torrents. I do not intend to terminate accounts at any point, for inactivity or otherwise; you can reasonably expect to have access for at least a year, probably longer.
Query me on freenode’s irc (JohnWittle), or send me an email. johnwittle@gmail.com.
Also, while the results of my analysis are likely to go in Discussion, I was wondering if this offer of free service itself might go in Discussion. I asked in IRC and was told that advertisements are seriously frowned upon and that I would lose all my karma.
Related to: List of public drafts on LessWrong
An online course in rationality?
A month or two ago I made a case on the #lesswrong IRC channel that a massive online class, or several, created in partnership with an organization like Khan Academy or Udacity, would be a worthy project for CFAR and LW. I specifically mention those two organizations because they are more open to non-academic instructors than, say, Coursera or EdX, and seem more willing to innovate rather than just dump classical university-style lectures online.
The reason I consider it a worthy project is that, besides exposing far more people to the material and ideas we want to spread, it would allow us to make progress on the difficult problems of teaching and testing “rationality”, with the magic of Big Data and even something as basic as A/B testing to help us.
I considered making an article on it, but several people advised me that this would prove a distraction for CFAR, more trouble than it’s worth at this early stage. I have set up a one-year reminder to make such a proposal next summer, and plan to do some research on the subject in the meanwhile to see if it really is as good an opportunity as I think it would be.
Obama has been reading Kahneman’s Thinking, Fast and Slow.
It has become increasingly clear over the last year or so that planets can in fact form around highly metal-poor stars. Example planet. This both increases the total number of planets to expect and increases the chance that planets formed around the very oldest stars. (Younger stars have higher metal content.) One argument against Great Filter concerns is that life may not have been able to arise much earlier than it did on Earth, because stars much older than our sun would not have high metal content. This discovery seems to seriously undermine that argument.
How much should this change our estimates of whether to expect heavy Filtration in front of us? My immediate reaction is that it does make future filtration more likely, but not by much, since even if planets could form, a lack of carbon and other heavier elements would still make the formation of life and its evolution into complicated creatures difficult. Is this analysis accurate?
I have a Great Filter related thought which doesn’t address your question directly but, hey, it’s the Open Thread.
My thesis here is that the presence of abundant fossil energy on earth is the primary thing that has enabled our technological civilization, and abundant fossil energy may be far less common than intelligent life.
On top of all the other qualities of Earth which allowed it to host its profusion of life, I’ll point out a few more facts related specifically to fossil energy, which I haven’t seen in any discussions of Fermi’s Paradox or the Great Filter.
Life on Earth happens to be carbon-based, and carbon-based life, when heated in an anoxic environment, turns into oil, gas and coal.
Earth is roughly 2⁄3 covered in oceans (this figure has varied over geologic time), a fact with significant consequences to deposition of dead algae, erosion, and sedimentation.
Earth possesses a mass, size, and age such that the temperature a few kilometers below the surface may be hundreds of degrees C, while the surface temperature remains “Goldilocks.”
Earth has a conveniently oxidizing atmosphere in which hydrocarbons burn easily, but not so oxidizing that it prevents stable carbon-based life. Quite a narrow window, really.
Life has existed on Earth for billions of years, and thus algae and other life forms have been dying in oceans and swamps and accumulating subsurface hydrocarbon source material, for billions of years.
Put all this together and realize that the formation of oil, gas, and coal happens only in rare and specific circumstances even on Earth. We seem to have a lot of these resources today, but it took billions of years for them to accumulate in the quantities we now find.
If any one of the above facts were not true, we would not have fossil energy—coal, oil, gas, plastics, lubricants—we would not have an industrial revolution, and we would not have a technological civilization.
Many of the facts on the above list have to be true simply to enable fire, as in wood fire. Imagine what human history would look like if the oxygen content of Earth’s atmosphere were too low to sustain wood fire.
Anyways, maybe people have discussed this before, but I wasn’t able to Google anything up.
The oxidizing atmosphere is not due to chance. It was created by early life that exhaled oxygen, and killed off its neighbors that couldn’t handle it. Hence, I don’t think the goldilocks oxygen levels speak much to great filter questions.
Early in civilization, we used wood and charcoal as energy sources. Blacksmithing and cast iron were originally done with wood charcoal. Cast iron is a very important tool in our history of machine tools and hence the industrial revolution. It’s possible that we could have carried on without coal, instead using large-scale forestry management or other biomass as our energy source. In the early 1700s there were already environmental concerns about deforestation. They were more related to continued supply of wood for charcoal and hunting grounds than “ecological” concerns, but there were still laws and regulations enacted to deal with the problem.
How many people do we need to support a high-tech civilization? I suspect fewer than the number we actually did it with. It’s quite possible that biofuel sources would have produced a high-tech civilization, just more slowly and with fewer people.
Also, note that biofuels can produce all the lubricants and plastics you need just fine. The Fischer-Tropsch process has been implemented on a large scale before.
I think that given all this, you could get the modern metal lathe and the steam engine without fossil fuels. We already harnessed basic water and wind power without fossil fuels. I suspect with modern machine tools you get to electricity and large-scale water and wind power generation, even without fossil fuels. Again, more slowly, and possibly without so many people, but I think you can get there.
These are all good points and I don’t disagree with you. It probably is worth pointing out that ever since about 1800 our civilization has had “the pedal to the metal” in terms of accelerating our demand for energy, i.e. an exponential rise in energy demand, and that demand has been consistently met and often exceeded—this is why we can afford to fill our personal cars with this precious fuel on a regular basis.
I think that a sufficiently forward-thinking civilization probably could base its energy production around biofuels, but a gallon of gasoline-equivalent would probably cost about a thousand dollars-equivalent. Building a skyscraper would be a project akin to manned space flight. Manned space flight would be completely out of the realm of possibility.
The more important question would be how hard it would be to get nuclear energy.
I find this doubtful, being as ethanol (25 MJ/L) is nowhere near that expensive to create, and is fairly near the energy density of gasoline (35 MJ/L).
Consider the entire economy, though. Let’s not assume that ethanol could ever replace fossil fuels at the scale needed for explosive technological growth. The reason pure ethanol is cheap in the modern world is that we have enormous economies of scale producing the necessary feedstocks, which rely on trucks and trains and fertilizers; hell, even the energy used to distill the ethanol typically comes from fossil fuel.
It’s about supply and demand. If, tomorrow, there were no gasoline anymore, the price of ethanol would be astronomical.
Note, though, that we are talking about a much smaller population, so you could spend quite a lot of land per capita on growing both ethanol feedstock and other fuel.
Current size of humankind is clearly unsustainable in this mode, of course.
With a much smaller population you start losing all sorts of other advantages, especially economies of scale and comparative advantage.
Careful. Economies of scale at million-part quantities don’t show up until probably the 20th century. Prior to that, the effect of reduced population size might just be reduced variety. Do you have any idea how many manufacturers of engine lathes there were at the end of the 19th century, for instance? (Hint: more than a couple.)
Actually, the neighbors that couldn’t handle oxygen got forced underground. They live in the mud under the deep sea, in digestive tracts, etc.
Well, some of their descendants are still alive, yes. But I believe that there was a lot of dying involved in that process. More than I think is implied by the phrase “forced underground”.
Well, one point is that supposedly there were a lot of societal factors that also had to be in place for the industrial revolution to take place. (Apparently if you lived anywhere but Britain, if you were doing anything cool, the ruling monarch would come along and just take it.) So it’s not necessarily just tech.
Another point is that Earth appears to have periodic ice ages, and many/most human civilizations seem to collapse after a while. So sustaining progress over long periods is nontrivial.
Environmental ones too. Britain had to be so short of wood and charcoal to burn that using coal in home stoves, even with its nasty byproducts, was preferable to most people going without any source of burnable fuel. The widespread proliferation of coal that followed to meet the demand meant there was plenty of it about to turn to other purposes.
Frankly, I’m wondering if the whole idea of exponential growth is just short cultural time horizons applied to the implications of fossil fuels for energy production, which touched off the Industrial Revolution. The Hubbert Peak holds, although coming out the other side of it resembles a gradual stepping-downward with its own local spikes and valleys (much as there are spikes and valleys in growth and use now, despite a steady upward trend). Fossil fuels still supply over three quarters of the world’s energy demand; there hasn’t been a nuclear renaissance so far, and as much as someone always wants to boost pebble-bed, travelling-wave, or thorium reactors, innovation and growth for nuclear both seem quite limited on balance. That might not seem like a big deal now (surely it could happen, right?) but what if that situation does not change appreciably, and world civilization starts transiting down the other side of the curve, taking a few centuries to do it? What if we never do figure out FAI, or MNT, or fusion, or whatnot? What if that’s because the noise of society, geopolitics and history-in-general just doesn’t allow for them to come to pass?
What if the answer to Fermi’s paradox is simply “You’d have to mistake the infrastructural equivalent of a blood sugar rush for an inexorable trend in technological development to even wonder why nobody’s zipping around in relativistic spacecraft or building Dyson spheres”? What if the problem is just short time horizons and poor understanding of context?
This. I’ve been searching for a way to articulate this idea for quite some time, and this is the best way I’ve seen it stated.
The last few centuries are potentially extremely atypical in human history. We have three generations of economists raised to think exponential growth is normal, and a series of technological advances that almost all require highly concentrated energy in ways that are seldom appreciated. When you think about it, something like an oil well is the most concentrated source of easily captured energy in the solar system: where else do you get such a huge amount of highly reduced matter next to such highly oxidized gas, with the interface between them requiring something as simple as a drill and furnace? Per unit of infrastructure and effort that is an incredible resource, and I honestly doubt you can really improve upon it. I have long suspected that reversion towards (though perhaps not all the way to) the mean is far more likely in our future.
Thank you. It’s still a bit indistinct to me as yet—I haven’t seen many other people talking about it in these terms, except Karl Schroeder (who explores it a bit in his science fiction writing), but I knew something seemed a little funny when the Rare Earth Hypothesis and its pop-sci cousins started growing in popularity among the transhumanist set. It seems like an awful lot of the background ideas about the Fermi Paradox and its implications for anthropics in the core cluster that LW shares go back to an intellectual movement that came to prominence at a time before we’d discovered more than a tiny handful of exoplanets. Now we know there’s at least one Earth-sized world around Alpha bloody Centauri and even Tau Ceti of all stars is being proposed as rich in worlds; at this rate I personally expect to learn about the probable existence of another biosphere around a star within 100 ly, within my natural lifetime (though, for the reasons expressed in my comment, I’m doubtful we’d be able to reliably notice another civilization unless they signalled semi-deliberately or we got staggeringly lucky and they have a recognizably-similar fossil fuel “spike” within a similar window, meaning we can catch the light of cities on the night side assuming Sufficiently Powerful Telescopes).
nod I suspect the future probably looks rather weird to LWian eyes, in this regard—neither a reversion to the 10th or 17th century for the rest of human existence, nor much like the most common conceptions of it here (namely: UFAI-driven apocalypse vs FAI-driven technorapture). It’s hard to tease out the threads that seem most relevant to my budding picture of things, but they look something like: increasing efficiency where it’s possible, a gradual net reduction in the stuff economists have been watching grow for the last few generations, some decidedly weirdtopian adaptations in lifestyle that I can only guess at… we’ve learned so much about automation, efficiency, logistics and so forth, and it seems like there’s plenty of time to learn a great deal more, such that my brain tries to conjure up visions of a low-energy but surprisingly smart future infrastructure where the world is big again.
I think the jury is still out on this… on the one hand, we are finding huge numbers of planets, and it is likely that our sampling biases are what push us towards finding all these big “super-Earths” close to their parent stars. (I take issue with that terminology; calling something of ~5 Earth masses “potentially habitable” or even “terrestrial” is problematic, because we have no experience with planets of that size range in our own system, and you can’t confidently state that most things of that mass would actually have a surface resembling a rock-to-liquid/gas transition.) On the other hand, we are finding so many systems that look nothing like ours, with compact orbits and arrangements that probably could not have formed that way and thus went through a period of destructive chaos, suggesting that the stability of our system could be an anomaly. I’m waiting on the full several years of Kepler data, which should actually be able to detect Earth-radius planets at a full AU or so from a star; until then there seem to be too many variables.
I mostly agree. I actually find the ‘great silence’ not particularly puzzling—the only things we have reliably excluded are things like star-system-scale engineering, and massive radio beacons that either put out large percentages of a planet’s solar input in the form of omnidirectional radio or ping millions of nearby stars with directional beams on a regular basis. When you consider the vast space of options where such grand things don’t happen, for reasons other than annihilation, you get a different picture. We couldn’t detect our own omnidirectional radiation more than a fraction of a light-year away, and new technologies are actually decreasing it of late. And how many directional messages have we sent out explicitly aimed at other star systems? A dozen? And they would need directional antennas to be picked up. What are the odds that two points in space that don’t know of each other’s existence would first have one point their message in the right direction, and then have the second one look in the correct direction at the right time? Even if you assume there are many thousands of sources in our galaxy (which, filter or no, I think could be a wild overestimate given the history of terrestrial life), that puts the nearest one hundreds of light years away in a volume containing millions of stars. Even if they put an order of magnitude or three more effort into sending messages, that still isn’t much given the sheer volume. A full galaxy just wouldn’t look different from an empty one to beings like us who have been looking less than a century, if the proposed grand destiny of intelligent life proves to be a ‘sugar rush’ even in the absence of reversion to the mean. (Such a reversion would pretty well certainly still include radio in our toolkit, or the toolkit of whoever else was smart enough to figure out electrodynamics, so such a civilization could still be detectable—although what a reversion could lack is the concentrated wealth to build and maintain lots of fifty-meter dishes and use them for, effectively, stargazing.)
I would say what it is most likely to resemble is, simply, history. Civilizations rise and fall over centuries (not overnight!) and ours is probably no exception, even if the endpoint might not be as low as past troughs. There are eras of prosperity and eras of destitution, different in different parts of the world as power structures and ecologies shift. Technologies appear; some of them stick around essentially forever after they are invented, and others, like steam heat and clockwork and factories exporting across an entire continent in ancient Rome, get lost when the context that produced them changes. Most eras produce something new that they can pass on usefully to the future, though what they produce that can be perpetuated in a different context would often be unexpected to those in that era.
The limiting oxygen concentration for most woods is between 14% and 18%. Earth’s oxygen concentration is a little over 20%, so it does look close. But this is slightly misleading: all that oxygen showed up because carbon-based life was releasing it from water and carbon dioxide in photosynthesis. Oxygen-using life only showed up after there were dangerously high levels of oxygen. And if the oxygen levels get very high then the photosynthesizers will start to get poisoned and the percentage will go down. So an atmosphere with so much oxygen that it is a problem for carbon life isn’t really likely.
But yes, an equilibrium with less oxygen is certainly plausible, in which case fire would be close to impossible even though the percentage dropped by only a small amount.
I think it’s pretty clear that for broad definitions of life, you need carbon or something heavier. It’s possible you could substitute boron, but I don’t think you can get boron by any process that won’t produce carbon as well. You almost certainly need both reducing and oxidizing agents, which means oxygen and hydrogen as the lightest options. There have been proposals of exotic life chemistries, but all the serious ones I’ve seen substitute heavier atoms like silicon.
The more interesting question is whether you can build more complex life without all the trace elements used on Earth. For example, there are plenty of bacteria and fungi that have much lower dependence on heavier metals than multicellular life does, and some simpler multicellular organisms need less than humans do. My unfounded hunch is that you need something that can play the role of phosphorus as an energy carrier, and that it would be hard to find that in just CHON structures. On the other hand, it’s possible that even a really poor substitute would offer enough for life to arise, even if it was inefficient, slow, and fragile compared to life on Earth: there would be no stronger threat from competing phosphorus-using life.
The next question is whether metal-poor planets can produce a technological civilization. How important is metalworking in our history? Can you substitute something else for it? Can you get a spacefaring or radio-capable civilization without metals for magnets, wires, and electronics? There are alternatives like organic conductors and semiconductors, but are those accessible without the intervening metals stage? Just how metal-poor are these planets, anyway? Would it be like iron, copper, aluminum, and tin being only as available as, say, nickel is on Earth? Or is silver or gold a more appropriate comparison? Or even rarer than that? Or are they present, but not concentrated into usable deposits?
I feel like I don’t know enough about the detailed makeup of these planets to give even a qualitative answer to your question. I’m not sure anyone knows enough about what is required for life to give a good answer. More data about the planets in question will clearly be helpful.
Another Great Filter-related question I posted a while ago but didn’t get much response to:
Could the great filter just be a case of anthropic bias?
Assume any interplanetary species will colonise everything within reasonable distance on a time-scale significantly shorter than it takes a new intelligent species to emerge.
If a species had colonised our planet, their presence would have prevented our evolution as an intelligent species.
Therefore we shouldn’t expect to see any evidence of other species.
So the universe could be teeming with intelligent life, and there’s no good reason there can’t be any near us; but if there were, we would not have existed. Hence we don’t see any.
This is an interesting idea but I think it doesn’t work. Say for example that another species starts 200 million light years away and is spreading a colonization wave at 0.5c, which is a pretty extreme value. The wave then takes 400 million years to arrive, and since its light outruns it, there should be on the order of 200 million years during which its signs are visible before it reaches us. And it is going to be pretty hard to do a fast colonization wave without some astronomically detectable signs. Reducing the colonization speed makes it less likely to be detected but increases the time span.
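A minimal sketch of that geometry, assuming a wave launched from a fixed distance and light travelling at c:

```python
# Detection-window arithmetic for the scenario above: the wave's light
# outruns the wave itself, so there is a window between first light
# arriving and the wave arriving.
distance_mly = 200       # million light years (from the comment above)
wave_speed_c = 0.5       # colonization speed as a fraction of c

wave_arrival_my = distance_mly / wave_speed_c   # 400 million years
light_arrival_my = distance_mly / 1.0           # 200 million years

print(wave_arrival_my - light_arrival_my)       # 200 million years of warning
```

Halving the wave speed to 0.25c doubles the arrival time but triples the warning window (200/0.25 − 200 = 600 million years), which is the trade-off mentioned at the end.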
It seems no less plausible that what spreads outward at a sizable fraction of lightspeed is a wave of “terraforming” agents, altering all planets in the neighborhood into more suitable colony planets. Meanwhile colonization spreads at a rate roughly bounded by the ratio of reproduction rate to death rate, which might well be significantly slower than that.
That scenario would be enough to ensure that if an intelligent species evolves, it is necessarily far from any spreading interstellar empire (since otherwise the terraforming agents would have destroyed it), without having to posit such a fast colonization wave.
That said, though, why assume that a colonization wave is astronomically detectable? Being detectable at this range with our instruments is surely an indication of wasting a rather enormous amount of energy that could instead be put to use by a sufficiently advanced technology, no?
Waste heat is one thing there’s not much one can do about. Even a Dyson sphere will have it. In the case of Dyson spheres there have been active attempts to find them. See here, although some other work suggests that Dyson spheres are just not that likely (pdf). Most large-scale engineering projects will leave a recognizable signature. In this example, systematic searches have only been done out to a few hundred light years, but stellar engineering, in its more blunt forms, would be noticeable even at an intergalactic level.
Moreover, many ship designs lead to detectable results. For example, large fusion torch drives have a known sort of signature that we’ve looked for and haven’t found.
Several times more planets could increase the probability of a distant civilization several times, at the most. That is not a lot, if the initial probability is already tiny.
A rocky planet with no metals would have a much weaker magnetic field. A civilization without iron and other metals would also be harder to get started. Without heavy radioactive isotopes, volcanism and tectonics would also be different or nonexistent.
There may be some other factors too, not all of them counting against aliens.
Are there any metals necessary for life?
Astronomers use “metal” to mean elements other than hydrogen and helium. Metals in the chemist’s sense of the word aren’t in general necessary. A lot of life is pure CHONPS. However, most complex life involves some amount of metals in the chemical sense (most animals require both iron and selenium, for example). And planets which are of low metallicity in the astronomical sense will necessarily be of extremely low metal content in the chemical sense, since the actual metals, other than just lithium and beryllium, require extensive synthesis chains before one gets to them.
Thanks for the clarification!
Wow, astronomers are lazy. It’s not hard to make up new terms for things when the existing ones clearly don’t fit. Heck, if making up a word was too difficult they could have used an arbitrary acronym.
Well, when most of what they have to work with is hydrogen, a whiff of helium, and a tiny smattering of literally everything else ever, it’s kinda hard to blame ’em. ;p
Not really. If you look at a periodic table, the vast majority actually are metals.
The vast majority are metals, and saying they all are is wrong (except in as much as authority within the clique is able to redefine such things). It’s also distasteful and lazy to formalise the misuse. I’d be embarrassed if I were an astronomer.
Well, Wiktionary claims “metal” used to mean “to mine” a few thousand years ago, so I can’t blame them that much. The astronomers at least didn’t mess up the pronunciation again :-)
Yes, an old star should raise the odds of old, hence metal-poor, planets, but by how much? Old stars have had plenty of time to acquire planets in other ways, such as stealing them or creating them while passing through metal-rich nebulas. Can we directly measure the composition of this planet?
In the particular case I linked to, there are two planets around the same star. It is extremely unlikely to pick up multiple planets from floating rogues. As to metal-rich nebulas, my understanding is that they aren’t that dense, so passing through one wouldn’t do much. And if that had occurred, we’d likely see the star having higher metal content as well. In this case the iron content of the star is slightly under a tenth that of the sun, and many other metals have more extreme ratios.
Do you have any support for that statement? (I’m not arguing, just curious how one goes about estimating the frequency of planetary capture given what I thought to be very little data.)
The probability of planetary capture is low. For multiple capture events the probabilities are roughly independent (this isn’t quite true, there are some complicating factors, but it is very close to true), so the probability of capturing two is roughly the square of capturing a single one, which is estimated at around 3-6% under generous conditions (rogue planet numbers at least equal to the number of stars). So no more than around 1 in every 280 planets should have a double capture event, and the likely number is much lower than that. With around 700 known planetary systems, the chance that a given one is in this category is low, but that number isn’t as important, since what needs to be asked is whether it is more likely that the planets formed around the star or that multiple captures occurred. Note also that if one assumes 3% rather than 6%, then one gets around 1 in every 1100 planets, which means we shouldn’t have seen any examples at all. And if one thinks that the rogue planet ratio is lower than 1:1, all of this drops quite quickly.
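Under the independence approximation this is a one-line calculation; a quick sanity check on the figures above:

```python
# Double-capture odds, treating the two capture events as independent.
p_single = 0.06                # generous single-capture probability
print(1 / p_single ** 2)       # ~278, i.e. about 1 in 280 systems

p_single = 0.03                # more conservative estimate
print(1 / p_single ** 2)       # ~1111, i.e. about 1 in 1100 systems
```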
A related issue is that if the first capture is in a somewhat stable elliptical orbit, the introduction of another body of somewhat similar size can destabilize the first, flinging it out of the system; so once the second capture event occurs, there’s a chance one will lose the first planet.
I don’t have a reference off-hand, but this is pretty standard logic, to the point where multiple capture events are generally not considered as an explanation for strange systems; it is more common to decide that the strangeness indicates our models are wrong.
ETA: Some of this logic is severely off. I just remembered that more recent estimates drastically increase the number of rogue planets floating around (see e.g. here), so the 6% may in fact be an underestimate, in which case multiple captures become a much more plausible explanation. I don’t know then why it isn’t considered more frequently, other than the last issue of one capture possibly kicking another planet out of the system, but that shouldn’t change things by more than a small factor.
Hi Joshua, thanks for answering. Quick follow-up question: how come only “rogue” planets are mentioned in these arguments? (Well, it makes sense for studies about rogue planets, but it seems to happen even in discussions explicitly about captured planets, your comment being an example.) Can’t planets be “exchanged directly” between closely-passing stars? (I mean, without the exchanged planet spending a long time unbound to a solar system, in a sort of larger-scale analogue of close binaries exchanging envelope matter.)
I imagine close encounters are rare in general. But given the large number of binary and multiple-star systems we seem to see everywhere, and my (admittedly vague) recollections of some rather tight star clusters with complicated, chaotic dynamics, it seems like it should be feasible (even common) for stars to exchange planets early, while they’re still part of a young cluster or complex multi-star system and interacting closely, and then separate, taking “stolen” planets with them. (My understanding was that a significant fraction of stars in young clusters acquire high velocities and are “evaporated” away from their birth cluster, especially in tight clusters.) Are the interaction time-frames incompatible with that kind of scenario, or something?
Yes, they can happen. But my understanding is that exchange isn’t a likely result of system interaction; losing a planet entirely (that is, a planet getting ejected from orbit) is a much more likely outcome than exchange. But this is pushing the limits of my knowledge base in this area.
I find it hard to even take the idea of the Great Filter seriously, given that we don’t have a good definition of what life, let alone intelligent life, is. Generalizing from one example is not very productive.
One doesn’t need much in the way of definitions here to see the problem. The essential problem is that we don’t see anything out there that shows any sign of intelligence. No major stellar engineering, etc. The fact that intelligent life and life may have fuzzy borders doesn’t enter into that argument much. If you want to really be careful, you can talk about a version of the Filter that applies to life similar to our own. For our purposes, that’s about as worrisome.
You can, but it’s pointless, unless you think that “life similar to our own” has a significant chance of arising independently of ours. The argument “we are here, so it stands to reason that someone like us would evolve elsewhere” is the generalization from one example (and also a failure of imagination) that I am so dubious about. I see no reason to believe that even given a more or less exact replica of the Solar system (or a billion of such replicas scattered around the Galaxy or the Universe) there will arise even a single instance of what we would recognize as intelligence. This may change if we ever find some non-Earth-originated lifeform. (Go, Curiosity!) Until then, the notion of the Great Filter is just some idle chat.
As you should be.
So, in a nutshell: within the framework of standard discussions about Fermi issues and the Filter, one would just say that one heavy filtration step is that intelligent life as we know it seems unlikely to arise.
This discussion thread is insane.
Essentially, Eliezer gets negative karma for some of his comments (−13, −4, −12, −7) explaining why he thinks the new karma rules are a good thing. To compare, even obvious trolls usually don’t get −13 comment karma.
What exactly is the problem? I don’t think that for a regular commenter, having to pay 5 karma points for replying to a negatively voted comment is such a problem. Because you will do it only once in a while, right? Most of your comments will still be reactions to articles or to non-negatively voted comments, right? So what exactly is this problem, and why this overreaction? Certainly, there are situations where replying to a negatively voted comment is the right thing to do. But are they the exception, or the rule? Because the new algorithm does not prevent you from doing this; it only provides a trivial disincentive to do so.
What is happening here?
A few months ago LW needed an article defending the claim that some people here really have read the Sequences, and that recommending the Sequences to someone is not an offense. What? How could this happen on a website which originally more or less was the Sequences? That seemed absurd to me, and so does this; both suggest that LW is becoming less what it was, and more a general discussion forum.
I suggest everyone think for a moment about the fact that Eliezer somehow created this site, wrote a lot of content people consider useful, and made some decisions about the voting system, which together resulted in a website we like. So perhaps this is some Bayesian evidence that he knows what he is doing. And even if this turns out to be a mistake, it would be easy to revert. Also, everyone here is completely free to create a competing x-rationalist website, if your worst nightmares about LW come true. (And then I want to see how you solve the problem of trolling there, when it suddenly becomes your responsibility.)
Recently we also had a few articles about how to make LW more popular; how to attract more readers and participants. Well, if that happens, we will need stricter moderation than we have now; otherwise we will drown in the noise. For instance, within this week we have a full screen of “Discussion” articles, some of them containing 86, 103, 191 comments. How many of those comments contain really useful information? What is your estimate: how much of that information will you remember after one week? Do you think that visiting LW once a week is enough to deal with that amount of information? Or do you just ignore most of it? How big a part of a week can you spend online reading LW, and still pretend you are being rational instead of procrastinating?
Perhaps LW needs more users, but it probably needs less text per week (certainly not more), in both articles and comments. Less chatting, more thinking, better expressing ourselves. More moderation is needed. And most of you are not going to pay for human moderators, so I think you should just accept the existing rules and their changes. Or you can always make a competing website, you know; but you won’t do it, and you also know why.
There’s also plenty of Bayesian evidence he’s not that great at moderation. SL4 was enough of an eventual failure to prompt the creation of OB; OB prompted the creation of LW; he failed to predict that opening up posting would lead to floods of posts like it did for LW; he signally failed to understand that his reaction to Roko’s basilisk was pretty much the worst possible reaction he could engage in, such that even now it’s still coming up in print publications about LWers; and this recent karma stuff isn’t looking much better.
I am reminded strongly of Jimbo Wales. He too helped create a successful community but seemed to do so accidentally as he later supported initiatives that directly undermined what made that community function.
Seems to me there are two important factors to distinguish:
how good is Eliezer at “herding cats” (as opposed to someone else herding cats)
how difficult is herding cats (as opposed to herding other species)
To me it seems that the problem is the inherent difficulty of herding cats; and Eliezer is the most successful example I have ever seen. I have seen initially good web communities ruined after a year or two… and then I read an article describing exactly how that happened. From the outside view, LW has survived for a surprisingly long time as a decent website.
The problem with Roko seems to me a bit similar to what is happening now—some people intentionally do things that annoy other people; the moderator tries to suppress that behavior; contrarians enjoy fighting him by making it more visible, and rationalize their behavior as defending freedom of speech or whatever. The Roko situation was much more insane; at least one person threatened to increase existential risk if Eliezer did not stop moderating the discussion. Today the craziest reaction I found was upvoting an obvious troll so that others can comment on their nonsensical sequence of words without karma costs! Yay, that’s exactly the behavior you would expect to find in a super-rational community, right? Unfortunately, it is exactly the kind of behavior you will find when you make a website for wannabe smart people.
Wikipedia is different: it is neither a blog nor a discussion forum. And it exists at the cost of hundreds of people who have no life, so they can spend a lot of time in endless edit wars. This is yet another danger for LW. Not only can new users overrule the old users, but the old users who have no life can overrule the old users whose instrumental goals lie outside of LW. Users who want to reduce their procrastination on LW will not participate in endless discussions. If there is more content per day, they will simply read less, therefore they will vote less on an average comment, and they will have less say in “community” decisions. There is a risk that the procrastinators will simply optimize the website to fit their preferences—the preferences of people who don’t mind spending a lot of time online, for whom reading comments by trolls and the subsequent discussions is therefore not a problem. From their point of view, strict moderation will seem too harsh and fun-reducing.
As a reminder, if someone is convinced that they (as a person, or as a group) have better skills at maintaining a rationalist website, there is always the possibility of starting a new rationalist website. It could even be interlinked with LW, much as OB is now. Make an experiment; bet your own money and/or time!
I don’t think any of that addresses the main point: what has Eliezer done that is evidence of good moderating skills? Who has Eliezer banned or not banned? etc.
The question isn’t: “can Eliezer spend years cranking out high quality content on the excellent Reddit codebase with a small pre-existing community and see it grow?” It is: “can Eliezer effectively moderate this growing community?” And I gave several examples of how he had not done so effectively before LW, and has not done so effectively since LW.
(And I think you badly underestimate the similarities of Wikipedia during its good phase and LW. Both tackle tough problems and aspire to accumulate high quality content, with very nerdish users, and hence, solve or fail at very similar problems.)
This just isn’t remotely accurate as a representation of history.
The remainder of the parent comment seems to present similarly false (or hyperbolically misrepresented) premises and reason from them to dubious conclusions.
Eliezer is not so vulnerable that he needs to be supported by bullshit.
My thoughts on the recent excitement about “trolls”, and moderation, and the new karma penalty for engaging with significantly downvoted comments:
First, the words troll and trolling are being used very indiscriminately, to refer to a wide variety of behaviors and intentions. If LW really needed to have a long-term discussion about how to deal with the “troll problem”, it would be advisable to develop a much more precise vocabulary, and also a more objective, verifiable assessment of how much “trolling” and “troll-feeding” was happening, e.g. a list of examples.
However, it seems that people are already moving on. For future reference, here are all the articles in Discussion which arose directly from the appearance of the new penalty and the ensuing debate: “Karma for last 30 days?”, “Dealing with trolling”, “Dealing with meta-discussion”, “Karma vote checklist?”, “Preventing endless September”, “Protection against cultural collapse”, and hopefully that’s the end of it.
So it seems we won’t need specialized troll-ologists to work out all the issues. Rather than a “war on trolls” becoming a permanent element of LW political life, I’m hoping that in the long run this is just an episode in the history of LW governance. The site has transformed several times, it will undoubtedly transform again, and this is just a blip, one bump in the road.
I am somewhat interested in the larger issue of how the site might best produce intellectual progress. Viliam links to Grognor’s article, “I Stand by the Sequences” (note that Grognor has since quit LW to join the aphoristic faction on Twitter, like “muflax” and “Kate Evans”, who specialize in producing philosophical one-liners). A similar article from the same period is “Our Phyg Is Not Exclusive Enough”. These articles received some criticism as promoting dogmatism, groupthink, exclusivity, etc.; Alexander Kruel, aka XiXiDu, another LW defector, blogged about them as evidence of this.
However, I very much agree with the impulse behind those articles, even though I dissent from common LW opinion in some major ways. LW is not a site for anyone to talk about anything; it’s not even a site for anyone who considers themselves “rational” to talk about anything. The Sequences do define a philosophy and they need to remain the reference point. Perhaps elements of them will one day, by consensus, be regarded as definitively obsolete, replaced by something better, but they’re still the starting point from which any future progress begins, even if it’s progress by opposition.
A year ago, I wondered what LW would amount to, if anything. LW is protean, it has many dimensions, but I especially meant its place in the history of ideas. I’m now prepared to say that it can amount to something, that it can be a tributary feeding into the common intellectual culture of humanity, but that will require a certain amount of discipline and due diligence on the part of people who do want it to matter that way. There are many things that are working already, for example the division between Discussion and Main (and perhaps the wiki represents an even higher-level distillation). It’s good to have the rambunctious lower level where we are now, as well as the more rarefied and rigorous higher levels. It permits new possibilities to emerge.
Maybe my main message is to serious critics of LW (some of the “trolls” are just critics who aren’t doing it constructively). They can actually contribute to the overall process by being more organized in their criticisms. This is one way that intellectual progress occurs: you have a position, you have an opposite position, and both positions are refined as a result of dialogue. LW, the site and the community, does have the mechanisms and the capacity to take on alternative views and give them a fair hearing, even if they are eventually rejected. Work with that, and we can all benefit.
One more thing I want to point out. It’s often observed that hardly any sequences have been written since Eliezer’s. In fact, LWer palladias has written about a dozen series of posts on her blog in Sequence format. She recently achieved notoriety for converting from atheism to religion, so her sequences aren’t LW canon, but they represent an interesting example of cross-pollination between very different schools of thought.
Just spotted this thread. The Sequences were indeed the direct inspiration for the format of the linked series of posts I run. Though mine are on a pretty broad range of topics—most recently contrasting Sondheim’s Company with Passion and using both to talk about what the ends of marriage are.
Upvoted for this. I can’t believe how many people don’t get it.
He got my downvotes for making terrible arguments defending a change that won’t do what it’s supposed to do, while also doing other shitty things. He was also an overconfident dick about the whole situation. The problem isn’t the rule, it’s the wrong beliefs about how the forums work and how they might be fixed.
That thread is Bayesian evidence against the new, poorly thought out rule. The objections that have been raised to it have not even come close to being met. The fact that your own post is a hair’s breadth away from inflicting negative karma on me should be enough to give you pause.
The reaction to the new rule should not be surprising. If it was surprising, then you should update your model.
Good point about the silliness of people downvoting Eliezer to show their disagreement.
Using the phrase ‘trivial disincentive’ looks like a deliberate reference to this article which would be an unconvincing way to argue that the change won’t cause any problems.
And in general, I don’t think that the change will have really serious side-effects, but I’m in favor of changing complex systems in as small increments as possible. The only sensible, currently relevant reason given for implementing the new feature (flooding of the recent comments sidebar) could be addressed much less invasively by not having comments from crappy threads show up in the recent comments sidebar. For additional soft-paternalist goodness, you could also have replies to comments made in such threads not appear in users’ inboxes.
Being able to keep up with all the conversation going on LessWrong seems incompatible with the goal of expanding the community. Reading comments and participating in conversation is a leisure activity. If I were very concerned with being “rational” about my LessWrong usage patterns I would stop reading them at all and stick to just articles (possibly only main section articles if I were really concerned).
I am very confused right now.
A few years ago, I learned that multivitamins are ineffective, according to research. By that point I had heard of the benefits of many individual vitamins; they were each praised the way one would praise anything good enough to take by itself, so I had assumed multivitamins must be something ultra-effective that only irrational people won’t take. When I learned they were ineffective, I hypothesized that vitamins in pills simply don’t get processed well.
Recently, I was reading a few articles about vitamin D—I thought I should definitely take it, because the sources were rather scientific and were praising it a lot. I got it in the form of softgels, because gwern suggested it. When they arrived, I saw they’re very similar to pills, so I thought they might be ineffective and decided to take another look at Wikipedia/Multivitamins. Then I got very confused.
Apparently, the multivitamins DO get processed! And yes, they ARE found to have no significant effect (even in double-blind placebo trials). But at the same time, we have pages saying that 50-60% of people are deficient in vitamin D and that it seriously reduces the risk of cancer, among other things (including heart disease). Can anyone explain what’s going on?
I don’t really follow. A multivitamin != vitamin D, so it’s no surprise that they might do different things. If a multivitamin had no vitamin D in it, or if it had vitamin D in different doses, or if it had substances which interacted with vitamin D (such as calcium), or if it had substances which had negative effects which outweigh the positive (such as vitamin A?), we could well expect differing results.
In this case, all of those are true to varying extents. Some multivitamins I’ve had contained no vitamin D. The last multivitamin I was taking both contains vitamins used in the negative trials and also some calcium; the listed vitamin D dosage was ~400IU, while I take >10x as much now (5000IU).
Is that unsatisfactory?
That would only make sense if vitamin D is the only one that has any real significant effects, or if the other ones that do are included in dosages too small to matter (which doesn’t seem improbable at all).
I remember seeing studies which doubt that vitamin C helps healing from the common cold. No wonder if most others are as insignificant.
Also, I just checked some vitamin pills (for hair, skin and nails) I bought 1-2 years ago. The label says “take 3 times a day” and lists 100 IU of vitamin D, apparently 50% of the RDA—most other vitamins/minerals in it are up to 200-250%, while my vitamin D pills are 1250% RDA. Mystery solved, I guess.
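Working out the implied doses from those label numbers (my arithmetic, taking the label’s RDA at face value):

```python
# Doses implied by the label values quoted above.
multivitamin_d_iu = 100                       # per pill
label_rda_iu = multivitamin_d_iu / 0.50       # 100 IU listed as 50% -> 200 IU

daily_from_multivitamin = 3 * multivitamin_d_iu   # 300 IU/day at 3 pills/day
dedicated_pill_iu = 12.5 * label_rda_iu           # 1250% RDA -> 2500 IU

print(label_rda_iu, daily_from_multivitamin, dedicated_pill_iu)
# 200.0 300 2500.0 -- roughly an order of magnitude between the two regimens
```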
Supplements often have quality issues. You’d be surprised what they get away with. Sometimes the coating doesn’t digest, so the nutrients aren’t absorbed. Sometimes they use the wrong form of the substance because it is cheaper. Sometimes they’re even contaminated with lead. I only buy vitamins that have been tested by an independent lab. So far, the best brands I’ve found are Solgar and Jarrow.
(Links are created by writing [ text ] then ( url ); you seem to have used parentheses for both.)
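(For example, with a placeholder URL: typing [Solgar](http://example.com) in the comment box produces a link reading “Solgar” that points at http://example.com.)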
There was much skepticism about my lottery story in the last open thread. Readers should be aware that I sent photographic proof to Mitch Porter by e-mail.
As promised, I made substantial donations to the following two causes:
Brain Preservation Fund
Kim Suozzi Fund
Please confirm my name on the list of donors to the Brain Preservation General Fund.
I’m shortly going to be flying out to the EU to work on life extension causes; see my blog for information: 27 European Union nations in 27 weeks
I see ‘M J GEDDES’ listed. Well done!
Out of curiosity, how much did you donate? (If it was >$500, I forgive you all the crap on OB and SL4; actions are more important than words.)
Well, I’ve opted to focus the bulk of my philanthropy on the ‘Methuselah Foundation’. I joined the 300 and I’ve now pledged $US 25 000. My statement is here:
http://www.mfoundation.org/?pn=donors
Powerful new forces are in play as the board game for Singularity takes a dramatic turn!
Received ’85.00′?
The ’300′ pledge is for $25000 over a span of 10 years.
That’s still $2500 for the first installment, not $85.
They break it down further until it’s like $3 per day, so I don’t know what their installment plan is.
Well, so it was a good decision to play the lottery after all!
(I’m joking)
But anyway, congratulations on the success, and thanks for the contributions! I personally am going to donate huge amounts of money to similar causes if I get rich. It seems to be the most rational way (according to my goals) to spend it.
Challenge: Steel man Time Cube.
I read the following by Kate Evans on Twitter:
And I became curious. What could LW come up with?
According to Wikipedia:
By rejecting many small spheres in favor of one large cube, Gene Ray has dedicated his life to demonstrating that reversed stupidity is not intelligence.
It seems Yvain has accepted the challenge and made a steel man attempt.
In an awesome way, too, and exactly how I’d do it if I could write better and had more patience. Also, looks like Yvain is turning into the second Moldbuggian Progressive after yours truly!
Consider the great circle passing through your current location and the Earth’s poles, together with the great circle perpendicular to it at the poles. These form 4 lines of longitude, each one of which is experiencing a different day simultaneously (for instance, when it is midnight on your line of longitude, it is 6am, noon, and 6pm on the others). Of course, you might wonder why I would single out these 4 lines of longitude instead of just the one at your current location, giving the traditional 1 day per 24 hours, or all of them, giving infinity days per 24 hours. Of course, it would be ridiculous to say that there is a different day going on in one location and another some infinitesimal distance away from it, so the latter is a non-starter. And the standard 1-day answer ignores the fact that different longitudes do not experience the same day. Counting 4 days occurring at the same time makes sense because then the days are separated by 90-degree rotations of the Earth, and correspond to quadrants of a circle. 90 degrees is the most fundamental angle in geometry, and should be considered the primary unit of rotation, as explained in this video (relevant discussion starting at 4:28).
There is more than one time zone. When you search for information about time, the Bible is an unreliable source. Also, teachers should not use the Bible in classrooms.
Is Time Cube pro gay and transgender rights? 4 orientations (man who likes ladies, man who likes men, lady who likes ladies, lady who likes men) and 4 semi-concrete genders (cis men, cis ladies, FTM and MTF).
Politically, it’s a sort of reactionary multiculturalism; all four “sides” should be kept separate and distinct in all aspects, racial segregation, etc.
It’s actually a stealth argument in favour of increased mental health provision that has fallen prey to Poe’s law.
Precision First by L. Kimberly Epting on Inside Higher Ed was an interesting read for me.
Training people to guess the teacher’s password has consequences.
Stanislas Dehaene’s and Laurent Cohen’s (2007) Cultural Recycling of Cortical Maps has an interesting argument about how the ability to read might have developed by taking over visual circuits specialized for biologically more relevant tasks, and how this may constrain different writing systems:
This is relevant for discussions about superintelligent AI in that it helps reinforce the case that there are cognitive constraints in our brains that are hard (if not impossible) to overcome, and that a mind which could custom-tailor new cognitive modules for specific skills, unburdened by the need to recycle previously-evolved neural circuitry, could become qualitatively better at them than humans are.
List of public drafts on LessWrong
The practice of providing open drafts of possible future articles in the open threads and relevant comment sections has proven quite useful and well received in the past. I’ve decided to now make and maintain a list of them. If anyone else has made similar posts, please share them with me, and I’ll add them to the list.
Konkvistador
Related to: Old material
Against the worst argument in the world
Against moral progress, reworked as an article on More Right
Fragments on moral progress
On Democracy
On conspiracy theories, this draft became the article Conspiracy Theories as Agency Fictions
Is meritocracy inhumane?
An online course in rationality?
The problem with rational wiki, became the discussion post The Problem With Rational Wiki
This should probably be a page on the LW wiki.
I’ve decided I should educate myself about LW-specific decision theories. I’ve downloaded Eliezer’s paper on timeless decision theory and I’m reading through it. I’m wondering if there are similar consolidated presentations of updateless and ambient decision theory. Has anyone attempted to write these theories up for academic publication? Or is the best place to learn about them still the blog posts linked on the wiki?
I’m currently researching TDT, UDT, and ADT. So far as I am aware, there have been no comprehensive presentations of UDT and ADT. Eliezer’s paper itself is a step in the right direction, but is unfinished and has some major flaws.
SI has contracted the philosopher Rachael Briggs to write a paper on TDT for a peer-reviewed, academic journal. Last time I spoke to Luke about it, he said that the pre-print will be done sometime this winter. I don’t know whether the pre-print will be available to the general public, or just to internal researchers.
Edit: According to Nisan, the information in the second paragraph is out-of-date.
Rachael Briggs is no longer working on that project. It’s been taken over by SI Research Fellow Alex Altair.
Ah, I was unaware. Thank you for the update. Was there any explanation given as to why she is no longer working on the project? Do we have a revised timeline for the paper’s completion?
I think Briggs wanted to stop; I don’t know why. And I don’t know when the project will be completed.
Is there a place on this website (or elsewhere) where the major flaws in Eliezer’s paper are pointed out and discussed?
There likely is, but I don’t have any links off-hand. IMO, the major flaw is that some passages are dense and unclear. It’s difficult to understand some explanations and examples as a result. Don’t be discouraged if you have to re-re-re-read a part of a paper in order to decipher the meaning. I certainly had to.
Beyond that, people disagree about TDT itself and have tried to make revisions, as well as revisions to those revisions. (Hence UDT and ADT.) Those flaws are discussed in the blog posts on decision theory, as well as in comment sections. Even still, that information is dispersed and unorganized. So far as I can tell, most of it just exists within the minds of individuals and hasn’t been formally written up yet.
Greater gender equality means that women are less apt to look for status in mates. Hey, it’s just one study, but when does that stop anybody else?
I’m pretty sure greater gender equality in a society translates into women who are less likely to say they look for status in mates. To a certain extent it seems plausible that it influences behaviour, though I’m very sceptical of the implied argument that high status in men ceases to be a key sexy trait if you just have the right culture.
Did they put “is well liked by other women” or “someone who my friends consider cool” on that list?
Sexuality is a strange thing. If you consciously think something is sexy, it then becomes sexy for you. At least that’s how it works for me, I’m generalizing from one example here.
In our society the consensus seems to be that it doesn’t quite work like that, at least when it comes to things like, say, homosexuality.
I didn’t say that sexuality is entirely shaped by this, only that it’s influenced. Say, when I read that hourglass-shaped female bodies are supposed to be attractive, I started noticing that I think I’m attracted to that, although one could argue that I was attracted before I read it, and only started noticing it afterwards. However, it has worked for me for other things, many of which are not liked by many people.
I don’t know, but that last would just reflect the consensus, no matter what it was.
It might be worthwhile to ask men from the various countries what women seemed to be looking for.
I’m not sure this would produce good results. That we have the phrase “he got lucky” indicates men may be clueless about what women want. A better result could be gained by data mining online behaviour in response to flirting on, say, Facebook.
Computational sociology ftw.
“Might be useful” is a weak claim. I was thinking that if men say “women want men with money” in the gender disparity countries and they say “women want good-looking men” in the gender equal countries, it would be confirmatory evidence. Likewise, it might be of interest if men of different ages in the same country have different views of what women want.
There are certainly plenty of men who are convinced they know what women want on the average, if not in particular cases. I wonder how much they’re subject to availability bias.
People may be amused by this Bitcoin extortion attempt; needless to say, I declined. (This comment represents part of my public commitment to not pay.)
School isn’t about learning, SMBC edition.
Short story about the Turing Test, entertaining read.
Consider two versions of that story, with one having the line “At that point, finally, he let me out of the tank.” appended.
So at the beginning of this story there was no AI, there was only nondestructive upload technology, and the researcher sneakily uploaded the ‘testee’ at the beginning of the test.
Ten minute video about human evolution and digestion which argues plausibly that we’re very well-evolved to eat starch—specifically tubers and seeds, though we also have remarkable flexibility in what we eat.
I thought coyotes have at least as wide a range of foods as we do, though.
transhumanist cartoon
Marginal Revolution University
Yet another online university, this one launched on Marginal Revolution. 2012 has been a remarkable ride for online education, and in many respects it marks the start of a test of which theory of what formal education is actually for is correct. Will software and the internet disrupt education like they did the record business?
Amusing commentary by gwern:
Is there anything solid known about eye position (front vs. side of skull) and other aspects of an organism’s life? It seems to me that front of the skull correlates with being a hunter, but (as is usual with biology) there may well be exceptions.
For example, lemurs aren’t especially hunters, but they have eyes in front.
I was thinking that cats are both hunters and prey, and they have eyes in front.
Also, what about the evolution of eye position? How much of a lag is there if living conditions change?
Probably worth noting that fish, even predatory ones, don’t necessarily have binocular vision, and vice versa for herbivores. Sperm whales are the largest living predators and lack it; fruit bats, who don’t hunt, do have it.
There ARE incentives to develop it, or retain it, based on those lifestyle differences, but it makes for a somewhat fuzzy heuristic.
The other thing is that this is pretty much restricted to fish and their mutant descendants, the tetrapods. Get outside the chordates and you find different solutions to these problems. Arthropods have several distinct kinds of eye architecture, and sometimes their strategies generalize well: house flies (which are prey and scavengers) and dragonflies (which hunt) both have similarly-structured eyes; if anything, I think the dragonfly has wider coverage. Spiders often rely on widely-placed eyes of differing strengths and ranges; mantis shrimp have only the two eyes, on stalks, and are renowned predators.
So it might look like a generalizable rule because it applies to so many of the most obvious, easy-to-examine large animals you can find, but remember they’re our close anatomical cousins, and they’re solving the problem with very similar design constraints.
(Also, primates—many primates who spend a lot of time in trees, but don’t hunt, have binocular vision. In their case it’s there because of its benefits for rangefinding and spatial awareness in an arboreal environment.)
I have read stuff positing that hunters have front-facing eyes (I think the reason given was more accurate depth perception), and that prey animals have eyes towards the sides of their heads to give a wider field of vision.
I’ll see if I can find any of that stuff again.
I didn’t find exactly what I was thinking of (I think it was probably a book), but a section of the Binocular vision wikipedia article has some information (uncited, unfortunately). Specifically:
I was wondering whether the rules might be different for sea creatures because of hydrodynamics. Practically all fish have their eyes on the sides of their heads. It’s possible that understanding hammerhead sharks and flounders would be too hard.
Puffer fish have eyes at or near the front of their heads, but they aren’t built for chasing things down. I just found out that you can get a puffer fish to chase a laser. I don’t know what that proves. Maybe they chase relatively small, slow prey.
Puffers are sometimes pelagic (ocean-going) for parts of their life cycle, but typically they hang out in reefs, brackish areas, or other near-shore zones and hunt smallish prey, “sprinting” it down and delivering a quick snap, or just teasing it out from hiding places among coral or plants. They use the same “sprint” to evade attack.
Puffers also have the ability to swivel their eyes independently, like a chameleon.
I think that the rules are different for sea creatures simply because accurate sight is usually less useful as a position sense in water. In most places you can’t see far away no matter how good your eyes are, so just noticing shadows is mostly enough. Sound (including vibrations and currents) tends to be more useful there, hence echolocation and the lateral line, as is smell (see sharks). Basically, you can’t hunt much with sight, but it’s still useful for avoiding being hunted.
There are some exceptions, like octopi (big eyes) and some fish with curiously complex sight (poly-chromatic, polarization-sensitive eyes) that I don’t have a very good explanation for. But I’d guess they’re a bit like bats among land animals: some accident of evolution probably threw them on a tangent and they found a “local maximum” of fitness.
I’ve just started playing with Foldit, a game that lets science harness your brain for protein folding problems. It has already been used to decode an HIV protein and find a better enzyme for catalyzing industrial processes. Currently, work is under way to design treatments for sepsis.
A 3 minute talk on the Financial Consequences of Too Many Men. It seems the perceived sex ratio strongly influences male behaviours.
Research on this in the context of online forums such as ours might be very interesting.
A related blog entry by Peter Frost titled Our brideprice culture deals with the societal implications of gender imbalance. It begins by highlighting a gender imbalance that many mention when talking about China but don’t notice is clearly present in the West as well; he then proceeds to discuss the likely consequences for society. The analysis is cogent and somewhat depressing.
A side point:
I disagree, I think this is an indicator of sexual inequality between men.
As a female, I wonder what it means that I don’t react to behaviors like competing for status, class signaling and spending beyond one’s means by being attracted—instead, I have the same feeling I get when people are being immature and stupid. Lol. I have thought about this a lot. I am just not attracted to the ordinary symbols of male power—though I seem to have a few triggers. Height doesn’t matter, muscles don’t do a thing and money has no effect. The demonstrations of power I do enjoy are when they’re able to hold up their end of a debate with me (I keep wishing for someone to win against me), or when they’re doing something really, really intellectually difficult. Those things, I do respond to. Fluff? No.
I have to wonder if other women who are as intellectual as I am are the same.
Generally, when someone says that the majority of As do X, but you are an A and don’t do X, here are some possible explanations:
the statistics are simply wrong;
the statistics are correct about the majority, but you as an individual are an exception, and possibly so are some of your friends (this similarity could have contributed to you becoming friends);
the statistics are correct about the majority, but within it a minority is an exception, and you belong to this minority, and possibly so do some of your friends;
you are wrong, you are actually doing X, but you rationalize that it’s something else.
Also from the outside, if someone else is saying this, don’t forget:
publication bias—people who don’t fit the statistics are more likely to write about it than those who fit are to write “me too” (in communities that value independence).
Specifically for this topic, think also about the difference between maximizers and satisficers. If you read that “females value X”, you may automatically translate it as “females are X-maximizers”, and then observe that you are not. But even then you could still be an X-satisficer; you could have a threshold of “status + class + spending”, where people below this threshold just don’t catch your attention, and from the pool above this threshold you select using different criteria. Thus it may seem that “status + class + spending” are not part of your criteria, but they simply make up your first filter, and then you consciously focus on your personal second filter.
(Simple example: You are consciously selecting for funny guys, not rich ones. However, you would never give a homeless guy an opportunity to show you how funny he is. Therefore you are effectively selecting for funny and non-homeless guys; you just don’t think about the second part too much. For a less obvious example, replace “homeless” with “not having (signals of) university education” or “not living in an expensive city”.)
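A toy sketch of this two-filter structure (entirely illustrative; the names, attributes and numbers are made up):

```python
# Satisficer-then-maximizer toy model: a silent threshold filter runs
# first, and the conscious criterion only ranks the survivors.
candidates = [
    {"name": "A", "status": 2, "funny": 9},
    {"name": "B", "status": 6, "funny": 7},
    {"name": "C", "status": 8, "funny": 4},
]

STATUS_THRESHOLD = 5  # the first filter, which rarely reaches awareness

passed = [c for c in candidates if c["status"] >= STATUS_THRESHOLD]
best = max(passed, key=lambda c: c["funny"])

print(best["name"])  # "B": the funniest candidate above the threshold,
                     # even though "A" is funnier overall
```

Introspection only ever sees the second stage (“I pick the funny one”), which is how the threshold stays invisible.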
What’s the difference between the second and the third bullet?
Thanks for seeing that there are multiple options for interpretation. I hate it when people interpret my behavior into a false dichotomy of options, which happens to me frequently, so I am finding this refreshing.
I have a functionality threshold, but I see that as different from a class threshold. For instance, I had a boyfriend who had recently graduated from school. He was unemployed at that point, of course. It took him a very long time to get a job due to the recession. That didn’t deter me from liking him. Why not? I had no reason to think he was dysfunctional; I figured he would get a job eventually.
On the other hand, if I meet someone who reeks of alcohol and obviously hasn’t showered in a week, I’m going to be assuming they’re dysfunctional—that even if their situation could be temporary, they’re probably exacerbating it.
That’s not about class. That’s about wanting only functional, healthy relationships in my life. It’s not a healthy relationship if you have to pay for a person’s food and shelter because they’re not able to get those things for themselves.
If I meet someone who seems functional (has showered, does not reek of alcohol, etc.) and they strike up intelligent conversation (funny is nice but intelligent conversation is more my thing) but happen to be homeless, I will judge them based on how functional they are. I would not invest much until they get back on their feet, because I know better than to think that seeming functional and actually being functional are the same thing, but I wouldn’t refuse to talk to them if they seemed interesting and functional.
Why invest in a guy who just graduated but not the homeless guy? Well, let’s ask this: what did the recent graduate do wrong? Nothing. Nothing is out of the ordinary if a recent grad is looking for work. That’s normal. That’s not a red flag. The homeless person, though, may have done something to cause their situation. That is an abnormal situation, a red flag. I won’t be sure they are capable of supporting themselves until I see it. On the other hand, the recent grad has just spent several years doing hard work—they’ve demonstrated that they’re functional enough to be capable of supporting themselves.
That’s what’s important for me—whether people are able to support themselves, maintain stability, and be functional in general.
That is class signalling (of a particular class) and winning debates is competing for status.
You have your own sexual preferences and the traits that you are not attracted to appear less intrinsically worthy. Another woman may say she isn’t attracted to “Fluff” like intellectual displays and rhetorical flair and instead is only attracted to the ‘things that really matter’ like social alliances, security and physical health.
This seems tautologically likely.
Lol thank you Wedrifid, that was refreshing, and you were pretty good.
I disagree with you, but you’re welcome to continue the disagreement with me. (:
Just because other people use those as signals that a person is in a particular place in a hierarchy does not mean that:
A. I believe in social hierarchies or that social hierarchies even exist. (I see them as an illusion).
B. The specific reason I am attracted to these qualities is due to an attraction to people in a certain position in the social hierarchy.
The reasons I want someone who is able to defeat me in a debate are:
It gets extremely tedious to disagree with people who can’t. I end up teaching them things endlessly in order to get us to a point of agreement, while learning too little.
I might get careless if nobody knocks me down for a long time. It’s not good for me.
It is rather uncomfortable and awkward in a relationship or even a friendship if one person is always right and the other always loses debates. That feels wrong.
“Fluff, no.” vs “You have your own preferences and other people see your preference as fluff.”
If I said I had a million dollars, but really, I was a million dollars in debt, would that be an empty claim? Yes. If a person is spending beyond their means in order to signal that they have money, they’re being dishonest. So that’s fluff.
If social hierarchies don’t actually exist, and a person signals that they’re in one, is that real, or is it a fantasy? If they don’t exist, it’s fluff.
“This seems tautologically likely.”
Okay, this was an embarrassing failure to use clear wording on my part. Although you’re not actually disagreeing with me, you got me good, lol.
That was fun. Feel free to disagree with me from now on.
Can you clarify what you mean by this?
These are decent reasons to intentionally seek out someone who can out-debate you; however, as far as actual attraction goes, they make just as much sense, if not more, as post-hoc rationalizations than as real reasons. As Yvain has explained, all introspection of the type you are engaging in is prone to this error mode, and while your reasons 1 & 3 aren’t completely inconsistent with our knowledge of human attraction, they don’t fit as well as the hypothesis that you are attracted to behaviors that signal high IQ and/or status while side-stepping your issues with the most common ways of displaying those traits (this is largely based on what I’ve been told in various psychology classes; I don’t have the original studies my professors based their conclusions on on hand).
Edit: if anyone knows how to make blockquote play nice with the original formatting, let me know; I think this works for now.
On introspection biases: For minor things, I wouldn’t be surprised if I make errors in judging why I do them, because it can take a bit of rigor to do this well. But if something is important, I can use meta-cognition and ask myself a series of questions (carefully worded—this is a skill I have practiced), seeing how I feel after each, to determine why I am doing something. I carefully word them to prevent myself from taking them as suggestions. Instead, I make sure I interpret them as yes or no questions. For instance: “Does class make me feel attracted?” instead of “Should I feel attracted to class?”—it’s an important distinction to make, especially for certain topics like fears. “Am I afraid of spiders because I assume they’re poisonous?” will get a totally different reaction (assuming I am not afraid of them) than “Would I be afraid of spiders if I thought they are all poisonous?”
It takes a little concentration to get it right during introspection.
So let’s start with class, for example. I ask myself “Do I find class attractive?” and I can ask myself things like “Imagine a guy with lots of money asks me out. How do I feel?” and “Imagine a guy who has things in common with me asks me out. How do I feel?” If you ask enough questions to compare and contrast, you can get pretty good answers this way.
To make sure I’m not just having random reactions based on how I want to feel, I come up with real examples from my recent past. In the last year or so, I have been asked out by or dated a lot of different people with varying amounts of income. There were a lot of guys who were making 6 figures—this is because I tend to attract well-paid IT guys. I liked some of them but didn’t like all of them. Some of the guys making 6 figures didn’t attract me whatsoever. So income doesn’t make me like a guy all by itself.
I can ask “Does having a high income make me like them more?”
The two top attractions of all time, for me, were to an underpaid writer and a college student.
I can ask “Does availability of men with lots of money have anything to do with it?”
After dating something like five or ten guys who make around 6 figures over the last year and some, the one I liked best actually makes a moderate income. There is another guy who does make a large income that I liked quite a bit. But if the availability of guys making 6 figures were going to interfere, it wouldn’t make sense that I’d have liked the guy with a moderate income so much.
So, there are ways to determine what your real motivations are—but it takes skill, and requires more rigor than the quick answers these people are giving in the studies, for sure.
Believing oneself to be an exceptional case was a common failure mode among the subjects of the studies summarized in Yvain’s article. When confronted with experimental results showing how their behavior was influenced in ways unknown to them, they would either deny it outright or admit that it was a very interesting phenomenon that surely affected other people, but that they happened to be the lone exception to the rule.
That doesn’t really rule out your introspective skills (I actually believe such skills can be developed to an extent), but it should make you suspicious.
Have you done any reading on cognitive restructuring (psychotherapy)? It’s interesting that people on this forum believe this is impossible when a method exists as a type of psychotherapy. Have you guys refuted cognitive restructuring or are you just unaware of it?
I’m aware of cognitive restructuring. Note that I haven’t said that introspection is completely useless or even that the specific type of introspection you describe is totally impossible, just that you are very confident about it and there’s a common pattern of extreme overconfidence.
This type of hypothetical questioning is notoriously unreliable; people often come up with answers that don’t reflect their actual reactions. If you read closely, Yvain’s article already gives several examples. It’s also one of the methodologies that my psychology teachers highlighted as sounding good but being largely unreliable.
This is better, but between the general unreliability of memory and the number of other factors that would need to be controlled for, it’s still not that great. Particularly since you do feel attracted to men who are more dominant as debaters.
It occurs to me that since this debate is about me and my subjective experiences, there’s really no way for either of us to win. Even if we got a whole bunch of people with different incomes and did an experiment on me to see which ones I was more attracted to, the result of the experiment would be subjective and there would be no way for anyone to know I wasn’t pretending.
I still think that there are ways to know what’s going on inside you with relatively good certainty. Part of the reason I believe this is because I’m able to change myself, meaning that I am able to decide to feel a different way and accomplish that. I don’t mean to say I can decide to experience pleasure instead of pain if I bang my toe, but that I am able to dig around in the belief system behind my feelings, figure out what ideas are in there, improve the ideas, and translate that change over to the emotional part of me so that I react to the new ideas emotionally. If I was wrong about my motivations, this would not work, so the fact that I can do this supports the idea that I’m able to figure out what I’m thinking with a pretty high degree of accuracy. I would like to write an article about how I do this at some point because it’s been a really useful skill for me, and I want to share. But right now I’ve got a lot on my plate. I think it’s best for us to discontinue this debate about whether or not my subjective experiences match my perceptions or your expectations, and if you want to tear apart my writings on how I change myself later, you can.
Your links are bookmarked, so if your purpose was to make sure I was aware of them, I’ve got them. Thanks.
Thanks for those links by the way, they are interesting.
A. If you ask the right questions and juxtapose things so that you’re getting a more well-rounded view, it is not the same thing as just asking yourself one question. You can use strategy with it, which is what I was trying to show in my example, but I guess you missed it.
B. I followed it up with “to make sure I’m not having random reactions”. You seem to be arguing against a piece of a technique as if it were the whole thing. That’s not getting anywhere.
No, that is your perception of what I said. I did not say “I want someone who can defeat everyone else in debate.” I said “I want someone who can defeat ME in debate.”
Do you see now how you took what I said and applied a pattern to it? I am getting tired of trying to show you this.
A. I didn’t miss it; the problem is that the questions don’t give you accurate information to begin with. B. No, I’m pointing out that part of the technique adds little to nothing and that the remainder, while not as flawed, isn’t enough for the level of confidence you seem to exhibit. I have a lot more I could say on this but won’t.
These are also serious misunderstandings of my points, but that brings me around to my final conclusion. I may be misunderstanding you (I’m almost certain you’ve been misunderstanding me), which makes me feel even more confident when I say that I see no benefit in engaging you further, at least on this topic. Since you raised points A & B before this notification, I decided to post the short version of my reply to them anyway, but I was already doubting the wisdom of bothering with this post’s grandparent. Your subsequent posts, here and in other threads, have made up my mind. edit: your parallel post has reduced my disinterest in talking to you generally, but still leaves me thinking that this particular conversation is a dead end.
Hmm. Perhaps I will understand the nature of these misunderstandings at some point in the future.
This is common for me, unfortunately. I’m not sure what to do about it, but I’ve been thinking about this a lot.
Okay. Well thanks for not deeming me useless to talk to.
I have bookmarked the list of biases you gave me. On first glance it looks like I’m familiar with these, but I will review them further at some point to see if I am unaware of or have forgotten any. Here is a link for you, too: cognitive restructuring—it’s a psychotherapy technique very much like what we’ve been discussing. I hope I have opened your mind a little bit to the possibility that a person (perhaps you) might be able to gain access to their inner thoughts and feelings and re-write themselves. I believe there is also a method that helps one get closer to enlightenment which is taught by Buddhists, but I can’t remember what that’s called. I do not feel our discussion was a complete waste of time, but, as I mentioned, I agree that continuing to disagree would not be useful.
My social hierarchy view:
Imagine a picture of a bunch of people. As you’re looking at it, a ring jumps out at you. Your brain is recognizing a pattern, in a sea of heads. So, you take a crayon and you draw a circle over the picture, connecting all the little heads like little dots—in a circle. You say “It’s a social circle.” In fact, the people in the picture do not know each other at all. The circle is irrelevant.
That’s how I see social hierarchy. I’ll explain more specifically:
Nearby, there’s a gigantic technology company (well, Seattle has several of them) with tens of thousands of employees each, a lot of them making 6 figures. These guys are near the top of the social hierarchy, right?
Well, not too far away, I bet there are a bunch of poor people who pick food for a living. They’re barely getting paid. Who has the power?
The IT workers can buy whatever they want. But they need the poor workers to survive.
The poor people can’t buy whatever they want, but they don’t need the IT workers to survive.
If all of the IT workers decided to quit, what would happen to the poor workers? They’d still pick food, and they’d be fine.
If all the poor workers stopped picking food, what would happen? The food would spoil, and the IT people wouldn’t eat.
Another example:
You’re in France, it’s 1789, you’re rich and privileged, you’re part of the bourgeoisie. Well, the rest of the population decides they’re not having it. Goodbye!
Who had the power?
The rich and privileged thought THEY had the power, but the people had it all along.
So, first of all, this view that the rich people are somehow at the top of a structure is inaccurate. The structure is really more of a system, there is no top or bottom.
Another problem, two examples: Random person wins the lottery. They are upper class now, no? Not too long after, the money is gone. (This is common, from what I have read.) What class are they? A greedy woman finds herself a rich man and marries him. She has his credit cards, she can spend what she wants to. Is she upper class now, or is she just a prostitute? If class and status are not inherent to the person, it’s improper to attribute these qualities to people as if they were.
Choosing based on class is a hasty generalization that probably makes you somewhat more likely to choose somebody who is going to survive and be able to help pay for offspring, and somewhat more likely to choose somebody functional over somebody dysfunctional, but it isn’t rational. The qualities that determine whether somebody is going to survive, be functional, and help with offspring are a lot more complicated than that.
It would really make more sense to assess the overall situation when choosing a mate, not use some oversimplified model of a system that’s far more complicated than the word “hierarchy” implies. That’s why I’m seeing this as irrelevant pattern recognition—we’re seeing triangles in noise, thinking we’re seeing something useful.
Um, I wouldn’t call any of this arguing for the nonexistence of social hierarchies. More like arguing that hierarchies are unstable, context-dependent and, well, social.
Ah, but once the IT workers invent food-picking robots, poor people are screwed.
They can always eat whatever the striking food-pickers eat (and will win the competition to get it, if there’s a scarcity problem). As a side-question, what does a food-picking job actually entail? (I’m not a native speaker)
The IT-workers vs food-pickers example doesn’t really hold up very well and even if it did it wouldn’t be an argument for the nonexistence of social hierarchies except under very narrow and artificial definitions of the word ‘hierarchy’.
Different people are treated differently. Some are deferred to more than others. Some are regarded with suspicion more than others. Those differences tend to be stable over time within a given group. That’s your social hierarchy. That those relations are different in different groups, that they can change rapidly in unusual circumstances and that ultimately they are determined by the contents of human minds doesn’t mean they aren’t a real phenomenon.
If I showed you a scribble and told you it was a circle, would you feel it was a good argument if I said “This type of circle is unstable and context-dependent, and, well, it is this type of circle”?
I think your argument would be “You need to choose something other than a circle to model this amount of complexity. The circle’s not working”.
That’s what I am saying.
And if the survival of poor people is threatened enough, they’ll kill all the rich people and take all their stuff. They do outnumber them.
The following is meant in a neutral sense like “those who do it for a living are likely to be better at it” / practice makes perfect NOT in the sense that “nerds are weak” / hasty generalization. I am a nerd, and I know that some of us do work out, know martial arts or just don’t fit the weak, scrawny stereotype:
A bunch of computer nerds who sit at their desk all day are going to beat up people who exercise for a living?
It is assumptions like the ones you just made that keep highlighting for me how illusory social hierarchy is. People act as if upper class people are always going to win, as if they’re better in every way—this is ridiculous. I chalk this up to looking for a false sense of security—and finding it.
If the debate was about whether people ACT like there’s a social hierarchy, you’d have won. However, the point I made was that social hierarchy is an illusion, not that people don’t believe in the illusion.
My dad is a computer nerd who sits at his desk all day. Also, he has a black belt in jiujitsu.
Be less free with generalizations.
As martial artists have pointed out for a long time, holding a black belt is a fairly weak predictor of success in a true fight.
That depends on the type of martial art. As far as I know, jiujitsu mostly focuses on grappling and throwing and is practiced in pairs where people alternate between performing a technique and having it performed on them by their partner. This should be far more difficult to screw up than a striking art in which you can create the illusion of learning by having people strike at the air repeatedly.
How to tell a martial art (i.e., an art used for making war in the old days) from a “martial” art (i.e., a version of soccer one drives the kids to twice a week): it doesn’t award belts.
It would be a good point except that you may be implying the opposite generalization—that poor people don’t also know martial arts. And even if more nerds do know martial arts, if they’re sedentary while their opponents exercise for a living, does that not give them a disadvantage?
Certainly, if they are, in fact, sedentary. The assumption that after “sitting at a desk all day” they go home and sit on the couch all evening is part of the stereotyping that Alicorn may reject. The training required to get a black belt in jiujitsu is rather intensive, and also the kind of thing nerds seem more likely to engage in. “Nerds” vs “People who exercise a lot” is just rather useless as a dichotomy—especially once high school is over.
I’m a self-described nerd with a sedentary IT job who exercises, myself, and I know a lot of us get exercise. Here’s my point: do you know a single nerd who exercises 40 hours a week? I don’t. That’d be over half your free time. If you’re a low-paid worker picking food all day, you might be forced to exercise more than 40 hours a week in order to make ends meet. But there’s a huge difference between intentionally getting exercise a few times a week because you know you’re otherwise sedentary and exercising all day long, just the same way that there’s often a huge difference in skill level between people who do something for a hobby and people who do it for a living.
(You are currently pursuing the question of fighting capability of nerds. What valuable lessons does this help anyone to learn? Alicorn’s comment that triggered this thread contained a general point (“be less free with generalizations”), but such points don’t seem to be present in the consequent discussion.)
I’m interested. The subject is the impact of lifestyle choices on physical fitness and the associated combat potential. It’s probably more practically useful than the majority of conversations. The initial generalization was legitimately offensive, but discussing the topic at all is perfectly legitimate. You aren’t obliged to participate, but suggesting the conversation is in some way unacceptable for any reason beyond your personal preference is ill-founded and unwelcome.
I’m not suggesting that it’s “unacceptable” (I’m not sure what that means; it seems to indicate far more emphasis than I’m applying). I personally somewhat dislike discussions like this being present on LW (this one is not special in any way), and normally act on that with my single vote; on this occasion also with an argument that elucidates the distinction relevant to my dislike.
The distinction is between object level discussions for their own sake and discussions used as testing ground for epistemic tools. These often flow into each other for no better reason than free association.
I’m sorry you feel offended, Wedrifid. I am still not sure why I should see my statement (that people who are sedentary at work are less likely to win a fight than people who exercise for a living) as inherently offensive, since I meant it in the spirit of “those who do something professionally tend to be better at it than those who do it as a hobby”, not “nerds are weak compared to everybody else (even compared with other people who don’t exercise for a living)”. Maybe part of the offense is that you knew that the type of exercise that food pickers get isn’t as optimal as what a nerd who exercises as a hobby would get. I hope you can see that my intent was more “those who do it for a living are likely to be better at it” / practice makes perfect, not “nerds are weak” / hasty generalization. I updated my post, hoping it is fixed.
Note the difference between feeling personally offended and acknowledging that I would not consider it unreasonable for another to claim offense in a circumstance. I was trying to convey the latter. In a context where Vladimir was attempting to deprecate the conversation, I was expressing disapproval of and opposition to his move, but chose to concede that one comment in particular was something I did not wish to defend. I don’t know, for instance, whether or not Alicorn personally felt offended, but social norms do grant that she would have the right to claim offense given the personal affiliations she mentions.
It is the applicability of this in particular that I disagree with. It is true that people who do something professionally tend to be better than those who do it as a hobby, but having a job that happens to involve some physical activity is not remotely like being a professional exerciser and is far closer to the ‘hobbyist’ end of the spectrum. In fact, I argued that someone who exercises as a hobby (I specified an approximate level of dedication, using your thrice-weekly baseline) will be more physically capable than someone who gets some exercise as a side effect of their occupation.
For what it is worth, my expectation is that the main difference in physical combat ability between various social classes (excluding anyone qualifying for a disability) will be greater variability in the higher classes than in the lower ones. From what I understand, those who actually exercise professionally (athletes, body builders, etc.), high-level amateur ‘exercisers’, and those with a serious exercise hobby are more likely to be in classes higher than those represented by the ‘fruit picker’ and manual laborer. Yet, as you point out, professionals are also able to be completely sedentary and still highly successful.
(It also occurs to me that class distinctions, trends and roles may be entirely different where you live than where I live. For instance, “Jock” is a concept I understand from watching teen movies but not something representative of what I ever saw at school. The relationship between physical activity, status and role just isn’t the same.)
You’re right, this discussion is not getting anywhere. I think we’re just practicing our debate skills or enjoying disagreement. There are plenty of better topics to debate on.
I remember once we had a big Open Thread argument about Pirates Vs Ninjas. IIRC it involved dozens of posts and when somebody pointed out that it had gone on too long, and how silly it had become, somebody else argued that it was, in fact, a useful rationality exercise.
Perhaps this [edit: cutting the conversation short] is a sign that the community has matured in some way.
Me. I’m training for a marathon that I’m running in a matter of weeks, and I’m not willing to give up my weight training in the meantime. The thing is, if I wanted to be in optimal combat condition, I would exercise less.
There is… but you seem to be suggesting that the difference is in favor of the light exercise all day long implied by fruit picking. That is a terrible form of exercise. On the other hand, consider exercising actively and deliberately three times a week because you want to be fit: off the top of my head, say, 45 minutes of weights followed by interval training. When I’m not doing endurance training, that’s approximately the program I use as a default, and it is the kind of training that gives significant fitness benefits. If you are going to actually train at all, then all that spending your day doing manual labor will achieve is to make you too tired to train properly and put you more at risk of overtraining if you do.
I did not think of that.
Hauling around baskets of apples and climbing trees might not be light exercise. But it might be a terrible form of exercise.
I think you’re right that doing exercise designed to train for combat would be better than arbitrary food picking exercises for 40 hours a week. After all, if food picking was the best kind of exercise, there should be some way to optimize even that. I honestly don’t know whether the type of exercise the average nerd actually gets would lead to better combat advantages than the type of exercise that food pickers get, but you did think of a way to corner me.
That is making me happy.
Normally, in this circumstance it would be my turn and I’d go see if there were any figures for these, but since they’re more likely to support your point than mine, and you might enjoy nailing me with them, I will leave the opportunity open.
On type of exercise I am confident but I am not at all sure about prevalence. If we, say, ranked all computer programmers and all fruit pickers in order of combat prowess and took the median of each I would tentatively bet on the fruit picker if given even odds when we place them in an unarmed fight to the death.
Aww. You didn’t nail me.
I did some research to see whether this might be right; here it is:
Harvard Men’s Health Watch, May 2004 issue
It looks like I won here, but I thought of some reasons why I may still have lost:
Females can be as big as males, and I’m sure that some have muscle-building bonuses comparable to the average male, but from what I’ve read and observed, males are more likely to have these benefits than females. Females can have the aggressive tendencies associated with testosterone, but do not have them as frequently as males do. Females can be nerds, but most nerds are male. Food pickers may have a higher percentage of females than nerds do. Therefore the food pickers might be at a disadvantage in unarmed combat. (Though adding guns would change that completely.)
Nerds may exercise more than the average person in order to compensate for the stereotype that nerds are weak. I didn’t see any research specific to how much exercise nerds do or what type they use, but it is possible that this group is more fit than average.
Having a nerdy personality may make them more likely to research the best way of exercising, and measure their progress, making exercise more effective for them.
Do you see more factors that we haven’t taken into account?
My apologies, I must have forgotten all about my ultimate goal of nailing you and got all caught up in just saying things because they happen to represent an accurate model of the world as I see it. Where are my priorities?
Naturally one of the most basic skills of debate is to only attack the soldiers of the enemy while smoothly steering the conversation away from any potential weaknesses in one’s own position. Even the simple process of filtering evidence and only mentioning that which favours one’s own bottom line can go a long way toward both winning a debate and making the discussion utterly useless as anything other than a status transaction or political platform.
As someone who enjoys being ‘beaten’ in discussions, I’m curious whether you draw a distinction between ‘losing’ to a barrage of clever debate tactics exploiting known human vulnerabilities and ‘losing’ in the sense that your opponent knew something that you did not and was able to communicate that new information to you effectively and clearly, in a form that prompted you to learn. It is the latter form of ‘losing’ that I prize heavily, while the former tends to just invoke my ire and contempt.
I know better than to think I know what your motives are. I did hope that you wanted to kick my ass, though. I was merely expressing my disappointment, not an expectation.
If I know information that could mean that my position is wrong, it will not be my position, or I won’t start a debate. I would instead begin with “I don’t know whether A or B is true. For A, I have this info. For B, this info.” and so would never attempt to convince anybody of A or B in that case, but just hope we worked it out.
The times when I actually decide to debate with somebody, there’s a reason for it—there is potential harm in them not realizing something. The way you just did with me when you thought I was making a generalization about nerds.
I suppose the reason I don’t lose often enough is because I “choose my battles” very effectively. Perhaps when people point out my flaws I am quick enough to accept them that it doesn’t turn into a debate.
I’m not interested in empty wins. I am interested in convincing people of important things when they don’t get them. Most people are tiring to debate with, for me, although that’s because people who haven’t spent a comparable amount of time on self-improvement tend to give me such a logical fallacy ridden pile of spaghetti code that it’s simply not fun to untangle. I don’t believe that I am using unethical tactics to win. There are times when I know my opponent does not want the full volume of information I have—like in my IT job, my non-IT boss does not want every detail, so I intentionally give him a simplification—but all the relevant stuff that I know he will care about is included. He likes simple explanations better and complains if I give him the technical details. That’s all that I can think of right now, but your question will have me watching out for a while.
But shouldn’t I be confused enough, somewhere, that it’s necessary to untangle me through debate? Or shouldn’t I know someone who makes my mind look like a mess of logical fallacy ridden spaghetti code? That seems to be the experience that I miss—that sense that there’s somebody out there who can see all of this better than I do. There is an imbalance in this that I do not like.
Wanting to lose is about this inequality. I want to see minds that look well-orchestrated. I am tired of what it does to me to anticipate spaghetti code.
You should stop thinking about discussions in these terms.
Imagine you have 100 instances where you do a bunch of research, with the intention of having an unbiased view of the situation. Then you tell somebody about the result and they don’t agree. But they don’t support their points well. So you share the information you found and point out that their points were unsupported. They fail to produce any new information or points that actually add to the conversation. You may not have been trying to win, but if they’re unable to support their points or supply new information and yet believe themselves to be right, when you destroy that illusion, the feeling of “oh I guess I was right” is a natural result.
Imagine that during the same period of time, this happens to you zero times. Nobody finds a logical fallacy or poorly supported point. This is not because you are perfect—you aren’t. It is probably due to hanging out with the wrong people—people who are not dedicated to reasoning well. Knowing I am not perfect is not reducing the cockiness that is starting to result from this, for me. It is making me nervous instead—this knowledge that I am not perfect has become a vague intellectual acknowledgement, not a genuine sense of awareness. The sense that I have flawed ideas and could be wrong at any time no longer feels real.
Now that I am in a much bigger pond, I am hoping to experience a really good ass kicking. I want to wake up from this dream of feeling like I’m right all the time.
The reason I want to lose is because I agree with you that I shouldn’t see these debates as thing for me to win. I am tired of the experience of being right. I am tired of the nervousness that is knowing I am imperfect, that there are flaws I’m unaware of, but not having the sense that somebody will point them out.
I just want to experience being wrong sometimes.
Your comments are consistent with wanting to be proved wrong. No one experiences “being wrong”—from the inside, it feels exactly like “being right”. We do experience “realizing we were wrong”, which is hopefully followed by updating so that we once again believe ourselves to be right. Have you never changed your mind about something? Realized on your own that you were mistaken? Because you don’t need to “lose” or to have other people “beat you” to experience that.
And if you go around challenging other people about miscellaneous points in the hopes that they will prove you wrong, this will annoy the other people and is unlikely to give you the experience you hoped for.
I also think that your definition of “being wrong” might be skewed. If you try to make comments which you think will be well-received, then every comment that has been heavily downvoted is an instance in which you were wrong about the community reaction. You apparently thought most people were concerned about an Eternal September; you’ve already realized that this belief was wrong. I’m not sure why being wrong about these does not have the same impact on you as being wrong about the relative fighting skills of programmers and fruit-pickers, but it probably should have a bigger impact, since it’s a more important question.
That’s insightful. And I realize now that my statement wasn’t clearly worded. What I should have said was more like:
“I need to experience other people being right sometimes.”
and I can explain why, in a re-framed way, because of your example:
I don’t experience being double checked if I am the one who figures it out. I know I am flawed, and I know I can’t see all of my own flaws. If people aren’t finding holes in my ideas (they find plenty of spelling errors and social mistakes, but rarely find a problem in my ideas) I’m not being double checked at all. This makes me nervous because if I don’t see flaws with my ideas, and nobody else does either, then my most important flaws are invisible.
I feel cocky toward disagreements with people. Like “Oh, it doesn’t matter how badly they disagree with me in the beginning. After we talk, they won’t anymore.” I keep having experiences that confirm this for me. I posted a risk on a different site that provoked normalcy bias and caused a whole bunch of people to jump all over me with every (bad) reason under the sun that I was wrong. I blew down all the invalid refutations of my point and ignored the ad hominem attacks. A few days later, one of the people who had refuted me did some research, changed her mind and told her friends, then a bunch of the people jumping all over me were converted to my perspective. Everyone stopped arguing.
This is useful in the cases where I have important information.
It is unhealthy from a human perspective, though. When you think that you can convince other people of things, it feels a little creepy. It’s like I have too much power over them. Even if I am right, and the way that I wield this gift is 100% ethical (and I may not be, and nobody’s double-checking me), there’s still something that feels wrong. I want checks and balances. I want other people with this power to do the same to me.
I want them to double check me. To remind me that I am not “the most powerful”. I am a perfectionist with ethics. If there is a flaw, I want to know.
And I don’t go around challenging people about miscellaneous points hoping for a debate. I’m a little insulted by that insinuation. I disagree frequently, but that’s because I feel it’s important to present the alternate perspective.
I am frequently misunderstood, that is true. I try to guess how people will react to my ideas, but I know my guesses are only a hypothesis. I try my best to present them well, but I am still learning.
Even if I am not received well at first, it doesn’t mean people won’t agree with me in the end.
It’s more important to have good ideas than to be received well, especially considering that people normally accept good ideas in the end. Though, I would like both.
Robots can fix that too!
I would consider it a horrible argument and I consider this to be a pretty bad analogy.
This looks like a definitional dispute. Like you’re not denying that there’s something out there but you’re denying that it can be called a ‘hierarchy’.
And here it seems to be a disagreement over what it means to ‘exist’. I agree that social status isn’t in any way intrinsic to people and that it’s important and healthy to keep that in mind but calling it an illusion seems too strong. If people act like there’s a social hierarchy then having a notion of social hierarchy in your model of the world will allow you to predict those people better. I interpreted you as saying that the concept of social hierarchy is a free-floating belief completely disconnected from reality.
And now for something far less serious:
Remember, in this scenario the rich people have ROBOTS.
I was thinking more about outbidding them at the food market. The scenario under consideration is that food-pickers went on strike, not total disorder and dissolution of society which the idea of violently competing for food implies.
Okay, define the “something” that supports the hierarchical view. I don’t believe in it, so I can’t define it for you, and if you want to convince me over to your side, you have to support the idea that a real hierarchy pattern exists that is not just a perception.
If you do not mean to argue that the popular perception of something equates to that thing actually existing, then we are, in fact, in agreement about what existing means. Don’t you think?
I agree with this, but that’s not the context in which I originally stated that I don’t believe in social hierarchy, and it doesn’t confront my original statement that seeing a hierarchy in our social patterns is an illusion.
Okay. Poor people can steal those, too.
Because a bunch of starving people are definitely going to wait in line patiently at the food market.
Okay. All the food pickers go on strike. We’ve only got so much time till the food rots. Now what? If they don’t get back to picking food soon, there won’t be any food. If there is no food, society will dissolve. That was my point.
Actually I’m pretty sure farmers across the world are better off with IT workers existing than not.
Okay. How did farmers survive before the industrial revolution? Your point that they’re better off does nothing to hurt my point that poor people don’t need IT workers to survive.
It might not undermine the strict, literal interpretation of your words (and that is questionable; you did say ‘They’d still pick food, and they’d be fine’ which is different from merely saying that they’d survive, somehow.) But it does undermine the more general point that poor people are less dependent on the rich people than the converse.
Seeing as the statement ‘societies can survive without IT’ is trivially true but not very interesting, it was reasonable for Konkvistador to guess some interesting generalization of what you actually meant, argue against it, and expect that it would bear on your opinion. If he failed, you could have ignored him or explained why it didn’t work, which would also have provided everyone with more information about your picture of the world.
Pointing out that Konkvistador didn’t address your literal point isn’t very helpful. It is an illustration of what happens when you treat discussion as a game in which points are scored by saying anything that contradicts the literal meaning of your opponent’s statements while avoiding classical logical fallacies. You’re going to say a lot of boring things because they’re technically true and you can’t have your opponents scoring points against you, right? There are no points. There are no winners. No one is playing that game with you. (They might be playing a different game—one of fighting for social approval. But around here, being enthusiastic about adversarial debate is a sure way to lose at that.)
I don’t think we had ~7 billion farmers before the industrial revolution.
There’s a limit to the power you can have that people can’t take away from you (I believe one of the Name of the Wind books had a nice quote about that). If you want more power than a self-sufficient farmer, you have to rely on other people for that power.
For that matter, a lot of the things the IT guys want, the poor workers don’t have. If something happens to those workers, the IT guys might drop down a few rungs on the hierarchy, say to just about where the poor workers are, or maybe still a bit higher. They’re still no worse off than they would’ve been if they tried not to depend on anyone.
Could you elaborate? Do you see all social constructs as being illusory?
Sure, I clarified that here
It’s an inflationary use of “illusory”. “Social constructs” describe certain regularities in the real world—maybe not very useful regularities, often presented in a confusing manner, but something real nonetheless. “Illusory” usually refers to a falsity, so its use in this case doesn’t seem appropriate. Furthermore, being a bad fit, this word shouldn’t be used in explaining/clarifying your actual point; otherwise you risk its connotations leaking in where they don’t follow from your argument.
How do you define winning? From my observation of your comments here, you refuse to concede even when your arguments no longer make sense. Maybe your opponents just get tired and pretend to yield, or look for a girl with less ego.
Being wrong and not making sense to somebody isn’t the same thing. If you want to really nail somebody at debate, you generally have to corner them really well by highlighting a flaw in a key point or points that destroys the supports for their belief. If you see the way that Wedrifid undermines my points, those are some examples of the types of attacks that might corner me into a defeat.
You’re right to be concerned that my ego might be too big—I am concerned that I may become careless, and think that I’m going to win and then fail because I was overconfident. So far, I haven’t had a big problem with that, but if this goes on long enough, I could start doing that.
Which is why I keep asking for it. I’ve added a request for honest critiques into a few of my discussions now, hoping that people will eventually feel comfortable with debating with me, if they’re not now.
As for specifically why somebody might not make sense and yet not be wrong… well, that could range anywhere from a common misunderstanding to being bad at explaining your ideas (I admit that when trying to explain a new idea I am frequently misunderstood—there’s a pattern to my problem which is really difficult to explain and even more difficult to compensate for, so I’m not going to get into that here). It is also possible that the audience was not ready for the message, didn’t know a concept that was required to understand it, or didn’t get enough sleep; really, there are so many reasons why stuff can fail to make sense yet not be wrong.
And then there’s the problem of getting the person to realize they’ve lost. Not all failures to realize you’ve lost are due to ego. We all want to protect ourselves against bad ideas, and nobody knows where the next bad idea is coming from. You often have to go over a lot of pieces of information with them until they get it, and sometimes it’s hard to get at their true rejection. Sometimes you think you’re right and the other person just isn’t listening to you, but really they happen to be right. There is so much confusion in the world. It takes a pretty good amount of skill to convince someone they’ve lost.
This approach to debating strikes me as exemplifying everything bad that I learned in high school policy debate. Specifically, it seems to me like debate distilled down to a status competition, with arguments as soldiers and the goal being for your side to win. For status competitions, signaling of intellectual ability, and demonstrating your blue or green allegiance, this works well. What it does not sound like, to me, is someone who is seeking the truth for herself. If you engaged in a debate with someone of lesser rhetorical skill, but who was also correct on an issue where you were incorrect (perhaps not even the main subject of the debate, but a small portion), would you notice? Would you give their argument proper attention, attempt to fix your opponent’s arguments, and learn from the result? Or would you simply be happy that you had out-debated them, supported all your soldiers, killed the enemy soldiers, and “won” the debate? Beware the prodigy of refutation.
Adversarial debates are not without their usefulness, such as in legal and political processes. It’s true that they are generally suboptimal as far as deliberative truth-seeking goes, but sometimes we really do care about refuting incorrect positions and arguments (“killing soldiers”) as clearly as possible.
I agree. I think it’s really important to be able to support a point when you really do have one. That some people were able to win debates—which takes a lot of skill—was required for humanity to progress. How else would we have left behind our superstitions? The problem isn’t trying to win the opponent over to the truth; the problem is trying to win the opponent over for other reasons. If a person was very good at debate, how would you make the distinction? Especially if everyone else is trying to win for the sake of ego? It’s not easy to tell the difference between a person who wins because they have more of the truth or are clever in the way they defend it, and a person who wins because they’re more tenacious than their competitor.
A person who does have the most complete understanding of the truth can be attacked to the point of tedium with logical fallacies until they get bored and wander away. A group of people who are all debating for the sake of ego will not only be likely to insist that the debaters who are best at defending truth are wrong, but they will project their own motives onto that person and insist that they, too, are debating for the sake of ego. Add to that the fact that nobody believes something that they think is wrong, which leads to everybody thinking that they’re right, and it can get to be a pretty big mess.
This gets very confusing.
I frequently make improvements when people point out flaws and admit to my mistakes.
Here is one
a much funnier one
I think that if someone defeated me in a debate, I would realize it—and that would be fine, because I’d develop an improved perspective afterward and that would be fun. This is why I’m itching for an ass-kicking.
I think you’re confusing “I am sick and tired of winning debates” for “I think all debates should be won.” I don’t know how this happened as the message I intended is very different from the one that you took.
I want to learn more during disagreements but I’m not because I keep winning.
I cannot wait until someone really kicks my ass.
If I meet someone who is not very good at rhetoric but has a great point, I am not sure what I would do. That’s a really good question, and an important thing to be aware of. I will think about that for a while. I’ll try harder to detect good points in bad wording.
Actually, you (you personally, not some general “you”) often don’t notice when you lose on this forum, because people give up on you and disengage, some explicitly, some silently. You might be misinterpreting this as winning, but, given that neither you nor they changed their minds, and since neither is better off, both parties lost.
You like metrics, so here is one. If you look back through your exchanges here, what is the ratio of the number of threads where people are convinced by your logic (or you are convinced by theirs) to the number of threads where people simply stopped replying or tapped out?
I regarded my exchange with Epiphany on intelligence & the gifted as an example of this.
I regarded our exchange the exact same way. Unfortunately, that doesn’t give us any insight into the subject.
To your credit, you had a good point and I realized that there was an additional factor that supported your point that you may not know about, so I tossed it in:
http://lesswrong.com/lw/kk/why_are_individual_iq_differences_ok/77vs
To my credit, you asserted that a person claiming an estimated IQ of 220 must be lying or from the future, but completely failed to acknowledge my point when I said we have used IQ tests in recent decades that did give scores like those due to miscalibration, so people who can honestly claim an IQ score that high are not, by default, lying. You reacted as if I was assuming a perfectly accurate method was used and this guy’s true IQ was 220. However, I had stated that I was arguing that your assertion that the person must be “lying or from the future” was incorrect.
http://lesswrong.com/lw/kk/why_are_individual_iq_differences_ok/77f5
This is why I got irritated with you and wanted to write you off.
What we need to be asking here is not “Who irritates the most people during debates?”—people can be irritated by the difficulty of being made to grapple with good reasoning skills just as easily as the annoyance of tolerating poor reasoning skills. The question should not even be “Who gives up on their debates most frequently?” because if your opponent is just shooting logical fallacy silly string, it’s justified to end it—so you don’t always lose a precious learning opportunity when you cut it short. What I think we should be asking is “When we get frustrated in a debate, how can we tell where the problem is?”
You know, I think I’ve rested enough from our debate now that if you wanted to take me up on my open invitation to administer ass kickings to my ideas, I’d be up for another bout with you.
Regardless of the merit of intellectual masochism it may be politically expedient for you to ease up on using this language to describe your interactions. If you already find it infuriating that shminux is able to quote you for the purpose of doing reputation damage then shame on you if he fools you twice. Be more careful with your words in order to not make yourself an easy target.
To put it another way, talking about how much you like ass kickings and inviting ‘bouts’ is not the optimal way for you to provoke the kind of quality intellectual challenge you desire.
Alright, I see that you probably have a good point Wedrifid. I would like your advice if you have some. Also, did you get the two emails I sent around 20 hours ago?
But how does anybody know who was ultimately right? In order to make a statement like this, you must first assume that you know who was right. If there was a debate about it, then it’s likely to have been the sort of topic where there’s some kind of ambiguity—either obvious ambiguity or some hidden pitfall that one person or the other is trying to point out—so what’s the chance you’re right about who won? If the debate was never finished, then there are points that haven’t been heard yet. Sometimes a good point very far into a debate can change the whole outcome.

It seems to me that often the entire reason for a disagreement is that one or both people were missing some information that changes their perspective. As you’re aware, reality is very complex—there are a lot of different specializations that people can learn and it can take years to learn enough to have a good understanding. Sometimes applying the information or concepts from one specialization to a discussion with someone outside that specialization radically changes the outcome of the discussion. There can be a LOT of information to compare during a debate—and although it would be nice to know which pieces are missing from the other person’s perspective immediately, and although we can all make guesses, often this is not apparent until the topic has been discussed in depth.

I bring specializations to this group that are different from the main specializations the group has. For example, I know a lot about psychology. Being a nerd, there are a lot of things I’ve researched and learned about that may be different—and here I am exchanging information with others who have all researched things that may be different from what’s common to the group. There is going to be a lot of information to exchange before it’s clear what perspective is best, and before an agreement is possible.
I agree that it’s best if people agree after the debate. Coming to a point where people have exchanged enough information where they can actually agree can be very difficult. If people are giving up before the process is complete, I’m not sure what I can do about it. I have started to see a pattern where there are certain sets of information that I have which seem to be root causes for a large ratio of disagreements, so I have begun writing posts about them. Unfortunately nobody can unload all their relevant information in any small amount of time, and the fact that we have different information will cause disagreements until then. I am calibrating with you guys by reading the sequences and beginning to write some articles to bridge these gaps. It will take me some time.
If I’m wrong about something, I hope I’ll figure it out eventually. If you, or anybody else wants to be persistent with me, or recommend a specific article that you think will fill an ignorant patch in my head, I’d be happy to try to get further.
You ignored both my points: the definition of winning in a discussion (at least on this forum) as updating and a specific way of measuring it. We both lost. Tapping out.
That’s all right here: I agree that it’s best if people agree after the debate.
An unusual fact: I think you are one of the few Lesswrongers to use ‘debate’ to refer to something other than formal debates. More specifically, I think that you are using the string ‘debate’ to refer to what most on this forum would call arguments or discussions or disagreements.
This is unfair to me, Shminux. I joined on August 12. That a lot of my debates are unfinished is probably due to the fact that debates can take time to reach a conclusion. It’s annoying that you and 13 voters seem to think that anything can be gleaned from taking an inventory of them at this time. That’s like giving a person a test that takes an hour and scoring them after five minutes.
It didn’t dawn on me that you might actually be serious about wanting me to go count up all my debates and see how many were unfinished until today—because that would be so ridiculously inconclusive.
Secondly, that you quoted me out of context is making me look like an ass. Having “I want to learn more during disagreements but I’m not because I keep winning.” floating there is probably going to be interpreted as “I keep winning on this forum” when, in reality, I’ve only been here for a few weeks—I hadn’t had any wins or losses at that point—and a key reason I joined is because I was hoping to get my ass kicked.
You want to know the real situation? I’m not getting enough intellectual stimulation in real life. I’m in too small a pond. That’s the context in which I am saying “I keep winning.” I was really looking forward to losing some debates HERE for that reason. It looks to be a bigger pond.
You’re making me look like an ass, and there’s no good reason for it.
You want to talk about why you (and perhaps some of your friends) and I are frustrating one another? Let’s talk about it. But let’s not mix up this frustration with a bunch of other things or go creating metrics that won’t accurately measure diddly and wouldn’t support your point (that a bunch of unfinished debates means I’m losing a bunch of debates) anyway.
Let’s focus on the frustration and see if we can figure out why it happens.
Outside view!
You’re a smart person on Less Wrong. So are your opponents. My prior for you being on the right side of the debate is < 50%, by symmetry. (I assign a nontrivial probability that both participants are wrong.)
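A minimal sketch of that symmetry argument, assuming (my labeling, not the commenter’s) that the two sides are a priori equally likely to be the one who is right and that there is some probability $w > 0$ that both are wrong:

$$P(\text{you are right}) = P(\text{opponent is right}) = \frac{1 - w}{2} < \frac{1}{2}.$$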
I can know that something is happening statistically without being able to point to a single definite instance of it.
Thanks, now we’ll see whether anything worthwhile results from this.
You should pay more attention to epistemic tools at this point and not particular points of debate. Disagreement should be parsed as being about the general reasoning tools you use, not the subject matter that triggered the problem. It doesn’t matter whether you win a debate; you might learn or fail to learn something useful about general reasoning tools in either case, so a focus on winning/losing debates is wrong if you are trying to fix that particular problem. If you don’t pay enough attention to general reasoning methods, you may end up continuing to accept occasional defeats and celebrate frequent triumphs without getting significantly better at not generating new flawed arguments (which your opponents, given the general incompetence, won’t be pointing out to you).
(You may learn something useful from the sequence on words. See also this post and its dependencies: A Rational Argument.)
My point is that you’re busy talking about winning and losing debates, as if they were some sort of contest. That view is quite different from viewing debates as an opportunity to seek truth. It places incentives on things like protecting arguments you’ve made, even when they’re wrong, and attacking your opponent’s arguments, even when they’re right. It lets you declare victory when you find a major flaw in your opponent’s argument, even if they were right and you were wrong.
Would you notice if your opponent had a sound piece of logic, but didn’t have the rhetoric to support it? Would you be able to extract that subset of their argument, and change your mind based on it, even if their conclusion was wrong? What if the evidence and argument they presented were enough to show that your position was subtly wrong, but you failed to notice because you were focused on the fact that their position was even wronger?
Would you notice if their logic and conclusion were flawed, but they had some evidence you hadn’t looked at before, which made your position more tenuous? Would you stop to properly reevaluate your whole position, and reduce your certainty in it? Or would it just be one more argument against an army? Or would you not even get that far, because you attacked the obvious flaw without looking at the whole argument—because that one flaw was severe enough that it was the only thing you had to worry about to win?
I still maintain that winning and losing debates is about status, not truth-seeking. (Note that it’s not a zero-sum game for the participants; a well-fought contest will be positive-sum, a poorly-fought one negative-sum, even if one participant comes out better in either case.) And you know what? That’s fine, and likely appropriate for getting what you want, in many contexts. But, if you’re here, I’m hoping it’s not all you want in every context. And I suspect that taking different approaches in different contexts will be epistemically hazardous; we don’t compartmentalize nearly as well as we’d like, sometimes.
On Less Wrong, people mostly seem to prefer “discussing things to find out the truth” instead of “arguing to win.” See this post here:
http://lesswrong.com/r/discussion/lw/ecz/whats_your_rationalist_arguing_origin_story/#comments.
My impression is that people will be more willing to discuss things with you, if you use the discussion style that’s mostly used on this site.
I’m not sure that’s actually true.
I wouldn’t say it is either but it is true that “discussing things to find out the truth” is actively advocated as a norm over “arguing to win”. That makes a big difference. Enough that I only take breaks from lesswrong in disgust every couple of years whereas on, say, the MENSA discussion boards I have frequented my tolerance wears thin far more quickly. With ‘debating as sparring’ as the human default having active pressure against that tendency at least helps reduce the phenomenon to more tolerable levels without eliminating it.
The fact that so many LWers disengaged from Epiphany that she thinks she has never lost an argument strongly suggests that it is true (ETA: at least in some contexts), and also that the norm has a downside of making some newcomers highly overconfident. (I can’t resist pointing out that a feature suggestion of mine probably would have helped here.)
Go on? It seems to be the most approved of/most encouraged way to discuss things, and there seems to be a lot less arguing to win than on other forums.
It means that the narratives surrounding pop-distillations of evolutionary psychological accounts of human sexuality shouldn’t be given too much weight when evaluating actual human beings, mostly.
Same here; I tend to find it actively repellent.
Hahahaha! Love it.
(: This makes me want to take a survey.
Your description of being attracted to intellect in men gave me the urge to find a way to debate you. Since this would probably count as competing for status, do you think you would find it attractive in person (assuming I actually could keep up with you)?
EDIT: I’m in a relationship and not seeking another: I’m just curious about your response to men trying to attract you with intellectual signalling.
Clearly you are not
In the Transactional Interpretation, Cramer claims:
What is it about “absorbers” (which seems very much like a magical category, morally equivalent to “observers”) that makes them non-magical and therefore different from observers? When I go through and replace “absorber” with “observer” and the like, the result seems to say more or less the same thing. Therefore either I’m missing something (which is quite likely) or there’s no ontological difference between the two concepts.
EDIT: The most interesting thing about the linked essay is that he compares TI to straw-Copenhagen in a way that is eerily similar to the way EY compared MWI to straw-Copenhagen.
I don’t think that you are missing anything.
An interesting blog post by Razib Khan on “Atheism+”.
I personally love nothing more than a Great Loyalty Oath Crusade.
Linked from Richard Carrier is this piece:
I really hate it when someone tells me not to do something in a way that really makes me want to do it. I mean, I never thought of literally telling someone to self-abuse themselves anally before reading this post.
These comments seem terrible
They are usually better. I’m not sure why he isn’t wielding the moderator rod as harshly as usual; perhaps he is afraid of coming off as partisan?
the explanation is banal. 10 hour days at my “day job” + i sleep 6 hours + and have a daughter. not much on the margin. i devote way more time to moderation of comments than a typical blogger as it is, so it shows when i cut back.
i don’t see what that has to do with anything. LW people say stupid things all the time.
addendum: i don’t have much experience on this forum, but i am friends with people associated with the berkeley/bay area LW group. as i said, LW people say stupid things all the time. but, LW people tend to not take it personally when you explain that they’re being ignorant outside of domain, which is great. so my last comment wasn’t really meant as negatively as it might have seemed. but the back & forth that i have/had with the LW set does not translate well onto my blog, where there is usually a domain-knowledge asymmetry (i’m pretty good at guessing the identity of commenters who know more than me, and usually excuse those from aggressive moderation, because i wouldn’t know what to moderate).
There is a reason I usually state “the comments are well worth reading” when linking to your blog posts here. You are clearly doing something right; while there are of course false positives people can point to, the losses from those are far outweighed by the gains.
LW, if anything, is remarkably bad at this kind of gardening. We don’t downvote well-meaning but clueless commenters enough, and when we do, one merely has to complain about being downvoted to inch back into positive karma.
I agree, yet for some reason suspect that your ideal would see an entirely different subset of comments downvoted to oblivion and suspect I would just leave if you had your way (and that you would do likewise if I had my way). From what I have seen I’d also leave in an instant if Razib had that kind of power.
This is the advantage of having the moderation influence distributed (among multiple moderators or, in this case, just voting) rather than in the hands of one individual. Neither one of us has enough power to change the forum such that it is intolerable to the other. The failure mode only comes when the collective judgement is abysmal, and even then it is less catastrophic than one ego holding sway.
Really? Honestly I think I would find a forum moderated by you well worth visiting and depending on how much time you put into it, might be much better than LW.
I think we probably agree on 90% of posts that should be downvoted but aren’t.
Almost certainly and likewise probably more agreement than between randomly selected individuals. The problem comes if any part of that 10% happens to include things that I am strongly averse to but which you consider ok and use. I wouldn’t expect you to hang around if I started banning your comments—I certainly wouldn’t take that kind of treatment from anyone (unless I was getting paid well).
I never understood people who get all offended and scream censorship if one or two of their posts get moderated while the vast majority are let through. If, however, you felt that a quarter or a third of my comments were objectionable, I wouldn’t bother commenting any more, though I might keep reading.
I wouldn’t accept too many more than, say, two or three a year that I reflectively endorsed even after judgement. But I wouldn’t call it censorship. It’s some guy with power exercising it with either (subjectively) poor judgement or personal opposition to me. It’s not something I prefer to accept but I’m not going to abuse the word ‘censorship’.
i wasn’t expecting much from that thread. i was more curious about the rationale of the atheism+ proponents. i got confirmation of what i feared....
That isn’t surprising. The reasoning:
… struck me as odd. Before seeing your contradiction and then confirming your judgement for myself I had been substituting “and yet despite that” for “thus”. Fallout from Razib banning or driving away quality commenters has reached even here. At a first approximation I expect such moderation to support the ego of moderator and drive away any intellectual rivals, not guarantee that it is worth reading.
It takes more than ‘vigor’ to make moderation beneficial.
Don’t multiply your anecdotes, since your source is just gwern getting banned for a while.
It is easy to speak like this since it appeals to the anti-authoritarian impulse of the average LW reader but I invite you to inspect the uninformed drivel one can read in the comment sections on some other quality blogs dealing with similar topics.
I would argue based on comparison to other such blogs that the occasional mistakes are worth it to maintain a good signal to noise ratio. I am not alone in this assessment.
Excuse me? No it isn’t. You are mind reading, and incorrectly. (Discussions at that time brought attention to others who didn’t wish to bother with Razib.)
No. I don’t want to compare to a known inferior solution, and the endorsement being evaluated was that they were worth reading, not that elsewhere on the web is worse. There is a reason I don’t tend to hang out in the comments sections of personal blogs. They aren’t an environment that provides incentives for valuable comment contributions, and neither lax moderation nor vigorous moderation in defense of self-interest produces particularly impressive outcomes. Actual ‘moderate’ and vaguely objective moderation is rare. Lesswrong’s karma system is far superior and produces barely tolerable comment threads most of the time.
The karma system in itself is not what made this site interesting, not by a long stretch. While some very bad comments did make it through that now don’t, Overcoming Bias before the karma system had interesting discussions as well.
The karma system is a key feature of what made LW what it is, but it isn’t exceptional in this. Just as vital were the features of its demographics, the topics we chose, and the norms and culture that developed. If any of those wash out, LessWrong becomes nothing but a smaller, suckier reddit.
That would indeed be a strange position for someone to take.
I didn’t mean to state that was what you were saying, but I was questioning why you seem so sure moderation is an inferior solution based on conversations on LessWrong sucking less. I pointed out that this seems rather weak evidence, since OB didn’t suck much more.
if LW gave me dictatorial powers i would have nuked this sub-thread a long time ago, and saved a lot of people productive time they could have devoted to more edifying intellectual pursuits.
also, as a moderate diss, i don’t delve deep into LW comments much anymore. but some of these remind me now of usenet in the 1990s. what i appreciate about the ‘rationality’ community in berkeley is that these are people who are interested in being smart, not seeming smart.
I follow your comments, because you usually have something interesting to say—and usually something that gets a little close to the borders of what is permissible on less wrong.
Now, sorry to say, your recent comments have become boring. Has Less Wrong become even more repressive, or did you just run out of things to say?
You are right about my recent comments being somewhat boring. In the past I’ve been told by people that they tend to read my posts because they are usually high-quality corrections or fun gadflyish needling.
Maybe my comments are more boring because there are fewer things wrong in interesting ways? Not that I would imply there are fewer things wrong in general, unfortunately. I mostly agree with the recent criticisms I’ve made, but some of them were pretty dull to write, and I guess that shows. There are some signs that the political discourse is on a lower level than it was. I unfortunately often end up talking about politics, as I saw politically motivated stupidity on some topics. The other explanation is that I’ve been using the site to procrastinate more and thus didn’t bother to abstain from marginal comments. There is, however, no excuse for spending way too much time on useless crappy meta debates, as I did a week or two ago.
When I think of what posts of value I made in the past 30 days, in which I’m apparently among the top contributors, all I can think of that is of real value are the link posts. Which aren’t bad, as I think LessWrong as a community does not update when exposed to good ideas and material from the outside. That these are the only recent posts I see value in shows I have either not taken the time or not had the inspiration for new original ideas or synthesis.
Perhaps I need to study more new material, perhaps I need to do more thinking, perhaps I need a break. On the other hand, I do think LW didn’t really learn what I hoped it would from my old comments, so maybe this is more a problem of me sounding like a broken record because I have to keep repeating the same points; since this bores me, I do it more poorly than before. So perhaps I need a new venue.
I’ve been meaning to take another month’s leave from the site starting some time this September, to improve the quality of my writing. I guess this is as good as any day to start. Since this topic is open I may as well ask for any more specific feedback you or anyone else on the site might have.
My suggestion to you is the same as the one I gave to wedrifid: write more posts relative to comments. Comments are for asking/answering questions, or fixing mistakes in other people’s posts, or debates. If you think you have something to teach LW, please do it via posts, where you can organize your thoughts, put in the effort necessary to bridge the inferential gaps, and get the attention you deserve.
(If the suggestion doesn’t make sense to you, I’d be interested to know why. As I said, I made the same suggestion to wedrifid before, but he didn’t respond to agree or disagree, nor did he subsequently write more posts, which leaves me wondering why some LWers choose to write so few posts relative to comments.)
It does make sense to me. I seem to have massive will failure when it comes to writing actual articles. I’ve tried to fix this by writing more public and private drafts, but these generally come out disappointing in my eyes. Also writing comments requires little motivation while writing articles feels like work.
While the strategy of comment>draft>article has done some good, it isn’t good enough at all. No more commenting until I write up an article that is worth submitting, if I don’t, well too bad.
Nuke’m solution to the Newcomb problem: tell Omega that you pick what he’d have picked for himself, were he in your situation. That’ll Godel him.
I think this should be “I’ll pick the opposite of what you’ll predict me to pick”, otherwise Omega will Loeb you...
How? I thought that asking Omega to model himself would throw him inside the model, which is bad enough. Yours is just blatantly incompatible with Omega being able to predict your choice, so he probably would not offer you the game at all.
(Also, unfortunately, neither is permitted as a reply, since Omega’s prediction is withheld from the player, according to the setup.)
(semi-OT but strikes me as of interest) “You know the science-fiction concept of having your brain uploaded to a computer and then you live in a simulation of the real world? Going to work for Google is a bit like this.” Openness in the wider culture outside open source.
An interesting analogy. If we were to apply it to uploads, one wonders whether the Googlers are more or less productive once inside the Google bubble...
We are not the first to have meta discussions. Where are the best ideas on technical and social means to foster productive and reduce unproductive discussion? Are there bloggers that focus on getting the best out of “the bottom half of the Internet”?
Maker’s Schedule and Manager’s Schedule by Paul Graham
Anybody know what happened to user RSS feeds? It used to be you could get them with “lesswrong.com/user/username.rss”, but that now says no such page.
2 separate related comments:
1) I’m moving to Vienna on the 25th. If there exist lesswrongers there I’d be most happy to meet them.
2) Moving strikes me as a great opportunity to develop positive, life-enhancing habits. If anyone has any literature or tips on this I’d greatly appreciate it.
I’m in Vienna through the 29th, and know of at least one other LWer here. PM me and we can meet up.
Game AI vs. Traditional (academic) AI
Peter Watts considers the wisdom of The Conspiracy.
A short draft for an article where I criticize Yvain’s Worst Argument in the World
Did you mean that to be “Worst Argument in the World”?
Ok this is a somewhat embarrassing typo. I saw a few problems with it but I didn’t mean to imply it was the worst article in the world. Fixed.
Sorry for missing the stupid questions thread, but since the sequences didn’t have something direct about WBE, I thought Open thread might be a better place to ask this question.
I want to know how the fidelity of Whole Brain Emulation is expected to be empirically tested, other than by replication of taught behaviour?
After uploading a rat, would someone look at the emulation of its lifetime and say, “I really knew this rat. This is that rat alone and no one else”?
Would only trained behaviour replication be the empirical standard? What would that mean in terms of emulating more complex beings, who might have their own thoughts other than the behaviours taught to them? Please point me to any literature on the same. I checked the WBE roadmap and replication of trained behaviour seems to be the only way mentioned there.
You didn’t miss the stupid questions thread, you can still post there. It doesn’t really matter how old a thread is.
People with pet rats notice personality differences.
Rats do have personality differences and I would expect people to ‘notice’ differences in personality even if they didn’t exist.
Rats even seem to have an IQ of sorts. Truly, our fuzzy little friends are often underestimated.
Thanks for all the replies. Sorry for the delay in response.
Does this mean that, in terms of empirically evaluating brain emulations, we will have to “walk blind” on the path of emulating higher and higher organisms until we reach a level of complexity, like rats, where we can truly state that a personality is being emulated and not just a generic instance of an animal?
Probably. I’ve seen proposals for testing uploads (or cryonics) by learning simple reactions or patterns, but while this is good for testing that the brain is working at all, it’s still a very long way from testing preservation of personal identity.
The world (including brains) is strictly deterministic. The only sources of our mental contents are our genetics and what we are “taught” by our environments (and the interactions between them). The only significant difference between rat and human brains for the purpose of uploading should be the greater capacity and more complex interactions supported by human brains.
Was reading up on the Flynn effect, and saw the claim it’s too fast to reflect evolution. Is that really true? Yes, it’s too fast, given the pressures, for what Darwin called natural selection, given the lack of anything coming along and dramatically killing off the less intelligent before they can reproduce. But that’s not the only force of evolution; there’s also sexual selection.
If it’s become easier in the last 150 years for women to have surviving children by high-desirability mates, then we should, in fact, see a proportionate increase in the high-desirability characteristics. And since IQ and socioeconomic status are correlated, and SES is a known high-desirability characteristic, we would expect an increase in IQ accordingly, insofar as IQ is heritable.
And, in fact, there is a change in society that would do that — increasing urbanization. Not only have cities become healthy enough to have non-negative population RNIs for the first time in history, but they’ve also become the home of the majority of the human species for the first time in history. Studies of infidelity rates show it does, in fact, correlate fairly strongly with urbanization (probably for the logical reasons that increased population density increases opportunities and urban anonymity makes it easier to conceal from a mate).
So, the urbanization of the last 150 years increased successful infidelity. The usual models of sexual selection indicate that successful infidelity by women should result in high SES men having more children. IQ is correlated with high SES. IQ seems to be heritable in large part. And the period where we would expect high SES men to have more kids is matched by an increase in the general population’s performance on tests of IQ.
I’m currently operating without good access to scientific journals to see if this has been considered and debunked, or not considered, or considered and put forward. But, at least sitting here just thinking about it without the resources to test it (or even model it effectively mathematically), it seems an increase in the genes that increase IQ as a result of sexual selection could be a plausible explanation of the Flynn Effect.
But there’s also an opposing evolutionary pressure: educated women have fewer children.
The Reproduction of Intelligence attempts to quantify this effect:
Shouldn’t they be checking for number of grandchildren?
I’d assign a low probability to this hypothesis. Most of the Flynn effect seems to occur on the lower end of the IQ spectrum moving upwards. Source. This is highly consistent with education, nutrition and diseases hypotheses, but it is difficult to see how to reconcile this with a sexual selection hypothesis.
Also, I’m not sure that your hypothesis fits with expected forms of infidelity. One commonly expected form of infidelity would be with genetically strong males, while trying to get resource-rich males to think the children are theirs. If such infidelity is a common pattern, then one shouldn’t expect much selection pressure for intelligence; if anything, the opposite.
The fraction of the population which engages in infidelity even in urban environments is not that high. Infidelity rates in both genders are around 5-15%, but only about 3% of offspring have parentage that reflects infidelity. Source. So the selection impact can’t be that large.
One thing worth noting though is that one of the pieces of evidence for disease mattering is that there’s a correlation between high parasite load and lower average IQ, but your hypothesis would also cause one to expect such a correlation since reduced parasite load would be better correlated with better medicine and more functional urban environments in general. This is evidence in favor of your hypothesis.
I’m not aware of any obvious way to test your hypothesis. I’d be curious if you have any suggested things to look at or if anyone else has any ideas.
It reconciles quite well, actually.
The greater the genetically-determined status differential between a woman’s husband and a potential lover, the more differential advantage to the woman’s offspring in replacing the husband’s genes with those of a higher-quality male. So the lower the status of the husband, the greater the incentive to replace his genes with another’s.
Assuming for a moment IQ is 100% heritable and IQ is linear in advantage, the woman with an IQ of 85 and a husband of IQ 85 will see her kids have an IQ of 85 if she’s faithful, and 115 with a lover of 145, for a net advantage to her kids of +30 IQ if she strays. If a woman and her husband are IQ 100, the same lover will raise the IQ +22.5; her kids get less advantage than Mrs. 85. In the case of Mr & Mrs. IQ 115, the advantage is only +15. For Mr & Mrs. IQ 130, the advantage to cheating is only +7.5. For Mr. & Mrs. IQ 145, cheating with a lover of IQ 145 doesn’t benefit her kids at all, while for Mr & Mrs. IQ 160, she wants to avoid having kids by a lover of IQ 145.
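A minimal sketch of that arithmetic in Python (purely illustrative, assuming the same midparent model and 100% heritability as above; the function name is mine):

def cheating_advantage(wife_iq, husband_iq, lover_iq):
    # Midparent model, 100% heritable: a child's expected IQ is the mean of its parents'.
    child_by_husband = (wife_iq + husband_iq) / 2.0
    child_by_lover = (wife_iq + lover_iq) / 2.0
    return child_by_lover - child_by_husband  # simplifies to (lover_iq - husband_iq) / 2

for iq in (85, 100, 115, 130, 145, 160):
    print(iq, cheating_advantage(iq, iq, 145))
# +30.0, +22.5, +15.0, +7.5, 0.0, -7.5: the gain shrinks as the couple's own IQ rises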
So, it is precisely the women on the low end that have the greater incentive to cheat “up”, which we would expect would result in more cheating, and thus the low end where IQs would increase the most.
Also, the lower status the woman’s husband, the easier it is to find a willing lover of higher status, and thus the greater the opportunity to replace the husband’s genes with another’s. Mrs. 85 can find a lover with IQ 100 more easily than Mrs. 100 can find a lover with IQ 115, even though they both have the same incentive to find a lover of +15 IQ points. Mrs. 115 has even more difficulty finding a lover of IQ 130, and so on.
So, it is precisely the women on the low end that have the greater opportunities to cheat “up”, which we would expect would result in more cheating, and thus the low end where IQs would increase the most.
Assuming monogamous and assortative marriage, there’s a serious limit to how high resource/high status a male a woman can marry relative to her own status. Assuming she’s already landed the best husband she can manage, it is then in her subsequent interest to acquire the best genes for her kids she can. Insofar as better genes result in higher status, this would translate to favoring high-status males as lovers to produce kids supported by the best-she-can-manage husband. If status correlates better with IQ rather than strength in human societies, well, we’d expect that to select for IQ.
Ahah. My data on this was substantially out-of-date. That is a serious blow to the hypothesis.
(Hmm. Except that in modern welfare states, the government has replaced the husband as supporter on the low end of the socioeconomic ladder, so maybe the effect is now most strongly happening among the children of unmarried women, which would cause a drop-off in children of infidelity corresponding to the rise of out-of-wedlock births? Meh.)
Yeah, me neither.
A test would be to look at whether there is a correlation between cheating and IQ, and whether this correlation is influenced by sex. Asymmetrical incidence of STDs with respect to the sexes could also be an indicator.
How would you test this model?
There are so many other factors that you’re probably getting mostly noise there. For instance: I read somewhere that depending on whether babies drink breast milk or formula, they may lose 10 points (to formula); the reason stated was lack of omega-3. What about lead paint chips? We have banned lead, which should increase IQ, after an initial decrease when lead paint began to be used. (There’d be a similar increase/decrease cycle with the invention of formula.) The point of these two is that as we learn more, we may be preventing a lot of things that previously caused children brain damage. And then there are other health factors which we’ve improved. In the Great Depression, I read, 10% of the population starved to death. Starvation, for those who survive it, can cause brain damage. Were there other starvations before this that had stopped happening? When did helmets become popular for people riding bicycles and skateboards and such?
There are just too many factors.
Heh, and I read somewhere that here in America, the Flynn effect has stopped. O.O
1) Sure. I’m not claiming the Flynn effect is genetic; I’m disputing the common claim that it can’t be genetic.
2) Whether the Flynn effect has stopped or not is an area of ongoing dispute; some studies suggest it merely paused for a while. And if it has ended . . . that might merely mark that America’s reached the new equilibrium point under urban infidelity conditions.
“I’m disputing the common claim that it can’t be genetic.”
Oh, sorry.
I have found out the hard way, myself, that it’s really best to start with a single sentence that makes one’s point clear in the very beginning. Maybe that would help your commenters respond appropriately.
When people say that it’s “too fast,” they are making a quantitative claim. The Flynn effect is a standard deviation per generation. Under your scenario of no selection on women, this would require the bottom half of the bell curve to have no biological children. 50% cuckoldry, perfectly correlated with IQ? Even men who think they’ve been cuckolded don’t have that high a rate.
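A back-of-envelope check (my own sketch, using standard truncation-selection arithmetic, not from the parent comments): even if the bottom half left no children at all, the selection differential would be the mean of the upper half of a normal distribution, about 0.8 SD, and the per-generation response is that times the heritability.

from math import sqrt, pi

# Mean of the upper half of a standard normal: phi(0) / P(X > 0) = sqrt(2/pi) ~ 0.80 SD
selection_differential = sqrt(2.0 / pi)

for h2 in (1.0, 0.8, 0.5):  # assumed heritabilities, for illustration
    print(h2, h2 * selection_differential)
# ~0.80, ~0.64, ~0.40 SD per generation: short of a full SD even in this extreme case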
Poll: What is the smallest portion that can be considered a “vast majority” of the whole? What about a “vast, vast majority”?
You’re asking a question about language use here, yes?
Depends on the context.
If 90% of U.S. voters voted for a particular policy proposal I would comfortably describe that as a “vast majority”, but if only 90% of sulfur atoms remained in an unstoppered container of sulfur at STP I would describe that as a “startlingly small percentage”.
On a minute’s thought, I’d say 2 standard deviations above mean portion-size for the context under discussion.
Two great posts from Julian Sanchez: Intellectual Strategies: Precisification and Elimination, and its follow-up On Partly Verbal Disputes. Related to our conception of “Rationalist Taboo”, and to Yvain’s Worst Argument in the World post.
Sample quote:
Trying to measure the shadow economy
People thinking hard about measuring something they have no exact way of checking.
It is possible there simply isn’t any such experimental material. If I had to bet on it I would say it is more likely there is some than not, though I would also bet that some things we wish were done haven’t been so far. In the past I’ve wondered if we can in the future expect CFAR or LessWrong to do experimental work to test many of the hypotheses, based on insight or long fragile chains of reasoning, that we’ve come up with. I don’t think I’ve seen anyone talk about considering this.
At the mention of, say, CFAR doing this, the mind jumps to them doing expensive experiments or posing long questionnaires to small samples of psychology students and then publishing papers, like everyone else does. This is something that may not be worth their effort but seems doable; the idea of LWers getting into the habit of testing their ideas on human rationality, however, seems utterly impractical.
Or is it? How useful would it be if we had something like this site visited by thousands or tens of thousands, where high-karma LessWrong posters and CFAR researchers could submit their quizzes and online experiments? How useful would it be if we made such a data set publicly available? What if, in addition to this, we could data-mine how people use apps or an online rationality class?
Would it be useful? I think it would. Would it be used? There are many publicly available data sets, and we see little if any original analysis based on them here. We either don’t have norms encouraging this or we don’t have enough people comfortable with statistics doing it. Problems like this aren’t immutable. The Neglected Virtue of Scholarship noticeably changed our community in a similarly profound way, with positive results. Is building knowledge this way even possible in a field that takes years to study? A fair question; especially for tasks that require technical competence, the answer is yes.
Any thoughts, information, or research about selective effects of arranged marriages?
Users love simple and familiar designs – Why websites need to make a great first impression
I need help in explaining this case to myself.
I just talked to someone and she praised her doctor: she complained of chest (armpit) pain, and the doctor, untraditionally, cured her with acupuncture on the spot. I asked her and she said the pain had been going on for a few weeks (and was quite intense), and it disappeared the next day. Some bias IS expected of her (more so than from the average person).
Maybe it’s just random chance plus unconscious exaggeration, but I doubt it could have been so strong. After I started writing this, I looked it up on Wikipedia to confirm that there are no working forms of acupuncture, and this gave me the idea that it might have been placebo. Are there any other explanations I couldn’t think of? I find some similar cases to be quite surrealistic, given the premise that the treatments used were proven to be ineffective. My own grandmother was supposedly sick with cancer and the doctors told her she had no more than a month to live; then some alternative medicine practitioner told her to drink turtle blood, and she did so, and she’s alive now, more than 10 years later. It’s extremely unlikely that my father lied about this, but I thought it was incompetent doctors; incompetence among physicians is quite common, according to my personal anecdotes and the theory I use to support them.
Do you need a different explanation? The super-surprising effectiveness of placebo feels a bit offensive to us truth-seekers; but the universe and our brain-architecture aren’t required to play fair with us, alas. On certain occasions, the deluded and deceived may have an advantage.
? Am a bit confused, because when I read the Wikipedia article, it says that acupuncture (both “real” and “sham”) was seen to be effective in combating pain. So where did you read that it was ineffective?
Oh damn, I missed that. I got too distracted by the Effectiveness research section. So there you go, I found a reasonable explanation, although I was more looking forward to some sort of fundamental bias that affects everyone, which I must have somehow missed. It would have been a good explanation for some things.
Still, I’m waiting for someone to appear with a very good hypothesis of the cancer case. I’m not saying there has to necessarily be one, but there might be. Placebo was in fact a very good hypothesis, but I’m not sure if you can cure cancer with placebo (“Yes, you can” would close the case).
Edit: I looked it up, apparently placebo doesn’t affect cancer. Surprising.
Does profit-maximizing software eat the world and go Darwinian?
I don’t think that is a good description of what happened.
Konkvistador: But that is a rather huge topic… it seems to me
Konkvistador: that the arbitrary thing they optimize for may turn out to be something that makes them eat up a lot of reality
Konkvistador: also the humans present a sort of starting anchor, what do humans want? They want information processing, they want energy, they want food, they want metal, finished products
Konkvistador: What do companies trying to provide this want? Information processing, energy, maintenance, finished products and more CPU cycles
Konkvistador: While of course there may be tragedy of the commons situations where local profit seeking is harmful enough that it destroys the market, but overall I think the market will probably prove more resilient than humans
Jello_Raptor: no, not entirely accurate, but what happened was a classic bubble compounded by people manipulating the system so that the ratio of over deviated wildly, and sometimes even went negative.
Konkvistador: also remember these are now replicators kind of
Konkvistador: If I have a Robot that is trying to earn enough money so it can build more robots to earn more money
Konkvistador: It is not hard to imagine a machine or a machine system just starting to self-replicate on its own if say the market crashes
Jello_Raptor: right
Konkvistador: As the economy gets worse and worse it realizes that building its own units is cheaper than making them on the market
Jello_Raptor: right
Konkvistador: as long as building more units with stuff it can’t monetize (since the market is now hypothetically chasing something not really tied to reality)
Konkvistador: means those units earn a tiny bit more doing that stuff that can be monetized
Konkvistador: he’ll probably build them in exchange for a share of their profits
Jello_Raptor: right
Konkvistador: If robot variation is low
Konkvistador: it might be imaginable that the untied-to-reality thing the market optimizes for has huge returns
Konkvistador: that make the cost of doing real world stuff too high from an opportunity cost perspective
Konkvistador: not enough capital for it and the robots all being perfectly rational don’t make any mistakes investing into it
Jello_Raptor: ok, you know how you memory dump whenever coding and lose track of what you’re doing, that just happened to me vis-à-vis this convo
Konkvistador: as soon as something makes the mistake and it replicates that mistake in new units Darwin is back
Jello_Raptor: also gotta go
Could one train an animal* to operate a Turing machine via reinforcement mechanisms? Would there be any use for such a thing? (Other than being able to say you have an organic computer...)
*Obviously you can train humans, and I guess likely great apes as well. But what would be the lower bound on intelligence? A rat? An insect?
What exactly do you mean by “operate a Turing machine”?
If you have a simple enough machine and translation of symbols on the tape to stimuli for the animal, it seems easy (in principle) to use classical and operant conditioning on a rat to push the appropriate buttons to change the machine’s states.
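To make “simple enough machine” concrete, here is a hypothetical sketch in Python: the classic 2-state, 2-symbol busy beaver needs only four distinct stimulus-to-response pairings, each mapping a (state, symbol) stimulus to a write/move/next-state response. The framing of the transition table as conditioned responses is mine.

# Each (state, symbol) pair is one "stimulus"; the required "response" is
# what to write, which way to move, and which state-lever to press next.
TABLE = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(step_limit=100):
    tape, pos, state = {}, 0, "A"
    for step in range(step_limit):
        if state == "HALT":
            return step, sum(tape.values())  # steps taken, ones written
        write, move, state = TABLE[(state, tape.get(pos, 0))]  # the conditioned response
        tape[pos] = write
        pos += move
    return step_limit, sum(tape.values())

print(run())  # (6, 4): halts after 6 steps with four 1s on the tape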
Physicists cast doubt on renowned uncertainty principle.
This isn’t from The Onion (whether it’s “real” or from The Onion is macro uncertainty); it seems that, by being clever, it’s possible to do somewhat better measurement of subatomic particles than was expected. Does the article look sound? If so, what are some implications?
The title of that article is extremely misleading. The uncertainty principle, as understood in contemporary physics, is a consequence of the (extremely well-confirmed) laws of quantum mechanics. Momentum-space wavefunctions in quantum mechanics are Fourier transforms of position-space wavefunctions. As a consequence, the more you concentrate a wavefunction in position space, the more it spreads out in momentum space, and vice versa. More generally, there will be an “uncertainty principle” associated with any two non-commuting observables (two operators A and B are non-commuting if AB - BA is not 0). Any experiment challenging this version of the uncertainty principle would be contradicting the basic math of quantum mechanics, and the correct response would be to defy the data.
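In symbols (the standard textbook form of the general relation, not taken from the linked article): for any observables A and B,

\sigma_A \, \sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|, \qquad [A,B] = AB - BA.

For position and momentum, [x, p] = i\hbar, which gives \sigma_x \sigma_p \ge \hbar/2, with no reference to any measurement procedure.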
But this experiment does not challenge the uncertainty principle, it challenges Heisenberg’s original interpretation of the uncertainty principle. Rather than seeing the principle as a simple consequence of the mathematical relationship between position and momentum, Heisenberg concocted a physical explanation for the principle that appealed to classical intuitions. According to his interpretation, the uncertainty principle is a consequence of the fact that any attempt to measure the position of a particle (by say, bouncing photons off it) disturbs the particle, which leads to a change in its momentum. The correct mathematical explanation of the uncertainty principle, given above, does not make any reference to measurement or disturbance, you’ll notice.
Anyway, this experiment only challenges Heisenberg’s version of the uncertainty principle, not the actual uncertainty principle. Far from contradicting the math of quantum mechanics, the falsity of Heisenberg’s interpretation is actually predicted by that math, as shown by Ozawa. The abstract of the paper you link makes it clear that the authors are distinguishing between the actual uncertainty principle and Heisenberg’s interpretation:
I guess you could refer to Heisenberg’s interpretation of the uncertainty principle as “Heisenberg’s uncertainty principle”, but it seems to me that that is just a recipe for confusion. Intelligent laypeople will get the impression that this is some profound and fundamental sea-change in the foundations of quantum mechanics. It isn’t.
Thanks.
I’m wondering (assuming that the work pans out) whether there would be technological implications even though the foundations of physics aren’t shaken at all.