Open Thread, May 16-31, 2012
If it’s worth saying, but not worth its own post, even in Discussion, it goes here.
Walt Whitmanisms
The original:
Sark Julian:
HonoreDB:
Me:
Steven Kaas:
Steven Kaas:
Welcome to Life: the singularity, ruined by lawyers.
(Humor, three-minute YouTube clip.)
The dark arts are in action here. Beware, lest you Generalize from fiction.
It’s an interesting vision, but lawyers have nothing to do with the problem. The problem is the commercialization of something that our moral intuitions say should not be commercialized.
Being upset at lawyers about this state of affairs is like being angry at a concrete truck for building the foundation of a building in an offensive location.
The majority of the stuff on that guy’s website is pretty interesting. He’s got several TED talks, one of which is essentially on prediction markets.
If you had to pick exactly 20 articles from LessWrong to provide the greatest added value for a reader, which 20 articles would you select?
In other words, I am asking you to pick a “Sequences: Micro Edition” for new readers, or for old readers who feel intimidated by the size and structure of the Sequences. No sequences and subsequences, just 20 selected articles that should be read in the given order.
It is important to consider that some information is distributed across many articles, and some articles use information explained in previous ones. Your selection should make sense for people who have read nothing else on LW and cannot click on hyperlinks for explanation (as if they were reading the articles on paper, without comments). Do the introductory articles provide enough value even if you don’t include the rest of their sequence in the selected 20? Is it better to pick examples from more topics, or to focus on one?
Yes, I am hoping that reading those 20 articles will encourage the reader to read more, perhaps even the whole Sequences. But the 20 articles should provide enough value when taken alone; they should be a “meal”, not just an “appetizer”.
It is also OK to pick LW articles that are not part of the traditional Sequences. It is OK to suggest fewer than 20 articles. (Suggesting more than 20 is not OK, because the goal is to select a small number of articles that provide value without reading anything more.)
Now let’s try it differently. Even if you feel that 20 articles is too small a subset to capture the richness of this site, let’s push it even further. Imagine that you can only list 10 articles, or 7 articles, 5 articles, 3 articles, or just the single best article on LessWrong. It will be painful, but please do your best.
Why? Well, unless one of us puts their selection of 20 articles on the wiki ignoring the others, the resulting selection will be a mix of something that you would select and something that you wouldn’t. The resulting 20 articles will contain only 10 or maybe fewer articles from your personal “top 20” selection. So let’s make those the best 10 articles.
However, I ask you to avoid strategies like this: “I think articles A and B are good. A is better than B, so if I had to choose only one article, I should choose A. But article A is widely popular, and most other people will probably choose it too, so I will pick B, which maximizes the chance that both A and B end up in the final selection.” Please avoid this. Just pretend that the remaining articles will be chosen randomly (even if other people have already posted their choices), so you should really choose what you prefer most. Please cooperate on this Prisoner’s Dilemma.
Also, please explain your reasons for selecting those articles. Maybe you see an aspect others are missing. Maybe others can suggest another article which fulfills your goal better. (In other words, if you explain yourself, others can extrapolate your volition.)
My choice, the most important three articles:
Why truth? And… -- it contains motivation for doing what we do, and explains the “Spock Rationality” misunderstanding
An Intuitive Explanation of Bayes’ Theorem—a biology/medicine example focusing on women, and an interactive math textbook (great to balance the LW bias: male, sci-fi, computers, impractical philosophy, nonstandard science)
Why Our Kind Can’t Cooperate—a frequent fail mode of unknowingly trying to reverse stupidity in real life, important for those who hope to have a rational community
then these:
How to Be Happy—a lot of low-hanging fruit for a new reader, applying science to everyday life; bonus points for being written by someone else
Something to Protect—bringing the motivation to the near mode; the moral aspect of becoming rational
Well-Kept Gardens Die By Pacifism—a frequent fail mode of online communities; explanation of the LW moderation system
and then these:
Making Beliefs Pay Rent (in Anticipated Experiences) -- the difference between a useful and useless belief, and how to avoid discussing mere words
Knowing About Biases Can Hurt People—warning about a possible fail mode for people who enjoy reading the articles about (other people’s) biases
Guessing the Teacher’s Password—education is an important topic for many people
How to Beat Procrastination—an important topic for many people online, and also very popular one (might bring hyperlinks to LW)
Note: I think that each of these articles can be read and understood separately, which in my opinion is good for total newbies. People expect short inferential distances, and you must first gain their attention before you can lead them further. If they enjoy the MicroSequences, they will be more likely to continue with the Sequences. I also think these articles are not controversial or weird, so they will give a good impression to an outsider. The selection includes math, instrumental rationality, and the social aspects of rationality.
Funny thing: it was rather painful to reduce my suggested list to only 10 articles, but now I feel happy and satisfied with the result. Please make your own list, independently of this one. (Imagine that you have to select 10 or fewer articles for a friend.)
OK, my first shot, probably just to encourage people to do better than this:
The Simple Truth (humans love storytelling)
Why truth? And… (the motivation for what we do)
What Do We Mean By “Rationality”?
An Intuitive Explanation of Bayes’ Theorem (medical examples = serious business)
Making Beliefs Pay Rent (in Anticipated Experiences) (important point)
The Virtue of Narrowness (important point)
Guessing the Teacher’s Password (people will agree with this)
Universal Fire
Universal Law
Mind Projection Fallacy
Politics is the Mind-Killer
Applause Lights (an example of mindkilling)
Affective Death Spirals
Reversed Stupidity Is Not Intelligence (important point)
Uncritical Supercriticality (mindkilling in extreme)
Knowing About Biases Can Hurt People (important point)
Something to Protect (our connection to reality)
Why Our Kind Can’t Cooperate (community building)
Well-Kept Gardens Die By Pacifism (community protecting)
3 Levels of Rationality Verification
Practical Advice Backed By Deep Theories (instrumental value of LW)
How to Be Happy (something you can try at home)
EDIT: Oops, it was more than 20, I was in a hurry. The more important (IMHO) articles are now marked by a bold font, with explanation added.
Seems to me like you need Mysterious Answers to Mysterious Questions in there. That’s far and beyond one of my favorites.
(You can single-space your posts by putting two spaces at the end of each line. Do this, for it will save scrolltime.)
I’m going to avoid repeating ones on your list, entirely because I think repetition is bad. Here I go:
Twelve Virtues of Rationality (my favorite thing Eliezer has ever written bar none)
A Fable of Science and Politics
Transhumanism as Simplified Humanism
Absence of Evidence Is Evidence of Absence
The Fable of the Dragon-Tyrant
The Proper Use of Humility
Expecting Short Inferential Distances (I think this one and Knowing About Biases Can Hurt People should maybe be the first parts of the Sequences everybody reads.)
Truly Part of You
Newcomb’s Problem and Regret of Rationality
Three Dialogues on Identity (another favorite of mine)
The 5-second Level
Cached Selves
Learned Blankness
Why You’re Stuck in a Narrative
Your Inner Google
Errors vs. Bugs and the End of Stupidity
Yes, a Blog (more of an appetizer, I guess)
A Parable on Obsolete Ideologies
The trouble with picking stand-alone posts is that Eliezer’s sequences of posts are so much better.
What are the ones you would include if you were including repeats? (Viliam_Bur is asking for an absolute top 20, not several independent lists of good posts.)
Who exactly is “The Simple Truth” aimed at? As far as I can tell, the message is that worrying about cashing out the meaning of truth is not worth the effort in ordinary circumstances. That’s true, but it is a fully general counter-argument against studying anything—worrying about the meaning of “quantum configuration” has no practical payoff either, even though building things like computers relies on studying those sorts of things. Likewise, the meaning of truth is really hard if you actually examine it.
Put differently, religious people don’t disagree with us about what truth means; they disagree about what is actually true. And they are wrong, for the reasons detailed in “Making Beliefs Pay Rent.” In short, no real person is analogous to Mark, so no real person’s philosophical positions are contradicted by the story.
To repeat, the story doesn’t solve any real questions about truth; it simply says they are practically [Edit] unimportant (which is true, but makes the story itself pretty unhelpful).
For me the message of “The Simple Truth” was that intelligence should not be used to defeat itself. Being right, even if you can’t define it to a philosopher’s satisfaction, is better than being wrong, even if you can find some smart words to support it. The truth business is not about words (that’s the signalling business): when you are right, nature rewards you, and when you are wrong, nature punishes you. (Although among humans, speaking truth can cause you a lot of trouble.) At the same time it explains the origins of our ability to understand truth—we have this ability because having it was an evolutionary advantage.
Or maybe I just like that the annoying wise-ass guy dies in the end.
This is not about religious people, who disagree about what is actually true, as you said. This is about people who try to do “philosophy” by inventing ever more complex ways to sound stupid… err… profound, and who perhaps even sometimes succeed in convincing themselves. People who say things like “there is no truth”, because for anything you say they can generate a long sequence of words that you just don’t have time to analyze and debunk (and even if you did, they would just use a fraction of that time to generate a new sequence of words). If you haven’t met such people, consider yourself lucky, but I know people who can role-play Mark and thus ruin any chance of a rational discussion, and to a non-x-rational listener it often seems like their arguments are rather important and deep, and should be addressed seriously.
Anyway, “The Simple Truth” is kinda long, which I enjoyed but other people may hate; so there is probably no harm in removing it, as long as “Making Beliefs Pay Rent” and “Something to Protect” stay on the list.
I agree with this feeling, but “Do the impossible” or one of the nearby posts raises this point more explicitly and more effectively.
The problem with “The Simple Truth” is that, beyond the message I highlighted, the text is too open-ended. Mirror-like, the story contains whatever philosophical positions the reader wishes to see in it.
There are two possible kinds of people who can do this. (1) People with useful but complicated theories that you happen not to understand, and (2) stupid people—who might be poorly parroting a useful theory. Please don’t let the (negative) halo effect of the second type infect your view of the first type of people.
Generally, your objection pattern-matches to the argument that law is too complicated. Respectfully, I disagree.
I think you mean “practically unimportant” in your last sentence.
I’ve always understood the purpose of that article to be to pre-emptively foreclose objections of the form “but being rational is irrelevant, because you can’t really know what’s true” by declaring them rhetorically out-of-bounds.
Indeed a typo, thanks.
I’ve always taken the objection you mentioned as invoking the problem of the reliability of the senses (i.e., Cartesian skepticism), not the meaningfulness of truth. In the story, Mark is no Cartesian skeptic (of course, it’s hard to tell, because Mark is a terribly confused person).
I think skeptical objections to Bayesian reasoning are like questions about the origin of life directed at evolutionary theory. The criticisms aren’t exactly wrong—it’s just that the theory targeted by the criticism is not trying to provide an answer on that issue.
I’d add something like Keep your identity small, Beware Identity.
Which twenty have the highest number of votes?
These, but that’s probably not the best way to go about making a list. Many of the top posts require prerequisites, and there are some equally good posts that are not as heavily upvoted because they were published on OB or in LW’s infancy.
I actually started working on something similar, but it never really took off and real-world responsibilities prevented me from working on it for a while. Feel free to pick up where I left off. Anyway, here’s my first attempt (I may try again later):
Twelve Virtues of Rationality
The Cognitive Science of Rationality
The Bottom Line
Making Beliefs Pay Rent
An Intuitive Explanation of Bayes’ Theorem
Knowing About Biases Can Hurt People
A Fable of Science and Politics
Hindsight Devalues Science
Taboo Your Words
The Least Convenient Possible World
The Apologist and the Revolutionary
Mind Projection Fallacy
Confidence Levels Inside and Outside an Argument
The Fallacy of Gray
Ugh Fields
Cached selves
Conjunction Fallacy
Understanding Your Understanding
Humans are not automatically strategic
How to Beat Procrastination
I don’t know if the intention here is to debate other people’s choices, but: my wife started The Simple Truth because it was the first sequence post on the list and quickly became frustrated and annoyed that it didn’t seem to lead anywhere and seemed to be composed of “in jokes.” She didn’t try to read further into the Sequences because of the bad impression she got off this article, which is an unusually weird, long, rambling, quirky article.
I actually like The Simple Truth but I don’t feel that it makes a good introduction to the Sequences. But hey, this is just one data point.
I predict that when your wife read “The Simple Truth” she was not acquainted with (or was not thinking about) the various theories of truth that philosophers have come up with. I like it a lot, but when I first read it I was able to see it as a defense of a particular theory of truth and a critique of some other ones.
(In particular, it’s a defense of the correspondence theory, though see this thread.)
Edit: In other words, I think “The Simple Truth” appeals mainly to people who have read descriptions of the other theories of truth and said to themselves, “People actually believe that?!”
You’re correct. What I love about the Sequences in general is that it’s a colloquial, patient introduction to lots of new concepts. In theory, even somebody with no background in decision theories or quantum mechanics can actually learn these concepts from the Sequences. The Simple Truth is significantly different in tone and style from the majority of Sequence posts and the concepts which that post satirizes are not really introduced before the comedy begins.
If you go to http://wiki.lesswrong.com/wiki/Sequences and choose the first option (1 Core Sequences), then choose the first listed subsequence (Map and Territory), the very first post is The Simple Truth. The second choice is What Do We Mean by Rationality? which really, really seems like it should be the first thing a newcomer reads.
Same here, though I think it does depend on the reader’s background. People who strongly disbelieve in the concept of objective truth might find it helpful to have that taken care of before starting the sequences proper, but even then I’m not sure The Simple Truth is the best way.
You might be right—I’ll have to re-read it. I put this list together based on my memory of what these posts are like, and given how volatile memories are, I may be mistaken about their quality.
Edit: You’re right. I’ll change my list accordingly.
What is your intended audience, and what is the intended effect of reading these sequences? “Politics is the Mind-Killer” and “Well-Kept gardens die by pacifism” seem particularly relevant to online communities, for instance.
It was intended for new people on LW, who should be introduced to our “community values” (even without reading the whole Sequences). Also for smart people outside LW who are curious about what LW is and might decide to join later.
In both cases, the goal is to make clear what LW (and x-rationality) is, and what it is not, in a short amount of text. Perhaps writing a new text would be better, but making a selection of existing texts should be quicker.
Yes, but I think they also apply well offline. People can discuss politics in person, too. The lesson of well-kept gardens is indirect: some people are a net loss, and if you don’t filter them out of your social network, your quality of life will go down.
Now I added some explanations to my list, so the message is like this:
there is such a thing as truth/territory, and it has consequences in real life
to know = to make good predictions
it’s not about speaking mysteriously or using the right keywords, but about understanding the details
protect your values, don’t use your intelligence to defeat yourself
don’t let your emotions and biases make you stupid, but also don’t try to reverse stupidity
a rational community is a great idea, but it requires specific skills
here is how to use rationality to improve your everyday life
Nice idea—but maybe we should compress things further? I’ve read most of the sequences, but think/hope they could be condensed to about 10-20 pages with the core messages, in such a way that would be more accessible outside these realms.
I guess the idea is to find 20 existing articles that provide both the ideas and the arguments. Once there is some solution, it becomes easier to write those 20 pages than to start from scratch. Obviously, the 20 pages you mention have yet to be written, while the “20 best articles for isolated reading” may already have been written.
Stuff by Yvain
On the applications of bad translation.
I think “words don’t have meanings, people have meanings” is overdoing the concept, but not by much.
Nice find!
Genes are overrated, genetics is underrated
by Razib Khan
Genes and genetics go together in very nearly the same way as words and language.
Or, even more closely, as terms in a mass of spaghetti code.
Understanding the genetics of an organism is hard, because what geneticists are trying to do is simultaneously reverse-engineer that mass of code and learn what the terms are.
A very basic introduction to empiricism and Occam’s Razor.
They also missed the theory that is shaped like a star, but without the extraneous nonsense in the middle. Which is exactly as simple as their preferred theory.
So I’m entering an argument over fictional evidence, which is already a losing move, but who cares.
Taking the convex hull of the observations is obviously the right thing to do!
If you asked a mathematician for the simplest function from a point set in the plane to a point set in the plane, they’d flip a coin and say either the constant function that’s always the empty set or the constant function that’s always the plane. But that’s silly, because those functions don’t use your evidence.
(Other constant functions are out, because there’s no way to pick between them.)
So if you asked a mathematician for the next simplest function from a point set in the plane to a point set in the plane, they’d say the identity function. That’s not silly, but if you want a theory that’s not just a recapitulation of your evidence, it won’t help you.
(Projections or other ways of taking subsets are out because there’s no natural way to pick individual points out.)
(Things like the mean are out because of measure-theoretic difficulties.)
So if you asked a mathematician for the next simplest function from a point set in the plane to a point set in the plane, they’d say the convex hull. It has all sorts of nice properties (idempotent, nondecreasing, etc.) and just sort of feels like the right thing to do with a point set.
On the other hand, sticking line segments between the points (and in a hard to specify order) is a few more “next”s down the list and only makes sense for finite point sets with pretty special geometry anyways.
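Not an argument, just a toy illustration of the two properties mentioned above. A minimal sketch with made-up points, assuming numpy and scipy are available: the convex hull is idempotent (the hull of its own vertices is the same region) and nondecreasing (the hull of a subset of the observations lies inside the hull of all of them).

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.random((20, 2))          # 20 made-up "observations" in the plane

hull = ConvexHull(points)

# Idempotent: the hull of the hull's own vertices is the same region.
hull_again = ConvexHull(points[hull.vertices])
print(np.isclose(hull.volume, hull_again.volume))      # True (.volume is area in 2D)

# Nondecreasing: the hull of a subset of the observations fits inside the full hull.
def contains(outer, pts, tol=1e-9):
    # outer.equations rows are (A, b) with A @ x + b <= 0 for points inside the hull
    return np.all(outer.equations[:, :-1] @ pts.T + outer.equations[:, -1:] <= tol)

sub_hull = ConvexHull(points[:10])
print(contains(hull, points[:10][sub_hull.vertices]))   # True
```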
Ovulation Leads Women to Perceive Sexy Cads as Good Dads (HT: Heartiste)
I think it isn’t much disputed that women seem to find dark-triad and some other personality traits sexier when ovulating, so to me the above sounded like a clear example of the halo effect. Sexy men will seem smarter and kinder than they are, because any positive trait tends to beef up our perceptions of people in other areas as well. But even as my mind slowly noted that this should affect how they see the odds of a man caring for other women’s children, and that I don’t have any info to suggest that women are more prone to the halo effect for male sexiness in general during ovulation, I saw that the authors had considered this:
I guess heterosexual women should be conscious of this bias, especially those who want to start a family, or perhaps when judging which adult men they want their children to interact with in other contexts. While they obviously aren’t wrong about how sexy they find someone, they are biased when it comes to the other traits that, judging from their stated preferences, they seek to maximize in such men.
LessWrong’s worst posts:
http://lesswrong.com/r/all/top/?count=2811&after=t3_327
The most heavily downvoted post in Less Wrong history is actually not on that list. Curi’s “The Conjunction Fallacy Does Not Exist” was removed by Eliezer on the basis of it being massively downvoted and too stupid to productively discuss.
(If anyone wishes to see this article, it can be read on Curi’s user page, but one can’t view it or its comments directly.)
The link doesn’t work.
It works for me, but only after changing my preferences to view articles with lower scores (my cutoff had been set at −2).
It works for me. ???
I can’t get it to work either. Maybe just c&p the text?
Dinosaur Comics today involves WBE
http://www.qwantz.com/index.php?comic=2208
I once asked Ryan North, via the twitters, if he was a transhumanist. He said he wouldn’t accept the label, but T-Rex is obviously a transtyrannosaurist.
Some vague ideas about decision theory math floating in my head right now. Posting them in this raw state because my progress is painfully slow and maybe someone will have the insight that I’m missing.
1) thescoundrel has suggested that spurious counterfactuals can be defined as counterfactuals with long proofs. How far can we push this? Can there be a “complexity-based decision theory”?
2) Can we write a version of this program that would reject at least some spurious proofs?
3) Define problem P1 as “output an action that maximizes utility”, and P2 as “output a program that solves P1”. Can we write a general enough agent that solves P1 correctly, and outputs its own source code as the answer to P2? To stop the agent from solving P1 as part of solving P2, we can add a resource restriction to P2 but not P1. This is similar to Eliezer’s “AI reflection problem”.
Thoughts on problem 3:
Although A(#P2) won’t return #A, I think eval(A(#P2)(#P2)) will return A(#P2), which will therefore be the answer to the reflection problem.
It’s trivial to do at least some:
Sure, but that’s too trivial for my taste :-( You understand the intent of the question, right? It doesn’t call for “an answer”, it calls for ideas that might lead toward “the answer”.
To tell the truth, I just wanted to write something, to generate some activity. The original post seems important and useful, in that it states several well-defined and interesting problems. Seeing it sit alone in the relative obscurity of an Open Thread even for a day was a little disheartening :)
Wikipedia experiment finished: http://www.gwern.net/In%20Defense%20Of%20Inclusionism#sins-of-omission-experiment-2
Close to zero resistance to random deletions. Most disappointing.
I was persuaded.
The Essence Of Science Explained In 63 Seconds
A one-minute piece of Feynman lecture candy wrapped in reasonable commentary. Excellent and, most importantly, brief intro-level thinking about science and our physical world. Apologies if it has been linked to before, especially since I can’t say I would be surprised if it was.
I know quite a bit about crypto and digital security. If I could find the time to write something, which won’t be soon, is there something that would interest LessWrong? (If you just want to read crypto stuff, Matthew Green’s blog is good; “how to protect a nascent known-to-be-actually-working GAI from bad guys” will read like “stay the fsck away from any mobile phones and the internet and don’t trust your hardware; bring an army”, which won’t be terribly interesting.)
First time this has happened since the 30-day karma score was implemented. LessWrong addictions are apparently easy to squelch!
I also like this one. Lucky timing to check in at the round number!
Go you!
I’ve noticed your absence, FWIW.
Can someone help me corrupt this wish?
“Give humans control over their own sensory inputs.”
Everyone falls into a coma where they get to control their own individual apparent reality. Meanwhile they all starve to death or run into other problems because nothing about the wish says they need to stay alive.
Doesn’t discontinuation of the sensory experience count as a lack of control?
Well, the wish doesn’t say “give me the ability to control my sensory experience forever”. If you die, your ability to control your body is discontinued, but that doesn’t mean you couldn’t control your body.
can you expand a little on this?
Suppose that a person with locked-in-syndrome wished for voluntary control of their body. Their disorder is completely cured, and they gain the ability to control their body like anyone else. Would you say that their wish wasn’t really granted unless they never die?
personally yes, but I realize this is strange.
Hmm, possibly. But everyone stuck in their own sensory setting with no connection to anyone else is still pretty bad.
You aren’t necessarily stuck anywhere. How the statement “I want to talk to Brian” gets unpacked once the wish has been implemented depends on how “control” gets unpacked. Any statement we make about sensory experiences we wish to have involves control on only one conceptual level. We can’t control what Brian says once we’re talking to him, but we never specified that we wanted control over that either. I think that you wind up with a conflict where you ask for control on the wrong conceptual level, or two different levels conflict. I’m having trouble coming up with examples, though.
And if “I want to talk to Brian” is parsed that way doesn’t that require telling Brian that someone wants to talk to him, which for at least a few seconds takes control away from Brian of part of his sensory input?
So a problem is that it would be impossible to know what options to make more obviously available to you. If the action space isn’t screened off, the number of options you have is huge. There’s no way to present these options to a person in a way that satisfies “maximum control”. As soon as we get into suggesting actions, we’re back to the problem of optimizing for what makes humans happy.
This is highly helpful BTW.
None of that control is automated, and this manual control is the only source of input.
hahaha please specify wavelengths of light that will hit each receptor. Very good.
Exactly! It’d be pretty sucky.
A low-inferential-distance perspective on the inferential distance concept.
I like the Operations Research subreddit. Other people looking for applied rationality might like it, too. This probabilistic analysis of problems with federal vanpools is a characteristic example.
Looks interesting; I’ve subscribed.
Large-scale genetic sequencing has for the first time been used to find the cause of a new illness. Short summary here and full article here. In this case, an individual had a unique set of symptoms, and by doing a full exome scan of him and his parents, researchers were able to locate the gene that was creating the problem and understand what was going wrong.
Setting up policies to discuss politics without being mind-killed -- I’m linking to this in its early phase because LWers might be interested in following the voluminous discussions on that site to see whether this is possible; it will be easier to start from the beginning, and also possible to make predictions.
I haven’t heard this problem mentioned on here yet: http://www.philosophyetc.net/2011/04/puzzle-of-self-torturer.html
What do you think of the puzzle? Do you think the analysis here is correct?
It’s a good puzzle, and the analysis dealing with it is correct.
How is it even possible for A and B to be indiscriminable, B and C to be indiscriminable, but A and C to be discriminable? It seems like if A and B cause the exact same conscious thoughts (or whatever you’re updating on as evidence), and B and C do, then A and C do. I think in practice, what’s more likely is that you can very weakly probabilistically discriminate between any two adjacent states.
If the difference between A and B is less than the observer’s just-noticeable-difference, and the difference between B and C is as well, it doesn’t follow that the difference between A and C is.
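A toy numerical illustration of this point (the numbers are made up, and a sharp threshold is a simplification of what is really probabilistic discrimination):

```python
jnd = 2.0                 # hypothetical just-noticeable difference
a, b, c = 0.0, 1.5, 3.0   # made-up stimulus intensities

print(abs(a - b) < jnd)   # True:  A and B are indiscriminable
print(abs(b - c) < jnd)   # True:  B and C are indiscriminable
print(abs(a - c) < jnd)   # False: A and C are discriminable
```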
Frank Arntzenius (a philosopher at Oxford) has argued something along these lines.
I don’t think that article is paywalled (though I’m using a university computer, logged on to my account so I’m not sure whether I automatically get passed through any paywall that may exist).
Chunking of sensory input happens at a lower layer in the brain than consciousness. So if you have learned that two stimuli are the same, they might be indistinguishable to you unless you spend thousands of hours deliberately practicing distinguishing them, even if there is a detectable difference and even if you can distinguish stimuli that are just a bit further apart.
Luke’s comment on just how arse-disabled SIAI was until quite recently (i.e., not to any unusual degree) inspired me to read Nonprofit Kit For Dummies, which inspired me to write a blog post telling everyone to buy it. Contains much of my bloviating on the subject of charities from LessWrong over the past couple of years. Includes extensive quote from Luke’s comment.
Does anyone know of any good online resources for Bayesian statistics? I’m looking for something fairly basic, but beyond the “here’s what Bayes’ theorem is” level that Khan Academy offers.
Pick up a used textbook for cheap. I don’t remember which one is good, but there’s a textbook recommendation thread somewhere.
I would be interested in setting up an online study group, preferably via google hangout or skype for several key sequences that I want to grok and question more fully. Anyone else interested in this?
I currently do not have time, but it may be helpful if you state which sequences you intend to look at.
Meta-ethics for starters.
Good choice—I’ve read all of it, and I still don’t have a really good idea what it says. Please do post something if you can make an accessible and concise summary.
I’m hoping to do some reading on music cognition. I’ve got a pretty busy few months ahead, so I can’t say how far I’ll get, and I’m not used to reading scientific literature, so it’ll be slow going at first I’m sure, but I’d like to get a better grasp of this field.
In the vein of lukeprog’s posts on scholarship, does anyone here know anything on this field, or where I might begin to learn about it? I’ve got access to a library with a few books dealing with the psychology of music and I can get online access to a small few journals. I’ve also read most of Levitin’s Music and Your Brain which is a reasonably good pop-science (and largely pop-music) introduction to the topic, and Wikipedia actually has a graded reading list that seems promising.
Any other thoughts?
Suppose that, after some hard work, EY or someone else proves that a provably-friendly AGI is impossible (in principle, or due to it being many orders of magnitude harder than what can reasonably be achieved, or because a spurious UFAI is created along the way with near certainty, or for some other reason).
What would be a reasonable backup plan?
Try really hard to get reasonably safe oracle AI? Focus on human uploading first?
All good questions, I hope someone at SI asks them, instead of betting on a single horse.
This play in NYC looks pretty sweet. It looks like it touches on concepts like Godshatter and ideas from Three Worlds Collide, with a healthy understanding of the idea that technology could make us very, very different from who we are now.
Looks like it’s stopped running for now, though.
To give potentially interested parties a greater chance of learning about Light Table, I’m reposting about it here:
It sounds like it might be a useful program for any complicated project, even if the project isn’t a program.
As a programmer, I am tempted to say “unless the project is actually a large program”. “Large” is relative, of course.
Of course, I had seen LightTable before the comment on LW, and I tried to imagine applying it to a basically data-crunching (as opposed to mostly UI) program. Visualising computation may look like a good idea. Unfortunately, at the level demonstrated in the demo, it is simple enough that anyone who even tries to write a big program can already keep it in mind.
When you have multiple layers of abstraction and each of them has a reason to do non-trivial double loops (which is not that much, if you can say what each level is doing and why), what we see in the demo would become overcluttered. I am not sure whether LightTable will grow into a tool that makes UI fine-tuning more comfortable, or will try to invent approaches that work for back-ends and isolated data-crunching. In the former case it will stay a niche thing but may become a well-polished narrow-focus tool. In the latter case it will have to transform so much that it is hard to tell whether the current developer will succeed.
I seem to remember someone posting on Less Wrong about software that locks your computer to only doing certain tasks for a given period (to fight web-surfing will-power failures, I guess). After some cursory digging on the site, I couldn’t find it. Does anybody remember the thread where this kind of self-binding software was discussed, or at least the name of some brand of this software?
(Ideally I would like to read the thread first, and get a sense of how well this works.)
How old are you?
I’m 41. I’m curious what the age distribution is in the LW community, having been to one RL meetup and finding I was the oldest one there. (I suspect I was about 1.8 times the median age.)
I love what the LW community stands for, and age isn’t a big deal… youthful passion is great (trying to hold onto mine!) and I suspect there isn’t a particularly strong correlation between age and rationality, but life experience can be valuable in these discussions. In particular, having done more dumb things and believed more irrational things, and gotten over them.
Iodine post up: http://www.gwern.net/Nootropics#iodine
I’ve been working on this off and on for months. I think it’s one of my better entries on that page, and I imagine some of the citations there will greatly interest LWers—e.g. not just the general IQ impacts, but that iodization causes voters to vote more liberally.
I also include subsections for a meta-analysis to estimate effect size, a power analysis using said effect size as guidance to designing any iodine experiment, and a section on value of information, tying all the information together.
My general conclusion is that it looks like I should take some iodine, but currently self-experimentation is just too hard to do for iodine.
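For readers unfamiliar with the power-analysis step mentioned above, here is a minimal sketch of what such a calculation looks like; the effect size, alpha, and power below are placeholder assumptions, not the values derived in the post.

```python
from statsmodels.stats.power import TTestIndPower

# Placeholder assumptions: standardized effect size d = 0.3, 5% alpha, 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.3,
                                          alpha=0.05,
                                          power=0.8,
                                          alternative='two-sided')
print(round(n_per_group))   # subjects needed per group under these assumptions
```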
Ever since getting an apartment of my own I’ve found that, well, I spend more time alone than I used to. Rather than try to take every possible action to ensure that I’m alone as little as possible (which is desperate some of the time and silly a lot of the time) I want to try to learn to like being alone.
So what are some reasons to enjoy spending time alone as opposed to spending it with other people? Or other suggestions about how to self-modify in this way?
Not sure if this counts as “alone” but you could schedule regular skype video calls with friends/relatives. It took some doing, but I’m a lot happier living alone when I still see and talk to my family a few times a week. I’m actually surprised more people don’t do this.
Thank you for your advice, but I don’t think that’s exactly what I’m looking for. Rather than seek out human contact because I’m not comfortable being alone, I would rather be comfortable being alone and then seek out human contact for its own sake.
I’m looking for a book recommendation on anthropology. I have almost no prior knowledge of the field. I’m after something roughly equivalent to what The Moral Animal was for evolutionary psychology: from-the-ground-up stuff that works by itself and doesn’t assume significant background knowledge or require further reading for a payoff. An easily accessible pop-writing approach à la The Moral Animal is a must-have; I can’t read academic textbooks.
I’m reading Ursula Vernon’s Digger (nominated for the Graphic Novel Hugo), and it’s very much in the spirit of extrapolating logically from odd premises. Digger (a wombat) is sensible and pragmatic and known to complain about how irresponsible Dwarves are for using magic to shore up their mines.
My major (field of study) in college/university is most likely going to be philosophy. I’m an avid reader of this blog, and as such have internalized many LW concepts and terminology, particularly relating to philosophy. In short, should I cite this site if I make use of a LW concept—learnt several years ago on here—in a paper for a philosophy class? If yes (and I’m leaning towards yes), how?
In general, if one internalizes a blog-specific idea off of the Internet and then, perhaps unintentionally, includes it in a somewhat unrelated undergraduate paper, how does one go about referencing the blog—especially if the idea came from a comment that has since disappeared and/or cannot be found?
This is so far hypothetical, but I am sure that this situation will occur at least once in the next few years.
How you cite it depends on the citation format for the paper as a whole, but most major formats now have instructions on how to cite blogs. So check the reference book or website for whatever formatting style you’re using for its section on citing blogs. A decent example is the OWL’s guide to citing “electronic resources” in MLA, which is a fairly common style for philosophy papers.
Edit: fixed typo
An excellent debate between SIAI donor Peter Thiel and George Gilder on:
“The Prospects for Technology and Economic Growth”
I suggest skipping the first 8 minutes since they are mostly intro fluff. Thiel makes a convincing case that we are living in a time of technological slowdown. His argument has been discussed on LessWrong before.
I found Gilder so annoying (information does not trump the laws of physics!!) that I listened to this instead—Thiel and Niall Ferguson at Harvard.
Does Gilder say anything intelligent? If he doesn’t, does he get squashed flat?
There is an obvious-in-retrospect symmetry between overconfidence and underconfidence in one’s predictions. Suppose you have made a class of similar predictions of the form A and have on average assigned 0.8 confidence to them, while 60% actually came true. You might say that you are suffering from overconfidence in your predictions. But when you predict A with confidence p, you also predict ~A with confidence (1-p): you have on average assigned 0.2 confidence to your ~A-type predictions, while 40% actually came true. So if you are overconfident in your A-type predictions, you’re bound to be underconfident in your ~A-type predictions.
Intuitively, overconfidence and underconfidence feel like very different sins. It looks like this is due to systematic tendencies in what we view as a prediction and what we don’t—in the exercise above, assuming the set of A-type beliefs is self-selected, it seems that the A-type beliefs count as “predictions” whereas ~A-type beliefs don’t. Some potential factors in what counts as a “prediction”: belief > 0.5; hope that the prediction will come true; the prediction is very specific and yet assigned a substantial credence (say, above 0.1), so is supported by a lot of evidence, whereas the negation is a nonspecific catch-all.
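A minimal sketch of the symmetry, with made-up numbers matching the example above: the same prediction record that looks overconfident when scored on the A-type statements looks underconfident when scored on their negations.

```python
confidences = [0.8] * 10                      # stated confidence in each A-type prediction
outcomes    = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]  # 6 of 10 came true

avg_conf_A    = sum(confidences) / len(confidences)                 # 0.8
freq_A        = sum(outcomes) / len(outcomes)                       # 0.6 -> looks overconfident

avg_conf_notA = sum(1 - c for c in confidences) / len(confidences)  # 0.2
freq_notA     = sum(1 - o for o in outcomes) / len(outcomes)        # 0.4 -> looks underconfident

print(avg_conf_A, freq_A, avg_conf_notA, freq_notA)
```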
Yeah, we have discussed this before.
Question about anti-akrasia measures and precommitments to yourself.
Suppose you need to do action X to achieve the most utility, but it’s somewhat unpleasant. To incentivize yourself, you precommit to give yourself reward Y if and only if you do action X. You then complete action X. But now reward Y has become somewhat inconvenient to obtain.
Should you make the effort to obtain reward Y, in order to make sure your precommitments are still credible?
Is there an equivalent reward that is easier to obtain?
Can you provide some specific examples?
Let me make one.
Suppose you are reading your favorite blogs, when the idea strikes you, “Okay, I need to do X, but I can’t do it without an incentive. I shall order chicken wings, which are delicious, upon X’s completion.”
Dozens of minutes later, X is finished! But wait! You fell victim to the planning fallacy! Everywhere in the city that delivers chicken wings is closed now because X took longer than you thought it would.
In this case, it would be fairly senseless to wait until the next day to order the wings, as by then the reward would be completely disconnected from the action. Driving 35 minutes to get them would also be pretty senseless. I don’t know about driving 15 minutes.
This seems like a fairly difficult problem, but also one that simply will not occur very often, especially if you make your incentive something that’s unlikely to be difficult to obtain by the time you finish X.
That’s how I interpreted it as well, but I’m not sure the OP is distinguishing the signalling purpose of pre-commitment strategies from the mechanisms of pre-commitment.
Reputations of pre-commitment are about signalling credible consequences in circumstances of asymmetric information. When bargaining with oneself, information is about as symmetric as it can get. It’s not like you mistrust your future self’s willingness to go through with getting chicken wings. Any obstacle to getting them is transparent to all parties (you), and shouldn’t impact your future expectation of being able to reward yourself unless you’re staggeringly incompetent at obtaining chicken wings. If that’s the case, you’ll probably factor this in when planning your incentive.
Mechanisms of pre-commitment are a more salient tool when bargaining with oneself over time (cf. dynamic inconsistency ), but only when your goals are inconsistent over time. Post-X you presumably wants chicken wings as much as pre-X you, but is more informed about the cost of obtaining them. There is presumably a level of expense pre-X you would sensibly commit to for the specified reward. If some sort of catastrophe occurred as soon as you’d finished X, pre-X you wouldn’t expect post-X you to crawl through the dust with your one remaining limb muttering “must...get...chicken...wings...”
The issue seems to boil down to “are you staggeringly incompetent at rewarding yourself? If not, don’t worry.”
Are you entering into a sub function of the original x/y assessment here? As in if X is done, Y, but Y is a function in itself of assessing the optimal reward for X?
If it’s still important to add a reward of Y (in addition to the personal value of having completed X), you probably need to substitute with something novel and maintain the understanding that it is a reward for X (even if not the originally scoped one).
It depends on the difficulty of obtaining Y relative to its pleasantness, but in general I would say yes. Specifically, good anti-akrasia measures are valuable enough that you should be willing to go through quite a bit of effort to preserve them. Thus, if the effect of obtaining Y in these circumstances is to preserve your precommitment ability, then it is worth expending a large amount of effort on Y. But you should also keep in mind the possibility that you will develop a negative association between fulfilling your precommitments and then having to go through a large amount of effort for a reward that isn’t worth it. Is there some “guilty pleasure” or other suitable reward that you could substitute for Y that would keep to the spirit of the bargain you made with yourself?
It’s not very related to LW or rationality (although in technical terms it touches on Pascal’s Mugging), but I want to post this underrated “creepypasta” anyway; it’s one of my favourites and I remembered it after flipping through that hippie blog that Will linked me to: