Open Thread, April 2011
It seems we have agreed that open threads will continue but that they will go in the Discussion section, so here’s this month’s thread.
I propose we make a series of exercises to go along with the articles of the sequences. These exercises could help readers know how well they understood the material as well as help them internalize it better.
This looks like another of the good ideas people have on here that then doesn’t get done. I’m sick of that happening.
If folkTheory creates one exercise as an example, I will make another. I hereby commit to this. If I don’t follow through within 2 weeks of folkTheory posting his example, please downvote this comment to negative 10. FolkTheory, please PM me when you have yours so I make sure I see it. Thanks.
Everybody else who wants to see this succeed, feel free to post a similar comment.
EDIT: I’m doing an exercise for Words As Hidden Inferences, and will post the exercise as a discussion post no later than April 17, 2011. If it doesn’t match what folkTheory was envisioning, I’ll make edits but won’t lose the karma.
EDIT2: It’s up.
I’m glad this is actually happening, and at great speed.
I hereby commit to doing exercises for ‘Belief in Belief’ and ‘Bayesian Judo’ by what appears to be our standard commitment: Deliver by April 17, 2011 or downvote to −10
Note: It’ll be a combined exercise for those two articles as they’re very related.
Same commitment.
I propose we create posts in /discussion/ for each post in the sequence containing exercises for that post. I will create a Wiki page now where people can indicate that they have taken charge of creating exercises for any specific post. If I do not edit this comment with a link to said Wiki page within two days, downvote this comment to −10.
Edit: Project page. If I do not take charge of creating exercises for at least one page within two days, downvote this comment to −5.
Edit: Claimed “Making Beliefs Pay Rent (in Anticipated Experiences)”. If I do not submit a page of exercises within two weeks, downvote this comment to −10.
I claimed ‘Belief in Belief’ and ‘Bayesian Judo’, see here.
Added to index.
Edit: I’d like to beta for that one—I’ll PM you an email address.
Unrelated question: is the “if I don’t do this, downvote to −10” meme new?
It seems to have started with this thread.
I’ve seen it before, but it’s not a common thing.
Would you be willing to beta read my exercise for “Words as Hidden Inferences”? If you say yes I’ll email you a Word document.
Absolutely, I’d love to.
Now that this is happening, I suggest a post (maybe discussion, maybe main) noting that it is happening and with progress and commitments so far.
Seconding. As the one who proposed it, I’d suggest folkTheory should make it.
Done.
THIS. THIS. Read parent.
I was thinking of starting a sequence of articles summarizing Heuristics and Biases by Kahneman and Tversky for people who don’t want to buy or read the book.
I bought it, and it seems like something like this would help me actually stick with reading it long enough to finish. And make it more memorable.
Would people want that?
Edit: I guess the answer is Yes. I should make time for this.
Wikipedia has the “Simple English” version; maybe there could be a similar parallel version of the LessWrong wiki? Although I find reading the Simple English Wikipedia a rather mind-numbing experience.
Are you familiar with youarenotsosmart.com? It might be more what you’re looking for.
It would be fun, but I’m not sure how memorable it would be. Maybe do them as jokes?
Couldn’t hurt to do as a recap though.
Like, weighty and burned into my brain in a way that makes it a part of my natural reaction to things.
I guess if the list were short enough, though, I could just memorize it and run through it whenever I was worried about a bias.
No, but I associate it with length.
Like, I’m normally more affected by novels than blog posts.
By all means :-) Links to relevant Sequences articles should be achievable as well.
Yeah. I intend to use existing material whenever appropriate.
IIRC, there are quite a few articles on specific cognitive biases floating around here already; they’re just not well indexed.
You may find this site interesting as well.
Thanks, this is really helpful.
EDIT: Would it make sense to just try and get this guy to post on LW himself?
Have we ever tried to do that before?
Please do this.
I’ve just read “Hell is the Absence of God” by Ted Chiang, and upon finishing it I was blown away to such an extent that I was making small inarticulate sounds and distressed facial expressions for about a minute. An instant 10⁄10 (in spite of its great ability to cause discomfort in the reader, but hey, art =/= entertainment all the time).
I’m compelled to link to an HTML mirror, but I suppose it doesn’t have the author’s permission. Anyone who’d like to read the story now can look at the first page brought up by googling the title. This is the book in question.
I’m curious as to the opinions of those who have read it.
I think people on Less Wrong might enjoy my personal favourite Ted Chiang story “Understand”, about nootropics. It’s also been made available in full on Infinity Plus with permission, here: http://www.infinityplus.co.uk/stories/under.htm
Ted Chiang is a master. If you haven’t, I recommend reading at least the rest of the stories in the collection that contains that one.
To me, it felt like an extrapolation of a lot of existing beliefs. IF you believe that god causes miracles and sends people to heaven or hell, and ALSO that god is unknowable to lesser beings, this is the kind of world that you get.
Emotionally very intense, but essentially an argument against a point of view that I don’t have a connection to—the idea that God is substantially inimical to people, but wants worship.
I was raised Jewish (the ethnicity took, the religion didn’t), so I fear malevolent versions of Christianity, but I don’t exactly hate them in quite the way that people who expect Christianity to be good seem to.
ETA: It may not be a coincidence that Chiang’s “Seventy-Two Letters” is one of my favorites among his stories.
James Morrow (another sf author who spends a lot of time poking at Christianity) doesn’t do much for me, either.
I seem to be jumping to conclusions about your reaction. What do you think made the story so affecting for you?
I just read it because of this comment. I was pretty impressed by the few Chiang stories I’ve read before (Nancy mentions “Seventy-Two Letters” which I was amazed by). He has a very smooth prose style that reminds me of one of my favorite SF authors, Gene Wolfe, and seems to have an intellectual depth comparable to another favorite of mine, Jorge Luis Borges.
I have no idea what to make of this one. I’m baffled. I’m horrified, I think. The final lines twist the dagger. Do I take it as a reductio of divine command theories of morality? As an investigation of true love? Or what?
There are small notes attached to each story in my book. The note to this one contains:
(…) For me one of the unsatisfying things about the Book of Job is that, in the end, God rewards Job. (…) One of the basic messages of the book is that virtue isn’t always rewarded; bad things happen to good people. Job ultimately accepts this, demonstrating virtue, and is subsequently rewarded. Doesn’t this undercut the message? It seems to me that the Book of Job lacks the courage of its convictions: if the author were really committed to the idea that virtue isn’t always rewarded, shouldn’t the book have ended with Job still bereft of everything?
The story immediately reminded me of the Book of Job, and the note confirmed my suspicion.
A primary role of the Book of Job in the Bible is the reconciliation of reality with a belief in God. It is a crucial point because the empirically experienced reality is that good and bad things happen to people without the apparent influence of some higher being. People may take (or historically have taken) the grandiose and fantastic biblical stories of God’s exploits at face value, but one’s conviction can endure only so much of the stress that arises from faith’s incongruity with everyday reality. This makes the Book of Job exceedingly self-conscious by the Old Testament’s standards; it has to be precisely aware of how faith and reality work and the differences between the two, which makes the book sound like it was written by an atheistic marketing expert. But because there is no God, the tension cannot be fully neutralized no matter how clever the moralizing is. Thus, the purpose of Job’s ultimate reward is to “bribe” the readership into accepting the moral of the story even though the bribe itself contradicts that moral. The bribe cannot be left out of the Book of Job, because then faith would either turn into nothing—because there is no morally meaningful influence from any agent—or believers would have to believe in an explicitly malevolent deity. The illusory promise of divine reward can’t be fully disposed of. There actually is a limit to how morally repugnant your religion can be.
Back to Chiang’s story. Nancy Lebovitz thought that it expresses the idea that God is evil but wants worship. I think this is not the case; rather, it is a quite faithful reiteration of the Book of Job, with the difference that it tries to realize Job’s message to its full and horrific extent.
Now, the workings of Chiang’s mortal world are essentially the same as those of our world; the miracles and punishments seem to be genuinely random, much like most real-world accidents and flukes of probability. The angels are just another type of accident. The note also says:
Thinking about natural disasters led to thinking about the problem of innocent suffering. An enormous range of advice has been offered from a religious perspective to those who suffer, and it seems clear that no single response can satisfy everyone; what comforts one person inevitably strikes someone else as outrageous.
However, the epistemic situation of the inhabitants of Chiang’s world differs, because they have strong evidence of God, Heaven and Hell. This, I think, is meant to illustrate the general situation of religious people: they live in a real world and believe in a world of God, Heaven and Hell. The blatant and almost parodic depiction of divine evidences in Chiang’s story serves to draw attention away from the boring usual atheism-vs.-theism debate; here a theistic epistemic situation is the premise. In a way, Chiang’s mortal world depicts the world of real-life theists, with Heaven and Hell representing the two ways they can go. Heaven is blind faith, Hell is atheism, and the middle world is the unstable world of doubts, rationalizations and constant inner conflicts. It is quite a masterful spin on the Christian universe, where the middle world is also an unstable stage, but with the conflicting forces of moral good and bad.
In the story, Heaven is associated with the heavenly light that actually makes one blind. The blind-faith scenario of Heaven is a total rejection of all individual sense of morality. The Hell scenario is the “decide for yourselves” one. Because of the aforementioned parallel between Chiang’s mortals and religious people, the main point of the story, I believe, is that if you believe in a God that doesn’t exist, you are going to be pushed around by a neutral universe anyway, and trying to reconcile faith with reality would only cause more mental anguish. If you want to permanently keep your faith, you have to make yourself completely and irreversibly blind, and be ready to accept an arbitrary amount of potential suffering. Just like Job did.
Aside from all of the above, there is also the subject of Sarah, the protagonist’s deceased wife, but I haven’t yet thought about that in detail. Plus there are the marvelous depictions of lots of religion-vs.-innocent-victims coping mechanisms, etc.
A whole book of his is available on Google Books. I’ve read the first 2.5 stories so far and they are all good, but varying shades of unpleasant.
I’ve read it and had the same reaction. Most of Chiang’s fiction is very good, but this story is my favorite.
There’s a fresh Metafilter thread on John Baez’s interview of Yudkowsky. It also mentions HP:MoR.
Noticed this comment:
So people actually do start thinking of the Enlightenment era school of philosophy, like some earlier commenters feared. I also remembered a couple of philosophy blog posts from a few years ago, The Remnants of Rationalism and A Lesson Forgotten, which seem to work from the assumption that ‘rationalism’ will be understood to mean an abandoned school of philosophy.
Redefining established terms is a crank indicator, so stuff like this might be worth paying attention to.
I think Eliezer can’t be reasonably accused of trying to redefine “rationality” and the problem is on the part of the Metafilter commenter. It seems easy enough to fix though. Just point them to http://en.wikipedia.org/wiki/Rationality or http://books.google.com/books?id=PBftMFyTCR0C&lpg=PA3&dq=rationality&pg=PA3#v=onepage&q&f=false
Good call. There being an Oxford Handbook of Rationality with a chapter on Bayesianism seems to show that the term is acquiring new connotations on a bit wider scope than just on LW.
Tangentially, looking through this, I note that it appears to address the circularity of basing utility on probability and probability on utility. It claims there’s a set of axioms that gets you both at once, and it’s due to Leonard Savage, 1954. How has this gone unmentioned here? I’m going to have to look up the details of this.
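For reference, here is my rough gloss of Savage’s result (a paraphrase from memory, not his exact formulation), sketched in LaTeX:

```latex
% Rough paraphrase of Savage (1954), not his exact statement:
% if a preference relation $\succsim$ over acts $f : S \to X$
% satisfies his postulates P1--P7, then there exist a unique
% finitely additive probability measure $P$ on the states $S$ and
% a utility function $u$ on the outcomes $X$, unique up to positive
% affine transformation, such that
\[
f \succsim g
\iff
\int_S u(f(s))\, dP(s) \;\ge\; \int_S u(g(s))\, dP(s)
\]
```

So the probability and the utility are extracted jointly from a single preference ordering, which is how the apparent circularity gets cut.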
We need a decent “Bayesian epistemology” article on LW. The SEP one may suck. And EY’s “Intuitive Explanation” is, IME, nothing of the sort.
If the Metafilter commenter is saying that the book is mistitled because rationalism is the opposite of empiricism, his or her comment doesn’t make sense considering that the book’s title uses “rationality”, not “rationalism”. (Compare Google hits for rationality versus rationalism.)
I used to have a hobby of reading Christian apologetics to get a better understanding of how the other side lives. I got some useful insights from this, e.g. Donald Miller’s Blue Like Jazz was eye-opening for me in that it helped me understand better the psychology of religious faith. However, most books were a slog and I eventually found [more entertaining uses for my time](http://projecteuler.net/index.php?section=problems).
Earlier today I saw that a workmate of mine was reading Lee Strobel’s The Case for Faith. My policy is to not discuss politics or religion at work, so I didn’t bring it up there.
I hadn’t read that particular book before, so I was curious about its arguments. Reading over the summary, I remembered again why I quit reading Christian apologetics—they are really boring.
The subtitle of The Case for Faith is A Journalist Investigates the Toughest Objections to Christianity, and it is quite untrue. I can almost dismiss each chapter in the time it takes to yawn. Even if Strobel had good answers to the Problem of Evil, or proved that religious people historically have been less violent than non-religious people, or somehow found a gap in the current understanding of evolution, he would still be leagues away from providing evidence for a god, let alone his particular god.
I remember being similarly bored by the Christian-turned-atheist John Loftus’s book Why I Became an Atheist. A common criticism of atheist writers is that they don’t engage the more sophisticated arguments of theists. This book illustrates why—the sophisticated arguments are stupid. Loftus accepts Christian scholars’ ideas, arguing within spaces previously occupied by dancing angels (e.g. he says on p. 371 “In a well-argued chapter… Lowder has defended the idea that Jesus’ body was hastily buried before the Sabbath day… but that it was relocated on the Sabbath Day to the public graveyard of the condemned...”).
Most of us here would probably lose a live debate in front of an audience against someone like Lee Strobel. Even so, it’s a little disappointing to me that even the most skilled theist debater’s signature attack relies on bits like “This first cause must also be personal because there are only two accepted types of explanations, personal and scientific, and this can’t be a scientific explanation.” Because winning the debate by refuting that would be a waste of intellect.
Running Towards the Gunshots: A Few Words about Joan of Arc was the first thing which gave me a feeling of why anyone would want to be Catholic. However, that’s the emotional side, not the arguments.
tl;dr (and be warned, the piece is highly political): Joan of Arc is the patron saint of disaffected Catholics. Not only does the rant give a vivid picture of what it’s like to love Catholicism; the religion is so large and so old that there’s a reasonable chance it will have something to suit a very wide range of people.
Per talk page—I have just updated the jargon file on the wiki, making it actually a list of jargon with definitions. I’ve also folded in the previous acronym file, as a jargon file should be a single page. Point your n00bs here. Since it’s a wiki, feel free to fix any of my quick one-line definitions you don’t like.
Bayesians caught smuggling priors into Rotterdam harbor
(I’m new here and don’t have enough karma to create a thread, so I am posting this question here. Apologies in advance if this is inappropriate.)
Here is a topic I haven’t seen discussed on this forum: the philosophy of “Cosmicism”. If you’re not familiar with it check Wikipedia, but the quick summary is that it’s the philosophy invented by H. P. Lovecraft which posits that humanity’s values have no cosmic significance or absolute validity in our vast cosmos; to some alien species we might encounter or AI we might build, our values would be as meaningless as the values of insects are to us. Furthermore, all our creations and efforts are ultimately futile in a universe of increasing entropy and astrophysical annihilation. Lovecraft’s conclusion is: “good, evil, morality, feelings? Pure ‘Victorian fictions’. Only egotism exists.”
Personally I find this point of view difficult to refute—it seems as close to the truth about “life, the universe and everything” as one can get while remaining consistent with our current understanding of the universe. At the same time, such a philosophy is rather frightening, in that a world of egomaniacal cosmicists who consider human values to be meaningless would seem to be highly unstable and insane.
I don’t claim to be an exceptionally rational person, so I’m asking the rationalists of this forum: what is your response to Cosmicism?
cousin_it and Vladimir_Nesov’s replies are good answers; at the risk of being redundant, I’ll take this point by point.
The above is factually correct.
The phrases “cosmic significance” and “absolute validity” are confused notions. They don’t actually refer to anything in the world. For more on this kind of thing you will want to read the Reductionism Sequence.
Our efforts would be “ultimately futile” if we were doomed to never achieve our goals, to never satisfy any of our values. If the only things we valued were things like “living for an infinite amount of time”, then yes, the heat death of the universe would make all our efforts futile. But if we value things that only require finite resources, like “getting a good night’s sleep tonight”, then no, our efforts are not a priori futile.
Egotism is an idea, not a thing, so it’s meaningless to say that it exists or doesn’t exist. You could say “Only egoists exist”, but that would be false. You could also say “In the limit of perfect information and perfect rationality, all humans would be egoists”, and I believe that’s also false. Certainly nothing you’ve said implies that it’s true.
The Metaethics Sequence directly addresses and dissolves the idea that everything seems to be meaningless because there is no objective, universally compelling morality. But the Reductionism Sequence should be read first.
Very well expressed. Especially since it links to the specific sequence that deals with this instead of generally advising to “read the sequences”.
Wow, fantastic, thank you for this excellent reply. Just out of curiosity: is there any question this “cult of rationality” doesn’t have a “sequence” or a ready answer for? ;)
The sequences are designed to dissolve common confusions. By dint of those confusions being common, almost everybody falls into them at one time or another, so it should not be surprising that the sequences come up often in response to new questions.
You’re welcome. The FAQ says:
“[R]eality has a well-known [weird] bias.”
The standard reply here is that duh, values are a property of agents. I’m allowed to have values of my own and strive for things, even if the huge burning blobs of hydrogen in the sky don’t share the same goals as me. The prospect of increasing entropy and astrophysical annihilation isn’t enough to make me melt and die right now. Obligatory quote from HP:MOR:
So in other words you agree with Lovecraft that only egotism exists?
Wha? There’s no law of nature forcing all my goals to be egotistical. If I saw a kitten about to get run over by a train, I’d try to save it. The fact that insectoid aliens may not adore kittens doesn’t change my values one bit.
That’s certainly true, but from the regular human perspective, the real trouble is that in case of a conflict of values and interests, there is no “right,” only naked power. (Which, of course, depending on the game-theoretic aspects of the concrete situation, may or may not escalate into warfare.) This does have some unpleasant implications not just when it comes to insectoid aliens, but also the regular human conflicts.
In fact, I think there is a persistent thread of biased thinking on LW in this regard. People here often write as if sufficiently rational individuals would surely be able to achieve harmony among themselves (this often cited post, for example, seems to take this for granted). Whereas in reality, even if they are so rational to leave no possibility of factual disagreement, if their values and interests differ—and they often will—it must be either “good fences make good neighbors” or “who-whom.” In fact, I find it quite plausible that a no-holds-barred dissolving of the socially important beliefs and concepts would in fact exacerbate conflict, since this would become only more obvious.
Negative-sum conflicts happen due to factual disagreements (mostly inaccurate assessments of relative power), not value disagreements. If two parties have accurate beliefs but different values, bargaining will be more beneficial to both than making war, because bargaining can avoid destroying wealth but still take into account the “correct” counterfactual outcome of war.
Though bargaining may still look like “who whom” if one party is much more powerful than the other.
How strong do the perfect-information assumptions need to be to guarantee that rational decision-making can never lead both sides in a conflict to precommit to escalation, even in a situation where their behavior has signaling implications for other conflicts in the future? (I don’t know the answer to this question, but my hunch is that even if this is possible, the assumptions would have to be unrealistic for anything conceivable in reality.)
And of course, as you note, even if every conflict is resolved by perfect Coasian bargaining, if there is a significant asymmetry of power, the practical outcome can still be little different from defeat and subjugation (or even obliteration) in a war for the weaker side.
By ‘negative-sum’ do you really mean ‘negative for all parties’? Because, taking ‘negative-sum’ literally, we can imagine a variant of the Prisoner’s Dilemma where A defecting gains 1 and costs B 2, and where B defecting gains 3 and costs A 10.
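Spelling out the arithmetic for that variant (a quick Python sketch; I’m assuming the two defection effects simply add when both defect):

```python
# Payoff deltas from a baseline of 0, per the numbers above:
# A defecting gains A 1 and costs B 2; B defecting gains B 3 and costs A 10.
def payoffs(a_defects, b_defects):
    a = b = 0
    if a_defects:
        a, b = a + 1, b - 2
    if b_defects:
        a, b = a - 10, b + 3
    return a, b

for a_d in (False, True):
    for b_d in (False, True):
        a, b = payoffs(a_d, b_d)
        print(f"A defects={a_d!s:<5}  B defects={b_d!s:<5}  ->  A={a:3}, B={b:3}, sum={a + b:3}")

# Every outcome involving defection has a negative sum, yet under
# mutual defection B still nets +1 (-2 + 3): "negative-sum" need not
# mean negative for every party.
```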
I suppose I meant “Pareto-suboptimal”. Sorry.
How does that make sense? You are correct that under sufficiently generous Coasian assumptions, any attempt at predation will be negotiated into a zero-sum transfer, thus avoiding a negative-sum conflict. But that is still a violation of Pareto optimality, which requires that nobody ends up worse off.
I don’t understand your comment. There can be many Pareto optimal outcomes. For example, “Alice gives Bob a million dollars” is Pareto optimal, even though it makes Alice worse off than the other Pareto optimal outcome where everyone keeps their money.
Yes, this was a confusion on my part. You are right that starting from a Pareto-optimal state, a pure transfer results in another Pareto-optimal state.
As I commented on What Would You Do Without Morality?:
Without an intrinsic point to the universe, it seems likely to me that people would go on behaving with the same sort of observable morality they had before. I consider this supported by the observed phenomenon that Christians who turn atheist seem to still behave as ethically as they did before, without a perception of God to direct them.
This may or may not directly answer your question of what’s the correct moral engine to have in one’s mind (if there is a single correct moral engine to have in one’s mind—and even assuming what’s in one’s mind has a tremendous effect on one’s observed ethical behaviour, rather than said ethical behaviour largely being evolved behaviour going back millions of years before the mind), but I don’t actually care about that except insofar as it affects the observed behaviour.
It’s perhaps worth pointing out that even as there is nothing to compel you to accept notions such as “cosmic significance” or “only egotism exists”, by symmetry, there is also nothing to compel you to reject those notions (except for your actual values, of course). So it really comes down to your values. For most humans, the concerns you have expressed are probably confusions, as we pretty much share the same values, and we also share the same cognitive flaws, which lead us to elevate what should be mundane facts about the universe into something with moral force.
Also, it’s worth pointing out that there is no need for your values to be “logically consistent”. You use logic to figure out how to go about the world satisfying your values, and unless your values specify a need for a logically consistent value system, there is no need to logically systematize your values.
Read the sequences and you’ll probably learn to not make the epistemic errors that generate this position, in which case I expect you’ll change your mind. I believe it’s a bad idea to argue about ideologies on object level, they tend to have too many anti-epistemic defenses to make it efficient or even productive, rather one should learn a load of good thinking skills that would add up to eventually fixing the problem. (On the other hand, the metaethics sequence, which is more directly relevant to your problem, is relatively hard to understand, so success is not guaranteed, and you can benefit from a targeted argument at that point.)
You know, I was hoping the gentle admonition to casually read a million words had faded away from the local memepool.
Your usage here also happens to serve as an excellent demonstration of the meaning of the phrase as described on RW. I suggest you try not to do that. Pointing people to a particular post or at worst a particular sequence is much more helpful. (I realise it’s also more work before you hit “comment”, but I suggest that’s a feature of such an approach rather than a bug.)
Do please consider the possibility that to read the sequences is not, in fact, to cut’n’paste them into your thinking wholesale.
TheCosmist: the sequences are in fact useful for working out what people here think, and for spotting when what appears to be an apposite comment by someone is in fact a callout. ciphergoth has described LW as “a fan site for the sequences”, which it’s growing beyond, but which is still useful to know as the viewpoint of many long-term readers. It took me a couple of months of casual internet-as-television-time reading to get through them, since I was actively participating here and all.
Sequences are a specific method of addressing this situation, not a general reference. I don’t believe individual references would be helpful, instead I suggest systematic training. I wrote:
You’d need to address this argument, not just state a deontological maxim that one shouldn’t send people to read the sequences.
I wasn’t stating a deontological maxim—I was pointing out that you were being bloody rude in a highly unproductive manner that’s bad for the site as a whole. “I suggest you try not to do that.”
Again, you fail to address the actual argument. Maybe the right thing to do is to stay silent, you could argue that. But I don’t believe that pointing out references to individual ideas would be helpful in this case.
Also, consider “read the sequences” as a form of book recommendation. Book recommendations are generally not considered “bloody rude”. If you never studied topology and want to understand the Smirnov metrization theorem, “study the textbook” is the right kind of advice.
Actually changing your mind is an advanced exercise.
Friendly AI: A Dangerous Delusion?
By: Hugo de Garis—Published: April 15, 2011
http://hplusmagazine.com/2011/04/15/friendly-ai-a-dangerous-delusion/
Hugo presents 3 main arguments:
The Evolutionary Engineering Argument.
The Cosmic Ray Argument.
The Naïve Asimov Argument.
They all look hopeless to me.
The latest XKCD was brilliant. :)
I have only just discovered that Hacker News is worth following. Since the feed of stuff I read is Twitter, that would be @newsycombinator. I started going back through the Twitter feed a few hours ago and my brain is sizzling. Note that I am not a coder at all, I’m a Unix sysadmin. Work as any sort of computer person? You should have a look.
The YC/HN community was initially built on Paul Graham’s essays, just like LW was built on Eliezer’s sequences. Those essays are really, really good. If you haven’t read them already, here’s a linky, start from the bottom.
I have indeed :-)
It’s annoying that @newsycombinator links to the pages themselves, not to the Hacker News discussion.
I actually got to OB/LW through Hacker News.
I have known about Hacker News for ages, mentally filing it away as yet another Internet news aggregation site. However, I just happened to look at @newsycombinator and was quite surprised at how much of it was gold.
It is another news aggregation service, but it just happens to be the best :). There is a credible hypothesis that it’s not as good as it used to be, as well. But it’s still head and shoulders above everything else (minus LW). I also came to OB via HN, if I recall correctly.
Is it just me, or do you feel a certain respect for Harold Camping? He describes himself as “flabbergasted” that the world didn’t end as he predicted. He actually noticed his confusion!
(I can’t find the Open Thread for May 2011.)
He also predicted that the world would end on May 21, 1988 and September 7, 1994. I don’t think respect is appropriate.
Too bad! I see that the latest reports have him updating to October, so he didn’t attend to his confusion for very long this time either.
Via 538: How Feynman Thought on the Freakonomics blog.
Reposting from the latest HP:MoR discussion thread, since not everyone reads recent comments and I’m not sure this warrants a full post:
Fanfiction.net user Black Logician has announced Harry’s Game, a spinoff of HP:MoR which branches out around Chapter 65-67 of the original fic. From his post at the HP:MoR review board:
Please use ROT13 for spoilers when discussing Harry’s Game.
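(If anyone is unfamiliar with ROT13: it rotates each letter 13 places, so applying it twice returns the original. A minimal Python sketch, with a placeholder string:)

```python
import codecs

spoiler = "Spoiler text goes here"         # placeholder, obviously
encoded = codecs.encode(spoiler, "rot13")  # -> 'Fcbvyre grkg tbrf urer'
print(encoded)
# ROT13 is its own inverse, so encoding again round-trips:
print(codecs.encode(encoded, "rot13"))     # -> 'Spoiler text goes here'
```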
The writing errors in this story are very distracting. I did not click past chapter 1. Is there something to recommend it so strongly that I should get over the bad grammar etc.?
I also found the spellign and grammer mistakes to be distracting. The story itself does not quite compare to Yudkowsky or Rowling’s work, but it’s quite witty and makes some good rationalist points.
I just can’t leave this alone.
Intentional!
If you can get someone to write you a fully-spoiling summary, that might be better.
I’d read one of those. Any volunteers?
It’s a lot more Ender’s-Game-like than MoR already was. The ideas are good to decent, the execution questionable, and the writing poor (by the standards of fanfiction worth reading; decent by average fanfiction standards). I found it fairly enjoyable, but I mostly managed to tune out the quality of the writing.
I’d recommend it to anyone who loves MoR for the clever plots, and anyone who enjoys the clever plots and can get over bad writing.
I just had a startling revelation. I had been glancing now and then at my karma for the last few days and noticed that it was staying mostly constant. Only going up now and then. This is despite a lot of my comments getting a whole bunch of upvotes. So naturally I figured I had offended one or more folks and they were downvoting me steadily to keep it constant. I don’t exactly tiptoe around to avoid getting anyone offside and I don’t really mind that much if people use karma hits as a way to get their vengeance on. It saves them taking it out via actual comments in the slightly more real social reality.
But I just looked at the bottom of the sidebar and slapped myself. Left-aligned text formatting in a limited space is roughly equivalent to taking significant figures (with a floor instead of rounding). Oops. Apparently nobody hates me after all, just, well, too many people love me. :P
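(To make the analogy concrete with a toy sketch: the numbers are hypothetical, and this is only a crude model of what the sidebar box does.)

```python
def truncated_display(karma, width):
    # Crude model of the sidebar: left-aligned text in a fixed-width
    # box keeps only the leading characters and drops the rest.
    return str(karma)[:width]

def floor_to_sig_figs(x, figs):
    # Floor x to `figs` significant figures.
    digits = len(str(x))
    factor = 10 ** max(digits - figs, 0)
    return (x // factor) * factor

# A (hypothetical) karma of 2345 shown in a 3-character-wide box:
print(truncated_display(2345, 3))  # '234' -- the same leading digits as
print(floor_to_sig_figs(2345, 3))  # 2340, i.e. floored to 3 sig figs
```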
My nomination for Worst Use of the word “Bayesian”, April 2011. This may answer my earlier question as to whether creationists, birthers, etc adopting the notion of Bayes’ theorem is a good idea or not. Remember: choose your prior based on your bottom line!
To anyone who knows: How active are the fortnightly Cambridge, MA meetups? There seem to be very few RSVPs on the meetup.com page, but I suppose it’s possible that if there are any regular attendees they don’t always bother RSVPing.
We generally just don’t bother RSVPing. Median attendance is 4, occasionally much more.
Hypothetical situation: Let’s say while studying rationality you happened across a technique that proved to give startlingly good results. It’s not an effortless path to truth but the work is made systematic and straightforward. You’ve already achieved several novel breakthroughs in fields of interest where you’ve applied the technique (this has advanced your career and financial standing). However, you’ve told nobody and, since nobody is exploring this area, you find it unlikely anybody will independently discover the same technique. You have no reason to believe others would apply this technique to areas you value and therefore doubt the benefits of sharing it widely. There could be a significant first mover advantage to being the only person who practices the technique.
Questions: What do you do? Would you share it with the world on principle? Would you try to establish a trusted group to practice the technique? Would you keep it to yourself until you could improve your position to the point where you’d have greater control and wouldn’t have to watch hopelessly as the technique was applied to immoral ends by others with greater resources than you? Or is there another option?
Trusted group.
Screenshot from our ongoing intelligence explosion:
howtomacke arobotinstchrocshin
Today’s SMBC has an amusing take on the simulation argument and attempting to guess the goals of the simulation creators.
For some reason, that comic reminds me of a particular Isaac Asimov story.
Does anyone else have religiophobia? I get irrationally scared every time I see someone passing out pocket bibles or knocking on doors with pamphlets. I’m afraid of...well, of course there isn’t much to be afraid of, or else it wouldn’t be a phobia.
Not really. I only have annoyance that whenever I see such people I’m always too busy to talk to them and find out more about what religion they are. I consider this to be evidence that there is a deity and that that deity treats me sort of how one might treat a cat when one has recently obtained a laser pointer.
I don’t get scared when I see people doing this, but I do have an irrational desire to go get into a long useless argument. I’m always too busy to have to fight it, though.
Fortunately, we have a defensive weapon (PDF) to hand.