Welcome to Less Wrong! (2010-2011)
This post has too many comments to show them all at once! Newcomers, please proceed in an orderly fashion to the newest welcome thread.
Hello!
I’m a 32-year-old physics PhD, working (so far) in the oh-so-fashionable subfield of graphene and carbon nanotubes. I took quantum field theory, which is a little unusual for an experimentalist (though not exactly rare). I have a background in programming and a moderate degree of interest in AI.
I came here by way of the Methods of Rationality. After reading that, and upon seeing that there was a sequence on quantum mechanics, I had a suspicion that it wouldn’t be terrible. That expectation was vastly exceeded. I never encountered the slightest technical flaw, which is better than many physicists manage on the subject, let alone philosophers and amateur physicists.
I began wandering and seeing what else there was, and it is good. The atmosphere also seems quite good around here, so I thought I’d join the community rather than treating it as a collection of essays and comments.
So here I am.
~~ Edited to add: ~~
I am not sure how this got so many upvotes. Was it the praise? The brevity? That I’m a physicist? The score just stands out on the page a bit, and I’m not at all sure why.
upvoted, because I’ve been wondering how the QM sequence is looked upon by physicists :)
I’d be interested to know that myself.
I’ve only spoken with a few because it’s a potentially awkward subject. I recall one other strongly and one other regular-strength in favor of MW+decoherence (both in my rough age-group);
one classmate said “decoherence, as I understand it, is a little more reasonable sounding than most”, for ontology, but uses the Copenhagen interpretation when thinking about epistemology;
one professor was against MW just on uneasiness grounds, but didn’t have a firm opinion;
one professor with the philosophy “If it’s just quantum mechanics, I’m not interested. If it’s not quantum mechanics, I’m not interested”, which is formally equivalent to MW + decoherence but without the explicit acknowledgement that it is;
one who was against everything, especially the part with everything in it;
and too many “Let’s stop talking about this/I’m not qualified to have an opinion/Aargh” to count.
~~
In this tiny sample of mostly experimentalists:
People with a preference for the Bohm guide wave interpretation: 0
People with a preference for more sophisticated just-QM interpretations such as transactional or consistent histories: 0
People who accept wavefunction collapse as real: 1 on the fence.
A survey on the subject could be interesting.
It’s because you’re a physicist who commented about the QM sequence. I, and apparently a lot of other people who’ve read it, really wanted to know if we’ve absorbed any mistakes. Thanks for giving a more informed opinion than most of us can bring. :)
I can’t answer for anyone else, but I think graphene work sounds pretty cool, so here’s an upvote from me!
Hello Less Wrong!
I was on Facebook and I saw a wall post about the fanfiction Harry Potter and the Methods of Rationality. I haven’t read much fanfiction since I was a kid, but the title was intriguing, so I clicked on it and started reading. The ideas were interesting enough that I went to the author’s page, and it brought me here.
Anyways, I’m a 22-year-old female person. I’m graduating from college in 2 weeks with a chemistry major, and I have no real plans, which makes posting about my life situation a little awkward right now. I’ll probably be heading back to the Chicagoland area and trying to find a job, I guess.
I can already tell that this site is going to wreak havoc on my ability to finish up all my projects, study for finals, and hang out with my friends. I just spent a couple hours reading randomly around and I can tell I’ve barely scratched the surface on the content. But after I almost died laughing at the post about the sheep and the pebbles I was hooked. Really, I just want to be a freshman again so I can spend my time staying up all night thinking and talking and puzzling things out with EZmode classes and no real responsibilities.
Anyways. I’m pretty excited about getting through the material on here. I love learning to understand how other people think, and how that leads them to the conclusions they reach. It’s always terrifying when I realize that someone has posited an argument or a scenario that challenges my interpretation or understanding of the universe in a way that I can’t easily refute, especially when I can’t refute it because I realize they’re right and I’m not.
Oddly enough, one of the scariest experiences of my life was when someone told me about the Monty Hall problem: two goats, one car. A friend explained the scenario and asked me if I would switch doors. I jokingly replied that I probably wouldn’t; since I was clearly already lucky enough to miss the goat once, I shouldn’t start questioning my decision now. The friend told me that I was being irrational and that by switching, I would have a better chance of picking the car. I remember being scornful and insisting that the placement of the goat occurred prior to my choosing a door, and revealing one of the other doors could have no impact on the reality of what prize had already been placed behind what door. The friend finally gave up and told me to go look it up.
I looked up the problem and the explanation, and it sent me into a bit of a tailspin. As soon as I read the sentence that explained that by switching, I would end up with the car 2⁄3 of the time as opposed to 1⁄3 of the time, I felt my intuitive ideas being uprooted and turned on their head. As soon as it clicked, I thought of 4 or 5 other ways to think about the problem and get the right answer- and of course it was the right answer, because it made logical and intuitive sense. But then thinking back to how sure I had been just 10 minutes ago that my other instincts had been correct was horrifying.
Remembering how completely comfortable and secure I had felt in my initial reasoning was so jarring because it now seemed so obviously counter-intuitive. I’m usually very comfortable refining my ideas in light of new ones, incorporating new frameworks and modifying the way I understand things. But that comfortableness derives from the fact that I’m not actually that attached to many of my ideas. When I was in high school, my physics teacher stressed the importance of understanding that the things we were studying were not the true nature of reality. They represented a way of modeling phenomena that we could observe and quantify, but they were not reality, and different models were useful for different things. Similarly, I usually try to keep in mind that the majority of the time, the understanding I have of things is going to be imperfect and incomplete, because of course I don’t have access to all the information necessary to make the perfect model. It followed that I should strive to be as adept as possible at incorporating new information into my model of understanding the universe whenever possible without resisting because I had some attachment to my preexisting ideas.
But in the case of the Monty Hall problem, I was confident that I understood the whole problem already. It seemed like my friend was trying to confuse my basic understanding of reality with a mathematical wording trick. Coming to an understanding of how deeply flawed my reasoning and intuition had been was exhilarating and terrifying. It was also probably at least a bit dramatized by the caffeine haze I was in at the time.
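For anyone who would rather check the 2⁄3 figure than argue about it, here is a minimal simulation sketch in Python (purely illustrative; the function name and trial count are made up for this example):

    import random

    def monty_hall_trial(switch: bool) -> bool:
        """One round of the game: returns True if the player ends up with the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        first_pick = random.choice(doors)
        # The host opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in doors if d != first_pick and d != car])
        if switch:
            # Switching means taking the one remaining unopened door.
            return next(d for d in doors if d not in (first_pick, opened)) == car
        return first_pick == car

    trials = 100_000
    print(sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials)  # ~1/3
    print(sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials)   # ~2/3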
I think I still have a lot of ideas and ways of thinking that aren’t quite rational. I can find inconsistencies in my understanding of the world. I know that a lot of them are grounded in my emotional attachment to certain ways of thinking that I have in common with people with whom I identify. I’m afraid that if I really think about certain things, I’ll come to conclusions that I either have to deliberately ignore, or accept at the cost of giving up my ability to ignore certain truths in order to favor my personal attachments (sorry, that sentence was convoluted; I can’t think of a better way to phrase it at 8 AM when I’ve been up all night).
Sometimes I’m legitimately afraid I might drive myself crazy by thinking. Even in college I have a hard time finding people who really want to talk about a lot of the things I think about. My roommate is the most wonderfully patient person in the world- she sits for hours and listens while I spout ideas and fears about all kinds of physics and philosophy and everything in between. And even though she can follow most everything (sometimes it takes some explaining), she doesn’t even really find it very interesting. But there are times when I’m seriously concerned that I could go out of my head just from thinking and getting too close to my own horizon of imponderability and trying to conclude something or anything.
So yeah. I’m not quite sure if that’s quite what we’re supposed to do with introduction posts. In retrospect, I think I probably took way too long to drag out a rather boring story that could have been summarized in a few sentences and confided enough fears and weirdness to be off-putting and possibly discredited as a rationalist. Anyways… I’ve put off biochem proposals for hours reading here and now writing this, so I’m going to stick with it instead of redoing the whole thing and running out of time and failing to graduate. Props if you got through it all. Hopefully by the time I’m done here I’ll be sophisticated enough to say all this in a few concise sentences. Is eliminating excess rambling part of rationality? But yeah- I’ve never really read other people’s ideas about all these topics, and I’m kind of pumped about it. If I can understand even a bit of what y’all are talking about and figure out how to be a little less wrong I’ll be a happy camper.
Welcome! I love your story about the Monty Hall problem. Consider putting it as a toplevel anecdote in the Discussion Section.
I’m very interested in reading your future posts! It sounds like you have a lot of potential and a lot of learning to do, which is always the most exciting combination. I wish I could be your roommate and get to hear all of this!
Definitely an interesting intro, and it’s good to see someone care so much about whether they understand the world.
Approximate quote from Being Wrong: Adventures in the Margin of Error: “How does being wrong feel? Exactly like being right.”
That was an awesome introduction post. I like the way you think.
Hey everybody,
I’m a PhD student in physics. I came across Lesswrong when I read Eliezer’s interview with John Baez. I was very intrigued by his answers, especially his idea that the world needs to understand rationality. I identify with rationalism, and especially with Lesswrong, because it just clicked. There were so many things in the world which people accepted and which I knew were just plain wrong. Before I found Lesswrong, I was a frustrated mess; when I found it, it was a breath of fresh air.
For example: I was a pretty good debater in college. In order to be a better debater, I started reading more about the logical fallacies that are common in argument and debate, such as ad hominem, slippery slope, appeal to authority, etc. And the more I learnt about these, the more I saw that these were exactly the techniques common in debate. I was forced to conclude that debating was not about reaching the truth, but about proving the other person wrong. The people in debating circles were very intelligent, but very intelligent in a useless (and maybe harmful) way. They were scarcely interested in the truth. They could take any argument, twist it, contort it, appeal to emotions, and use every fallacy listed, in a beautiful way, to win. And moreover, that was exactly the kind of person I was becoming. In retrospect, it’s clear to me that I got into debating only out of a desire for status and not out of any actual interest in the truth. But as soon as I saw what I was becoming, I walked away. I guess the kernel of honesty left in me from being a student of physics rescued me in the end.
Second example: one of the first articles that really brought me into reading major portions of Lesswrong was the article on Doublethink by Eliezer. When I was going through a phase of depression, I thought that religion held the key. Now, I did not believe in any kind of spiritual god or any spiritual structure whatsoever. But my family is extremely religious and I saw the happiness they got from religion. So I tried. I tried to convince myself that religion has a very important social function and saves people from anomie and depression. I tried to convince myself that one could be religious and yet not believe in god. I tried to go through all the motions of my religion. Result? Massive burnout. My brain was going to explode in a mass of self-contradiction. That post by Eliezer really helped me. There’s a line in there that really struck me.
As I read it, I literally felt a huge wave of relief sweep over me. I wasn’t going to be happy with religion. Period. I wasn’t going to be happy with self-deception. Period. And I knew I had finally found people who ‘got it’.
So that was a glimpse of how and why I got interested in Lesswrong. I’m reading the Sequences and looking around these days. I hope to start posting soon. And also attend LW meetups in my city.
I’m deeply interested in ideas from evolutionary psychology, neuroscience, computer science and of course physics! I work broadly on quantum information theory.
Cheers!
-Stabilizer
Welcome, Stabilizer!
Interesting that you say this...I haven’t had the same experience at all. I was raised basically agnostic/atheist, by parents who weren’t so much disapproving of religion as indifferent. I started going to church basically because I made friends with a girl who I had incredibly fun times hanging out with and who was also a passionate born-again Christian. I knew that most of the concepts expressed in her evangelical Christian sect were fallacious, but I met a lot of people whose belief had allowed them to overcome difficult situations and live much happier lives. Even if true belief wasn’t an option for me, I could see the positive effect that my friend’s church had, in general, in the community it served. And I was a happier, more positive, and more generous person while I attended the group. There was a price to pay: either I would profess my belief to the others and feel like I was lying to a part of myself, or I wouldn’t, and feel like ever-so-slightly an outsider. But maybe because of my particular brain architecture, the pain of cognitive dissonance was far outweighed by the pleasure of having a ready-made community of kind, generous (if not scientific-minded) people eager to show me how welcoming and generous they could be. I have yet to find something that is as good for my mental health and emotional stability as attending church.
That being said, a year of not attending church and reading LessWrong regularly has honed my thinking to the point that I don’t think I could sit back and enjoy those church services anymore. So that avenue is closed to me now, too.
For what it’s worth, it depends a lot on the church service: I know quite a few very sharp thinkers whose church membership is an important and valuable part of their lives in the way you describe. But they are uniformly members of churches that don’t demand that members profess beliefs.
One gentleman in particular gave a lay sermon to his church on Darwin’s birthday one year about how much more worthy of admiration is a God who arranges the fundamental rules of the universe so that intelligent life can emerge naturally out of their interaction than a God who must clumsily go in and manually construct intelligent life, and consequently how much more truly worshipful the evolutionary biologist’s view of life is than the creationist’s. It was received reasonably positively.
So you might find that you can get what you want by just adding constraints to the kind of church service you’re looking for.
Sounds like the Unitarian church that my parents took us to for a few years...I’m not sure why they took us, but I think it might have had more to do with “not depriving the children of a still-pretty-typical childhood experience like going to Sunday school” than with a wish to make church an important part of their lives.
I would probably enjoy the Unitarian community if I joined for long enough to really get to know them… I’m sure the adults were all very kind, welcoming people. Still, the two churches that I’ve attended the most are High Anglican and Pentecostal. The Anglican cathedral is where I sang in the choir for more than five years, and the music is what really drew me; although the Anglican church is very involved in community projects and volunteering, almost the whole congregation is above the age of fifty, and the young people who do attend are often cautious, conservative, and not especially curious about the world, which reduces the amount of fun I can have with them.
Surprisingly enough, in the Pentecostal church where the actual beliefs professed are much more extreme, most of the congregation are young and passionate about life and even intellectually curious. They are fun to hang out with...in fact, I frequently had more fun spending a Friday night at a Pentecostal event than at a party. They took their beliefs seriously and really lived according to how they saw the Bible, even though I have no doubt their actions would have been considered weird in a lot of contexts and by many of their friends. I think a lot of the apparent mental health benefit of this church came from the community’s decision to stop caring about social stigmas and just live. This is, I think, what I most respected about them...but for a lot of the same reasons, I now find their ideas and beliefs a lot more jarring than those of the Anglican church.
I have no doubt that there are churches on all sides of the continuum: “traditional” communities, like the Anglican church, which are socially liberal and also composed of fun young people...and also fundamentalist evangelical churches which have ossified into organizations with strict rules and a lot more old people than young people. Maybe somewhere out there is a church that has all the aspects I like (singing, rituals, fun young people who do outrageous things together and bond over it) and is also non-evangelical, non-fundamentalist, and socially liberal, but I haven’t found it yet.
I used to have that kind of brain architecture for quite some time, and I kind of miss it. But as I started studying more and more physics, it just became harder and harder. So, I guess the trade-off got really skewed at some point of time.
I have to mention that my religiosity kind of went through cycles. There was a time when I was an internally-militant (not very outspoken) atheist, followed by a period of considerable appreciation for religion, and again followed by a (currently) pretty comfortable atheism. If I think back to my first episode of atheism (religion was my default state as I was born in a pretty religious family), I guess I was pretty uncomfortable with it, in the sense that I felt that a lot more needed to be explained. In the intervening episode of religiosity, I appreciated the exact things that you mention about religion, but I just didn’t like all the baggage, i.e. the time and money spent in rituals. My religion was Hinduism, which is highly ritualistic, but enjoys some nice philosophies. I still like some of the philosophy but I dislike most of the ritual.
Funny. That’s probably a brain architecture thing, too, but I really enjoy a lot of the High Anglican rituals at the church where I used to sing in the choir. The traditional carols that all of us know by heart, every single word… The ministers and the bishop in their beautiful robes leading the choir in a procession around the cathedral while we sing in insane harmony… Stuff like the ritual of turning out all the lights and everyone leaving in the dark on Maundy Thursday (the day before Good Friday) to symbolize Jesus’ death. It’s all very theatrical, and very moving, and usually makes me cry.
I have a feeling that you might be talking about a different kind of ritual, though, if you’re frustrated by the amount of time and money spent on them.
Building and running a church, paying for a bishop’s education and the time he works there, training children to sing, and all of the time people spend there is not a small investment. Multiply that by all the churches in the world, and add the cost of various missions and church plants to spread religion, or the charities which do their work sub-optimally because they take religion more seriously than saving lives, and I imagine that the figure would become inappropriately ludicrous. Not that just eliminating religion would make us all much more efficient; humans are very gifted at wasting time and money.
I’ve heard that argument before, and it does have a lot of weight. In this case, though, are we talking about religion or about costly ritual? Both are cultural phenomena, and they’re frequently found together, but there are religions that aren’t into ritual at all, like Quakers, who are best known for their simple, silent style of prayer and worship and don’t go around building fancy cathedrals. And there are costly “rituals” which are not related to religion at all: football, for example, or theatre.
Agreed that churches which run charities may run them sub-optimally from an atheist’s point of view, since a lot of the time one of the unstated aims of their charity is to convert people. (This used to make me furious when I attended the Pentecostal church mentioned in one of the parent comments.) But we were talking about ritual, and I was specifically talking about deeply moving, meaningful rituals. It just so happens that the ones that have meaning to me are religious in nature. I know a lot of people find arts and theatre meaningful, and likely there are people who find watching sports meaningful, in a similar way. There’s some kind of human instinct to gravitate towards activities that are communal, repetitive, and have a sense of tradition that imbues them with meaning. There’s also a human instinct to think superstitiously, which I don’t share much, and which makes it hard for me to really enjoy those meaningful moments in church.
Nitpick: yes, paying for a bishop’s work and teaching children to sing is something that happens “under religion’s umbrella.” That doesn’t make it bad! I learned to sing better through the church choir (for which I was paid a monthly stipend for the community service of singing during Sunday worship!) than I would have in the $400-per-month children’s choir, which I probably wouldn’t have been allowed into...most people thought I was tone deaf until I proved them wrong. Bishops who organize community events and charities are doing something good for the community, whether or not it’s sub-optimal, and face it...are any human activities run optimally? Yes, it’s possible to have a better community-runner than a church, but the amount of money that goes into churches right now does produce something of value!
Please do not sign your posts. That information is conveyed by the username listed at the top of the post.
Welcome to Less Wrong; I’m quite new here too. I read your intro and think you would probably thoroughly devour Edward de Bono’s “I Am Right, You Are Wrong”. I agree with you regarding debating (and criticism), and so does de Bono; he writes about it quite elegantly.
Cheers, peacewise.
I have a Physics question for you: is time continuous? I mean, is any given extent of time always further divisible into extents of time?
As far as I understand it: any time smaller than the Planck time (around 10^-43 seconds) is not meaningful, because no experiment will ever be able to measure it. So the question is kinda pointless; for all practical purposes, time could be counted in integer units of the Planck time.
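For reference, the Planck time is the combination of the fundamental constants with units of time,

    t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.39 \times 10^{-44}\ \mathrm{s},

which is where the “around 10^-43 second” order of magnitude comes from.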
I’ve read that too, but I get confused when I try to use this fact to answer the question. On the one hand, it seems you are right that nothing can happen in a time shorter than the Planck time, but on the other hand, we seem to rely on the infinite divisibility of time just in making this claim. After all, it’s perfectly intelligible to talk about a span of time that is one half or one quarter of Planck time. There’s no contradiction in this. The trouble is that nothing can happen in this time, or as you put it, that it cannot be meaningful. But does this last point mean that there is no shorter time, given that a shorter time is perfectly intelligible?
Suppose, for example, that exactly 10 Planck times from now, a radium atom begins to decay. Exactly 10 and a half Planck times from now, another radium atom begins to decay. Is there anything problematic in saying this? I’ve not said that anything happened in less than a Planck time. 10 Planck times and 10.5 Planck times are both just some fraction of a second, and both are long enough spans of time to involve some physical change. If there’s nothing wrong with saying this, then we can say that the first atom began its decay one half Planck time before the second. This makes a half Planck time a meaningful span of time in describing the relation between two physical processes.
Well, the correct answer up to this point is that we don’t know. We would need a theory of quantum gravity to understand what’s happening at this scale, and who knows how many more steps we need to take to have a grasp of the “real” answer. Up to now, we only know that “something” is going to happen, and can make (motivated) conjectures. It may indeed be that time is discretized in the end, and talking about fractions of a Planck time is meaningless: maybe the universe computes the next state based on the present one in discrete steps. In your case, it would be meaningless to say that an atom will decay in 10.5 Planck times; the only thing you could see is that at step 10 the atom hasn’t decayed and at step 11 it has (barring the correct remark of nsheperd that in practice the time span is too short for decoherence to be relevant). But, honestly, this is all just speculation.
Thanks for the response, that was helpful. I wonder if the question of the continuity of time bears on the idea of the universe computing its next state: if time is discrete, this will work, but if time is continuous, there is no ‘next state’ (since no two moments are adjacent in a continuous extension). Would this be important to the question of determinism?
Finally, notice that my example doesn’t suggest that anything happens in 10.5 Planck times, only that one thing begins 10 Planck times from now, and another thing begins 10.5 Planck times from now. Both processes might only occupy whole numbers of Planck times, but the fraction of a Planck time is still important to describing the relation between their starting moments.
Warning: wild speculations incoming ;)
I don’t think continuous time is a problem for determinism: we use continuous functions every day to compute predictions. And, if the B theory of time turns out to be the correct interpretation, everything was already computed from the beginning. ;)
What I was suggesting was this: imagine you have a Planck clock and observe the two systems. At each Planck second the two atoms can either decay or not. At second number 10 neither has decayed; at second 11 both have. Since you can’t observe anything in between, there’s no way to tell if one has decayed after 10 or 10.5 seconds. In a discrete spacetime the universe should compute the wavefunctions at time t, throw the dice, and spit out the wavefunctions at time t+1. A mean life of 10.5 Planck seconds from time t translates to a probability to decay at every Planck second: then it either happens, or it doesn’t. It seems plausible to me that there’s no possible Lorentz transformation equivalent in our hypothetical uber-theory that allows you to see a time span between events smaller than a Planck second (i.e. our Lorentz transformations are discrete, too). But, honestly, I will be surprised if it turns out to be so simple ;)
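One way to make “a probability to decay at every Planck second” concrete (a sketch, on the assumption that the decay is a memoryless per-step coin flip): a geometric process with per-step decay probability p has a mean lifetime of 1/p steps, so a mean life of 10.5 Planck seconds corresponds to

    p = \frac{1}{10.5} \approx 0.095 \ \text{per Planck second.}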
Do you think you could explain this metaphor in some more detail? What does ‘computation’ here represent?
Just a side-note… I don’t think this was supposed to be a ‘metaphor’.
Fair enough. How does the view of the universe as a computer relate to the question of the continuity of time?
http://en.wikipedia.org/wiki/Digital_physics (It’s been years since I read that article; I’m going to read it again...)
I read that too as soon as I saw thomblake’s reply. I’m a newcomer here, and I hadn’t heard of this view of physics before so it was very informative (though the quality of the wiki article isn’t that high, citation wise). I’ve also been talking to a physicist/philosopher about this (he’s been saying a lot of the same things you have) and he gave me the impression that if there’s a consensus view in physics, it’s that time is continuous...but that this is an open question.
Is this computationalist view of physics popular here, or rather, is it more popular here than in the academic physics community? It seems as though a computationalist view would, on the face of it, come into some conflict with the idea of continuous time, since between any state and any subsequent state computed therefrom there would be an intermediate state containing different information from the first. But I’m way out of my depth here.
In your example you’re using the term “now”. That term already implies a point in time and therefore an infinitely divisible time. The problem is that while you could certainly conceive of a half Planck time, you could never locate that half in time. I.e. an event does not happen at a point in time. It happens anywhere in a given range of time at least a Planck time in extent. Now suppose that event A happens anywhere in a given timeslice and event B happens in another timeslice that starts half a Planck time after the slice of event A. You cannot say that event B happens half a Planck time after event A, since the timeslices overlap, and thus you cannot even say that event B happens after event A at all. It might be the other way round. So while in your mind this half Planck time seems to have some meaning, in reality it does not. Your mind insists on visualizing time as continuous, and therefore you can’t easily get rid of the feeling that it is.
Why do you say that the time slices overlap? It seems on your setup, and mine, that they do not. The point seems to be just that nothing can happen in less than a Planck time, not that something cannot happen in 10.5 Planck times. The latter doesn’t follow from the former so far as I can see. But I’m not on firm ground here, and I may well be mistaken. (ETA: But at any rate my example above doesn’t involve anything happening in 10.5 Planck times. Everything I describe in that example can be said to occur in a whole number of Planck times.)
And ‘now’ doesn’t imply infinite divisibility: we could have moments of time whether or not time is infinitely divisible, and we would need to refer to them to talk about the boundary between two Planck times anyway. And we cannot arrive at moments by infinite division anyway, since moments are extensionless, and infinite division will always yield extensions.
Ah, English is not my native language. With “event B happens in another timeslice that starts half a Planck time after the slice of event A” I meant that timeslice B starts half a Planck time after timeslice A started, so the second half of A overlaps with the first half of B.
B does not happen 10.5 Planck times after now. It happens somewhere between 10 and 11 Planck times after “now”, and you cannot tell when. Do not visualize time as a sequence of slices.
Edit: My point is, it’s simply impossible to visualize time. If your brain insists on visualizing it, you will never understand, because whenever you visualize a timeslice you visualize it with a clear-cut start and a clear-cut end. But that’s not how this works.
Edit2: Maybe I’m just reading your response wrong. My point is that the precision in your example is the problem. There is no event that happens at a time with a precision smaller than one Planck time. So 10.5 is just as wrong as 0.5.
Ahh, I see, I think I misunderstood you. I’m not sure I understand why A and B overlap. The claim about Planck times is that nothing can happen in less time. Does it follow from that that all time must be measured in whole numbers of Planck times? A photon takes one Planck time to pass through one Planck length, but I can’t see anything problematic with a cosmic ray passing through one Planck length in 10.5 Planck times. In other words does the fact that the Planck time is a minimum mean that it’s an indivisible unit?
I don’t think anything in my example relies on visualizing time, or on visualizing it as a series of slices. But I may be confused there. Do you have reason to think that one cannot visualize time? I suppose I agree that time is not a visible object, and so any visualization is analogical, but isn’t this true of many things we do visualize to our profit? Like economic growth, say. What makes time different?
No. The claim is that nothing is located in time with a precision smaller than the planck time.
I don’t really doubt that you’re right. Most everything I read on the subject agrees with, or is consistent with, what you’re saying. But the idea is still very confusing to me, so I appreciate your explanations. Let me try to make my troubles more clear.
So far as I understand it, a Planck time is a minimum because that’s the time it takes the fastest possible thing to pass through the minimum possible length. If something were going 99% of the speed of light, or 75%, or any percentage other than 100%, 50%, 25%, 12.5%, etc., then it would travel through the Planck length in a non-whole number of Planck times. So something traveling at 75% of the speed of light would travel through the Planck length in about 1.33 Planck times. Maybe we can’t measure this. That’s fine. But say something were to travel at a constant velocity through two Planck lengths in three Planck times. Wouldn’t it just follow that it went through each Planck length in 1.5 Planck times? It may be that we can’t measure anything with precision greater than whole numbers of Planck times, but in this scenario it wouldn’t follow from that that time is discontinuous.
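Written out, using only the fact that light crosses one Planck length in one Planck time (so \ell_P = c\,t_P), the crossing time at speed v is

    t = \frac{\ell_P}{v}, \qquad t\big|_{v = 0.75c} = \tfrac{4}{3}\,t_P \approx 1.33\,t_P, \qquad t\big|_{v = \frac{2}{3}c} = 1.5\,t_P,

so the “two Planck lengths in three Planck times” example corresponds to a speed of two-thirds of c.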
Mathematically speaking, you can say “on average it travelled 1 Planck length in 1.33 Planck times”. But physically speaking, it doesn’t mean anything. Quantum mechanics works with wavefunctions. Objects don’t have an absolutely precise position. To know where the object is, you need to interact with it. To interact with it, you need something to happen. Due to Heisenberg’s Uncertainty Principle (even if you consider it a “certainty principle”, as Eliezer does), you just can’t locate something more precisely in space than a Planck length, nor more precisely in time than a Planck time. Down at the quantum level, objects don’t have a precise position and speed. So saying “it moves at 0.75c, so it crosses 1 Planck length in 1.33 Planck times” doesn’t hold. It can only hold as an average once the object has evolved for many Planck times (and moved many Planck lengths).
I see. But this raises again my original worry: does QM’s claim about Planck times actually say anything about the continuity of time? Or just something about the theoretical structure of QM? Or just something about the greatest possible experimental precision? Does a limit on the precision of time at this level imply that these are actual indivisible and discontinuous units?
Maybe I’m just too steeped in pragmatism to notice, but it seems your question has already been answered. For example:
No, a limit on precision tells you that it’s not meaningful to ask whether or not there are actual indivisible and discontinuous units. There’s no experiment that could tell the difference.
I think pragmatism is a fine approach here, but could you clarify for me what you think the answer to my question is exactly? If it’s not meaningful to ask whether or not there are indivisible and discontinuous units, then is the answer to my question “Do QM’s claims about Planck time imply that time is discontinuous?” simply “No”, because QM says nothing meaningful about the question one way or the other?
In ‘pure’ QM (without gravity), the Planck length has no special significance, and spacetime is assumed to be continuous. But we know that QM as we know it must be an approximation because it disagrees with GR (and/or vice versa), and the ‘correct’ theory of quantum gravity might predict weird things at the Planck scale. So far, most proposed theories of quantum gravity have little more predictive power than “The woman down the street is a witch; she did it”, though some do predict stuff such as the dispersion of gamma rays I’ve mentioned elsewhere.
We’re trying to dissolve the question by pointing out that there exists a third option besides “continuous” or “discontinuous”. So the answer to “Do QM’s claims about Planck time imply that time is discontinuous?” would be “No, but neither is it continuous; it’s a third thing that tends to confuse people.”
Edit: retracted because I don’t think this is helpful.
For a start, the classical hallucination of particles and decay doesn’t really apply at times on the Planck scale (since there’s no time for the wave to decohere). There’s just the gradual evolution of the quantum wavefunction. It may be that nothing interesting changes in the wavefunction in less than a Planck time, either because it’s actually “blocky” like a cellular automaton or a physics simulation, or for some other reason.
In the former case you could imagine that at each time step there’s a certain probability (determined by the amplitude) of decay, such that the expected (average) time is 0.5 Planck times after the expected time of some other event. Such a setup might well produce the classical illusion of something happening half a Planck time after something else, although in a smeared-out manner that precludes “exactly”.
That’s a good point about decay, but my example only referred to the beginning of the process of decay. I wasn’t trying to claim anything about whether the decay takes less than one, exactly one, or a trillion Planck times. The important point for my example is just that the starting points for the two decay processes (however long they take) differ by 0.5 Planck times. Nothing in the example involves anything happening in less than a Planck time, or anything happening in non-whole numbers of Planck times.
But the thing is: how can you measure that the two decays differ by 0.5 Planck times? That would require an experimental device which would be in a different state 0.5 Planck times earlier, and that’s not possible, according to my understanding.
Good point. I agree, it doesn’t seem possible. But this is what confuses me: no measuring device could possibly measure some time less than one Planck time. Does it follow from this alone that a measuring device must measure in whole numbers of Planck times? In other words, does it follow logically that if the planck time is a minimum, it is also an indivisible unit?
This is my worry. A photon travels across a Planck length in one Planck time. Something moving at half light-speed travels across the same distance in two Planck times. If Planck times are not only a minimum but an indivisible unit, then wouldn’t it be impossible for a cosmic ray to move at any fraction of the speed of light strictly between 1 and 1⁄2? A cosmic ray moving at 3⁄4 c couldn’t cover the Planck length any faster than one moving at 1⁄2 c without moving at a full 1 c, since it has to cover the Planck length in a whole number of Planck times. This seems like a problem.
It could be that something moving at 3⁄4 c will have, on each Planck time, a 3⁄4 chance of moving one Planck length, and a 1⁄4 chance of not moving at all. But that’s how I understand it from a computer scientist’s point of view; it may not be how physicists really see it.
But I think the core reason is that since no signal can spread faster than c, no signal can cross more than one Planck length over a Planck time, so a difference of less than a Planck time can never be detected. Since it cannot be detected, since there is no experimental setting that would differ if something happened a fraction of a Planck time earlier, the question has no meaning.
Whether time really is discrete or continuous doesn’t have any meaning if no possible experiment can tell the two apart.
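A minimal sketch of that hopping picture in Python (illustrative only; the function name and the whole per-step model are assumptions made up for this example, not established physics):

    import random

    def average_speed(p_move: float, steps: int) -> float:
        """Each Planck time, hop one Planck length with probability p_move.
        Returns the mean speed in units of c (Planck lengths per Planck time)."""
        hops = sum(random.random() < p_move for _ in range(steps))
        return hops / steps

    print(average_speed(0.75, 1_000_000))  # converges to ~0.75 as steps grow

On this model nothing ever moves a fraction of a Planck length in a Planck time, yet the long-run average speed still comes out to 3⁄4 c.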
Of course, given any experiment, spacetime being discrete on a sufficiently small scale couldn’t be detected, but given any scale, a sufficiently precise experiment could tell if spacetime is discrete at that scale. And there’s evidence that spacetime is likely not discrete at Planck scale (otherwise sufficiently-high-energy gamma rays would have a nontrivial dependency of speed on energy, which is not what we see in gamma-ray bursts). See http://www.nature.com/nature/journal/v462/n7271/edsumm/e091119-06.html
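For context, the kind of effect being tested there is usually parametrized, to leading order, as an energy-dependent photon speed (this is my gloss on the linked result, not a quote from it):

    v(E) \approx c\left(1 - \xi\,\frac{E}{E_{\mathrm{Planck}}}\right),

so if ξ were of order one, higher-energy gamma rays from a distant burst would arrive measurably earlier or later; the observations constrain ξ to be small.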
Thanks for the post and for the very helpful link.
The difference between discrete and continuous time is a concern of mine because it bears on what it means for something to be changing or moving. But I’m very much in the dark here, and I don’t know what physicists would say if asked for a definition of change. Do you have any thoughts?
Well, the nature of time is still a mystery of physics. Relativity killed forever the idea of a global time, and QM damaged the idea of a continuous one. Hypotheses like Julian Barbour’s timeless physics (which has significant support here), or Stephen Hawking’s imaginary (complex number) time, could change it even more.
Maybe once we have a quantum gravity theory and an agreement over the QM interpretation we could tell more… but for now, we have to admit we don’t know much about the “true nature” of change or movement. We can only tell how it appears, and since any time smaller than the Planck time could never be detected, we can’t tell from that whether it’s continuous or discrete.
Well, I’m not so much asking about the true nature of change or movement but rather just what we mean to say when we say that something is changing or has changed. I take it that if I told any layperson that a block of wood changed from dark to pale when left out in the sun, they would understand what I mean by ‘changed’. If interrogated as to the meaning of change they might say something like “well, it’s when something is in one condition at one time, and the same thing is in another condition at another time. That’s a change.”
But obviously that’s quite informal and ill suited to theoretical physics. On the other hand, physicists must have some basic idea of what a change or motion is. Yet I cannot think of anything more precise or firm than what I’ve said above.
If you go deep enough in physics, you don’t have “wood”. You just have a wavefunction. The wavefunction evolves with time in “classical” QM physics, and just exists statically in timeless physics.
And “the same thing” doesn’t mean much, since there is nothing like “this electron” but only “one electron”.
Saying that a piece of wood changed is an upper-level concept, which you can’t directly define in fundamental physics, but only approximate (like “pressure”, or “wood”, or “liquid”). The way you define your high-level approximation doesn’t really depend on whether the lower level is continuous or not. In the same way, you won’t define “liquid” differently just because we discovered that protons are not indivisible, but made of quarks.
Of course, the lower level can be relevant: for example, the fact that there is no such thing as “this electron” contributes to saying that personal identity depends on configuration more than on “the same matter”. But it’s only a minor argument towards it, for me.
Fair enough, but surely the idea is to explain wood and the changes therein by reference to more fundamental physics. So even if the idea of change doesn’t show up at the very most fundamental levels, there must be some level at which change becomes a subject of physics. Otherwise, I don’t see how physics could profess to explain anything, since it would have nothing to do with empirical (and changeable) phenomena.
I’d love to talk more about that. Do you see configurations as platonic? And if our configuration is in constant flux (as is hard to doubt) on some level, do we therefore need to distinguish essential aspects of the configuration from accidental ones? And wouldn’t this view admit of two distinct persons having the same personal identity? That seems odd.
Well, I will say that a movie is “the same movie” whether it is stored on analog film, optical discs, magnetic disks, or SSD storage. The content and the physical medium are different issues. I’ll say that a movie “changed” if you cut or add some scene, or add subtitles, … but not if you copy the file from your magnetic hard disk to a USB key, even if there are many more differences at the physical level between the HD and the USB key.
The same is true for personal identity, in my point of view. Personal identity is in the configuration of neurons, and even in the way changes propagate on the neural network, not in the specific matter distribution. Then again, personal identity is not binary (am I the same as I was one week ago? And 20 years ago?). But to a point, yes, you can theoretically have two distinct “persons” with the “same” personal identity, if you can duplicate, or scan, a person.
I’m sorry, I really don’t know. In fact, I don’t think I even know what the majority opinion is among physicists (if there is one).
On the face of it, it seems that if spacetime is discrete, then the unit of discreteness is small enough to let us do calculus (which assumes continuity) with impunity, even at the smallest scales our experiments reach. So, as far as experimental evidence goes, there’s no reason so far to believe in discreteness. But I guess your question is whether there are any theoretical arguments which suggest discreteness… to which I really don’t have an answer. If I understand some interesting argument in the future, I’ll get back to you.
Thanks, I’ll look forward to it.
Hello Less Wrong!
I’m 16, female, and a senior in high school. Before I started reading here, I was not particularly interested in math, science, or rationality (which I had never really heard of). I stumbled on Harry Potter and the Methods of Rationality in October, and fell in love immediately. I read through the whole story in one night, and finally made the leap to Less Wrong during Eliezer’s hiatus.
I started on Less Wrong by reading Mysterious Answers to Mysterious Questions and within three posts I realized that, for the first time in my life, I was surrounded by people significantly smarter than me. Some people would probably have been excited about that; I was terrified. I promised myself that I wouldn’t post—wouldn’t even create an account, to avoid the temptation of posting—until I had read all the sequences and understood everything everyone said.
In retrospect, that may have been setting the bar a little too high for myself, especially since seven more sequences were added while I was reading. I eventually revised my standard to “I will not comment until I’m sure I actually have something to add to a discussion, and until I understand the things I have read well enough to explain them convincingly to 4 of my friends.”
The fact that I had to set all of those hurdles for myself just to have the self-confidence to create an account probably tells you a little about myself—I’m not ordinarily insecure, but I was so excited to find something like this I was very worried about “messing it up”. I’ve now read about 90% of the sequences and 98% of everything posted on Less Wrong in the last few months, and understood almost all of it (the quantum physics and decision theory sequences still confuse me). I’m not sure “read everything before you start to contribute” is generally a good guideline for new visitors, but for me it was perfect. I changed my mind about a lot of important things along the way—if there’s enough interest, I may discuss this in a post about exposing more teenagers to rationality.
So, thank you all for this great site! I hope I’ll be able to contribute.
Welcome. Just remember: don’t take the posts on LessWrong as gospel, so to speak, just because of their source. Eliezer has posted about this several times, though, so you most probably need no reminding.
Thanks! I worried for a while about changing my mind too much on the basis of one blog, and I still don’t agree with the Less Wrong consensus on everything, but overall I’ve found them very helpful. Anything specifically you would view with a skeptical eye?
Nothing specific that I can think of! There are some posts I might disagree with, but I don’t think there are any systematic errors being made.¹ I agree with the conclusions laid out in most of the posts here, and with Mr. Yudkowsky’s posts in particular. It’s just easy to become so enthusiastic about becoming rational “the LessWrong way” that you end up losing that rationality! But this is not so easy as it might be with other topics, perhaps.
¹(An example of a post of Eliezer’s that contains some things I disagree with would be “Circular Altruism”; I posted my views and some counter-examples there, so I won’t go into it here. However, I recognize many people do agree with him, so I’m not claiming to be entirely certain his conclusions are wrong—my point is just that it’s a rare individual who never arrives at an incorrect conclusion!)
My name is Scott Starin. I toyed with the idea of using a pseudonym, but I decided that this site is related enough to my real world persona that I should be safe in claiming my LW persona.
I am a spacecraft dynamics and control expert working for NASA. I am a 35-year old man married to another man, and we have a year-old daughter. I am an atheist, and in the past held animist and Christian beliefs. I would describe my ethics as rationally guided with one instinctive impulse to the basic Christian idea of valuing and respecting one’s neighbor, and another instinctive impulse to mistrust everyone and growl at anyone who looks like they might take my food. Understanding my own humanity and human biases seems a good path toward suppressing the instinctive impulses when they are inappropriate.
I came to this site from an unrelated blog that briefly said something like “Eliezer Yudkowsky is frighteningly intelligent” and linked to this site. So, I came to see for myself. I’ve read through a lot of the sequences. I really enjoyed the Three Worlds Collide story and forced my husband to read it. EY does seem to be intelligent, but I’m signing up because he and the rest of the community seem to shine brightest when new ideas are brought in. I have some ideas that I haven’t seen expressed, so I hope to contribute.
One area where I might contribute stems from my professional interest in the management of catastrophic risk of spacecraft failure, which shares some conceptual ground with the biases associated with existential risk to the human species. Yudkowsky’s book chapter on the topic was really helpful.
Another area is in the difference between religious belief and religious practice. The strong tendency to reject religious belief by members of the LW community may come at the expense of really understanding what powerful emotional, and yet rational, needs may be met by religious practice. This is probably a disservice to those religious readers you have who could benefit from enhanced conversation with LW atheists. Religious communities serve important needs in our society, such as charitable support for the poor or imprisoned and helping loved ones who are in real existential crisis (e.g. terminally ill or suicidal), etc. (Some communities may even produce benefits that outweigh the costs of whatever injury to truth and rationality they may do.) It struck me that a Friendly AI that doesn’t understand these needs may not be feasible, so I thought I should bring it up.
I hope readers will note my ample use of “may” and “might” here. I haven’t come to any firm conclusions, but I have good reasons for my thoughts. (I’ll have to prove that last claim, I know. As a good-faith opener, I do go to a church that has a lot of atheist members—not agnostics, true atheists, like me.) I confess the whole karma thing at this site causes me some anxiety, but I’ve decided to give it a try anyway. I hope we can talk.
(Since I’m identifying myself, I am required by law to say: Nothing I write on this site should be construed as speaking for my employer. I won’t put a disclaimer in every post—that could get annoying—only those where I might reasonably be thought to be speaking for or about my work at NASA.)
Welcome then! Your first idea does sound interesting, and I look forward to hearing about it. Don’t worry too much about karma.
Welcome!
Understanding and overcoming human cognitive biases is, of course, a recurring theme here. So is management of catastrophic (including existential) risks.
Discussions of charity come up from time to time, usually framed as optimization problems. This post gets cited often. We actually had a recent essay contest on efficient charity that might interest you.
The value of religion (as distinct from the value of charity, of community, and so forth) comes up from time to time but rarely goes anywhere useful.
Don’t sweat the karma.
If you don’t mind a personal question: where did you and your husband get married?
We got married in a small town near St. Catharines, Ontario, a few weeks after it became legal there.
Thanks for the charity links. I find practical and aesthetic value in the challenging aspect of “shut up and multiply” (http://lesswrong.com/lw/n3/circular_altruism/), particularly in the example you linked about purchasing charity efficiently. However, it seems to me that oversimplification can occur when we talk about human suffering.
(Please forgive me if the following is rehashing something written earlier.) For example, treating a billion people’s suffering for one second each as equivalent to a billion consecutive seconds of suffering—and therefore far worse than a million consecutive seconds (almost 12 straight days) of suffering endured by one person—is just plainly, rationally wrong. One proof of that is that distributing those million seconds as one-second bursts at regular intervals over a person’s life is better than the million consecutive seconds: the person is not unduly hampered by the occasional one-second annoyances, but would probably become unable to function well in the consecutive case, and might be permanently injured (a la PTSD). My point is that there’s something missing from the equation, and that missing piece lies at the heart of the human impulse to be irrational when the same choice is presented as comparative gain versus comparative loss.
As you say, a million isolated seconds of suffering isn’t as bad as a million consecutive seconds of suffering, because (among other things) of the knock-on effects of consecutivity (e.g. PTSD). Maybe it’s only 10% as bad, or 1%, or .1%, or .0001%, or whatever. Sure, agreed, of course.
But the moral intuition being challenged by “shut up and multiply” isn’t about that.
If everyone agreed that sure, N dust-specks was worse than 50 years of torture for some N, and we were merely haggling over the price, the thought experiment would not be interesting. That’s why the thought experiment involves ridiculous numbers like 3^^^3 in the first place, so we can skip over all that.
When we’re trying to make practical decisions about what suffering to alleviate, we care about N, and precision matters. At that point we have to do some serious real-world thinking and measuring and, y’know, work.
But what’s challenging about “shut up and multiply” isn’t the value of N, it’s the existence of N. If we’re starting out with a moral intuition that dust-specks and torture simply aren’t commensurable, and therefore there is no value of N… well, then the work of calculating it is doomed before we start.
OK, I now understand the way the site works: if someone responds to your comment, it shows up in your mailbox like an e-mail. Sorry for getting that wrong with Vaniver (I responded by private mail), and if I can fix it in a little while, I will (edit: and now I have). Now, to content:
Thanks for responding to me! I didn’t feel like I should hijack the welcome thread with something that, for all I knew, had already been thoroughly discussed elsewhere. So I tried to be succinct, but failed and ended up garbled.
First, 3^^^3 is WAY more than a googolplex ;-)
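For anyone who hasn’t met Knuth’s up-arrow notation, a rough sketch of why that holds, using just the standard definitions:

\begin{align*}
3\uparrow\uparrow 3 &= 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987,\\
3\uparrow\uparrow\uparrow 3 &= 3\uparrow\uparrow(3\uparrow\uparrow 3) = 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987 \quad\text{(a power tower of 3s about 7.6 trillion levels tall)},\\
\text{googolplex} &= 10^{10^{100}} \quad\text{(a power tower only three levels tall)}.
\end{align*}

A tower of 3s just five levels tall already exceeds a googolplex (the four-level tower is about $10^{3.6\times 10^{12}}$, and one more level blows past $10^{10^{100}}$), so a tower trillions of levels tall isn’t remotely close.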
Second, I fully recognize the existence of N, and I tried to make that clear in the last statement of content-value in my answer to you, by recalling the central lesson of “shut up and multiply”, which is that people, when faced with identical situations presented at one time as gain comparisons, and at another time as loss comparisons, will fail to recognize the identity and choose differently. That is a REALLY useful thing to know about human bias, and I don’t discount it.
I suppose my comment above amounts to a quibble if it’s already understood that EY’s ideas only apply to identical situations presented with different gain/loss values, but I don’t have the impression that’s all he was getting at. Hence, my caveat. If everyone’s already beyond that, feel free to ignore.
I agree that dust-specks and torture are commensurable. If you will allow, a personal story: I have distichiasis. Look it up, it ain’t fun. My oily tear glands, on the insides of my eyelids, produce eyelashes that grow toward my eyes. Every once in a while, one of those (almost invisible, clear—mine rarely have pigment at all) eyelashes grows long enough to brush my eyes. At that instant, I rarely notice, having been inured to the sensation. I only respond when the lash is long enough to wake me up in the middle of the night, and I struggle to pull out the invisible eyelash. Sometimes, rarely, it gets just the right (wrong) length when I’m driving, and I clap my hand over my eye to hold it still until I get home.
If I could reliably relieve myself of this condition in exchange for one full day of hot stinging torture, I would do so, as long as I could schedule it conveniently, because I could then get LASIK, which distichiasis strictly disallows for me in the status quo. I even tried electrolysis, which burned and scarred my eyelids enough that the doctor finally suggested I’d better stop.
So, an individual’s choices about how they will consume their lot of torture can be wide-ranging. I recognize that. These calculations of EY’s do not recognize these differences. Sometimes, it makes sense to shut up and multiply. Other times, when it’s available (as it often is), it makes sense to shut up and listen. Because of that inherent difference between internal perception and others’ external perception of your suffering, we have a really useful built-in intuition: in otherwise equal situations, defer to the judgment of those who will suffer. We optimize not over suffering, but over choice. That is our human nature. It may be irrational. But that nature should be addressed—not just our failure to multiply human suffering objectively enough.
This topic interests me quite a bit, and I think it would be well-received here if you focus on the practice and ignore the belief. EY has a number of posts that are unabashedly influenced by religious practices.
Vaniver, I thought the message from you in my mailbox was private, so I responded in a private manner. But, it was a copy of this public posting; I’ve got the hang of it now. I cannot, however, figure out how to recover the private response I sent you and post it here as a public reply. Feel free to do so if you like!
There’s a button labeled “sent” in the grey tab when you’re in your messages, in the upper left.
Thanks js, here was my response to Vaniver, responding to the “initiation_ceremony” link, as mundane as it may be:
The initiation sequence was funny. And very Agatha Christie, revealing the critical piece of information just as Poirot solves the mystery! 11/16. Would they have let him in?
Hi, I’m Alison—I used to be a professional tarot reader and astrologer in spite of having a (fairly average) science degree. I recovered from that over 15 years ago and feel it would be valuable for more people to understand how I came to do it and how I changed my mind. I am also a 45 year old woman, which makes me feel in a tiny minority on LW.
I’ve been reading large chunks of the sequences for the last year, as well as books like Risk: The Science and Politics of Fear and a bunch of rationalist blogs (and been thoroughly sucked into HPMOR).
Topics I’m particularly interested in include day to day rationality, tackling global warming, rationality from the perspective of people with mental health issues and tackling irrationality while maintaining polite and less arrogant discourse.
Hi Alison! Welcome to LessWrong! I’m always happy to see people who are interested in maintaining politeness on here.
I have a friend who is a professional psychic/magician/tarot reader, and he is extremely rational (he uses cold reading and builds technology for his tricks). I don’t think you necessarily have to give the profession up if it’s something you enjoy, so long as you don’t fall prey to the trap of believing your own schtick.
I would love to hear your story of how you came to change your mind!
Glad to have you here!
I’m with you! There’s quite a culture divide between “win the argument” and “get along”, and since I spend more time in the latter camp, Less Wrong was unpalatable for me at first.
There’s also “point out errors”, which is different from “win the argument”.
May I ask, at that time did you thoroughly believe that you were actually able to predict the future?
Also, with the benefit of hindsight, do you consider yourself to have used the dark arts?
Hi there (belatedly)! I believe we’ve met, way back when.
Hullo Less Wrongers,
I am a philosopher working mostly on methodology and causal inference, though I also dabble in (new wave) experimental philosophy—not to be confused with the straight-up physics that went by that name from the days of Newton and Boyle until some time in the mid-nineteenth century. ;)
I just finished my PhD (in history and philosophy of science) and started as an assistant professor of philosophy at the University of Illinois in Urbana-Champaign on August 16th.
From time to time over the last two or three years, I’ve glanced at Less Wrong and found it engaging. I am a bit depressed at the pessimism often displayed with respect to contemporary philosophy, but part of that depression is the recognition that the critiques are pretty reasonable. Anyway, I thought I should officially sign on so that I can throw in my two cents and expose my thinking to severe—but, hopefully, courteous—testing.
Welcome!
Don’t worry, 99% of philosophy is crud, but only because 99% of everything is crud. (That doesn’t sound as reassuring as it did in my head. :-) )
Only 99%? That sounds low. ;)
Which 99% are you talking about?
I thank the Ravenclaw Harry Potter for bringing me here. I’ve been lurking for a couple of weeks. My first clue that I’d feel at home here was learning that Eliezer taught himself physics by reading the Feynman lectures.
I’m an evolutionary ecologist by training, and a self-taught Python programmer and GIS analyst. I currently work at a community college, where I do a lot of one-on-one biology-teaching. I spend a lot of time thinking about where students go wrong when they’re thinking about science, and how to help them think more about their own thinking. (In my department we call it metacognition.) I’m also the father of a four-year-old, and so I also spend a good part of my home-life confronting and responding to some pretty fascinating cognitive and philosophical puzzles. (Her latest interest: the origins and arbitrariness of names.)
I’ve been developing as a rationalist (without the label) since who-knows-when during childhood, but I trace my more careful, articulated thinking about my own thinking to my early grad-school days, when I spent a lot of time fretting over how scientists should think about nature and problem-solving.
I’m looking forward to learning some new cognitive habits (my current thing is to think of—and teach—many cognitive skills as habits) and reinforcing some that I already have.
I’m bad at this.
Oh well here goes.
Hi there! I’m Erik. I’m 20 years old.
I am a pure math major at the University of Waterloo. I am halfway through my third year here.
That being said, I am largely an autodidact, which I gather is pretty common around these parts. Up until age 13 or so I was primarily interested in physics. In the course of trying to learn physics, I inevitably had to learn some math. So I did, and I never looked back. I can actually pinpoint the exact moment, all those years ago, when I became sure that I would spend the rest of my life doing math. But I won’t bore you with such an excessively personal story.
My mathematical interests are fairly broad. My single greatest fear is that I will probably have to specialize at some point—to truly focus on just one subject area, to think that I could ever actively decide not to want to learn all the things. I plan to delay this for as long as possible.
I tend to lean towards what I consider to be a pragmatic form of ultrafinitism. Other mathematicians tend to punch me when I talk about that though. A favourite pet problem of mine is to try to work out how to recover things like, e.g., real analysis without having to talk about infinity. That’s a pretty tame example, but try doing this for all the math you know and it gets pretty interesting!
I also have a few interests outside of math and physics.
I like anime; a few of my recent favourites include Redline, Mahou Shoujo Madoka Magica and Nichijou, all from this past year.
I like video games. My usual approach here is to play a few games very deeply. My all time favourite game is Super Smash Bros Melee, which still has an amazing competitive scene today. I am also a big fan of, and occasional participant in, TASing. I used to speedrun Super Metroid a lot, and I started working on TASing it back in 07 for a while. That proved to be too tedious for me though, so I mostly just watch the runs these days.
I listen to a pretty broad range of music as well. I’ve tried learning to play both piano and guitar, but never got past the “embarrassingly bad” stage.
In terms of rationalist origin story… Uhh, not much interesting really to say here. My parents aren’t religious, so I never had that influence. And I’ve been surrounded by and versed in physics and the sciences more generally for literally as long as I can remember. I have an old habit of periodically taking a piece of knowledge that I catch myself taking for granted and forcing myself to work out exactly why I know that thing. An easy example: How do you know how far away the sun is? Or a little trickier: How do you know that everything is made out of atoms, and how do you know how small they are? I think I formed this habit because it saved me from having to ever remember very much; I figured out pretty early on that keeping my belief web as connected as possible would save me a lot of effort. I think this is also related to my fear of specialization.
I had a brief period when I was very vocal about atheism. I got tired of that pretty quickly though. For the most part the community just seemed pretty boring: Yep. We still don’t believe in God. GO TEAM.
LW stands out as something special though. It’s not just a lot of people who also don’t believe in silly nonsense. It’s not just about bringing everyone up to some baseline of sanity. It’s about striving for an as-of-yet unimagined level of rationality. That’s just awesome and I want to be a part of it.
Terry Tao has a really cool presentation on that topic: The Cosmic Distance Ladder.
The apparent absence of parallax effects was a surprisingly good reason to reject heliocentrism. Wrong, of course, but heliocentrism does seem to fail the sniff test—and about all the Greeks had to work with were sniff tests of varying sophistication.
Although now I kind of wonder how Aristarchus’ critics explained his observations.
That was long, but very good. People underestimate the ancient Greeks—it’s awesome to see the whole set of calculations laid out. (This reminds me guiltily of a post I keep meaning to write doing something similar for Atomism.)
First thing you can do to become better at this: Don’t start by telling people you are bad at it. If it was really important that we know that you are bad at it we could probably figure it out for ourselves!
Hi everyone, my name is Jesse. I was introduced to LessWrong by my sister, Julia, a couple years ago and I’ve found the posts here fantastic.
Since college, I’ve been a professional atheist. I’ve done communications/PR work for three secular nonprofit organizations, helping to put a friendly face on nontheistic people and promoting a secular worldview/philosophy. It doesn’t exactly pay well, but I like knowing that I’m part of making the world a more rational place.
I’m fascinated by a lot of the same things you are—psychology, rationality, language. But as a communications director, I have a particular passion for effective communication and persuasion. The “A Human’s Guide to Words” sequence was invaluable in shaping my understanding and practice.
The question currently on my mind (among others) is: “Does it make sense to call a particular persuasion technique unethical? Or does it entirely depend on how it’s used?”
Let me know what you think, and I look forward to being a part of this community!
Jesse
Some questions to ask:
Am I making people stronger, or weaker?
What would they think if they knew exactly what I was doing?
If lots of people used this technique, would the world be better off or worse off? Is that already happening and am I just keeping pace? Am I being substantially less evil than average?
Is this the sort of Dark Art that corrupts anything it touches (like telling people to have faith) or is it more neutral toward the content conveyed (like using colorful illustrations or having a handsome presenter speak a talk)?
(I’ve recently joked that SIAI should change its motto from “Don’t be jerks” to “Be less evil than Google”.)
“Am I making people stronger, or weaker?” That’s a very important question, and sometimes hard to get right.
Consider a theist for whom the belief in god is a fundamental aspect of his life, whose faith makes him strong because it gives him something to protect. Breaking (or weakening) his belief in god before he built himself a line of retreat can do much more harm than good.
What should be done is first building the line of retreat, showing him that even without a god, his life does not become pointless, his ethics won’t crumble to dust, and the thing he wants to protect is still worth protecting. And then, but only then, showing to him that his belief in god is not only unnecessary, but also making him weaker.
Great questions!
Regarding the second one, “What would [people] think if they knew exactly what I was doing?”—I absolutely agree that it’s important as a pragmatic issue. If someone will get upset by a technique—justified or not—we need to factor that into the decision to use it.
But do you think their discomfort is a sign that the technique is unethical in any meaningful sense, or merely socially frowned upon? Society tends to form its conventions for a reason, but those reasons aren’t necessarily tied to a consistent conception of morality.
That said, I agree that if people get upset by a practice, it’s a good warning sign that the practice could be unethical and merits careful thought. …Which could be exactly what you meant by asking the question.
By the way, I’m looking forward to meeting you at Skepticon next month—I’ll be moderating a panel you’ll be on!
If people get upset by a technique, that is a harm, but if their suffering that harm has good consequences, upsetting them was, all else equal, a good thing to do. So upsetting people is always related to ethics as more than just a sign.
Unethical things are frowned upon to the extent people feel (at some level) frowning impacts that sort of action; regarding blame:
Society often has good reasons behind its moral classifications.
Use your gut.
I just checked out the Skepticon list of speakers. Laughter was induced by the picture of David Silverman.
Didn’t know the story behind that one, so thank you Know Your Meme. That’s the face he made when Bill O’Reilly said “You can’t explain why the tide goes in.”
First I thought “Oh great, another believer in n gods for n=0”, but after looking through your site I realized that it is much more about rationality and a secular approach to life, not just telling people that faith is a bad thing.
As for the morality of a particular persuasion technique, “do unto others...” is still a golden rule, despite its inherent biases and religious connotations.
Bienvenidos, Jesse!
You may or may not be aware, but this has been discussed at some length around these parts; Dark Arts is an okay summary. (Edit: A particularly good post on the subject is NTLing.) If you’ve already read it and think the topic could stand more elaboration, though, I’m with you.
Oh, and “professional atheist”? Totally awesome.
Thanks for the tip!
I’ve come across some of this material, but haven’t read it in a systematic way. I very occasionally refer to persuasion as ‘the dark arts’ - I think that phrase/connection came from LW originally.
Earlier this year I gave a talk on the psychology of persuasion, synthesizing some of the fascinating studies that have been done. Rather than present the most blatant techniques as manipulation, I framed them as known weaknesses in our minds that could be exploited if we weren’t wary and aware. Thus: defense against dark arts. Combining rationality and Harry Potter! Hey, that would be a great fanfiction! (Yes, I’m aware of Harry Potter and the Methods of Rationality and have done my best to spread it far and wide.)
Thanks for the support regarding my job: I’ve loved doing it and hope to do more for the secular movement!
I think the best approach is to read the A Human’s Guide to Words sequence before subject-specific material.
In particular, at least the first nine (until Neural Categories), and also Categorizing Has Consequences, Where to Draw the Boundary, and Words as Mental Paintbrush Handles.
/clears throat suggestively
Are you volunteering for the post of LessWrong’s DADA professor? The space is open if you want it, though Yvain has previously submitted an application. It should also be noted that a certain someone doesn’t seem interested in the job (probably a good thing, on balance).
That depends—would I die horribly and mysteriously after a year?
No, of course not! Whatever gave you that idea?
(You might be found in a closet with three fifth-year Hansonians, though...)
I would say that any persuasion technique that requires plain lies is unethical. Lies are contagious and break trust, while trust is required for any constructive communication.
Now, it may be a lesser evil in some situations. But a lesser evil is still evil, and should be avoided every time it can be. So yes, to me, you can call a technique itself unethical. Some exceptional situations may force you to do something unethical, because the alternatives are much worse, but that can be said of anything (you can always construct a hypothetical situation in which a given ethical rule will have to be broken), so if we want to keep the word “unethical” meaningful, we can still apply it to something like open lying.
Particular persuasion techniques are called different things depending on whether they are used ethically.
That’s one useful way to make a distinction! And, honestly, probably the one I lean toward. That’s probably the way I’d use the words, but even so I’m trying to figure out whether there’s a sensible and coherent way to call a persuasion technique unethical as a reflection on the technique, rather than solely the consequences.
I’ve thought about it another way—if a particular technique is far easier (and more likely) to be used in a way that reduces utility than in a way that increases it, society should be wary of it, and perhaps call it an unethical practice. I’m thinking of some alleged pick-up artist techniques that are based on lowering a woman’s self-esteem and sense of self-worth. (Disclaimer: this is second- or third-hand information about PUA, so I could be misrepresenting it. Regardless of whether it’s practiced by PUAs, the hypothetical holds.)
The first step might be to back up and see whether there’s a perfectly coherent way to distinguish among persuasion techniques, in case that becomes important.
Sure, there are sensible ways to distinguish among them. But if you had a good idea of what your subject’s matter was like, and a good idea of how you would want it to be, and you had sufficient power, you could talk softly to them, or torture them, or disassemble their atoms and reshape them into a nearly identical version that had a few changed opinions, or barbecue them and feed them to a child and teach the child the opinions you wanted them to have. All four ways begin with an interlocutor and end with a person made out of mostly the same atoms thinking largely what you set out to have the person you are talking to think. (Note: I do not claim that for every mind, persuasion would work.) While these methods are distinct, there is a continuum of possibilities along the influence-manipulation-reconstruction-recycling axis.
I don’t think there is a solid, sharp boundary marking a difference in kind between “influence” and Dark Art style “manipulation”.
On slavery, which everyone agrees is always wrong...right?
Salutations,
I am a 22-year-old middle-class male from the Boston area. I was diagnosed with Asperger’s Syndrome at a young age, and have lived most of my life on medication, primarily Concerta. I found this site after reading all of Harry Potter and the Methods of Rationality in one sleepless night and wanting to read more about rationality. I consider myself to be a rationalist-in-training; while I am capable of actually changing my mind (I believe), I am a procrastinator and let my emotions get the better of me at times. I am pleased to find a community of rationalists, as I can learn from them and better my own skills as a rationalist. I will likely not post very much, but the posts I do write will hopefully be of high quality. (I find that negative incentives, e.g. karma downvotes, have a powerful effect on me; also, I am a perfectionist and want anything I do to be done right the first time according to objective criteria, such as using proper grammar and such.) I can type approximately 50 words per minute (hunt-and-peck) and am obsessed with roller coasters. I hope that I will be accepted into the Less Wrong community.
Sincerely,
Alaeriia
Hello all,
I’ve been following discussions on LW for about 6 months now and have been urged by another member of the community to join in commenting. I’ve been hesitant to join, but now that I’ve moved to a state in which I don’t know a soul, I’m finding myself reading discussions here more than usual.
I think participation in LW can help me do things better at my job (and in life generally). Discussion here seems a good resource for testing out and working through ideas in a non-combative, but rigorous setting.
My field is evolutionary biology and I recently have spent a lot of time thinking about:
1) Whether people “trained” in the sciences believe they are inherently more objective and clear thinking than those in other fields, and as a consequence do not work hard to make sure their thinking and communication IS clear and objective. I’m not sure that all people receiving a science education are actually well trained to think empirically (I include my own education here), but a degree in science gives them the impression that they are.
2) What are the obstacles to understanding evolutionary biology? I find that students, after having taken an evolutionary biology course, STILL fundamentally don’t understand. This makes me despair of the general public ever accepting the evolutionary theory that provides them with medical treatment and forensic science.
I’d be interested in discussing the various obstacles to understanding evolution and thinking up streamlined solutions for helping public audiences, high school teachers and undergraduates in particular to overcome those obstacles. Some I’ve identified in undergraduate classes are:
- Field-specific language that means something totally different in everyday use. Fellow newcomer JesseGalef’s post on overcoming the curse of knowledge is relevant.
- Students don’t have a working knowledge of probability, stochastic processes, distributions, and variance.
- Students can’t distinguish between characteristics/predictability of an individual and characteristics/predictability of a distribution.
- Students have trouble considering non-additive effects/interactions.
- Previous miseducation. People have had a cartoonish and inaccurate concept of evolution pounded into their brains by many media sources both friendly and unfriendly to science. Search “Evolution” on Google Images and you’ll see what I mean.
Anyway. If there’s interest, I suppose I’ll be around.
“I find that students, after having taken an evolutionary biology course, STILL fundamentally don’t understand.”
Could you elaborate on this? I haven’t taken an evolutionary biology course, but I’d love to know what to look out for if I do decide to take one.
Hi all, call me Flay.
I’m a 20-year-old graphic design student and traditional artist (figure drawing, mainly) with an array of other odd interests on the side, from costume makeup to programming. Although I do enjoy what I do, and it can certainly be very challenging, I sometimes feel there are parts of my analytical mind being neglected. Reading a few of the sequences here and being thrown all of a sudden back into the deep end of reason made me realise how much I miss the sensation, and so I decided to register. One of my driving motivations is to try to optimize myself as much as possible and achieve all I can. As you could guess, I’m more than a little perfectionistic, although I’m slowly learning to be less uptight about the whole deal.
I came across Less Wrong while I was researching the singularity movement. I don’t consider myself a rationalist yet (or a follower of the singularity movement for that matter), only because I have a great deal more reading to do first. In particular, I haven’t finished reading through the core sequences yet, but I intend to do so soon.
Looking forward to meeting everyone!
There is an optimal amount of uptightness :-) Welcome!
True, perhaps one day I’ll find it. =P
Thanks!
Welcome!
Thank you!
Hi there everyone, I’m a programmer by trade and a video game maker by inclination. I first ran across Less Wrong while random-walking through tvtropes. I read a little of it, found it daunting but fascinating, and it… sat in my bookmarks for about a year after that.
Later, I random-walked upon Harry Potter and the Methods of Rationality, and it rekindled my interest. I’d read a chapter, get on Less Wrong, and try to find all the tricks that Harry (or other characters) used for that chapter. It was still slow going, because I wanted not just to read the material, but to absorb it and become stronger (Tsuyoku Naritai!)
I… pretty desperately needed it. I grew up in a rural community with an absolutely abhorrent school system, even by the standards of the American school system. I had a middle-school understanding of math and logic going into college, and am still recovering from the effects of a bad start (Bayesian theory and the QM sequence are on the very edge of what’s possible for me, but stronger, stronger, I will learn).
I ‘came out’ as an atheist two years ago to my parents, and began rearranging my life insurance to go to an Alcor membership two weeks ago. All in all, I’m not terribly new to ‘critical’ thinking in terms of not taking a claim at face value, but still learning how to truly deeply analyze claims as a rationalist.
So um.. hi
Welcome! ’nother Programmer here, and game maker too (I think there are a few of us here). D’you have any nice games to show?
Just a (very primitive) version of Space Hulk I made in school and a metroid-vania style platformer that never reached completion before the team split. I’m still building up a website for myself and a couple of my fellow designers (www.selfemployedheroes.com) that I’ll post them to as soon as I can.
Not much I know, but I literally just graduated at the end of February. Still hunting for that first job where I can really make a name for myself.
Hello all. I’ve been meaning to introduce myself in the old welcome thread for a while now.
I found this site shortly after Overcoming Bias while doing research for an open source project I’m planning to make public within the next few months. The project is peer-based and derived from what I learned about decision making in anthropology classes. (Don’t worry, the methods have been Bayesian since before I knew the term.)
In addition to teaching myself Java and a variety of other languages to put that project together, I also do some 3D design and printing. Trying to build a strong skillset for a post-scarcity world brought about by personalized manufacturing. Any time now....
I had a lot of early childhood exposure to both the occult and organized religion. I feel that by my early 20s I had pretty well exhausted everything mysticism and esoteric knowledge have to offer. I have a tendency to get defensive when entire traditions are dismissed by those who have only cursory familiarity. When a group of people pursue a discipline they believe to be useful for centuries, some of their methods and conclusions may be useful.
Studied Materials Engineering and Anthropology (no degree—long story). Volunteered for many years at an industrial history museum (Master Weaver, Journeyman Potter, Tanner, and Millwright). Have found work drawing maps, cooking food, and running games (RPGs). I picked my current job in a highly rational manner, and it is so boring and methodical that I yearn to program robots to do it. I try not to deceive, always try new things, and try to live longer. Plus, I love and tend to abuse parentheses().
Great site, btw.
Could you write about what you got out of mysticism? (I suppose that the third sentence could be interpreted as a reason why not.)
Here’s one idea: [http://lesswrong.com/lw/37k/rationality_quotes_december_2010/3250?c=1]
Hello Less Wrongians! I’m a 17-year-old American student who found Less Wrong through Common Sense Atheism, and has lurked here for several months. Only today did I decide that this was a community I wanted to take the next step with: actually joining.
I’ve always had a rationalist “pull.” Though for most of my life it manifested itself in a Traditional Rationalist way, I have a profound drive to find out what is the case. I was raised as a Roman Catholic, though not a particularly strict one, but abandoned this very quickly (fifth grade), helped along by a love of science and a penchant for philosophical questioning which had begun in childhood. My education has been tumultuous. I’ve always been a bright kid, but for much of my school career felt that I was being held back, so I did most of my learning from books and the internet on my own time; after I’d finished a test early, or at lunch, or after school. This wasn’t helped by a massive bout of anxiety I encountered in middle school surrounding rather vicious bullying I suffered for my perceived sexuality (though those harassing me were technically correct—I’m gay). Still, I managed to maintain my As so that I could go to a private high school, and I only had to do two years of middle school as my parents had finally agreed to skip me ahead a grade.
Through high school I studied a lot of philosophy and science, which clarified my thinking and solidified my orientation as a Traditional Rationalist, but I still faced many seemingly insurmountable philosophical puzzles. It was by stumbling on fields that Less Wrong is known for—decision theory, cognitive science, etc.—that I started to dissolve questions that seemed impossible to answer. My voracious hunger for truth was actually being met, and real progress could be made. A perfect storm of intense autodidactism and general online reading led me to stumble upon Less Wrong, which further clarified and informed my general philosophy, which I’m confident I can refer to as “rationalist.”
To wrap up, because I skipped a year of school I graduated high school this June at age 17, and am taking a year off before I head off to college in fall 2012. During this period I’m ratcheting up my already intense autodidactism in a wide variety of fields (using Less Wrong, Khan Academy, and other such resources as well as textbooks) and am studying physics as the private protege of a professor at a nearby university. I intend to study physics or economics in college, as while I love philosophy, most of it is worthless and it is much easier to teach oneself/study on the side than the former two fields.
Why do you intend to study physics or economics in college?
Because I’m strongly interested in both subjects, could very well pursue a career in one of them (or related fields), and there are excellent resources for both in the university system, especially for physics (research opportunities, labs, etc.).
I think the consensus around here is that too many high IQ people go into physics compared to what is socially optimal. Unfortunately my Google-fu is failing me and I can’t find the posts/discussions I have in mind. (Anyone want to help me out?) The closest I could find is Paul Christiano’s The Value of Theoretical Research.
There’s also the comment of Peter Thiel at the 2009 Singularity Summit, referenced here.
But in any case note that studying physics in college does not necessarily commit one to “going into” physics. Indeed, Robin Hanson now studies economics professionally but started out studying physics!
Thanks, I think between you and gwern you’ve probably covered what I had in mind. From your linked comment:
It might be hard to argue that everyone currently working on string theory should shift their attention, but much easier to argue that at the margins, we need more highly capable people working on creating a positive Singularity, or reducing existential risk, or aging, and fewer doing theoretical research. It’s unlikely we can make all string theorists shift their attention anyway, but I feel like we’d be doing some good if we could change a few people’s minds (like Celestia’s for example). Do you disagree?
Sure, but if one doesn’t intend to pursue a career in physics, why not study something more generally useful, like computer science?
You can do both. Some of the value of adding physics is that it’s a credible signal and your classmates are a cut above most other departments (and you do pick up some problem-solving techniques).
Well, you might be thinking of http://lesswrong.com/lw/1hh/rationality_quotes_november_2009/1ac4 - either de Grey or the mathematician story would do.
Welcome!
I got a physics/econ double degree, and I recommend against studying econ in college, unless there are some really good professors at the college you go to. What you suspect about philosophy is true, and even more true for econ. I learned ~2 things in the econ classes I took that I hadn’t learned in my personal reading on the subject (whereas I learned quite a bit of physics in classes), and so feel like those classes were wasted opportunities. I strongly recommend a field like computer science instead, if you have the least bit of aptitude for programming. If not, psychology seems like it could be super useful, but the cognitive science content is few and far between, or electrical engineering fits with physics pretty well.
(I do recommend reading Adam Smith’s The Wealth of Nations at some point if you haven’t already. It’s easy enough to get through, and it’s a remarkably good foundation for the field.)
((Also, *brohoof* :3))
Welcome here!
Hello. I found LW from two directions: first, I’m serious about philanthropy, and saw references to LW on GiveWell. Second, my husband and I are reading aloud from Harry Potter and the Methods of Rationality each night.
I’m a grad student in social work. I find that social work has a lot in common with some of LW’s goals (mainly self-improvement). Given that LW is aimed at very high-functioning people, which most social work is not, it uses some different methods. But I suspect LW could benefit from some ideas from social work.
Welcome! If you haven’t already, you may want to check out some of LessWrong’s posts on efficient philanthropy and Luke’s sequence on the scientific knowledge behind self-improvement. People’s brains work (mostly) the same way, whether aspiring rationalists or the beneficiaries of social work, so I’d be very interested in reading your perspective on self-improvement in your field.
Welcome! And nice to meet you :)
I’d be interested in hearing about social work.
I am a (shy) NEET who has been stalking the blog for some months now but only recently made an account.
Unfortunately, I cannot really remember how I came across Less Wrong but it quickly started affecting me in the same way TV Tropes does (I have about 10 LW tabs open at the moment).
I find the site really interesting and helpful, yet don’t expect to comment that often. I feel as if my English and general knowledge are still not on the average level here so I’ll read and read until that improves.
I enjoy anime, computer games, looking at images of cute things, Lolita Fashion and reading, among other things.
I dislike sports, don’t -usually- find television or movies interesting and mostly dislike social interaction in person (it’s fine if I do it through the internet).
I tried studying psychology at a local university but all of the classes were full of nonsense (picture a statistics teacher who said his class was not about math but about arithmetic...) and the hall just outside was full of smokers at all times. I have sensitive lungs and can’t tolerate smoke.
I hope to learn a lot here~
-Marcy
Hello Less Wrong,
I am a 22-year-old, Caucasian, lower-class community college student interested in becoming more rational in order to achieve the goal of being useful to the human species. I am a student whose education is taking far too long for financial reasons, but I am pursuing a BS in Computer Science and a minor in Cognitive Science because I want to understand human rationality at a deeper level. From there I will decide from my performance in classes whether I am smart enough to tackle grad school. I often feel outclassed when reading the discussions here, but I plan to learn enough to be useful in conversation just as quickly as I can. I intend to become as rational as I am able with my meat brain.

I became an atheist in high school, likely at about age 16, but have always deeply suspected there was no god since some brain worm burrowed into my head when I was 6 and said “If something is moral, then it is moral for its own reasons, not because God said so.” Though the exact thought that I mulled over in my Sunday School class was “God has to play by the rules.” That led me to always be the devil’s advocate in theological discussions (I was raised in a private Christian school), so my deconversion was expected, and those more liberal theists who were friends with me beforehand have not changed their opinion of me to a great degree.

I’ve been an aspiring rationalist for as long as I can remember; even when I was a Christian I thought faith was a stupid idea. But I didn’t know about probability theory and biases until now. I value being right. I want my beliefs to be correct ones. Wanting to be right is the most perfect goal, because from it flows all others. Not perfect in the sense of goodness, but perfect in the sense that nothing can be added or taken away. If you want to prevent polio, you must have correct beliefs about vaccines. If you want to take over the world you must have correct beliefs about the current political system so that you can manipulate it. If you want to program in Python it helps to have correct beliefs about its syntax.
Thank you for making me progressively more sane.
Welcome to Less Wrong!
Hello, I’m a government and economics double major in an all-women’s liberal arts college in Massachusetts. I discovered Less Wrong through an economics professor who gave a lecture on why it is important to be a rationalist. As an ex-lit. major, the sequence on “A Human’s Guide to Words” caught my eye, and I’m currently working my way through it. I look forward to learning more.
Welcome to LW, Mirai! A Human’s Guide to Words is one of my favorite sequences too.
Welcome!
Hi! I want to use the Rationality Methods to improve my understanding of myself and how to improve. I guess you could say I had a strange way of “waking up” to Rationality. Many say they looked to rationality after realizing their religion was… yeah. Well… that was a bit strange for me. When my parents married (“I was born about a year later”), they were both from Christian families and just went with it. When they realized that Christianity didn’t match the way things actually worked, they explained it all out to me. I was 5. Naturally that got my 5-year-old mind thinking, “Wait… Daddy was WRONG???”. It took him about 2 hours to explain this strange new concept to me. That was step 1 on my path to Rationality. I… am a 13-year-old, confident, curious young male who decided that he wanted to skip the 30 years of bad habits and jump to the rational part. For my security, call me “Ambition”.
Welcome :) We need more awesome young people around here. Beware of too much rationality overload, though; the sequences have been known to cause very large amounts of meta-cognition and symptoms similar to brain freeze.
Hiya! Welcome to Less Wrong.
That sounds like a good experience to have as young as possible, finding out that your world view is susceptible to being wrong and needing to be changed. The longer you wait for the first one of those, the harder it is to avoid just closing your eyes to it. Now, though, you’re more mentally prepared if it ever happens again.
It sure was. As you can guess I’m not your average teen. Hopefully this time advantage will give me a head start on Rationality, and allow me to go far with it.
Hi Less Wrong!
Decided to register after seeing this comment and wanting to give a free $10 to a cause I value highly.
I got pulled into Less Wrong by being interested in transhumanist stuff for a few years; I finally decided to read here after realizing that this was the best place to discuss this sort of stuff and actually end up being right, as opposed to just making wild predictions with absolutely no merit. I’m an 18 year old male living in the UK. I don’t have a background in maths or computer sci as a lot of people here do (though I’m thinking of learning them). I’m just finishing up at school and then going on to do a philosophy degree (hopefully—though I’m scared of it making me believe crap things).
I’ve found the most useful LW stuff to be along the lines of instrumental rationality (the more recent stuff). Lukeprog’s sequence on winning at life is great! My favorite LW-related posts have been:
The Cynic’s Conundrum: Because I used to think idealistically about my own thought processes and cynically about other people’s. In essence I fell into comfortable cynicism.
Tsuyoku Naritai! (I Want To Become Stronger): Because this was just really galvanizing and made me want to do better, much more than any self-help stuff ever did!
A Suite of Pragmatic Considerations in Favor of Niceness: Fantastic, as I tended (and still tend) to be mean for no real reason and this post put a lot of motivation towards stopping. I’ve actually started to have niceness as a terminal value now, which is a tad odd.
So anyway, I’m happy to have registered and I hope to get stronger and have fun here!
I suppose I should introduce myself.
I’ve been reading Overcoming Bias and Less Wrong intermittently for more than a year. I only recently became active, posting a few comments and attending a meetup in Irvine, CA.
I’m a 25-year-old computer systems administrator for businesses in L.A. county, but my real passion is philosophy, and I hope to return to school and become a philosophy professor one day.
Though I was raised an evangelical Christian and pastor’s kid, I now write the popular atheism blog Common Sense Atheism and also host three podcasts: one on philosophy, one on meta-ethics, and one on Christianity. On that site I’ve also posted many Less Wrong-related posts.
P.S. Thanks to orthonormal for this post and for a fun list of ‘instant gratification’ posts on Less Wrong.
I’ve been impressed with CSA, and the “digest of LW sequences” posts are really well done. Keep up the good work!
The first hit is always free...
Thanks.
But, note that I’m not blogging the sequences at CSA. I’m blogging through all of Eliezer’s writing, chronologically. One day I may return and attempt one-post summaries of some of the shorter sequences, but I’m hoping somebody on Less Wrong will beat me to it.
I think that’s the most inviting community post I have ever read. I’ve been a lurker for a while with almost no participation. Lately I’ve started catching up on old articles. As for my background: I was raised in a Jesus-people hippie cult and thus took a long road to atheism and attempted rationality.
In other forums I tend to participate more (I’m a software developer, so that’s plenty of online community). However I’m at LessWrong to learn, and so I don’t have much to contribute at present. Which reminds me, I love this place for not being ivory tower. I find too much of this type of community in other forums to be biased towards academia (and somehow proud of it). It’s a nice contrast here.
Wow, thanks! It’s been said with some justice that LessWrong is ridiculously forbidding, so it’s nice that it doesn’t always come across that way.
The first few times I got downvoted it hurt a bit, but it is a signal (in many cases) that something about my commenting was wrong, and as long as that is the case I prefer to have it pointed out. Note that there are also people being helpful when you commit errors or write articles. I think the less inviting feeling can come from the higher regard for content. In some atheism forums where I post we have super nice theists posting, and getting respected just for being honest and decent people. Which is fine, but they do not get any flak for the content they write. On LW you don’t get additional karma points for being a nice person.
PS: welcome
I think it’s pretty intimidating at first glance, but a good bit of effort seems to go towards helping newcomers get to where they ought to start (this post is an example). This seems like the key thing to me, and I think it’s done reasonably well. Every time anyone makes a sincere, well-intended, and not condescending “Welcome to Less Wrong” reply comment, I think the community gets a little more inviting.
:) It’s certainly challenging, and of course leans towards the ivory tower; quite reasonably, though, considering that high concept is intrinsic to the subject matter.
Hello all, I’m a 17 year old High School senior. I discovered Less Wrong through the author page at HP:MoR. I had considered myself a rational person for some time, but the Sequences here have really opened my eyes to the glaring errors I was making as a Traditional Rationalist. Consequently, this site has already changed my life for the better and I really just want to thank all the main contributors here. Thank You!
Also, I am looking to Major in Cognitive Science in college and any suggestions as to good schools to apply would be appreciated, along with any advice as to reading or preparation I should do before entering this field.
Welcome to less wrong! I don’t know enough about you to predict where you could be accepted, but MIT and Caltech are both great schools for anyone who wants to study Cognitive Science.
I would like to second his request. I too would love some reading material, besides the Sequences which are pretty awesome by themselves, on cognitive science and rationality.
Welcome!
Hi. I’ve been lurking here for awhile, because my son is a major contributor. I recently confessed that I was reading his posts and he urged me to register and contribute. I made my first comment a few minutes ago, in response to “What hardcore singularity believers should consider doing.”
I think I’m probably atypical for this site. I’m a 58 year old, female, clinical social worker. I’ve worked in mental institutions, foster care for the disabled and, for the past 21 years as a play therapist with children. I’m also a part-time artist and a volunteer executive director of a non-profit organization. I’m not sure that I am a “rationalist”.
Hello, all. I’m Joe. I’m 43, currently a graduate student in computational biology (in which I am discovering that a lot of inference techniques in biology are based on Bayes’s Theorem). I’m also a professional software developer, and have been writing software for most of my life (since about age 10). In the early 1990s I was a graduate student at the AI lab at the University of Georgia, and though I didn’t finish that degree, I learned a lot of stuff that was of great utility in my career in software development—among other things, I learned about a number of different heuristics and their failure modes.
I remember a moment early in my professional career when I was trying to convince someone that some bug wasn’t my fault, but was a bug in a third-party library. I very suddenly realized that, in fact, the problem was overwhelmingly more likely to be in my code than in the libraries and other tools we used, tools which were exercised daily by hundreds of thousands of developers. In that instant, I become much more skeptical of my own ability to do things Right. I think that moment was the start of my journey as a rationalist. I haven’t thought about that process in a systematic way, though, until recently.
I’ve known of LW for quite a while, but really got interested when lukeprog of http://commonsenseatheism.com started reading Eliezer’s posts sequentially. I’m now reading the sequences somewhat chaotically; I’ve read around 30% of the sequence posts.
My fear is that, no matter how far I progress as a rationalist, I’ll still be doing it Wrong. Or I’ll still fear that I’m doing it wrong. I think I suffer greatly from under-confidence (http://lesswrong.com/lw/c3/the_sin_of_underconfidence/), and I’m very risk-averse, a property which I’ve just lately begun to view as a liability.
I am coming to view formal probabilistic reasoning as of fundamental importance to understanding reality, and I’d like to learn all I can about it.
If I overcome my reluctance to be judged by this community, I might write about my experiences with education in the US, which I believe ill-serves many of its clients. I have a 14-year-old daughter who is “unschooled”. The topics of raising children as rationalists, and rational parenting, could engender some valuable discussions.
I might write about how, as an atheist, I’ve found it practically useful to belong to a religious community (a Unitarian Universalist church). “Believing in” religion is obviously irrational, but being connected with a religious community can in some circumstances be a rational, and non-cynical, move.
I might also write about software debugging as a rational activity. Though that’s kind of obvious, I guess. OTOH debugging is IMO a severely under-valued skill in the field of software development. Most of my work is in soft real-time systems, which requires a whole different approach to debugging than interactive/GUI/web application development.
I might write about my own brief bout with mental illness, and about the process of dealing with a severely mentally-ill close relative, from a rationalist perspective.
My favorite sentence on LW so far: “Rationalists should WIN.”
If you have the time and inclination to test this, you can use this site to discover your level of under- or over-confidence, and adjust appropriately.
In any case, welcome to LessWrong! I look forward especially to hearing about the process of unschooling; there is (very rightly) an impression here on LessWrong that raising a child is one of the hardest tasks, and taking responsibility for their education as well seems even more daunting!
Hello all!
I was pointed to LW by a friend who makes a lot of sense a lot of the time. He suggested the LW community would take some interest in an education project I’ve been working on for over two years, The Sphere College Project. Before introducing myself I spent a few weeks perusing LW sequences. This could go on for quite some time, so I’ll go ahead and jump in.
I’m 50 years old, born and raised in the US in a series of towns throughout South Carolina. I had aptitude for mathematics and music. I pursued music and became a formidable trombonist living in NYC and playing classical and jazz music. I could sight-read anything. In 1982 my girlfriend’s father worked for IBM, so I got to play around with his IBM PC. I was hooked (I particularly loved “Adventure”), but could only fit math/computers into my scant spare time. I did read “Godel, Escher, Bach” while studying trombone at the Eastman School of Music. Later, while doing my DMA in music, I observed that most of the musicians I encountered in their 50s, 60s and 70s didn’t appear to be loving the life anymore, so I decided I would leave music entirely, and began taking courses in math/physics/computer science at Columbia. I discovered that I had greater aptitude than I had previously thought, and I truly enjoyed these subjects. After a Master’s in CS at Wake Forest University (thesis in graph theory—love it!) I worked at Data General with some exceptional software engineers. It was there that I learned more about optimizing my own processes. Later, I pursued a PhD in CS at Georgia Tech, researching computer networking. I was fascinated with global communication systems.
I had done work in the arts and the sciences, but knew that my facility in the humanities paled in comparison, so I chose to seek a position at a small liberal arts college in the northeast, which would allow me to interact closely with professors in many disciplines. I accepted a position at Ursinus College. The great advantage of Ursinus for me was that all (meaning “most”) professors were required to teach the freshman seminar course—primarily a humanities course. What better way to learn the humanities than to be thrust in front of sixteen 17- and 18-year-olds? It was transformative for me, helping me identify what I truly wanted to do with my life: help people learn what they want to learn. So I didn’t get tenure (3 years ago) and found myself on the market. I started looking at positions but wasn’t excited about my options, now that I had some experience in what we like to call higher education.
So like a good software engineer, I identified my primary requirement: have as much impact on the world as possible. How? By providing education for the huge population of adults who do not fit the traditional model of higher education; by teaching people in the way they learn, providing the environment that fits them best; by making it financially accessible to anyone who wishes to engage in their education; by making the program proceed on their schedule, not a “hard-coded” two- or four-year schedule; by allowing them to first identify what they are passionate about and wish to accomplish with their lives, then helping them gain the directly related interdisciplinary skills they need, then helping them gain practical experience in their field; and by making it all fun for them.
All this made perfect sense to me. I couldn’t find an institution that had all the required elements, so I decided to found The Sphere College Project. It’s been a monumental struggle (typical businesspeople don’t grok the model at all), but even in our resource-limited state it’s been working well for some of the students, including one who had no concept of negative numbers when she began. I’m currently working to scale up our model. I’m convinced it’s going to happen, because it must. Meanwhile, I’m doing everything I can to connect with people who agree that a new model of education is of critical importance to creating a functional society.
I’m pleased to join you here, and look forward to reading more.
Richard Liston
Welcome to Less Wrong!
That’s kind of impressive, an application of the “outside view” in just the way recommended by Daniel Gilbert’s “Stumbling on Happiness”.
I know someone who compared lifespans of poets vs. prose writers, and went into prose as a result.
I’m amused; that’s like some twisted literature version of Newcomb’s dilemma—if you would seriously consider choosing between prose and poetry on that basis, then Omega filled only one box. Or something like that.
Agree: the vast majority are not rational enough to be able to do that.
Hi to everyone!
I first arrived at this site several months ago, and I’ve been a voracious reader since then. So, after this period of “mad and desperate studying” (“studio matto e disperatissimo”, as Leopardi would say) I think I am probably ready to stop lurking and start to actively participate. Despite having a scientific background (I have a Ph.D. in theoretical physics, even though I’m doing a completely different job at the moment) I had never before encountered the concept of rationality as it’s explicitly stated here. In fact, I used to think I was a very “rational” person, in the more generic use of the word, before reading the Sequences and discovering that… well, I wasn’t. It’s still a long way before I reach the level of many notable members of this community, but I would say that LW helped me make a big step in the right direction. I want to emphasize this point: there are a lot of good places where you can obtain knowledge, but very few that can teach you how you should handle it. It’s tough to do it on your own, so thanks LW!
Finally, I’m from Italy, and would love to know if there are other fellow LWers who would like to start an Italian chapter of the conspiracy. Also, I think it would be great if we could manage to translate some of the Sequences: I managed to raise interest in some of the topics among my friends, but many of them can’t read English well enough (or at all). Let me know what you think about it.
Italian translation project.
See here.
Also, welcome!
Thank you, I wasn’t aware that a similar project already existed. I’m more than willing to collaborate! As soon as I have some free time I’ll write more in the proper discussion.
Hello, I found Less Wrong after a friend recommended Methods of Rationality, which I devoured in short order. That was almost a year ago and I’ve been lurking LW off and on ever since. In June I attended a meetup and had some of the best conversation I’ve had in a long time. Since then, I’ve been attacking the sequences more systematically and making solid progress.
I’m in my late 20′s, live in Los Angeles, and work in the entertainment industry (after failing miserably as an engineering student). It’s my ambition to produce stories and science fiction that raise the sanity waterline of our society. Film and television science fiction has never come close to approaching the depth and breadth of imagination and thoughtfulness of literary science fiction and I’d like to be a part of the effort to close that gap, however slightly.
I have a hypothesis that the sociological function of stories is to communicate lessons about desirable or undesirable human behavior, translating them from an intellectual idea that we can’t grasp on an intuitive level into an emotional idea that we can, in the process making it more likely that we’ll remember them and apply the lesson to our own behavior. Almost like a mnemonic device.
For example, I could give a three hour lecture on the importance of reputation and credibility in group dynamics. Or I could tell the story of the boy who cried wolf in under three minutes and communicate the same idea in a way that is intuitively graspable on an emotional level and is therefore much more likely to be retained.
Anyway, my grasp on this idea is far from complete and I hope this community can help me get a better handle on it, ultimately resulting in propagating ideas that contribute to the optimization of humanity.
Welcome!
(pneumonic → mnemonic)
Thank you, fixed.
.
I accidentally posted the following comment earlier today in the May 2009 Introduction page. Hal suggested I re-post it here, where it belongs:
Those of you who were at the 2010 Singularity Summit in San Francisco last weekend might have seen me. I was hovering around “the guy in the motorized wheelchair.” I am Hal Finney’s spouse and life partner. Although I am new to Less Wrong, and very ignorant when it comes to HTML and computers, I have been a Rationalist ever since I was a child, to the dismay of my mother, teachers, and legions of other people I interacted with. I met Hal while an undergraduate at Caltech. And as they say, the rest is history.
This past year, Hal and I have had to completely alter projections of our future together. Hal was diagnosed with ALS (Amyotrophic Lateral Sclerosis, better known in the US as “Lou Gehrig’s Disease”). Since his diagnosis in August of 2009, Hal has physically changed in very obvious ways. His speech has become slow, quiet, and labored. His typing has gone from rapid-fire 120 WPM to a sluggish finger peck. His weekly running (50-60 miles per week in February 2009) stopped being possible in November of 2009, and now Hal gets around in a motorized wheelchair. Eating, always a pleasure before, is now a challenge—much concentration is involved to avoid choking. The most recent and worrisome manifestation of the weakening in Hal’s voluntary muscles is his breathing. However—all of these changes have been to Hal’s body, the machine that Hal’s brain controls through efferent output to interact with the environment. Inside, he is the same brilliant guy I have known for well over half of my life.
I was very impressed with the people I met at the Singularity Summit. What a relief to be around creative individuals who think rather than just act. Who problem solve, rather than just react. Who can understand Hal’s and my intention to keep his magnificent brain alive and give him a way to communicate, even if he loses all movement.
I am happy that a community of rational people exists. And I’m looking forward to interacting with this community, along with Hal, for many more years.
Well, I never did get around to introducing myself in the original thread, so I might as well post something here.
I spent six years as an infantry soldier, did most of a History degree before dropping out in disgust, and have a post-apocalyptic sci-fi novel currently in negotiations with a publisher. I used to be a math prodigy but now I can barely remember calculus, I taught myself auto mechanics over the period of one month after buying a car for a pack of cigarettes, I ride a motorcycle, I have some sort of mutant ability to talk cops down when they start feeling violent, and I am drastically over-skilled and under-employed.
I’m hoping to contribute to the community more substantially than just leaving comments; I have a couple of posts I’m working out in my head. The first is a summary of TVTropes—what it is and why it’s important—the other being a guide to using the Dark Arts.
I really regret my math not being up to par for this community; I tend to understand things on a gut/instinctual level (i.e. I can catch a ball, but have trouble calculating the trajectory), but my math’s too rusty to ‘prove’ most of my ideas.
Despite a deep-seated desire for it to be otherwise, I dwell in the banker-run metropolis of Calgary, Alberta.
Also, I have a blog where I write about how Vile and Unconscionable it is, living in this dystopia: www.staresattheworld.com
Is this along the lines of Robin Hanson’s endorsement?
I somehow missed that post of his; the short answer is yes. The world that tropes describe is—I believe—Magic. When you start seeing the dynamics of how that world works, you can pinpoint the roots of many of our biases.
Hello everyone,
I am a 31-year-old physicist and have been following LW since before it split from OB. It is one of the sites I spend the most time reading, even though I never delurked before—I suspected, probably correctly, that it would induce me to spend even more time on it (“Less Wrong Will Ruin Your Life”, as TVTropes might put it). However, I have recently moved into an area where regular meetups are going on, so I thought it would be worthwhile to get involved in the community and try to meet some of its members.
Welcome!
If you can cope with TVTropes, LessWrong shouldn’t be too addictive.
And who said I am coping well with TVTropes? ;)
Hi everyone,
So well...
I’m a 30-year-old French man, working as a Free Software developer (mostly in Python and C) and system administrator, deeply interested in “science” (maths, physics, biology, computer science, …) for as far back as I can remember. I define myself as a rationalist and a humanist.
What I value is not easy to explain in a few lines, but to put it in three words I would say: humanity (human beings, or any sentient being able to show qualities of humanity like altruism and curiosity), truth (making the map closer to the territory, to use LW terminology) and progress (the idea that we can make the future a better place than the past).
I discovered Less Wrong through… “Harry Potter and the Methods of Rationality”, which a fellow free software developer pointed me to, and I have been reading the Sequences since then. I find them deeply interesting. I’m not yet fully convinced about the Singularity (or at least, about it being a matter of decades rather than centuries or more) nor about transhumanism, but I do view them with a positive, if still doubtful, eye.
As for how I went into rationality… well, I was more or less born into it, my parents being maths teachers. My studies in maths and physics (before switching to computer science) and my childhood love of science-fiction probably played a big role in that too.
But it also comes from discovering that in order to pursue “progress” and to protect humanity, we need to make our map reflect the territory better. I then added “truth” to my core values. As Eliezer said, you need something to protect.
Before discovering LW, I was a “traditional rationalist”, but I’m slowly evolving (or at least, I think I am, but I may only be believing that I believe...) into a “Bayesian rationalist” as I read the posts.
Good day to everyone!
Welcome! Great to have one more LWer from France, potentially one more person to talk to at our infrequent meetups. Are you in or near Paris?
Thanks (to you and others) for the welcome!
To answer your question, yes, I’m from Paris’ suburbs, so I can easily go to Paris.
Welcome!
I’m just a regular woman, with regular intellectual capabilities, who is struggling to complete a degree in physics, math and CS while working part time, taking care of my seven-month-old full-time, spending quality time with my husband, satisfying my parents’ and in-laws’ wishes to keep in touch and see their granddaughter, and trying to pursue the truth and grow in wisdom during the wee hours of the night. I am an orthodox Jew who is currently undergoing a crisis of faith—reading things like LW persuades my intellect, while reading things on Judaism persuades some other part of my being. I became an orthodox Jew after doing some independent reading and studying from the age of 14 (before that I thought religion was just an obsolete and irrational barrier to the enlightenment and advancement promised by science). I don’t care if I get voted down to hell for saying that (I don’t believe in hell anyway). That is just how I’m feeling personally at this point in life. I’m not here to get high karma—just here to read as much as possible, learn, perhaps change my mind, and act to the best of my knowledge. I have been fascinated by science for as long as I can remember, became intrigued with philosophy a few years ago, and love to learn autodidactically. However, I feel my knowledge is fractured and chaotic, since a lot of what I know is what I have taught myself from books and the internet, usually not in any structured logical manner. I’m hoping that one day some pattern will emerge from the chaos of my mind. I have been reading LW and Overcoming Bias for a while. I came across these sites after reading “The Singularity is Near” and doing some searching on the web.
Does that mean you’re a convert? I hear that’s not a trivial matter...
I hear you! =) I’ve found a useful way to organize my knowledge is to think about the epistemic bases for the various types of knowledge, i.e., “how do I know?” Scientific, common sense, philosophical, mathematical, something I heard at the pub… etc.
Well, first of all, I doubt you’ll get voted down severely for merely identifying as a theist, but you will if you make arguments for theism that display some obvious mistakes the community recognizes.
Don’t worry too much about karma anyway. It’s mostly for keeping comments relevant to the subject at hand, so we can have a discussion of, say, “ethics from a materialist perspective” that actually gets off the ground, without constantly having to reinvent the wheel and argue materialist vs. theistic ethics from the ground up.
That said, however, pay attention when you’re downvoted a lot, as it probably means that several members of the community think you made a mistake in reasoning.
Welcome! =)
This is generally relevant and well said. I’m stealing it for the post, if you don’t mind.
By all means!
Hello, I am a British psychology student (studying out of country, presently). I stumbled upon this website after doing a little research following Eliezer’s recent Skepticon talk on Youtube. I have greatly enjoyed learning about rationality within psychology; heuristics, biases, and Bayes rule are central to the course.
I am at that stage where I am beginning to narrow down which areas of research I would like to enter, and this area is becoming increasingly interesting to me and may one day guide my decision; but while I have personally identified as a skeptic for some time now, I feel I am new to many areas of rationality, i.e. the “higher level” topics. There is always something more to learn. I apologise if I am a shy contributor at first; I can find such environments of discussion a little daunting when I myself feel inexperienced. I am going to spend some time in the near future exploring here a little more, and familiarizing myself with the articles/sequences on LW; I look forward to achieving a little more knowledge, and hopefully contributing to the community here.
About me personally; I enjoy archery, chocolate, debating and reading. Rebecca
Hello, LessWrong.
I am an 18-year-old senior in high school interested in evolutionary psychology and cognitive science. I had actually been lurking around this site for over four months before I finally got brave enough to introduce myself. I always considered myself to be rational, but after looking through the core sequences, it slowly dawned on me how horribly wrong I was, and what a long way I have to go to “upgrade” my rationality and hopefully maintain a meaningful conversation with anyone here.
I was raised in a non-religious home where I was encouraged to seek out many different belief systems and see which one fit me the most. I ended up rejecting every mainstream religion I came across, which I suspect is what my parents were hoping for. I officially became an atheist at around age twelve, and I suffered somewhat of an existential breakdown shortly after that, as I was desperately searching for a meaning or purpose to the universe and not able to find one. I didn’t like the idea of living in a meaningless universe and I suffered from extreme depression for many years, which worried my friends and family. I was sent to a therapist because my schoolwork and social life were suffering due to my sense of hopelessness.
I then came across the idea of transhumanism at age fifteen, after hearing the word and typing it into Google out of curiosity, and that day my entire life changed for the better. All of a sudden I was being introduced to concepts like indefinite life extension, recursively self-improving artificial intelligence, mind uploading, apotheosis, and the like. My mind was blown. For the first time in many years, I was feeling a sense of real hope and purpose. I decided that working for the transhumanist project and a positive singularity was what I wanted to do with my life.
This site is pretty damn awesome. I’m busy reading the core sequences and Methods of Rationality, and I’m about 70% through with both. I’m loving them. Being introduced to cognitive heuristics and biases has really helped me grow as a person and as a budding rationalist, and I am now extremely humbled since discovering that I’m not nearly as rational or logical as I thought I was. Discussions here are always very high-quality, engaging, and enlightening, which is something you don’t find very often on the Internet (or, really, anywhere), and I’m a bit nervous at the prospect of engaging in serious discussions with a bunch of people who are several intellectual levels above me. I’ve always been bright, but not spectacularly so, so I hope I won’t get downvoted into oblivion by getting into discussions that are way over my head. (I tend to do that.)
So thank you LessWrong, and I look forward to interacting with everyone here!
Welcome!
Hi, LessWrong.
There isn’t too much to say about me. I’m a Kiwi 16 year old high school student who’s been interested in a lot of the topics discussed here for a long time. I stumbled across HPMoR a few months ago. After reading through that, I came here and now I’ve read through pretty much all of the sequences. I’m definitely getting better at decision making and evaluating information, but I don’t think I’m at the same level as most of you just yet.
I’m going to be busy for the next couple of months with exams, and then a trip to Ecuador, but hopefully when I get back I’ll be able to take part in the community properly. I have a bad habit of being unnecessarily shy, even online, with people I have respect for. I’m going to try to change that this time. It should be easier than it has been in the past, because I have a lot of questions to ask, and sometimes even ideas to add to the conversation.
Cheers.
Welcome!
And as a personal note: great that you’re going to Ecuador, I love that country! I hope you’ll enjoy your trip :)
I got a PhD in engineering, but I am interested in many fields, and I will post about my definition of super liberal arts education and ultra liberal arts education. I have an energy, environmental and global poverty background, but I am continuously searching for the most important areas to do research on and to give charity to. I now think this is existential risks, so I am developing a framework for quantifying this. I am an atheist, but I appreciate the community and intellectual discussion of the religion Unitarian Universalism, where many people are atheists. I’m not sure when I identified myself as a rationalist, but I have had many discussions and given many presentations that have provoked much disagreement from the emotional theists and environmentalists. I have been interested in trans-humanism since reading The Age of Spiritual Machines. I came to felicifia and this site through Alan Dawrst when I was researching cost-effectiveness of reducing animal suffering.
Hi
My name is Ali and I’m 24 years old. I graduated in software engineering and currently I’m in the second year of a Master of Science in Artificial Intelligence. Machine learning is my primary interest; however, I am extremely enthusiastic about other subfields of AI, cognitive science, psychology, physics and biology. I love to learn the assembly-code fragments underlying high-level processes in the universe and to see how complexities are decomposed into simple components by science.
Being born in a religious country, my first steps on the way to rationalism began with questioning religious beliefs in my adolescence. Since then, I have learned to live with probabilities, evidence and explanations.
I found Less Wrong by searching about singularity. I’m sure there is a lot here for me to learn, but I hope someday I’ll be able to contribute.
(English is not my first language, so I apologize for any error in my writing. :D)
Your English is great. If you don’t mind could you talk about the use of the article “the” in your native languages? (Standard Arabic, a dialect and perhaps others?)
I personally feel strongly (although I am maybe the only one) that people should refrain from talking about “the singularity” since the word “singularity” covers several very different and incompatible ideas. I think it often causes confusion the way people sometimes talk about “evidence for the singularity” or “the likelihood of the singularity”. To talk about the idea of “a singularity” is better, much as you said, or sometimes “a technological singularity”.
My native language is Persian (Farsi). There is no definite article in Persian and the specific object/ person/ idea which a noun refers to is determined from the context.
I agree with you about the ambiguity of the word “singularity”. Not only are there different definitions for “singularity” in AI, the term is also applied in other contexts (e.g. economic singularity, gravitational singularity). I think, as you said, talking about “a singularity” is more appropriate.
Welcome!
Your writing is perfect. (Ha! Only just caught myself before I posted “Your writing is prefect.” Oops.)
Hello, I’ve been reading articles on LW for some time, but even though I’ve slowly begun to grasp what you’re teaching, the community in general seemed so far above me in terms of however you want to measure intellectual capacity that I didn’t even feel entitled to post. Might as well start here.
I’m a 21.7-year-old university student from Slovenia, Europe. My interests are primarily maths, physics and computer science. Biological sciences interest me somewhat, but my knowledge in that area is on a layman’s level. For philosophy, politics or social sciences I’ve never cared much. My passing interest in the arts has been described as true random in taste by those with an affiliation to a particular genre, and I have little artistic talent myself. Professionally, I study electrical engineering and teach high-school mathematics to pay for my living costs. My hobbies include Free Software activism (helping in local communities, mostly), programming, backyard astronomy and mountain biking. I’ve been reading a lot of science and science fiction material since I was a child.
This section intentionally left blank.
Although the environment I grew up in isn’t traditionally religious, most people ascribe to what can only be described as irrational beliefs and practices. No organised belief system, either, just little bits of ‘wisdom’ like “only clip your nails on Thursdays, during the day”, “when you sneeze, don’t think about your descendants”, “sleep with your socks under your pillow”, and so on. Even during my early youth, I was frustrated by the fact that there were these actions I was supposed to perform that made no sense, and the only explanation I was given for them was “they bring luck” or “doing it otherwise is bad luck”, with no further explanation for that. At the age of 12, I catalogued most of these practices that I suspected were complete nonsense (I even gave some the benefit of the doubt) and conducted a semi-scientific experiment, doing the precise opposite of what I was supposed to do for a month—this is why I excluded some of the ones that were non-obvious to me at the time, like “don’t talk under a doorway”, because in my model, the more sense a rule made, the stronger the consequences of disobeying it would be. Unsurprisingly, nothing tragic or out of the ordinary happened during my month of covert disobedience—and I considered one month to be the limit of long-term consequences at the time. I considered this conclusive proof that everyone in my family circle suffered from collective insanity. However, to my surprise, they were completely unwilling to be talked out of it, or to even talk about it at all. This frustrated me immensely, and I grew distant from my family over the years. A few months ago, in an internet discussion about irrational beliefs, an LW member directed me to this site for an explanation of some psychological concept—I can’t remember precisely which.
Advocates of this would have much better results if they never said anything. The next time I sneeze, there’s a good chance that I’ll think of descendants, a much higher chance than if I hadn’t read this.
Welcome to Less Wrong!
Hello everyone, I’m a 24-year-old graduate student from Italy. I found this site after reading someone quoting Yudkowsky: “Quantum physics is not “weird”. You are weird.” I’ve been reading this blog the whole past few days. :-)
Welcome!
Benvenuto. :-)
That is delightful, welcome! Did you have a look at the quantum physics sequence?
I’ve read a few of the posts in it, and I’m going to read the rest in the next few days.
Hello, everyone.
Apparently I was supposed to introduce myself here when I joined the site. Looks like I’m about two (?) months late. I’m not really sure when I registered my account, but I just started actually commenting recently.
Anyway, I’m a 21-year-old Biomolecular Engineering/Pre-medicine student living in the backward state that just put Intelligent Design in the state curriculum (and also recently proposed outlawing teachers mentioning homosexuality in the classroom before the 9th grade, among other remarkably boneheaded things). I know a marginal amount of programming—most of what I do is Visual Basic to go along with my Excel spreadsheets or MATLAB work for class, but I really enjoy it. I also know marginal amounts of C++ and PHP, but I’m not entirely sure why I’m telling you this.
I was introduced to Eliezer’s work sometime this spring (April?) by a friend who (without having read it herself) posted HP:MoR on my Facebook wall and said it was right up my alley. I read it in two weeks, and was hungry for more. Since he wrote it under the pen name “LessWrong”, it actually took a bit of digging to find out who actually wrote it, but I gradually uncovered it. (I keep an impeccably documented collection of quotes and wanted the proper attribution. Eliezer has about 6 or 7 quotes in my collection now...) I then started reading all of Eliezer’s OB posts from the beginning, and I’m currently on Leaky Generalizations, taking a rather lengthy hiatus, since I’m busy doing research and studying for the MCAT. And honestly, I wasn’t a huge fan of the evolution sequence, since I already knew most of it (it’s highly related to my major) and it was highly technical.
But I can thank Eliezer for my identification as a transhumanist—I’ve worn many labels in my day, including everything from Christian to Objectivist, but I have never identified with a philosophy as strongly as I do transhumanism. His e-mail regarding Yehuda’s death was one of the most moving things I’ve ever read.
Areas where I seem to disagree with, as I’ve seen it called, the LessWrong Hive Mind:
Cryonics—This is mostly out of ignorance, if anyone can point me to some respectable and unbiased sources of information, I would be greatly obliged. I have a difficult time finding most of what I’ve seen linked to to be credible. Regardless, I’m not too incentivized to research the matter, since I don’t have the means with which to afford it.
Mind stimulating drugs—I don’t take anything psychoactive, including caffeine. (Edit: In clarification, I try to avoid anything psychoactive. I would obviously take a psychoactive drug if it would save my life, significantly reduce pain, etc.) This is for a variety of reasons, primarily because I feel I have an addictive personality, and that the medical studies seem to show that there is little to no effect after long term use. (See another of my favorite blogs)
Hobbies: Collecting quotes, video games, programming, photography, contract bridge, biking, Diplomacy, college football
Favorite post so far: Mysterious Answers to Mysterious Questions
Most thought-provoking post so far: Pascal’s Mugging
Anyway, this was really just a way to wake up because I was dozing off while studying for the MCAT, and I think I’ve said about everything I wanted to say. This quickly became one of my favorite sites, and I count myself lucky to have discovered it.
On cryonics:
For: Alcor’s FAQ
Against: Sadly, not much. Paul “ciphergoth” Crowley collects anti-cryonics writing, and it sucks.
You can almost certainly afford it. Eliezer said he paid less than $200/year. I know how expensive a photography hobby is; you’re not dirt poor. For a potentially life-saving treatment, that’s pretty cheap—people routinely pay more for treatments with worse odds that’ll buy them less than ten years.
Well, it’s more a question of what my parents are willing to pay for, to be honest. And I don’t have any real photography equipment, I just enjoy reading about it, and taking pictures on my point and shoot.
Hey, I think I’ve seen you around the forum.
I feel similarly about psychoactive drugs. I do consume small amounts of caffeine (via chocolate and the occasional caffeinated tea), but I try to avoid it since even those amounts can make me jittery and thus I don’t drink coffee at all. I don’t feel any desire to take recreational drugs, legal or otherwise. I suspect this qualifies as an unusual tendency, so it’s always interesting to meet people who feel similarly. Nevertheless, I have a tendency not to mention this fact spontaneously for fear that people will feel I’m judging them.
Hi, welcome to Less Wrong!
There is respectable science backing up various parts of cryonics. This page has some titles of relevant papers. For more specific information, about which of the following are you most skeptical?
the mind is in the brain
the mind’s information is preserved by vitrification
it will someday be possible to recover this information and run the mind, either in a brain or elsewhere
As for finances, you can get a life insurance policy that’s about as expensive as medical insurance, that will pay out to the cryonics org in the event of your death. This is the way most people sign up, and it’s apparently feasible on a limited budget. I can’t say for myself, because I don’t have the control over my own finances I’d need to sign up.
Please look into cryonics more carefully. It could save your life, and even if you decide it’s not for you, the choice is important enough to make it an informed one.
Hello, My name is Dave Coleman. I was raised Atheist Jewish, and have identified as a rationalist my whole life. Browsing through the sequences, I realized I had failed to recognize some deeply ingrained biases.
I value making myself and others happy. Which others, and how happy, is something I’ve always struggled with. I used to have a framework with Jewish ethics, but I’m realizing that those are only clear in comparison to Christian ethics. Much of what I learned and considered was about how to make the Torah and Talmud relevant to modern, atheistic life.
I’m realizing the strong bias we had against saying “maybe it’s not relevant, since it was written by immature goatherders 3500 years ago who had no knowledge of science or empathy for those outside their tribe.” Admitting that wouldn’t sound wise, so we twist and turn with answers, cluttering what could be a solid system of ethics.
For a while I’ve considered myself a reconstructionist Jew, with the underlying ethos of “do all Jewish traditions by default, but don’t do anything that has a good reason not to be done.” I’ve realized that not polluting my mind with incorrect and biased thought patterns is a good reason to avoid many things.
Another recent change has been an understanding of Judaism in terms of evolutionary fallacies. There is a strong sense in Judaism of being a Chosen People, and of a universal intention that Jews survive as Jews. Assimilation may be the biggest struggle for Jews, bigger even than persecution.
I realized that this is the same fallacy that sees intent in a species’s characteristics. I had been labeling aspects of Judaism that lead to survival as being virtuous themselves—all of the dietary rituals to keep separate from goyim, the fear and guilt of assimilation. Even the love of learning and the drive to succeed has undertones of “thrive, for that is how you will survive the next pogrom.” Preservation of the culture is virtuous, therefore anything that keeps the culture alive is virtuous.
I remember my first Differential Equations class, when we learned that the function that is its own derivative is f(x)=e^x, and the function that is its own second derivative is f(x)=sin(x). There was this eerie confusion as I first thought that those functions were just a possible solution, and then realized that they described the only solutions. I found it very disturbing that I couldn’t describe whether the sine looked as it does by virtue of being its own second derivative, or whether it was its own second derivative by virtue of looking as it does. I still feel slightly uneasy that I can’t assign a causal relationship in one direction or the other.
That’s how I view Judaism now. The characteristics of all species and memes are a solution to the equation of survival. There is no intent or deeper meaning than that, and I think I’ve finally let that go.
Oh, and I got here from Reddit, where someone posted a link to the Paperclip Maximizer.
e^-x is its own second derivative. sin(x) is its own fourth derivative (note relation to e^ix).
And welcome to LW! (he said)
Causality doesn’t have much meaning when applied to mathematics.
Following up to EY’s comment:
e^x is its own second derivative too. There are two functions that are their own second derivative, and four which are their own fourth derivative.
Cool! So what are the other two (out of three) functions that are their own third derivative? What does their graph look like? And does all this have anything to do with Laplace transforms? Does a sufficiently smooth function have a 1.5th derivative?
Yes, welcome to LW.
I think so.
More precisely there is a 2-dimensional parameter space of functions that are their own second derivative, i.e., any function of the form Ae^x+Be^-x for any constants A and B.
Is there a generic form of that for any nth derivative?
Sum over integers k from 1 to n of A(k)*e^(e^(2*i*pi*k/n)*x) is its own nth derivative, for any choice of the constants A(k).
Yes.
Of course, you mean they are the only solutions that satisfy certain initial conditions.
Well, that they are the family of solutions, allowing for various transformations.
(Disclaimer: I haven’t looked at a differential equation in 6 years.)
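To pull the sub-thread together, here is a quick worked summary of the general result being discussed (a sketch only, not taken from any of the commenters above): the solutions of f^(n) = f are exactly the linear combinations of exponentials whose rates are the n-th roots of unity. In LaTeX:

% The characteristic equation of f^{(n)}(x) = f(x) is r^n = 1, whose n roots are
% \omega_k = e^{2\pi i k/n}; the solution space is therefore n-dimensional.
\[
  f(x) = \sum_{k=1}^{n} A_k \, e^{\omega_k x},
  \qquad \omega_k = e^{2\pi i k/n}.
\]
% n = 1: \omega = 1, so f(x) = A e^{x}.
% n = 2: \omega = \pm 1, so f(x) = A e^{x} + B e^{-x}.
% n = 4: \omega = \pm 1, \pm i; regrouping the complex pair gives
%        f(x) = A e^{x} + B e^{-x} + C \sin x + D \cos x,
%        which is why \sin x is its own fourth derivative.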
Hey! Great site—I look forward to reading the archives and new articles.
How did I come to rationalism?
I didn’t realize it for a long time, but my first rational response came at a very young age. Some bully girl at school cornered me with her friends and said “You’re stupid!”. My response: “Nuh-UH!” (pause) “Hey, I get better grades than you! You’re stupid, not me!”
I couldn’t pick out the fallacies (hers and mine, lol) back then, but even then I knew that she was wrong, that I wasn’t stupid just because she said so. I remember being very excited when I found out that my undergrad Philosophy 101 was called “Critical Thinking”; that’s where I was formally introduced to logical fallacies. Logical fallacies have always been to me a way of speaking and thinking truthfully, a way to keep myself honest and to make sure others are being honest with me.
I am new to the online critical thinking movement, which I discovered through Pharyngula, the Skeptic’s Guide to the Universe, and Brian Dunning’s Skeptoid podcast and Here Be Dragons film.
I like the anecdote. Was your response effective?
Nah. I got pushed into the wall and heckled by the same gang for most of the rest of elementary school. :P
Hi, all. My name is Tyler Curtain. I am a theorist with the Department of English and Comparative Literature at UNC Chapel Hill. My training is in computer science (undergrad and grad) and English (grad). I teach graduate and undergraduate courses in theory, as well as courses in science fiction and fantasy. My research interests include philosophy of biology, evolutionary theories of language, linguistics, philosophy of language, and theoretical computer science.
It ain’t your professor’s humanities any more. The world has shifted.
Hi, everyone! I’m Filipe, 21, from Rio de Janeiro, Brazil. I dropped out of Chemical Engineering in the 4th semester and, after one year off, restarted college from scratch in Mathematics. I thought redoing the basic subjects, if I worked hard through them, would be a good idea. It probably would have been, but so far I’ve studied those subjects with the same sloppiness as before, heheh. Now I’m taking one semester off college, due to depression, obsessive thoughts and some suicidal tendencies. Some of this is related to a deconversion from Christianity at the age of 18: I was really devout and lived for the religion. My father is a pastor and my whole family continues to be serious about Christianity, and it’s pretty obvious that I’m the greatest source of suffering in my parents’ lives, as they believe I’m going to end up suffering eternally if I don’t return to my former beliefs. It also relates to having been a sort of child prodigy (many family members, even those who don’t like me a lot, testify that I could read at the age of 2) and now not being able to excel academically, because of those problems and because of akrasia. Speaking of which, I have never read the Sequences even though I’ve been reading this site for some months. I guess this may change when I convince my parents to buy me an e-reader. Sorry for the babbling and the sloppy English.
In this post, your command of English is indistinguishable from a native speaker’s. If you have an estimate of how fluent in typing English you are, I suggest you strengthen it :)
I haven’t learned how to upvote comments yet. I’ll upvote yours when I have.
The little thumbs-up and thumbs-down at the bottom left of each comment. EDIT: how to retract...
Heheh, thanks.
How can an effect like that have only one cause?
Do you mean that their source of suffering = me + misguided beliefs, not just me?
Basically, yes.
I agree, but now I’m not sure how I’d rephrase it.
There’s no law that says reality must be describable in simple English.
I don’t criticize what you wrote! I ask you to not believe a thing merely because the thing is the exact meaning of words you selected, when you selected those imperfectly-fitting words because there were none better.
Ah! I see. Thank you.
Hi All!
Generic Stats: 28 year-old Ohioan; Found LW through HPMoR, and lurked for a while, but finally created a profile after filling out the survey; BA in History. Was halfway through an MS in Human Factors Engineering when I got divorced and couldn’t afford it any more. Don’t plan on going back in the near future, but I did manage to get published during my time in grad school, which was pretty nifty.
I grew up with Easter-and-Christmas Roman Catholicism, though I also got a bit of Judaism from my dad (a Soviet emigrant). I got more heavily into Christianity in my teens, which led to my becoming an atheist when I was around 17.
I am sensitive to feminist concerns about what our culture teaches young girls, as I fell victim to it myself: I had a complete disregard for science and math, despite a very high aptitude for them. It wasn’t until I self-studied my way back through math for my engineering requirements that I actually internalized the belief that I was good at this. The general “Not-Getting-It-ness” of many commenters in regards to gender issues tended to turn me away from LW at first, but there is a lot of good stuff here, besides.
About me personally: I enjoy Joss Whedon, TED talks, and Neil Gaiman. I am devoted to my dog, Gryffindor, who has been with me for 11 years. I work primarily in child care and enjoy imparting nuggets of rationality to my kiddos in ways that don’t conflict with their families’ world views (I have a tendency to work for extremely conservative religious families ranging from Mormons to New Earthers). I am poly, and enjoyed seeing some of that represented here. I have had an insane number of crazy hobbies, ranging from medieval re-creation to bharatanatyam (classical Indian dancing).
If it would not be inconvenient to you, could you unpack what you mean by “Not-Getting-It-ness”? That is, specific examples that you find problematic?
If you would prefer not do this, could you recommend a source that would assist in understanding the method you used to arrive at this result? That is, a source that would allow one to understand the cognitive-algorithm that produces the result “Not-Getting-It”?
Of course! I tend to agree with orthonormal—in writings by men, women are often talked about as the “Other” and not the audience.
EY has written a similar argument. But then in this piece, he makes multiple accusations that women tend to talk about men as “Other” without ever providing any sort of evidence to back it up. He just takes it as some obvious de facto truth that doesn’t even need justification. I personally was put off by this.
Some more good ones to read include this argument, which mentions that you shouldn’t forget the historical context/culture that people are coming into these discussions from, and this piece, which posits that the essence of “taking offense” is a perceived lowering of social status.
I also recommend a quick perusal of the comments therein.
From my personal experience, one of the early things I did upon finding Less Wrong (after some explorations in the Sequences) was to click on the tags of subjects I was interested in (gender, social, etc.). Somehow, the vast majority of the articles’ comment sections ended up devolving into repetitive arguments about PUA. Looking back, this was probably due to my navigating by clicking on links within the article I was already reading, which led me to stay within a subject range that could devolve into PUA discussions, and not so much that PUA is in fact mentioned in the vast majority of posts. My opinions on this (although probably more positive than you would expect of an average female) are a whole different subject which I can expound upon if need be, but I assume that you can guess how a female would feel when she goes to a blog supposedly about rationality, and all the comments are about PUA.
Finally, I would like you to imagine yourself as the only male in a Women’s Studies class. Even if the language always remains respectful and your classmates encourage your participation, I’m sure you can visualize many respectful debates where you would get frustrated that the other members of your class just don’t “Get It”...LW is a similar situation, just with the genders reversed.
I would like to mention that I have in fact been the only female in engineering classes, and would like to point out that any time your race/gender/belief system is in the vast minority, there is bound to be additional pressure there. My views on that subject are best summed up by these comics.
Finally, I would like to comment that in my introduction, I was operating in a social interaction mode (i.e., I was posting in an “Introduce Yourself” thread (social interaction), not a “Let’s Have A Rational Discussion” thread (factual/debating interaction)). Even a polite request (such as the one made) to rationalize my feelings would not be acceptable in most social spheres outside of LW (unless the claim I made was completely outside reality, such as “I was driven away by the intense focus of the LW community on ice cream,” in which case a “Say whaaat?” is a completely acceptable response, lol). Here it is de rigueur. I wouldn’t be surprised if this also tended to drive away many women. (And I would like to clarify that I am not trying to attack you personally at all; I am just using your response as an example of the LW culture.)
I realize this post is quite old, but there’s clearly a norm of conversation I’m not understanding. I don’t want to cross people’s boundaries, but I have a hard time understanding them.
Could you be so kind as to explain to me why one would be offended by that?
Sorry for not responding to this sooner. Thank you for explaining your view. I have only two statements to make.
Apologies for failing to abide by the relevant norms of conversation. (This is not sarcasm. Without body language, it is hard to demonstrate this. However, perhaps I can express myself better with this photograph of a chimpanzee.)
http://www.ebookanoid.com/wp-content/uploads/2010/10/embarrassed-chimp.jpg
If I were to anthropomorphize, the chimp would be thinking the chimp equivalent of “D’oh.”
After the recent romance thread (which was not qualitatively worse than the previous threads), stating that Lesswrong has a “Not-Getting-It-ness” with regards to gender is perhaps something of an understatement.
http://graphics8.nytimes.com/images/2007/08/27/science/chimp.reach533.jpg
If I were to anthropomorphize this chimp, the chimp would be thinking the chimp equivalent of “Really, folks? Really?”
PS- I really wish there were a “Preview” button, or a way to edit posts in Not-A-Tiny-Text-Box.
I’ll be doing some editing now, but it will only be clarity, not content. :)
Chrome lets you edit the size of its textboxes by dragging the lower right corner. Don’t know if the same goes for any other browsers.
Oh, wow! That’s super-helpful! Thanks!
You can do it in Firefox, but I didn’t realize this until you pointed it out just now.
This seems rather unnecessary, but I’m posting here so that other people have a reference to my intro to rationality, if they’re so inclined to read about it.
At the time of this posting I’m a 19 year-old male college student of middle class origins living in Vancouver, Canada, if that makes a difference. I was raised in a nonreligious home by politically centrist and humanist parents.
My friends were a bit nerdy and considered themselves rational in an irrational world, sane in an insane world, etc., and they were very interested in a film called “Zeitgeist: Addendum”, which confirmed their worldview at the time. I too watched the film, and we were in awe of the Venus Project. http://www.thevenusproject.com http://zeitgeistmovie.com/
The Venus Project sees the bulk of humanity’s problems as the result of faulty human psychology being propagated by social stratification in a money economy. The creators of the Venus Project believe that by creating material abundance through the application of technology, the law of supply and demand can be superseded and hence money no longer needs to exist. In a global society with no social stratification, a culture based upon values derived through use of the scientific method could then be propagated to prevent all future global-scale conflicts. I would describe it as post-scarcity technocratic Marxism/anarcho-communism.
We got involved in an online community built around the Venus Project, with aims to participate in an intentional community of some sort. Originally we thought the Zeitgeist Movement would be about reaching conclusions about how civilizations could reduce existential risk, and then using some form of mass media to get this message out. Ultimately, we found that the organization was too focused on inert political activism, and that the regional group was very autocratic. Around the same time, a friend of mine interested in Singularitarianism and transhumanism discovered LessWrong and got the rest of us interested. We no longer participate in any formal or public organizations, seeing them as mostly ineffective, instead just being a group of friends interested in the problematic lack of rationality in societies.
In other words, we found organizations that had sound epistemic rationality, but without instrumental rationality they became stagnant. Figuring out how to effectively communicate rationality to others is as important a goal as learning about it myself.
We have switched to a more individualist and modest focus: just trying to understand the world and improve our own lives, moving onto something bigger in the long-run. We are doing this with much inspiration and influence from LessWrong.
In the near future I will read HP:MoR and the sequences and move on from there.
Welcome!
Of all the people other than you that there are, this reference will be most important to eggman_2013.
Hey, I’m a 20-year-old medical student. I’ve always had an almost compulsive need to know the “truth”. In retrospect I have been moving towards LW for a long time. First I came in contact with Aubrey de Grey’s campaign against aging, and decided as a 17-year-old that I wanted to dedicate my life to that cause (hopefully the problem gets solved before I die so I don’t have to spend my whole life battling aging). From that I moved on to other transhumanist ideas but got a bit skeptical about Ray Kurzweil’s scenario, began thinking about what brain-uploading meant + morality + meaning of life + free will --> got depressed, read Dennett → got a lot better, saw a few videos of Eliezer Yudkowsky and thought “he seems like a super-sane person, wonder if he stands on solid ground” → found Less Wrong, prioritized becoming a more rational person.
I'm still a bit skeptical about the plausibility of the singularity happening any time soon (<50 years), so right now I’m doing stem cell (hES, iPS) research, when my studies allow. But I'm really enjoying LW (as well as finding it really useful).
Cheers! (And sorry about the “my life story”)
Welcome to Less Wrong, Wix! Kudos to you for working in anti-aging research.
-
You seem really good, you haven’t made any errors that I’ve noticed.
Welcome!
I’m Tuvia Dulin, and I ended up on these forums after reading Harry Potter fanfiction. I suspect that this is a common story among the membership.
I’ve tried to be rational ever since I learned what rationality was, but it wasn’t until I suffered a psychotic episode that I learned what the true consequences of irrationality were. That was many years ago, and I have since completely recovered, but in some ways I’m glad for the experience; it taught me that without rationality, you have nothing dependable or sane.
“Rationality” is defined a bit differently here than in other places, and there is good justification for this. It makes me suspicious any time I hear someone discuss the meaning of a word, as it is likely they are invalidly trying to argue by definition, but here “rationality” has a meaning close to its meaning outside LW, closer than any other word’s would be, and sufficiently close that it is better to reuse the word than to coin a new one.
I don’t have time to read all of those posts right this second (though I will over the next couple of days), but if you could just briefly explain how I’m misusing the word, that would be cool.
You’re not misusing the word.
After the local use of “rationality” was established, a second word actually gained a meaning that is nearly as close to “rationality” (as used here) as “rationality” (as used elsewhere) is.
That word is “winning”, used in the broadest and most general way, as popularized by Charlie Sheen. This doesn’t imply an endorsement of any particular thing related to him, but the term “winning” did approach what is meant here by “rationality” around New Year’s, or whenever that media flurry occurred.
Even so, “winning” might be a bit farther from LW “rationality” than standard “rationality”.
Others might disagree with my assessment.
Welcome to Less Wrong! Your name totally rocks. Is it your legal name?
Oy, tell me about it! (Actually, do tell me about it, if you want to. I’m interested in developing systematic techniques to cope with mental illness. Or at least in building scientifically sound bases for kvetching about it.)
Yes, Tuvia Dulin is my born and legal name. When I need a pseudonym, I’m known as Blake Alexander Hannon.
“Mental illness” is a very broad category, and I’m not sure if my way of dealing with what happened to me would work for other disorders as well. I’ll talk about this at length when I have time; for now, I’m afraid I’ve got to run.
Five quick questions, five fast answers. Fast and perhaps somewhat rambling.
I’m an Australian, a few years shy of thirty, who has generally done things for his own reasons rather than simply going along with everyone else. After secondary school I got a job or two, became heavily involved in a fringe political group for a few years and only then decided to go on to university. Bachelor of Science (Chemistry) - hopefully the last BS from the education system I’ll put up with. I’ve just very recently dropped out of Honours and moved the 1000 km home to Melbourne, which was the most difficult decision I think I’ve ever faced. Not being easy, it stretched my relevant skills to their limit, and in the end it was quite nice to learn that I can make choices as a rational adult human. Or at least as some approximation thereof.
Every now and then I attempt to express my personal values in a system like those used in the Ultima games. Most recently, my three principles of virtue were Curiosity, Truthfulness, and Playfulness. Curiosity I have valued for as long as I can remember—my primary school motto included “live to learn” which I took to heart. Honesty has been an absolute for me since a particular incident in my late teens. Play I’ve valued especially since reading Schiller but creativity in general I’ve valued much longer. I find Internet “memes” and other banal forms of conformity an affront to creativity; people should find and use their own words.
I’ve never really identified as a rationalist per se, but as I say I’ve always tried to have my own reasons for why I do things, or why I think things. Tried with varying levels of success. Even at the age of ten I thought that knowledge was power, and that mathematics in particular should be seen as equipping my mind with tools to better solve problems. The only book to really open my mind or change the way I question things was Dune, first read in early high school, which raised my standards for self-control and for long-term planning. To say the least. The fringe political outfit I joined made a pretence of rationality, on reflection, which pretence I for one took quite seriously. And then when I was looking to do honours this year, one of the possible projects was “something something Bayesian something”, which was enough prompting to pick up a book on the subject and read it. I picked up Jaynes’ textbook, and people still look at me funny when I say I read a statistics textbook for fun and loved every minute of it. One of those “yes! this is the way that things work and I’ve never seen it put in words this well before” books. Or put into mathematics as well, perhaps.
Happening across Less Wrong after all that just seemed fitting. Turns out there are people out there with similar values to me—I even know some of them. I read some posts by HughRistik (a friend was engaged in an exchange with him on some blog, probably ‘Alas!’) and I was quite impressed with the way he argued, to put it mildly. Found a link to his comments here, bookmarked it as a place to check out one day. Eventually came back to do so, recognised a couple of people from the xkcd forums (Hi Vaniver!), read the Harry Potter fanfic (and was mightily impressed), read the Twilight fanfic and was even more impressed (I took great delight for a couple of days in telling people that I’d found the perfect expression of something I’d been trying to say for years, and that it was “something Bella said in a Twilight fan fiction”.)
I’ve started on reading the sequences (just moved interstate and it was easier to bring ebooks than physical ones) but I still would’ve put off making this intro-thread type post. But I’m planning to attend the local meet-up on Friday, and that makes for a useful deadline.
Eee, what was it? :D
The last sentence of these three: “My reasons for preferring to dissuade him [Mike] were entirely about myself. I hadn’t yet begun to scratch the surface of what I wanted out of dating or romance or anything in that department. And it seemed like a uniquely hazardous thing to uninformedly test by experiment, both for myself and for anyone else involved. ”
A concise explanation of my feelings towards courtship and such things.
Hopefully Alicorn got her ego tickled a bit :) Personally, I prefer this one:
“I accept your apology,” I said. I’d gotten into the habit of saying that instead of “it’s okay” when I was fourteen, having noticed that I often wanted to accept apologies for things that were not really okay.
I wish I had figured that one out by myself at any age.
Welcome to LessWrong, and I look forward to seeing you this Friday!
Hello, LW-ers.
I’m not exactly new—I’ve been lurking for a long time, soaking up all the glorious sanity from a few sequences and a lot of individual essays. And I’ve made a few comments. Still, I’d like to introduce myself properly. : ) (The main reason for this is that I think I need to lighten up and stop thinking of this site as a Sacred Order of Pedestaled Supergeniuses where my humble intellect doesn’t belong, in order to grow.)
Insofar as anyone wants to know, I’m a 24 year old fellow; I got a Master’s degree in linguistics last year and now I spend my days as a humble translator. Somehow I fare better with intellectual pursuits if they’re a hobby rather than how I make a living.
I think I’m a rationalist for one okay reason and one rather unflattering one.
The okay reason is that I’ve lived with a psychological diagnosis since I was… maybe 8 or so, so from very early on I’ve been quite aware of the fact that my brain is broken and needs fixing. I think I made more thinking errors than other people, but also, importantly, I made unusual thinking errors that stood out: my gut instincts clearly led me in the wrong direction a lot, and my feelings were often noticeably fickle and inconsistent. Rationalism has always helped me cope with the confusion caused by that sort of thing.
The rather unflattering reason is that it makes me feel smart. I’m not proud of it, but I’m not going to lie. I have a long-standing horrible habit of trying to win debates to aid my self-esteem. Entering controversial discussions and melodramatically grand-standing in them is a guilty pleasure I’m still working on cutting the heck out. (Not to worry, though—I wouldn’t drag down the wonderful level of this place with that sort of silliness.)
I tend to ramble a bit in my writing, and I can only hope to approximate the level of clarity you’ll be used to. But I do my best to improve. : )
Hello all!
I’m a twenty year old college student studying physics. My introduction to LessWrong has most likely been lost to the ravages of time (although there’s this nagging feeling I was linked here by a random forum post on GameFAQs). That was about a year, year and a half ago. I’ve read about halfway through the sequences via the haphazard method of “Wow that’s interesting I guess I’ll drop the next hour or so reading it.” While I realize that finishing the sequences is highly recommended, I haven’t seen a significant amount of large-inferential-distance-statements-oh-geez-what-is-going-on here type posts so I think I’ll be fine despite my incompleteness.
As to the more pertinent question of my road to rationality: well, I was raised in China, where religion was nearly nonexistent, and my first exposure to the Bible was a picture book which I treated more or less like Greek or Egyptian myths (~8 years old). This led to a natural interest in the New Atheism movement, which articulated my unspoken problems with religion and exposed me to the skeptics community as well (15-17 years old). However, a small nag at the back of my mind suggested that there was something I was doing wrong if I was pursuing truth, despite the apparent correctness of the atheist position!
In comes LessWrong (~19 years old). In some cases it merely repeated things that I had thought and agreed with (but never acted upon! so basically not anything I valued); in others it opened up entirely new avenues of thought (mostly Newcomb-type problems and decision theories). A post that Yvain made a while back about X-rationality, which downplayed the clarity of thought afforded by reading LessWrong, was in complete opposition to my own experience. I felt something close to constant… joy, I suppose, as I observed previously confusing and opaque subjects become understandable and transparent. Where’s Waldo with model-fitting-induced utilons, if you will.
The catalyst for joining the community, though, was the meetup here in the San Diego area. While it would be inaccurate to say that I’m unsatisfied with my life, I feel as if a lot of my satisfaction arises out of complacency and adherence to the status quo rather than from accomplishing any goals (a poor man’s wirehead indeed!). Going to a meetup with a lot of smart, engaged and most of all unconfused people might clear up my confusion about my life goals, but the real goal here is to meet new people.
Perhaps I’ll just use this account as a karma, PM and meetup bot, I do have a busy schedule. Or perhaps I will try to contribute to the community. Either way, the plan is to have fun, take names and fall off the shoulders of giants repeatedly.
Note, Micaiah is not my real first name, it arose out of a conversation where a friend compared me to the Biblical prophet, because I frequently make unpleasant predictions which turn out to be true anyway.
Greetings!
I drafted what is apparently too long an introduction to fit into a comment. Rather than try to work out how to rewrite the whole thing to fit into some unknown maximum length, I’ll break it up into parts.
PART 1:
Greetings!
I’ve been lurking since early 2010. I’ll finally take the plunge and actually engage with the community here.
I’m a Ph.D. student in math education. It’s a terribly named field, it would seem; everyone seems to think at first that this means I’m training to either (a) teach math or (b) prepare future math teachers. It’s actually better thought of as a subfield of psychology that focuses on mathematical cognition as well as on teaching and learning.
I grew up in a transhumanist household. My father signed us all up for cryonics when I was about five years old, I think it was. At the time I was just starting to realize that if death is inevitable for others, then that might mean that death is inevitable for me. I remember going up to my mother and father in the kitchen and asking, “Am I going to die someday?” They looked at me and said, “No, we’re signing all of us up for cryonics. That means if we die, they’ll just bring us back.” I remember being so excited about signing the life insurance policy that I misspelled my name. On the way out of the insurance agent’s office I asked “Does this mean I’m immortal now?” I literally leaped and squealed with excitement when they said yes.
In retrospect, I can recognize that as a tremendously defining point in my psychological development. Most people I’ve known who have signed up for cryonics know the feeling of an immense weight they didn’t even know about being lifted once everything is finalized. Although I know better than to trust my memory, I do recall learning over the course of a few days before that event how to “wear” that weight before I finally asked my family about it. I took my being signed up for cryonics as blanket permission to cast that weight off by just assuming that I would live forever. I do realize now that they were oversimplifying things, but I think it still had a very powerful effect on the basic makeup of my psyche: whereas everyone else seems to have to learn how to recognize and let go of the burden of mortality, it has never been meaningfully real to me.
Unfortunately, I can see now how that gave me permission to be complacent in a lot of important areas through most of my life. If you know that you and your closest loved ones are immortal and that anyone else can become immortal if they so choose, there’s no sense of urgency to do what you can to end death. Instead, the only real danger as far as I could tell was deathism, since that mental poison would permanently and needlessly have the net effect of making people commit suicide. But even then, my concern wasn’t that deathism might halt immortalist efforts; my concern had always been that individuals I care about might needlessly choose to die because of this ubiquitous mental disease. That was always a sad possibility, but on a core emotional level I felt confident that mortality would be obliterated in my lifetime and that the people I most cared about—mainly my family—would be there with me one way or another. So no real problems, right?
When you think this way, it makes some rationalizations way too easy. I missed a lot of opportunities in my teens because I had hardly any courage to do what others thought might be a bad idea, or even much self-awareness to decide on a sense of purpose (although I don’t think I knew enough to have any idea how to define a purpose without baseless recursion). So instead of saying something like:
...I would say something more like this:
The problem was that until relatively recently, I didn’t apply the metacognitive effort needed to recognize what this necessarily must do to my life as a general algorithm. It actively discourages ever reflecting carefully even on major life decisions. And that’s ignoring the issue that immortality isn’t guaranteed even to transhuman cryonicists.
That said, I’m immensely grateful I never “caught” the deep terror of mortality. The basic emotional sense of okayness wasn’t the problem at all; the problem was that it made too many stupid things too easy for me to rationalize, and I simply hadn’t been raised with the right kind of metacognition to counter that stupidity. From what I’ve been able to learn and observe, it seems that metacognition is much easier to teach than is a basic emotional sense that the future will be okay.
I can say, however, that if it hadn’t been for Eliezer and Less Wrong, I probably would still be making the same stupid mistake.
(Continued...)
PART 2 (part 1 here):
I had the pleasure of meeting Eliezer in January 2010 at a conference for young cryonicists. At the time I thought he was just a really sharp Enneagram type Five who had a lot of clever arguments for a materialist worldview. Well, I guess I still think that’s true in a way! But at the time I didn’t put much stock in materialism for a few different reasons:
I’ve had a number of experiences that most self-proclaimed skeptics insist are a priori impossible and that therefore I must be either lying or deluded. I could pinpoint some phenomena I was probably deluded about, and I suspect there are still some, but I’ve had some experiences that usually get classified as “paranormal” that are just way too specific, unusual, and verified to be chance, as best as I can tell. And I’m under the impression that these effects are pretty well-known and scientifically well-verified, even if I have no clue how to reconcile them with the laws of physics. But I’ve found that arguing with most die-hard materialists about these things is about as fruitful as trying to converse with creationists about biology. They know they’re right, and as far as they’re concerned, one either agrees with them or is just stupid/deluded/foolish/thinking wishfully/worthless/bad. I don’t have much patience for conversation with people who are more interested in proving that I’m wrong than they are in discovering the truth.
It seemed to me that the hard problem of consciousness probably came from assuming materialism. Since it’s such a confusing problem and I was pretty sure that we can be more confident that we experience than that experience is a result of something more basic, it seemed to me sensible to consider that consciousness might be the foundation from which the laws of physics emerge. (Yes, I’m aware that this sounds very much like a common confusion about quantum mechanics, but what I was thinking at the time was more basic than that. I was distinguishing between consciousness and the conscious mind. I’m not so sure anymore that this makes sense, though, since the mind is responsible for structuring experience, and I’m not sure what consciousness without an object (i.e. being conscious without being conscious of something) would mean.) But even if consciousness weren’t the foundation, I was pretty sure at the time that materialism didn’t have even an in-principle plausible approach to the hard problem. At the time, that seemed like a pretty basic issue since, without exception, all of our evidence that materialism is consistent comes from conscious experience (or at least I lack the imagination to know how we could possibly have evidence we use and know that we can trust but that we aren’t aware of!).
But I’ve always tried to cultivate a willingness to be wrong even if I haven’t always been as good at that as I would like. So when it became clear to me that Eliezer scoffed at the idea that the hard problem of consciousness might be fundamentally different from other scientific challenges, I asked him if he’d be willing to explain to me what his take was on the matter. He pointed me toward his zombie sequence, since he understandably didn’t want to take the time to explain something he had already put effort into writing down.
About a month later, I finally read that sequence. That had the interesting effect of undermining a lot of mystical thinking that had taken refuge behind the hard problem of consciousness, so I was really intrigued to read what else Eliezer had put together here. For reasons that would take quite a while for me to explain, I quickly became really hesitant to read more than a small handful of LW articles at a time, and I wasn’t sure I really wanted to become part of the community here. So I just sort of watched from the sidelines for a long time, occasionally seeing something about “Friendly AI” and “existential risk” and other similar snippets.
So I eventually started looking into those things.
I learned that there’s a great deal of hunger for help in these areas.
And I realized that I had been an utter fool.
I have sat complacently on the sidelines entirely too long. It has become clear to me that we need less preparation and more action. So I am now stepping up to take action.
I’m here to do what I can henceforth for the future. I’m starting by plugging into the community here and continuing to refine my rationality to what extent I can, in the aim of solving what heady problems I can. (One that’s still close to my heart is finding effective ways of eradicating deathism. I’ve actually encountered some surprisingly promising directions on this.) Once I’ve had a chance to attend at least one of the meetups (as I had to abandon the one after Anna’s talk for personal reasons), I hope to encourage some regular meetups in the San Diego area (at least as long as I don’t drive everyone here nuts!). Beyond that, I’ll have to see where this goes; I’m not sure any of what I’ve just named is the most strategic boon I can offer, but it’s a start and it seems very likely to quickly steer me in the best direction.
Of course, suggestions are welcome. I’m interested in doing what I can to eradicate the horror of death and exalt a wonderful future, and if that means I need to change course drastically, so be it.
I look forward to working with all of you.
Thank you for reading!
First a suggestion: I think it would make sense to change the topic to “Welcome to Less Wrong! (2010&2011)”. I was confused whether I should post here or on the original “Welcome to Less Wrong!”
Then to the actual topic of my comment:
Hello!
I’ve been lurking a couple of months now, the rationality mini camp finally activated me to do something instead of just passively soaking up information. I wasn’t selected, but I definitely do not regret applying for the camp.
Some info about myself: I grew up on the south coast of Finland and went to a Swedish-language school. Consequently I’m bilingual (Fin & Swe) and have also acquired a strong interest in languages—besides the aforementioned I speak English, German, Russian and French. My other hobbies are skiing (both downhill and cross-country), travelling and car repair.
LW was the biggest reason why I bought myself a Kindle—namely, I wanted to read the sequences during commuting, but carrying the laptop around was too tiresome. Thanks to jb55 for making ebook versions of them! I’ve made my way through around 80% of the sequences, although I’ll have to reread at least the quantum mechanics one with pen and paper at hand.
My location is in France, 2 km from the Swiss city of Basel. I’m currently doing an exchange year at ETH Zurich, but the apartment prices in Zurich, together with the fact that my fiancee studies in Basel, led us to choose France instead. My main subject is operations research, in a nutshell statistics/mathematics flavoured with lots of simulation. I’m very interested in decision analysis and decision theory. The information about cognitive biases on LW has exceeded what I learned in the university course about decision analysis; I don’t know whether this tells more about the course or about LW… Furthermore, the interest in self-development and striving (Tsuyoku Naritai!) is something I share with the community.
Looking forward to summer meetups in Southern Finland! (Might organize one myself once I’ve relocated to the area)
Welcome to LessWrong!
I’m not new here, but I never introduced myself and have recently started participating more; it makes sense to say a few words.
Hi. My username is my full name. I’m 34 years old, male, and live in Tel-Aviv, Israel with my wife and two year old daughter. I’ve lived the first half of my life so far in the USSR, the second half in Israel; consequently my native language is Russian, and I also speak Hebrew. I’m a secular Jew.
I work as a software engineer in a large corporation, doing interesting things. I try to maintain and extend some knowledge of math and physics (I’ve studied math in graduate school in the past, but didn’t finish the degree). I read books, mainly fiction in English and Russian. I have insatiable curiosity about countless academic fields and disciplines, in hard sciences, social sciences and humanities, and have acquired much shallow knowledge in many of them, very little deep knowledge in any. I have some online presence in English, mostly due to open-source work I did in the past (not much recently), but my primary online presence is through my blog, which is written in Russian.
I’ve been reading OB/LW since late 2007, mainly lurking, with a few comments. Stopped reading save a rare peek in summer 2009, and came back this month. Consequently I read most of the sequences as they were published, but I missed or skipped a fair amount and plan at some point to re-read many of them. Among the topics popular in this community, I’m more interested in Bayesian probability/statistics, epistemology, philosophy of science, rationality, cognitive biases, math/physics. Less interested in FAI, the Singularity, PUA, status, and drama-heavy topics.
I probably self-identify more as a skeptic than as a rationalist, but I don’t feel strongly about that. My contributions so far have usually had a contrarian bent, but I don’t aim to be a gadfly, I just tend to be more excited by things I disagree with. Will try to balance this to some degree.
I now wish I knew Russian!
Hey everybody, I know I came across this late, but lately I’ve been becoming a more avid reader of the site, and thought I’d follow the post’s suggestion and give my introduction.
I came here from Overcoming Bias (via various econoblogs), although that doesn’t really mark the beginning of my push into becoming a rationalist. The big turning point for me was coming across an NIH article that was linked to by econlog or marginalrevolution. Both of the two introduced me to Bayes’ Theorem, and how it could explain how so many publications in the medical literature could be statistically significant, yet incorrect (I think the paper estimated nearly half).
I had been struggling with social anxiety and had really screwed things up with a girl I really liked because of a few fundamental misunderstandings. In a clearer state of mind I was able to realize that I had an entirely wrong perception of what people thought of me and this girl in particular. But I couldn’t explain why I would have such a skewed view of my world until I learned how to apply Bayes’ theorem to how we evaluate our decisions.
Starting from the simple introduction to Bayes’ theorem, where one is asked to estimate the probability that someone has a disease based on a single diagnostic test, I learned how the false positives completely warped what the probability would be. I began to think about how many ‘false positives’ I might be clinging to in my life, and how I could be getting so damn many of them. If I kept looking for any probable sign that someone didn’t like me, especially while ignoring signs that I’m doing fine, I was gonna get a crap load of false positives, but would have relatively good reasons to believe them. I also began to realize how many coincidences there are in the world, and how many wrong theories these coincidences could validate if I kept looking in the wrong places and asking the wrong questions.
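For anyone who hasn’t seen that textbook example, here is a minimal sketch of the arithmetic in Python, with made-up numbers (a 1% base rate, a test with 95% sensitivity and a 5% false-positive rate; these figures are purely illustrative assumptions, not numbers from the paper I mentioned):

    # Worked example of the base-rate effect described above, with assumed numbers.
    prevalence = 0.01           # P(disease): 1% of people actually have it
    sensitivity = 0.95          # P(positive | disease)
    false_positive_rate = 0.05  # P(positive | no disease)

    # Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    p_disease_given_positive = sensitivity * prevalence / p_positive

    print(round(p_disease_given_positive, 3))  # ~0.161

Even a seemingly accurate test gives only about a 16% chance of actually having the disease after one positive result, because the healthy majority generates most of the positives; the same logic applies to reading too much into ambiguous social “evidence” that someone dislikes you.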
All of this in turn got me interested in theory of mind and cognitive biases, specifically thinking about how we unconsciously construct priors in our heads, how we are led into asking the questions we ask, and how many different ways this can go wrong. I set out on a process to make that process go less wrong, and now I am here on this site introducing myself.
Hi, my name is Tyler and I’ve been lurking LW for the last few months. I’m a full-time university student in California. Like others, I’ve refrained from posting because I feel I’m not yet quite up to date on many of the issues discussed here, though I’d considered many of them before ever finding LW.
I found LW through Yudkowski.net, which I found through one of Eli’s more technical articles that popped up in a Google search when I was first becoming interested in Artificial Intelligence. Since then, I’ve developed an interest in the big R.
As I read the sequences (I’m nearly through, and I’ve been at it a while now) I am often pleasantly surprised when Eli brings up a topic that I’d previously considered, and even more so when he explains it. Overall, the zeitgeist of the LW community really appeals to me. I’m often frustrated listening to people I know say things that would get torn apart here on LW. I guess I’m just glad to know that there’s a community here from which I can learn tremendously, and to which I can hopefully contribute.
I’m working on filling in the holes right now, and the old adage “the more you know, the more you know you don’t know” is really having its way with me right now.
I think I’m missing something. Is this common jargon?
Rationality?
Thanks. I can see why nobody felt the need to respond to that one.
Well, I didn’t find it obvious either. (Or I wouldn’t have said anything. Not big on sarcasm.)
I stumbled over here from Scott Aaronson’s blog, which was recommended by a friend. Actually, LessWrong was also recommended, but unfortunately it took a while for me to make it over here.
As far as my descent into rationality goes, I suppose I’ve always been curious and skeptical, but I never really gave much direction to my curiosity or my skepticism until the age of 17.
I always had intellectual interests. In 3rd and 4th grade I taught myself algebra. I ceased to pursue mathematics not too long after that due to the disappointment I felt towards the public school system’s treatment of mathematics.
After my foray into mathematics, I took a very strong interest in cosmology and astronomy. I still remember being 11 or 12 and first coming to realize that we are composed of highly organized cosmic dust. That was a powerful image to me at that time.
At this point in time I distinctly remember my father returning to the church after his mother and sister had passed away. The first church we went to was supposedly moderate. I was made to attend Sunday school there. I did not fare so well in Sunday school. During the second session I attended, the subject of evolution was brought up. Now, I had a fascination with prehistoric animals and had several books that explained evolution at a basic level accessible to young adults, so when the teacher challenged evolution and told me that the concept of God was not compatible with it, I told her that she must be wrong about God (this was really an appeal to authority, since I considered anyone who had written a book to be more authoritative than anyone who hadn’t). Well, she didn’t take that well and sent me to stand in the corner. My parents didn’t take well to that (both of them being fairly rational and open to science, and my mom not being religious at all, but rather trying to support my dad). And so was born my first religion-science conflict!
Once I entered high school, my artistic interests came to the foreground and pushed science and mathematics into the background. I developed my skill as a visual artist and as a guitarist. I studied music theory and color theory and played. It was enjoyable work and I took it to the point of obsession. My guitar playing especially, which I would practice for hours every night.
Eventually I decided that I wasn’t happy with making art; I wanted to explore something I felt was much deeper and more meaningful. Thus began a period of self-reflection and a search for personal meaning. I decided that I wanted to explore my childhood interests, and so I began to study calculus and mechanics during my senior year of high school. It was also at that point that I read Crime and Punishment, Steppenwolf, The Stranger and Beyond Good and Evil.
Soon I found my way to Kant and Russell. They in turn led me to Frege, Wittgenstein and Quine. My desire to understand myself soon extended to a desire to understand the world around me. Shortly after turning 18, I read Quine’s Methods of Logic and was surprised by how natural it felt to me (up until the undecidability part, which threw me for a loop at the time).
By that time, I had begun my major in mathematics. I took every (read every seemingly interesting) course I could to get as broad a view as I could as quickly as possible. This past year (my junior year of college) I took my first few graduate courses. The first was theory of computation. I had no prior experience with the material, everything was new. It was a somewhat transformative experience and I have to say that it was probably the most enjoyable class I’ve ever taken. I also took a graduate sequence in mathematical logic and learned the famed incompleteness theorems.
I am interested in fighting ignorance in myself and in others, and I find that I like the premise of this blog. My current interests include Bayesian probability (thanks to this site and Eliezer, and to some extent the friend who recommended it to me as well), the game of Go, physics (I am woefully ignorant of real physics, and have decided that I need to read up on it), mathematical logic, Fluid Concepts & Creative Analogies (Hofstadter), cognitive science, music, history and programming. It is not hard to get me interested in something, so the list is much more extensive than that and is highly subject to change.
Well, I feel like I’ve rambled up a storm here.
Hi! I posted on the other thread that I was around, but I guess I should introduce myself.
I guess the weirdest thing about me (relative to the community) is my age—I’m still in high school and have been lurking LW since its creation and OB before that… I’m in the Montgomery Blair Magnet program, which has pretty thoroughly taught me that I’m by no means especially smart.
I got interested in the whole rationality thing after reading some of the articles that were tangentially related to the more philosophical articles that I was interested in* and found on Hacker News. The metaethics sequence seemed much less forced than a lot of the other considerations of morality that I had heard (mostly from a Christian background), which only piqued my interest further.
Short note: Harry Potter and the Methods of Rationality is pretty much the best introduction to rationalist topics for people my age that I’ve ever seen. I recommended it to a few friends, one of whom started reading it, lurking LW, and convincing others to read as well.
The article most tangibly helpful in my life was http://lesswrong.com/lw/i0/are_your_enemies_innately_evil , mainly in that it helped me realize that everyone seems reasonable to themselves and that you don’t get anywhere when you argue as if they’re totally wrong. It’s helped a lot in resolving interpersonal issues, and is probably one of the major factors in my being elected President of my school’s FIRST robotics team.
*My interest in philosophy started about 3 years ago, mostly as a result of my freshman physics class and reading Godel Escher Bach.
Welcome!
“Are your enemies innately evil” is one of my all-time favourite posts too. I now think politics is the single biggest source of rationality failures out there (way bigger than religion).
You can find loads of otherwise really good skeptics out there who have a political view (which is fine) that they seem to think is as perfect, scientific and objective as Maxwell’s equations (not fine). Politics is epistemically dangerous.
I am new to this site. I am a former Mortgage and Derivatives Trader on Wall Street. I am one of the few ex-Wall Streeters who has experienced a crisis of conscience. I am an empirical skeptic who is cynical by nature, but I have only recently started to sit down and try to figure out why people act stupidly and irrationally. Nassim Taleb, author of The Black Swan & Fooled By Randomness, is one of my favorite authors, and I truly believe that after all of my years trading, it all comes down to random luck, not any type of skill.
Welcome, great intro!
Do you think there are any types of traders who are closer to the mark? It’s been a while since I read Black Swan, but I seem to recall Taleb was a “quant,” and that he made a good deal of money thereby (NB: I have near zero knowledge of finance of any sort).
jaymani,
Have you seen:
http://www.pbs.org/wgbh/pages/frontline/creditcards/ ?
Luck may be a small part, but I think cognition is the better part.
Sorry if this is too bold; I’m new at this as well.
Oh, hi. I’m an autodidact programmer in my early 20s working for a small company. A lot of programmers tend to be hacker sorts who like making things, but I mostly only care about achieving a deeper and more intuitive understanding of the world. I am interested in a lot of things, but I tend to concentrate alternately on math, CS, linguistics, philosophy, history, and literature.
I don’t identify as a rationalist or make very rational decisions, but I share a lot of intellectual interests with the community, and there aren’t really any other public spots on the web where smart people are discussing a variety of topics without a ton of noise and bullshit.
I don’t have enough background in some of the jargon and shared historical discussion here to contribute to many of the more topical discussions, but hopefully as I catch up on the archives I’ll be able to comment more often.
Hacker News is pretty nice:
http://news.ycombinator.com/
Does anyone have more recommendations?
My impression is that Hacker News is above average, but still a noticeable notch below LW. Same goes for sites like the Richard Dawkins and JREF forums (perhaps two notches in those cases), and the comments sections of blogs of various academics (such as Overcoming Bias).
Skeptical sites are good, but not great, because being a good skeptic is different from being a good rational thinker. You can probably get by as a skeptic knowing only “extraordinary claims require extraordinary evidence” and the basics of the scientific method.
I agree with this, and in particular, although there are generally smart people on Hacker News, there are a ton of people who are interested in talking about business and startups 24/7, a topic I find extremely boring.
I’m a big fan of MetaFilter (http://www.metafilter.com/). The commenters there are charming and often pretty smart, but the spirit of discussion is usually somewhat less serious.
The key thing here separating Hacker News from LW is the “variety of topics”. While HN is officially centered around startup culture (which like cata, I have no particular interest in), the community is happy to link to and discuss just about anything of intellectual interest. Today there’s a link about punctuation marks for indicating irony.
The level of discourse might not be quite up to LW, but the subject matter is a lot more inclusive.
I find it strange that you would say that. (And I’ve read a lot of Hacker News.)
Given an arbitrary aspect of reality (e.g., an aspect of human life or of the world around us) I think you are just as likely to be able to start a discussion of it here as on Hacker News if you can meet LW’s higher standard for rationality.
In other words, I think Hacker News is simply more tolerant of worthless ways of discussing topics, not tolerant of more topics.
(Of course, Hacker News is more worthwhile than most places on the web.)
I just found it, and I’ll probably be disappointed, but http://blogs.law.harvard.edu/philg/ looks pretty good so far.
I’ve read PG for a year or three now, and he’s very one-note—railing against government waste and repression of business, and he’s not the most rigorous or deep libertarian thinker I’ve ever read. I keep reading because every so often he writes about something like more efficient higher-education or why women aren’t in STEM fields in large numbers which is worth all the dross.
Hello, good time of day.
My name is Victor, I’m 19. I’m a student of computer science from Russia (so my English is far from perfect, and probably there will be lack of articles; please excuse me).
There wasn’t any bright line between rationalist!Victor and ordinary!Victor. If I remember correctly, five years ago I was interested in paranormal phenomena like UFOs, parallel worlds or the Bermuda Triangle (I’m not sure I truly believed in it; I probably just had fun thinking about it, though I might have professed the cached thought about scientists not knowing important things about the world), and I liked reading pop-science books at the same time. Then I realized that there is beauty, honesty and courage in the scientific worldview, and shortly thereafter I became a person from the Light Side: not because science was true, but because it was fun.
But at least I rejected the Bermuda Triangle. I was too honest to leave inconsistencies in my pool of beliefs; so long, pseudoscience!
Maybe at the same time I discovered the concept of the utility function and the blog of a psychologist arguing that there is nothing wrong with egoism. Something clicked in my mind; the explanation of human behaviour was beautiful in its simplicity, and there were some interesting implications of this explanation. Then came Dawkins and the realization that evolution is just a natural continuation of the laws governing non-organic matter. Evolution was fun, and also it was true. I became a Guardian of Evolution, and I was fighting superstitions. It was a point of no return: it was impossible to defend telepathy again (why aren’t there any telepathic wolves?).
There was a moment of marvel when I realized that there wasn’t any reason to expect any intellectual feats from a naked ape living in a town; our brain wasn’t adapted to the current environment, but it is still working, and it is working much better than you would reasonably expect. Intelligence is fragile, and humanity is the underdog I should root for. At that time I had already known about cognitive biases, but my feelings towards this topic became different after this insight.
I don’t remember when I started reading LW. I might have learned about utility functions here, but I’m not sure. LW was changing me gradually. Over the course of two or three years I have noticed some small changes: I started admiring the scientific method, I understood the power of intelligence, sometimes I withdrew from an argument because there wasn’t any disagreement about anticipated experience there, et cetera.
I don’t know where to draw the line between a “non-rational age” and a “rational age”. But I’m sure as hell with you guys now.
Welcome, Victor.
Perhaps you’ll find this funny:
http://earthfireinstitute.org/2010/02/a-telepathic-cry-of-the-heart/
It reminded me of Terry Pratchett’s elementary particles of monarchy (the “kingons”).
Since each kingdom can have one and only one king, in the case of the death of a king his heir becomes the new king instantly. So, if you carefully torture a king, you can use those particles to send a message faster than the speed of light.
When I was growing up my childhood friends would sometimes say, “I wish I’d been born five hundred years ago” or “It would have been so interesting to live during medieval times”. To me this was insanity. In fact it still sounds insane. Who in their right mind would exchange airplanes, democracy and antibiotics for illiteracy, agricultural drudgework and smallpox? I suppose my friends were doing the same thing people do when they imagine their pop culture “past lives”: so everyone gets to be Cleopatra, and nobody is ever a peasant or slave. And the Connecticut Yankees who travel back in time to pre-invent industry are men, because a woman traveling alone in those days just invited trouble.
No, I never wanted to live in the past. I wanted to live in the future.
Mostly because I had a keen desire to find out what happens next. I mean, just think of the amazing things in store: space travel, AI, personal immortality. What a fool I was.
I no longer trust that the future will be a glorious place. (It was a little painful to give up that belief.) I once studied history and the history of technology so I could write about imaginary civilizations with some verisimilitude. And I learned that everything ends, even Rome. Even us.
So I started studying economics and politics to try to figure out how we got here, and how we might possibly get someplace else. It seems unlikely that the same irrational brains that got us into this mess will be able to get us out. I mean, people are literally not sane. Myself included. The best, the only tool we have is dangerously flawed. (OMFG!!) Which led me here....
Hope for the future? Hope isn’t necessary.
As far as RL goes, I have two X chromosomes and live in Minnesota.
Hello Less Wrong!
First things first: I beg your pardon for my crappy English, this is not my first language.
I’m from Barcelona (no LW community here, I’m afraid) and I studied telecom engineering, but I work as a teacher and I draw cartoons (you can check http://listocomics.com but they are in Spanish). I’m also a rationalist wannabe. I mean, I haven’t even read the whole of your major sequences, but I have always tried to conduct myself in a rational way. I love Dawkins’ books and I was amazed the first time I read about logical fallacies on Wikipedia. I have always been quite interested in psychology too, but most of the popular psychology books I’ve read set off my bullshit alarm, because most of their content seemed to come from the mind of the author after thinking hard about it while sitting on the sofa, without further research. I’m glad to have found a site that talks about the human mind and human behaviour in an easy-to-understand way and with references. It seems like a good place to learn stuff.
Actually, I’m curious about what you, as rationalists, may think about NLP. Is it the right place to ask? NLP: Bullshit or not?
And I would also love to hear some rationalist opinions about yoga. I’ve been trying it for a couple of months and I’m still confused. The stretching part is good for the muscles, that’s quite sure, but there also seems to be a lot of New Age paraphernalia. Do you think there is serious research showing that yoga is better than just stretching?
And, more in general, do rationalists recommend any specific sport? Some way to get the maximum health with the minimum effort and time?
(I’m not sure if this was the right place to ask about these things; just tell me if I should post somewhere else or if those subjects are already discussed in some other thread.)
Thanks for everything, and congrats on the site; I’m already recommending it to friends!
Welcome to LW!
I love your comics. I’m going to use them so that I don’t forget my Spanish. I’m currently doing a little research (for myself) on NLP-type stuff. If you want a comprehensive source, this is what I’m going to be purchasing shortly:
http://www.amazon.com/Oxford-Handbook-Hypnosis-Handbooks/dp/0198570090/ref=sr_1_1?ie=UTF8&qid=1320721250&sr=8-1
I’m no expert on yoga (but I’ve done a bit). I find that pure meditation is better for the mind than yoga (there is a lot of secular research showing that meditation is good for the mind in a lot of ways), and I find that pure exercise is better for the body than yoga. Some people like to mix the two. I don’t.
Most people have a misconception about meditation where they think you have to be sitting with really straight posture in order to meditate. This just isn’t true. I run and meditate all the time. Running is very good for exercise and is very conducive for meditation (especially if you just go in a straight line or on a treadmill).
I know that there is quite a bit of research on exercise and the mind, but most of it has to do with cardiovascular exercise and not with weight training. I do both; I personally think running is better for the mind (and doesn’t require a lot of technical detail on proper form).
Dawkins’s “Selfish Gene” was one of my first “rationalist” books.
Hi LessWrongians, I’ve actually been reading this for a few months since I discovered it through HPMOR, but I just found this thread. I’ve been a traditional rationalist for a long time, but it’s great to find that there is a community devoted to uncovering and eliminating all the human biases that aren’t obvious when you’re inside them.
I’m 27 with a BS in Business Information Systems and working as an analyst, though I consider this career a stopgap until I figure out something more entrepreneurial to do. I’ve been slowly reading through the sequences, but my brain can only handle so much at a time.
Mostly I just want to say thanks to everyone who writes/reads/comments on LessWrong. This site is awesome. It’s the only place I’ve found on the internet that consistently makes me stop and think instead of just rolling my eyes.
Welcome !
Hello everyone!
I am an unwitting victim of HP: MoR, and of course it led me here. I’m still reading up on the sequences, which have plenty of intriguing content. My background is in Mathematics (specifically cryptography, not much probability theory) and Music (specifically bassoon and composition). Right now I work for the US government. I grew up as a secular Jew, so I didn’t really have that much of a crisis of faith or anything. I must say I found Eliezer’s description of Modern Judaism (“you are expected to doubt but not successfully doubt”) surprisingly accurate and amusing.
Though, after reading through things, I don’t really think I can call myself a rationalist quite yet. I need more practice, honestly. Maybe I just need to successfully update :D
Perhaps I just need to look around more, but hopefully I can contribute to the more artistic ideas of the site. Reading through what is on the site makes me wonder how to apply rationalist methodology to the arts.
A most sincere welcome, from someone of a very similar background!
(And you’ve walked right in to a discussion you’re likely to find interesting...)
Cool thanks! I’ll check it out.
Hey, another bassoonist! (Saw your name in another thread, and had to see if you mentioned which double reed.) I’ve also got a math background (number theory and logic), though I’ve mostly abandoned it for law. Welcome to LW.
Hello everyone, it’s so great to be here. I was introduced to LessWrong by a post left by C. Russo on Freedomainradio.com back in late July, which dumped me right into How to Actually Change Your Mind. Since then, I have found myself spending progressively more of my free time here, reading both old and new content.
Over the last several years, I’ve made a habit of spending my evenings online, blown by the winds of curiosity. While this has led me to the vague sense that I needed to make some adjustments to my map, I didn’t have a good sense of the tools I needed to edit it.
I grew up in a religious (Mormon) family (was even a white-shirt-wearing, door-knocking, Book-of-Mormon thumping missionary for two years), but gave up my belief in my mid-twenties after searching for—and failing to find—a convincing argument for my belief. I had been taught to identify a specific and powerful feeling with “The Holy Ghost,” but when I reflected on my experiences, I realized that I had felt that feeling on many occasions that seemed inconsistent with the idea that God was giving me information in those moments. I have, furthermore, felt that feeling many times since my apostasy, which seems (to quote Cyan), like icing on the coffin of that false belief. A few days ago, I read a comment on A Rationalist’s Tale by summerstay which gave my feeling a name (frisson), and a scientific explanation.
I manage a small group of analysts at a large corporation, and have of late been on the lookout for ways to infuse LW concepts into our group discussions. On a related note, I read Raising the Sanity Waterline today, and wondered whether anyone has thought about or attempted to actually create a Youtube series corresponding to Eliezer’s four-credit undergraduate course with no prerequisites, designed to secretly make people more rational.
Sorry for the ramble; again, it’s a pleasure.
Hello and welcome!
There’s a welcome page? I hadn’t noticed. I suppose I could give a few details about myself. I’ve been posting here for a little less than two months now.
On Me
I am a software engineer in my late twenties. I enjoy reading fantasy and science fiction novels, as well as books about physics, mathematics, biology, astronomy, and many other topics. I play no sports, but I bicycle nearly every day. I also enjoy programming, writing, photography, cooking, drawing, winning videogames, and working out mathematical equations for topics of interest.
On How I Found the Site
I occasionally like to peruse David Brin’s blog, and wondered while reading a post how it was he came to recommend a Harry Potter fanfiction. So, David Brin’s Blog-> HPMOR-> Less Wrong. I then proceeded to lurk and find out what was being discussed to get some context for the message board discussions. Eventually, I decided to see what would happen if I started posting comments.
So far, I’ve enjoyed the discussion on this site. I think there’s a lot to think about here, which exercises my hobby of pondering the nature of society, life, and the universe in general.
Hi. I just opened a new account with this user name. My user name was playtherapist. It was pointed out to me that it was still being misinterpreted as play the rapist. I am a child therapist and social worker. I help disturbed children work through their issues while using dolls, action figures, a sand tray, art materials and therapeutic games. This is called play therapy and is the most effective way to do therapy with young children. I would never dream of “playing the rapist.” There didn’t seem to be a way to just modify my user name, so I opened a new account.
I am the mother of a regular poster and meetup leader. I started reading posts out of curiosity about what he was talking about, etc. Recently I began reading the sequences and top 100 articles. Some of it is quite interesting.
Hey, another social worker! Great!
I’m curious how you found this blog and what attracts you to it. I never would have, except for my son. It’s definitely geared towards young nerds, and most of the posters are guys.
My intro is a few above yours. I found this blog through my husband, who is a much more typical LWer (male, atheist, computer programmer, sci-fi fan).
I guess what attracts me to it is that most people I know write me off as unreasonable or cruel for trying to apply logic to situations where they go by convenience or custom. I would continue more or less doing this even if I never found a community of others, but it is comforting to see a community out there. The main turn-off for me is that most of what I’ve read here doesn’t apply to my life in a useful way (as far as I can tell).
Welcome (again)!
I’m a 19-yo female student in the NYC area.
I was mildly ecstatic to find that not only does Less Wrong exist, but its members have articulated absolute loads of things that my own mind had danced around but not gotten close to putting into words (reservations as to the value of that aside). I actually first became fascinated with Bayesian analysis when I learned about its use in cryptography, and in the pre-computer-age Bombe machine that helped crack the German Enigma code at Bletchley Park. I saw that it could be used in a much less narrow way, insofar as plain old everyday rationality is concerned, and I’ve been increasingly interested in it since. And along came Less Wrong to just blow the idea open into so, so many tangents and applications. :) Just great.
LW has also sort of managed to shock me by covering almost all of the specific areas into which my autodidacticism has ranged, from philosophy and theosophy to neurology and quantum physics. And seeing as I am (as, I suspect, are many people who become unhappy with the rate at which the universe is ‘giving’ them information and decide to SEEK it) ‘educated’ in a very deep but very patchy manner, LW’s holistic approach to knowledge has been really refreshing, and I’ve had great fun (although not in the trivial sense at all) exploring it for a while. Now I’m going to start in on the Sequences.
I’m also absolutely going to seek out the LW/OB NYC meetups once fall starts—it’s highly difficult for me to find people to have, er, rational and challenging discussions with, not to mention the camaraderie that comes from shared true ‘curiosity’, as per Eliezer’s definition. I see good evidence here on the blog to believe it will live up to my expectations.
Cheers.
Welcome! Glad to have you here.
My name’s Dave.
I got here through the MoR fic a week or so ago, thence the Babykillers/HappyFunPeople fic, thence the Overcoming Bias archive, which I’m currently working my way through. Created an account to comment on a post there, then found this post.
I’m not sure I do identify as a rationalist, actually. It seems to me that a necessary condition to justify my making such a claim is valuing habits of thought and behavior that lead to accuracy over other kinds of habits—for example, those that lead to peace or popularity or collaboration or productivity or etc. -- and I’m not sure I do.
(I don’t mean to suggest that they are incompatible, or even mutually inhibitory. It might work out that someone primarily motivated by rationalism also ends up being maximally peaceful, popular, collaborative and/or productive, just as it might work out that someone primarily motivated by pacifism also ends up being maximally rational. But I don’t see any good reason to believe it.)
That said, there are habits of thought and behavior I value and see well represented here. Precision in speech is one of them—saying what you mean, requesting clarification for ambiguous statements, etc. Argument to explore an idea rather than defend a position is another. A third is the willingness to assume good will on the part of someone one disagrees with; to treat disagreement as an opportunity to teach or learn or both rather than as a challenge to be defeated or evaded. (Though perhaps that’s just the second reason wearing a funny hat.) Active interest in how people think (as distinct from what they think) is another.
These are all fairly rare traits in the world, and even more so on the Internet, and I enjoy them where I find them.
More demographically: I’ve lived in Massachusetts since I came here for college 20+ years ago, was a cognitive science major back then and since then have worked in the software biz in various capacities (currently a requirements analyst). In my non-work hours (and in more of my work hours than I ought) I do community theatre and wander the Internet; I’m currently rehearsing for a production of The Goat and getting ready to direct a production of Equus next year. Was raised an Orthodox Jew and still identify that way culturally, but neither practice nor believe. Recently married my partner of 18+ years.
That’s probably enough for now. Feel free to ask questions.
Hello. I found out about Harry Potter and the Methods of Rationality while browsing TV Tropes, which eventually led me to this site. I had never thought much about how I make choices before, but after reading a couple of sequences, it looks like many of the things I am most inquisitive about are discussed on this site, and for at least the last couple of years I have been reinventing the wheel on some of the ideas listed here about rationality. It is convenient to be able to learn things by reading this site that otherwise might have required me to live a long, interesting life to discover :p
I’ve been lurking on LW for a couple of months, trying to work through all of the major sequences. I don’t remember how I discovered it; it might have been a link on the Bad Astronomy blog. I studied astronomy in school and grad school and ended up becoming a software engineer, which I’ve done for almost 30 years now. Most of the content here resonates powerfully with the intellectual searching I’ve been doing my whole life, and I’m finding it both stimulating and humbling. Spurred by what I’ve read here, I’ve just acquired Judea Pearl’s “Causality” and Barbour’s “The End of Time”, and I’m working through the Jaynes book on Bayesian probability (though the study group seems pretty inactive). There’s a lot of synchronicity going on in my life; much of my software work over the last decade has involved causality graphs and Bayesian belief networks, but I hadn’t taken the time to delve very deeply into understanding the underlying fundamentals. I recently read Lee Smolin’s “The Trouble With Physics”, and he mentioned Barbour’s work as a possibly promising new direction, so reading Eliezer’s comments on it struck a chord. Finally, I’m becoming increasingly aware of transformative change in society (though I wouldn’t go so far as to anticipate the Singularity any time soon) and trying on new ideas and concepts that might make me more successfully adaptive, like those found in Seth Godin’s blog and books or Pamela Slim’s “Escape from Cubicle Nation”. I recognize a similar leap facing me here: if I come to believe that the Singularity/AI are “real”, can I stop lurking and take meaningful action?
You look to be very capable of using correct reasoning, based on your extensive software experience and familiarity with causal nets!
I recently asked a question here about timeless physics, but no one seems to want to answer it… I think you might have some good insight on that matter.
Hiya, thanks to everybody here for making this such a welcoming and fun community.
I’ve identified as a skeptic and an atheist for a few years now, but I was intrigued by the way that the Less Wrong articles I saw seemed to kick it up a notch further. “Weapons-grade rationality”, as I think one article put it.
I’m (as of the moment) somewhat skeptical of singularity theory, but as an activist I’m interested in helping to raise the rationality waterline. My education and professional experience are in computer programming. Currently I’m serving as a Peace Corps volunteer in Jamaica.
Hi, I found Less Wrong a few days ago when someone pointed me towards your recent list of recommended books. I followed the comment thread (particularly nodding my head at the mentions of Marcus Aurelius’ Meditations, which I want to read) and had a look around the rest of the blog. I liked what I saw.
I’m an American living in Cyprus, and into learning more about the Epicurean, Skeptic, Stoic, and Platonic philosophies. I’m also a molecular biologist by training, and interested in ecology, ornithology, birdwatching, cooking, and philosophy of science.
As for my rationality: I grew up always thinking that Christianity was a nice metaphor for issues relating to the human condition, but never thinking that anything in the Bible happened literally the way it was said. I suppose you could say that I believed in the value of belief. Watching Bill Moyers’ interview with Joseph Campbell in The Power of Myth changed that for me 15 or so years ago. It just clicked with my view of religion: it served as a mythic narrative, and you don’t need religion to have a mythic narrative… Star Wars or any other epic myth will do nicely. So I severed the only reason I ever had to value religion and never looked back, and I have been skeptical of dubious claims ever since.
If there are any skeptics, stoics, Epicureans or other rational minds in Cyprus, please contact me!
My understanding is that Campbell was never well-regarded by the relevant academics and that time hasn’t helped his reputation any.
This reminds me, by the by, of my own “conversion” experience: a book by the name of The Lucifer Principle, by one Howard Bloom. I read it at a young age and was dazzled by the basic idea of evolution, which had been taught to me in school and was never disputed by my church, but never with such power: I finally Got It: that patterns always emerge from random processes and are implicit in them, that humans are just a complex pattern operating on the basis of laws mostly beyond our comprehension, &c.
Years later, I re-read it, expecting to re-unite with the wonder of my past and… was struck by how stupid it was. The arguments were moronic, the facts were wrong half the time, and so on. But I owe it a debt for making me a materialist, even if I would have dismissed it after perusing it at the library today.
Arrgh!! Totally meaningless!
No, it’s a good heuristic. It’s a good enough reason for a layperson to accept anthropogenic global warming, the Holocaust, and the fact that HIV causes AIDS, to gesture at obvious examples.
Obviously not everyone can use that heuristic. Like any other, it will be wrong sometimes. But it’s good enough for Bayesian updating.
(So perhaps “Arrgh!! Sometimes overrated!”)
Oh, I’m not saying that Campbell was well-regarded by his peers in academia; I’m not a scholar in that field by any means and don’t know anything about that. I was just saying that it woke me up to see that a developing mind can learn useful values and ideals from any kind of epic story. IOW, a religion isn’t necessary for our morals to take shape.
I’m pretty sure I understand what Campbell was doing, and given that it was something totally cool and fundamentally opposed to what academia is about, this just shows that academics could identify what he was. Ditto Tolkien and Lewis.
Basically, these are people who were intentionally creating a misleading conception of history in order to shape the identities of the children who encounter it towards identifying with mankind as a whole rather than with some smaller group, not people who were trying to give their readers a neutrally framed explanation of how things are.
Hi there,
I am a high school senior who is interested in science, particularly in natural sciences. One day I hope to further our understanding of...well, anything you can think of!
My lifestyle, which I adopted after carefully analyzing my goals, is pretty spartan: I eat a strict diet, I exercise often, I only read certain things and so forth.
I discovered the transhumanist movement a few months ago. I have decided to join lesswrong.com because I think that I stand to learn a lot from this community and, maybe, even bring something to the table.
What kinds of things, out of curiosity, and why do you read them and not other things?
Nonfiction, because my faulty brain sometimes mistakes fiction for reality (e.g., I used to believe that Santa was real), and for cognitive economy: there is a finite amount of knowledge I can store, so I would rather make sure it’s accurate, truthful, useful knowledge.
In this case, how do you know what is fiction (and therefore you shouldn’t read it) and what is not (and therefore you should read it)?
Can you elaborate? I’m curious about the topic because I’ve heard this statement from several of my friends, but I can’t quite wrap my head around it.
In the interests of full disclosure, I personally do read fiction: primarily because I find it enjoyable, but also because it sometimes enables me to communicate (and receive) ideas much more effectively than nonfiction (e.g., HPMoR).
http://en.wikipedia.org/wiki/Interference_theory
New memories can interfere with the recall of old ones if they are similar.
That doesn’t necessarily mean fiction is likely to cause problems.
I guess it depends, in part, on how similar the knowledge you deem important is to works of fiction. To use a trivial example, I doubt that any work of fiction would cause me to forget what 2 + 2 is equal to.
I look for background info on the piece I consider reading and read its abstract.
See the reply below. I’m not good at explaining this stuff.
Horace wrote that the purpose of literature is “to delight and instruct”. It delights precisely because it’s instructive, and it’s up to you to decide whether you only need precise information (nonfiction) or embedded information (fiction).
What about pieces that blend truth and fiction, such as historical novels or most newspaper articles?
Fair enough, but I’m still curious. Do you participate in any activities that you find enjoyable, but ultimately not very useful in the long term? I’m not trying to be glib here; I genuinely want to learn about your way of thinking.
I don’t usually read those kinds of pieces.
No, I only take part in activities that have some long-term benefit.
That makes sense. What algorithm are you using to decide which activities have some long-term benefit?
Pros&Cons and projected outcomes.
Right, but how do you evaluate pros and cons, and project outcomes? Obviously you wouldn’t take an action that has more cons than pros, and therefore has a poor projected outcome, but that doesn’t tell me much.
For example, what made you decide to begin spending time on writing posts on Less Wrong, as opposed to spending that time on reading quantum physics books, or lifting weights, or something?
I assign a util to each possible outcome.
I do read quantum physics and lift weights and whatnot! :) As to why I decided to spend time here, see my original post.
tomme, welcome to lesswrong, gday I’m Peacewise.
Fair crack, mate: “Santa” is a standard fiction/lie perpetrated by society and parents, hardly something to be used as evidence of a “faulty brain”. In fact, it’s more likely to be evidence that your brain was and is functioning in a developmentally normal state.
I suggest you reconsider your position on fiction. You state that you only want accurate, truthful, useful knowledge, but there is indeed plenty of accurate, truthful and useful knowledge within the realm of fiction. Shakespeare has plenty of accurate and useful knowledge about the human condition, just to give you one counterexample. “Out, damned spot! Out” by Lady Macbeth is an example of how murder, and the guilt caused by the act of murder, affects the human mind (Macbeth, Act 5, Scene 1). Lady Macbeth cannot get the imagined blood stains off her hands after committing murder.
Humans are subjective creatures; by experimenting with fiction you’ll be looking into the human condition, and by avoiding fiction you are dismissing a large subset of truth, for truth is subjective as well as objective.
I now believe that fiction could be useful because it conveys experience. For example, The Walking Dead, the TV series I am watching at the moment, has a complex interplay of characters, as it shows how humans interact in a plethora of situations.
Most people don’t have that in mind when they bump into fiction. But, as I said, if you don’t have enough experience, and you need a quick dose, sometimes fiction can help you.
Hi, I’m Richard. I’m a lawyer, practising in Norwich, England. I’ve been ‘lurking’ on lesswrong, and working my way through the sequences, for some time.
I have an interest in technology, and particularly open source projects. For example, I’m writing this right now in Emacs.
I hope I will be able to contribute positively to this community, which has certainly already helped me a great deal.
Hello All. I came across Less Wrong via Common Sense Atheism a few weeks ago. I have enjoyed it so far, but I have yet to put in the time to get up to speed on the sequences. Plan to, though.
I’m a Financial Accountant in Birmingham, AL. I’m not sure I would (yet) identify myself as a rationalist, but as for what I value, I value truth above all. And if I’m not mistaken, valuing truth seems a big step toward becoming a rationalist. I also value life, liberty, happiness, fun, music, pizza, and many other things.
Here’s a little more about me:
Height: 6′0″
Shoe Size: 12
Favorite Sport: Basketball
Favorite Philosophers: Calvin & Hobbes
Greatest Weakness: Distinguishing between reality and fantasy
Greatest Strength: I’m Batman
Hi Less Wrong, I’m Burr, a retired communications consultant and entrepreneur. I’m just watching and listening. I’m taking the online AI course from Stanford.
Hello. My name is Gustavo Bicalho, I’m from Brazil, and I turned 20 years old today. I intended to introduce myself here after I finished the sequences (I’m halfway through the Fun Theory Sequence) but I thought I should give myself this as a birthday gift. Heh.
I have some background in computer programming, having done a three-year technical course during high school. Although I don’t know much computer science (I know just a little about algorithm analysis, and that was self-taught from Wikipedia), I think programming has helped me reshape my way of thinking, made it more structured and precise. I try to improve it however I can, and this is one of the reasons I’m joining LessWrong.
For several reasons, though, I left the computer field (not completely) and I’m now a Law student. I don’t know if you get many of those around here. Anyway, reasoning in this field seems, to me, especially biased. Of course, any reasoning about law involves thinking about ethics and politics, but that isn’t a license for fallacies or lack of rigor in arguments. I think this is a problem, and rationality can help me to fight against it.
Also, I’m very interested in moral philosophy, as the foundation of Law. Yudkowsky’s metaethics still isn’t completely clear to me, but I’ve seen some discussion about moral philosophy around here and I guess it’s probably worth reading (I have yet to read lukeprog’s No-Nonsense Metaethics). Especially if there’s any discussion about justice or fairness, I would very much like to read it.
Besides that, I like to learn almost anything. Physics is interesting, math is very interesting. After reading the first sequences, cognitive science, evolutionary psychology and decision theory got onto the list, too. If I can learn at least the basics of these fields, I think I’ll be a better thinker and a better person. I think LessWrong is a good starting point for that, too.
I think that’s it.
Oh, if there’s some post/discussion around here about Law already, I would be very glad if someone pointed it out.
See you around!
Gust
PS: Wow, this took me three hours to write o.o Trying to make a good first impression is kinda hard. PPS: Three people in the same day! Is that usual?
Happy birthday!
Thank you!
Most recent previous instance I could find: ten days ago. You could say it’s not unusual. :)
Do you go to law school in the U.S.?
I ask because I have been considering that route.
P.S. Since the focus of this discussion board is rationality, I will throw out a couple extra questions, with my own answers.
Law school entails an investment of 3 years of your life and perhaps $150k in tuition. How much time and energy should you spend studying and researching the pros and cons of law school and lawyering before you make the decision to attend?
If you attend a law school where only X% of the class finds suitable employment and career prospects, what is the probability that you will end up in that group?
As to the first question, law school cost about $60k to attend when I went. To my credit, I worked for many months with an attorney family member and satisfied myself that I wanted to be an attorney before attending law school. However, I spent just 5 minutes or so researching my subsequent job and career prospects before attending. In hindsight, this was pretty boneheaded.
As to the second question, that probability is probably a lot lower than your gut is telling you. See, law school is much more competitive than college, which in turn is much more competitive than high school. It’s natural to forget this fact and assume that you will be one of the top guys in law school just like you were in high school and college. Personally, I was less successful in law school than I would have predicted. Also, my career has been less successful than I would have predicted.
The bottom line is that as a rationalist, you should probably (1) spend a lot of time and effort talking to law school graduates before you go; and (2) assume that you are probably an ordinary schmuck in terms of predicting outcomes.
Thank you for a well thought-out reply.
I have had misgivings about the law path for essentially the reasons you mention, and especially after much research. I know that being an attorney is not as glamorous as television shows make it out to be and I realize that the high income figures often reported for lawyers are skewed (as in the top law firms pay the most to the top law school grads, and the rest are stuck with little to nothing). I also understand that with the American economy the way it is and the large surplus of aspiring lawyers, the field is even more competitive today. I appreciate you confirming this first-hand.
The only problem is that at this point in my life, I feel like I have no other choice. I am currently a sophomore in college at a relatively good private liberal arts college. I have little aptitude (at least, little in terms of a comparative advantage) in the traditional hard sciences—biology, chemistry, physics—so medical school or grad school in those fields is not an option. I also am not especially talented at math and have never taken a computer science class, so computer programming (I mention it because it is frequently lauded here on LW as a lucrative career choice) is not an option either. Grad school in the fields I am interested in—political science, economics, and philosophy—is not particularly appealing due to the glut of grad school graduates in the social sciences and the large time investment.
My comparative advantages lie in being able to read quickly with high comprehension, write analytically, and think logically. I want to make enough money to live well and to be able to donate to the cause(s) I am/will be interested in.
What do I have left besides law school? (not purely a rhetorical question, by the way)
One other question: In your personal, but informed, opinion, would graduating from a top-14 or top-20 law school in the top 25-50% of my class ‘guarantee’ me a job? In this economic climate and in the near future?
ETA: Are there any specific situations where you would recommend law school? Such as receiving a scholarship or getting into a top law school.
I think this is a good question and unfortunately I don’t have an answer. For like 50 or 60 years, law school was a good way for a reasonably smart person to have a reasonably prestigious well-paying career. Most importantly, if it didn’t work out you would not be facing financial ruin. But now it seems the law school train has left the station. Actually, it seems like higher education in general is not the good deal it once was.
Quite possibly there are more opportunities now than ever before but they require more creativity to find.
I am not really informed on this question since I graduated law school 15 years ago. It’s also really hard to get good information on this sort of question since so many people have an agenda or an axe to grind. You might try asking on a few of the law school discussion boards.
I do think it’s worth considering if you get a bona fide scholarship. In that case, your main risk is 3 years of your life. Just beware of the “section stacking scam.” That’s where the law school gives you a scholarship contingent on maintaining a certain grade point average and then puts all the scholarship students in the same section, guaranteeing that a very large percentage will lose their scholarship.
Going to a top-rated law school is still a bit dangerous. You may land a high-paying job only to get laid off or discover that you hate your high paying job.
If you are accepted into the top three schools (Yale >>> Harvard, Stanford), you are very likely to be employed as a lawyer, especially since the economy will have improved a bit during the passage of time at law school. If you are admitted into the top 4-8, you can feel somewhat comfortable. The rest of the top tier is unclear.
If you are not admitted into a first-tier school (the definition is a bit amorphous), then it is unclear whether law school makes economic sense. Everything I’ve heard says that third or fourth-tier schools are a terrible economic decision.
I’m not sure if brazil’s reference to section stacking actually occurs, but he is right that most find law school much harder than college. Much, much harder.
If you want gossip on Bigfirm life, you could search this blog but be aware that their target audience is associates at those types of firms (and most lawyers do not work at those types of firms).
I am a practicing attorney in the United States. I would suggest you think long and hard before going to law school. There have been big changes in the state of legal education over the last 10 years and the consequences of those changes are only recently coming to light.
Most importantly, (1) in real dollars the cost of attending law school has pretty much doubled in the last 10 or 15 years; and (2) at the same time, the bankruptcy code has been amended to make it practically impossible to get student loans discharged in bankruptcy. The upshot is that if you graduate law school and cannot find a high-paying job, you are screwed. To make matters worse, most law schools have a tendency to “gild the lily” as far as their placement statistics go.
No, I study in Brazil. I don’t know what the job market or the quality of law schools is like there in the U.S.… I guess I could tell you what I think about the experience I’m having here, but I suspect it would be wildly different from what you’d have there.
Hello. I am a philosophy student in north Jersey. I’m 20 years old, and am very familiar with LW and the sequences. I’ve been reading LW now for about a year, and it has completely changed my life. I am very grateful to Eliezer and all of you for letting me have my Bayesian enlightenment at 20. When I first read the Twelve Virtues, my life changed forever. I am definitely one of those who consider the Sequences to be one of the most important works I have read, at least as far as having a personal influence.
I want to work on the hard questions of philosophy: grue and induction, cognition and consciousness, nominalism vs. realism, Bayesian epistemology, philosophy of probability and of mathematics in general, and even metaphysics, though I would like to positivize the field a bit. What I want to do as a philosopher is find problems/paradoxes/questions which fascinate me, and use rationality to solve them. “Solve” being the key word there. I think LW has done a lot to pursue many of those goals, which seem strictly like philosophical goals. It seems to me that LW should go full force and treat itself as a philosophical movement, conveniently primarily concerned with systematically becoming less wrong. Yes, there are mathematicians, and AI designers, and physicists, and psychologists among us, but that is how it should be in any modern philosophical movement.
I have given myself some primer time to become familiar with your terminology, content, and techniques. I now want to use these techniques to solve problems on paper and share the solutions with you. I am doing this because I expect that this will let me know how I am doing so far, and where I need to improve.
Lastly, I would like to ask, how does less wrong see itself? I mean what is the general LW opinion of what LW is? Is it a blog? An open source research institute? A philosophical movement? A non-philosophical movement? A self-help movement? I am curious.
A kinda nifty blog.
I would like to see it become this. And not just for AI ethics/decision theory either. I’d like to see an entire “LW science” movement, where we tackle things like quantum gravity.
Yes, I know it’s a dream. For now.
That would be fun.
Welcome!
That’s a huge amount of philosophy to look at. Might I suggest narrowing your interests down a bit, at least at first? It’s very easy to read a little bit of everything, but much harder to contribute something non-trivial to every field.
It seems to be a little bit of all of those things. Some people here are rabidly anti-philosophy, and so if LW overtly called itself a philosophical movement, those people would probably end up evaporating off. On the other hand, some people would very much like to see the self-help aspects of LW become secondary to the more philosophical or technical aspects. Like everything else, it’s a bit hard to pin down to a distinct category.
Being anti-philosophy is something philosophy needs. Not in a boring “the field is dead” Rorty sense, but in a “these are scientific questions with definite right and wrong answers” kind of way.
I don’t think anyone is ever really anti-philosophy; perhaps my imagination is so daft that I can’t imagine someone with different tastes. I think philosophy has really frustrated a lot of truth seekers because it was being done poorly. Even in analytic philosophy, only ever so rarely does a tool from analytic philosophy come about that could not be compared to using a stick to break apart and probe matter.
Lesswrong needs to solve philosophical problems to do its job, whether to build AI or to systematically cause rationality. It needs to solve scientific problems too, but lesswrong’s practice seems to consist primarily in long-winded, immersive, and concentrated discussion, using previously established technical terminology and calculi, with the aim of settling the truth value of some claim. The method of argument is the method of philosophy. This, mixed with the philosophical nature of much of the content here on LW, is enough for me to think of LW as a philosophical movement. But a philosophical movement separated from the long western tradition stretching back to Plato.
I like to think of LW as a philosophical movement, analogously to that famous internet meme about that statistician which goes something like this:
Derp was late to his probability class, and quickly jotted down the HW for that week’s class. He worked on it for quite a while. When he got there the next week, he told his professor that he found the HW harder than usual. Derp’s professor informed him that what he had jotted down was not the HW; it was three unsolved conjectures. Derp then presented those proofs, with the help of his professor, as his dissertation.
LW solves some seemingly unsolvable philosophical dilemmas in a similar fashion; and if the average LW user is somehow helped in solving open and VERY DIFFICULT philosophical problems in the manner of insanely competent philosophers, by not thinking of him/herself as a philosopher, or by just treating philosophical problems as trivial HW, then who gives a damn? “Philosophy” is a pretty lame word anyway, “Lesswrongianism” however, that’s a badass word. If you guys want us to be called “LWers” instead of “philosophers” I don’t care, as long as we still solve the open philosophical problems of the previous and new century.
It would be badderass in a dead language. “Minorifalsianism” or something.
“Minorfalsology” is totally the best word for it.
Narrowing my interests is probably not an option. The fact that I can practically work on anything and still be a philosopher is one of the things that appeals to me about the field, but maybe that has something to do with why it is so rarely done competently :/ My only other option is to work my butt off, but I know that to be a generalist and contribute takes lots of work. I do specialize in what I like to call algorithmic philosophy, and philosophy of mathematics, but that is only because I think they are of great import to my other fields of interest.
When I was your age (and how much I rue the saying of this) I also felt this way. I hope it works out better for you than it did for me.
Hi, everyone.
I’m currently finishing a first degree in CS, and I’ve been reading LW for a few months now (since June). I’ve read through most of the Sequences and check the front page of the site for anything that looks interesting whenever I want to put off doing something, which is usually several times a day. I also need to get round to finishing Gödel, Escher, Bach some time (I’m kinda slow).
I am, at the moment, a terrible rationalist—my goals aren’t even clearly defined, let alone acted on, and I have a strong background in tournament debating, which allows me to argue myself into believing whatever I feel like believing at any given moment. I think I’m getting better at that, but of course my own opinion is almost worthless as evidence on the subject.
On the other hand, reading this site (especially Yudkowsky’s stuff) at least made me stop being religious. I like to think I’d have got there in the end anyway, but seeing as I really didn’t enjoy it, I thank everyone here for pulling me out sooner rather than later.
Quick question: Does anyone know of a formal from-first-principles justification for Occam’s Razor (assigning prior probabilities in inverse proportion to the length of the model in universal description language)? Because I can’t find one, and frankly, if you can’t prove something, it’s probably not true. I’d rather not base my entire thought process on things that probably aren’t true.
Hoping to be able to contribute, Ezekiel
PS Good grief, there’s an average of one introducing-yourself post every couple of days! Why the heck are all the front-page articles written by the same handful of people?
Maybe Kevin T. Kelly’s work will fit your bill? Also see the discussion on LW.
http://wiki.lesswrong.com/wiki/Occam’s_razor Not sure if that’s in depth enough, but I think it does a pretty good job. (Edit: the apostrophe seems to break the link, but the URL is right.)
Thanks, but that proof doesn’t work for the formulation of Occam’s Razor that I was talking about.
For example, if I have a boolean-output function, there are three “simplest possible” (2-bit-long) minimum hypotheses as to what it is, before I see the evidence: [return 0], [return 1], and [return randomBit()]. But a “more complex” (longer than 2 bits) hypothesis, like [on call #i to function, return i mod 2], can’t be represented as being equivalent to [[one of the previous hypotheses] AND [something else]], so the conjunction rule doesn’t apply.
I think the conjunction-rule proof does work for the “minimum entities” formulation, but that one’s deeply problematic because, among other things, it assigns a higher prior probability to divine explanations (of complex systems) than physics-based ones.
What if instead of assigning prior probabilities to rules governing the universe in inverse proportion to the rules’ length, we assigned equal prior probabilities to rules governing the universe and assigned probabilities to states of the world based on the sum of the probability of each universe that could produce that state of the world times the probability that universe would produce it (as many universes would have randomized bits in their description)? I think the likelihood of outputting a string of a hundred ones in a row would then be greater than that of outputting 0001010010100110100010000100100010100100110101101000000101101111110110111101001001100010001011110000.
We could then revisit our assumption that in the rules’ world, all are equally likely regardless of length. After all, if there is a meta-rule world behind the rule world, each rule would not be equally likely as an output of the meta-rules because simpler rules are produced by more meta-rules; their relationship is as that of states of the world and rules above.
This would reverberate down the meta-rule chain and make simpler states of the world even more likely.
However, this might not make any sense. There would be no meta-meta-...meta-rule world to rule them all, and it would be turtles all the way down. It might not make sense to integrate over an infinity of rules in which none are given preferential weighting such that an infinite series of decreasing numbers can be constructed, nor to have effects reverberate down an infinite chain to reach a bottom state of the world.
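To make the first step of that idea concrete, here is a minimal sketch in Python, using a toy "rule language" invented purely for illustration (it is not the actual universal prior). Every rule description of the same length gets the same prior weight, but descriptions containing unused, effectively randomized bits all collapse onto the same simple output, so simple states of the world end up with far more total probability:

    # Toy illustration: equal prior weight on every rule description;
    # probability of an output = fraction of rule descriptions producing it.
    from collections import Counter
    from itertools import product

    RULE_LEN = 9  # 1 mode bit + 8 payload bits (an arbitrary toy choice)

    def run_rule(bits):
        # Mode 0: repeat the first payload bit 8 times (the other 7 bits
        # are ignored, i.e. "randomized"). Mode 1: output the payload verbatim.
        mode, payload = bits[0], bits[1:]
        return payload[0] * 8 if mode == '0' else payload

    weights = Counter()
    for bits in product('01', repeat=RULE_LEN):
        weights[run_rule(''.join(bits))] += 1

    total = 2 ** RULE_LEN
    print(weights['11111111'] / total)  # ~0.25: 129 of 512 rules produce it
    print(weights['01101001'] / total)  # ~0.002: only 1 rule produces it

Even though no rule was given a length-based discount, the all-ones string comes out roughly two orders of magnitude more probable than the random-looking one, which is roughly how a uniform weighting over descriptions recovers a complexity penalty.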
I suspect you will never find one. To get the scientific process off the ground you have to start with the linked assumptions “the universe is lawful” and “simpler explanations are preferable to more complex ones”. Those are more like mathematical axioms than positions based on evidence.
The reason being, you can explain absolutely any observation with an unboundedly large set of theories if you are allowed to assume that the laws of the universe change or that complex explanations are kosher. The only way to squeeze the search space down to a manageable size is to check the simplest theories first.
Fortunately it turns out we live in a universe where this is a very fruitful strategy.
ETA: I’m relatively new here. Whoever downvoted this, could you perhaps explain your thinking?
As I understand it, that is the justification.
Upvoted for pointing out that Yudkowsky already dealt with the issue. I’d forgotten. I’m still not completely happy, but I guess sometimes you do hit rock bottom...
Hi all.
I’m a 21-year-old junior at Bryant University, and I am currently majoring in marketing and minoring in legal studies. I discovered lesswrong through Lukeprog’s CSA website; however, I have been spending more time as of late reading lesswrong than CSA.
First and foremost, I am hoping that lesswrong helps me become a more instrumentally rational person. I currently struggle with a number of issues including akrasia, effectively controlling my emotions, and goal setting. I don’t think lesswrong has had a noticeable positive or negative effect on my life yet, but I’m hoping that if I continue to read lesswrong and put in an effort to implement the techniques described, I will begin to see the benefits.
As far as my personal goals, I will freely admit that I have no idea at all what I want to do with my life, despite the fact that I have probably spent more time thinking about it than a good deal of the population. I think that I may need to research and read more as well as try out different kinds of lifestyles in order to sort out my goals and desires. The only major goals which I’m fairly certain won’t change in the near future are: to be happy and to be more knowledgeable about world religions, such as Christianity. Although my current estimate of the Christian God’s existence is pretty low, it would still suck to spend an eternity in hell. Hence, I have a strong desire to read about religion.
The rest of my life goals are hazy at best, which I hope to change. I’m currently doing fairly well at a business school, but I really have no idea at all what I want to do for a career after I graduate. In fact, I’m not even sure if I want a job at all after I graduate. Although I feel that I should care about alleviating some of the suffering in the world, I really don’t have such a desire at the moment. I am actually contemplating living away from society for a few months (though I’m not sure exactly when) to see if I would be happier without the constant cycle of fulfilling desires. My desire to live away from society is definitely not set in stone though. I plan to read more about Buddhism, and the lives of people like Thoreau before I make such a major decision. I am curious—has anyone that posts on lesswrong lived away from society for a period of time? If so, I would appreciate being directed to a post describing their experience.
I think that is everything important that I wanted to say about myself. I apologize if my distinction between goals and desires doesn’t match the professional literature and I hope to talk to members from the community in the future.
I’m 22 years old, and currently a fourth-year college student, studying Philosophy and minoring in Computer Science at a very small, Christian school. I found a link to LW while searching for open, online scholarship combining analytic philosophy with algorithmic analysis. After glancing over the resources here, I am extremely excited about the prospect of participating. Philosophical logic, formal epistemology, and functional programming are my passions, and I am thrilled whenever I see interdisciplinary progress being made in cognitive science research. Everything I love is aptly characterized as being abstractly directed at the investigation of human reasoning. So, I definitely feel that I will be able to learn quite a lot from all of you.
Until two years ago, I was a committed and highly conservative Christian. That’s how I was raised, and overcoming my own internal resistance to changes in religious perspective was quite a slow and painful process. I frantically searched for philosophical justifications of the rationality of theistic belief (e.g., Plantinga, van Inwagen). Eventually, however, my own philosophical reflections forced me to conclude that I indeed had no good reasons for believing many of the things I had previously believed. I now identify as a rationalist and an agnostic.
My present task is a paper analyzing potential problems arising from the account of evidential probability conjoined with E=K in Timothy Williamson’s “Knowledge and Its Limits”. I find this rather enjoyable. In my spare time, I’ve been reading books and articles on epistemic logic, Bayesian epistemology, and the philosophy of science. In the future, I’d really like to be a philosopher, a programmer of some variety, or a mathematics teacher. As far as hobbies are concerned, I’m an avid Go player, Haskell coder, and open-source software advocate.
The one thing I value most is education. I’d like to work to make information, knowledge, and genuine wisdom accessible to more people. High quality intellectual and moral instruction seems to contribute so much to the quality of one’s life, that I feel a strong desire to do anything in my power to provide that to more people. In light of this, I am very curious about how people learn and understand, but I also feel a sort of obligation to better my own understanding of what sound judgments, rational decisions, and solid arguments look like.
I’ll end this here, to keep it brief. I anticipate stimulating and constructive exchanges with many of you.
Greetings,
I am a 32-year-old middle-class male from the Kansas City area. I grew up on a farm in south-central Kansas, in an evangelical Christian family. From an early age I was identified as having above-average intelligence. I also have ADD, although it went undiagnosed through my elementary and middle-school years, as I was easily able to complete my work in a short enough time frame that I was not distracted. During this time, I was also heavily indoctrinated in the church. During my high school years, it became apparent to me that there was something wrong: I wanted to complete assignments, but would find myself unable to concentrate on them long enough to finish them. Once I understood the concepts, I lost all interest in mindless repetition of the material, even though I knew there were benefits to completing it correctly. Noticing I fit all the signs of ADD, I persuaded my parents to talk to my GP about medication; the GP stated that while he agreed I fit the signs, he did not want to place on me the stigma of being labeled ADD. This began a downward spiral, culminating in my first semester of college: I signed up for several honors classes, but not having acquired the skills needed to complete a truly challenging project, I failed them all miserably. Defeated, I returned to my small town and began taking classes, first at a local community college, then at a local Christian university. In 2000, I became a father, got married, dropped out of school, and proceeded to hide with my family in low-income housing.
These were dark days for me- I knew I was failing in every possible sense. I didn’t know how to solve it. I didn’t know how to figure out how to solve it. I did know we needed money. I took any job I could find. I hated most of them. This continued for 6 years.
At some point, I realized that in order to improve my situation, I had to formulate a plan. I went back to college while working full time building wooden pallets, and received my AS in computer science. I found a GP who would treat my ADD, and saw immediate improvements in my ability to focus. I went on to start my BS in compsci, and was picked up by a startup company, doing both tech support and Linux IT work. During this time, I finally began to look at my beliefs critically. Many, many times I had faced ideas that indicted the existence of god, and each time I had carefully ignored them. However, part of deciding that I needed a plan in order to improve my life was a recognition of determinism: if actions did not have logical, consistent consequences, then there was no ability to plan at all. However, for that to be true, it meant there could be no such thing as a supernatural event, which I viewed as an uncaused action. The death of my faith was a war of attrition, each step painful. I wanted to believe I would see my family after death, that those I loved would be available to me after this short time on earth. I wanted to believe that my consciousness would never end. I eventually let each of them go: I had decided I wanted to know truth more than fantasy.
I moved to Kansas City in 2008, lost my job with the start-up, took another one, and then another in the tech industry, learning more at each position. In August this year, reddit.com had a link to HPatMoR, and I devoured it. This led me here; I have read all of the main sequences, and am reading everything else I can, as quickly as I can. I feel behind: here, I have found not only the process for finding truth, but also the process for solving problems in general, and doing it effectively.
I feel that I am in the midst of rewriting my own code: most of my life, my natural ability has been hindered by bad software, and I am starting to patch out some of the bugs. I have four children now; teaching them how to actually learn, how to accomplish their goals, and how to set goals worth having has become my top priority, especially with my older two: I missed a window where some of this could have been taught intuitively over time, and now I have to help them unlearn bad habits formed under my care. I am in the process of finding cryonic options that fit my entire family on my budget (tricky, but not impossible). I am trying to improve my math skills; I made it through calc 2, and was fortunate to have a college professor who not only understood what he was teaching, but was passionate about it, and willing to spend extra time helping me understand it at an intuitive level. However, I have let it sit for several years, and am having to dust it off.
I am joining the community now, because I feel I have a grasp on the concepts well enough now that in order to grow, I need to start discussing them. I know I still have a ways to go, but I believe with time and effort, I can make strong contributions to the community.
Welcome to Less Wrong, and good luck in your quest for bettering yourself!
Or hum… how do you wish “good luck” in a rational way? ;)
A: Don’t worry about it too much and get on with something more important.
Say something supportive but actually meaningful, like “I’m impressed by your achievement”, or “Keep going, awesome person!”, or even just “I hope you do well.”
Awesome.
Greetings, all. I’ve spent most of my life (being 24 now) longing for the sort of clarity provided by rationalist thought, but only discovered a few months ago that there was such a thing as empirically verifiable truth accessible to me, and that it was possible to build a belief system with solid foundations. I’m still going through the resulting lengthy process of reassessing my beliefs in light of actual evidence.
My partner recently introduced me to this site, and I dived right in—only to hit a concrete wall. My mathematical skills, unused since school, have completely atrophied, to the point that I can’t even follow An Intuitive Explanation of Bayesian Reasoning (my work computer’s refusal to load applets not helping). Since a significant proportion of the Sequences seem to rely on at least a basic understanding of probability theory, I am rather stuck. With this in mind, I’d like to ask for recommendations of material which will help me grasp the essentials necessary to fully understand Less Wrong.
I realise that asking for things I might theoretically find through sufficient Googling sounds lazy, but on the other hand the fine people here might know the best-written and most effective ways of covering the necessary ground.
So: what areas of mathematics and probability theory do I need to cover in order to be able to follow the material on Less Wrong, and do you know of any good sources for learning them, assuming I’m starting from zero?
Don’t worry, you’re definitely not the only one who found the Intuitive Explanation difficult. Have you seen Visualizing Bayes’ Theorem? If that doesn’t help, there are some other explanations on this LessWrongWiki page.
As far as the sequences are concerned, you’ll probably be fine as long as you have a basic understanding of what probability is and how to use Bayes’ Theorem; fortunately, there isn’t too much math in the Core Sequences.
Welcome !
The “Intuitive Explanation” is very interesting, but not always the easiest to grasp. The most important part for understanding the Sequences is the beginning: understanding how to compute (even if you do it manually, by “counting” women in each possible case) the chance of having cancer given that you have a positive mammography.
For the rest, I would advise you to start reading the Sequences, stop when you find something that you don’t understand, and then try to learn that part of maths. You’re free to ask for pointers or hints when you find such a “blocker”.
What you’ll need is a base of probability theory, a tiny bit of vector algebra (or anything that can help you grasp the concept of n-dimensional space, with a huge n) for the quantum mechanics sequence, and an understanding of what a “function” is in maths. The rest should go easily.
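If it helps, here is a minimal Python sketch of that "counting" computation, using the figures I believe the Intuitive Explanation uses (1% base rate, 80% true-positive rate, 9.6% false-positive rate; treat the exact numbers as illustrative):

    # Bayes' theorem by counting: imagine 10,000 women getting screened.
    population = 10000

    with_cancer = population * 0.01               # 100 women have cancer
    without_cancer = population - with_cancer     # 9,900 women do not

    positive_with_cancer = with_cancer * 0.80         # 80 true positives
    positive_without_cancer = without_cancer * 0.096  # 950.4 false positives

    p_cancer_given_positive = positive_with_cancer / (
        positive_with_cancer + positive_without_cancer)
    print(p_cancer_given_positive)  # ~0.078, i.e. about 7.8%

The punchline is that even after a positive test, the probability of cancer is still only about 8%, because the false positives from the much larger healthy group swamp the true positives.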
I have a unique way of explaining Bayes’ Rule that has so far helped zero people out of the one who has looked at it. The advantage is that it is very different than other ways, so if those are confusing, you could give it a try.
Welcome to Less Wrong!
Welcome!
Greetings everyone; I recently found this website and immediately witnessed a great abundance of intellect and an informed stream of thought on numerous interesting topics, something, I might add, relatively rare in many forums ‘out there’ in my previous personal experience. In brief response to the interest in: “know who you are, what you’re doing, what you value, how you came to identify as a rationalist or how you found us.”
My name is Steven. I am a senior undergraduate student majoring in psychology, with a fair concentration in cognitive psychology and a minor in philosophy. Although I am more of a designer at heart, I pursued this field in order to learn more about topics that generally revolve around philosophical theories. My main interests dwell in the fields and issues pertaining to philosophy of mind, neuroscience, psychophysics, anthropometrics, cognitive psychology, probabilistic science, engineering psychology and human factors. My long-term goal is to apply my knowledge in the fields of philosophy, human-computer interaction (HCI), artificial intelligence (AI), ambient intelligence, ubiquitous computing, information science/technology, interaction design and space automation.
I value innovation over the traditional approach to things, and generally: logic, rationality, strong philosophical exchange, novel ideas, idealistic concepts, multidisciplinary perspectives, well-grounded theories, humble personalities, informed commentary, objectivity, evolution of ideas, altruism, goal-oriented designs, achievements and curiosity.
I personally subscribe to the notion that nature and life itself are far more complex than what we can perceive or describe, too complex for us to explain them within absolute parameters and/or limitations; I value the approximation science gives of them, and the level of uncertainty and margin of error it presents when doing so. Naturalism has shown itself to be productive towards rendering rationality, as well as yielding fruitful results for most of the greatest quests and questions of the human mind. The advancement of technology seems to be the major sign of this development and of the next epoch/period of the human condition. Studying nature to understand its mysteries and potentials seems to be the best instructor for a developing civilization, and, curiously enough, well founded as a natural fail-safe mechanism for intelligence. I find pursuing and collaborating in the advent of a new period to be the most interesting step anyone could take in a lifetime.
I found out about this portal on the webpage of the Singularity Summit 2011, in connection with the ‘Singularity Institute’. I’m glad to be part of this rational community. Thanks for reading.
It seems like LessWrong was essentially made for you!
Welcome!
Welcome here!
Hello LessWrong.
In order then,
I would consider myself to be on the line between an aspiring and burgeoning Artistic Polymath; a storycrafter not picky about means or medium, but very picky about what I would call Extrapolated Contextual Detail. For my part, I treat stories very much like thought-experiments, and as such I’ve invested a lot of effort in expunging from my mind the defaultness of the environment in which I was raised, so that it does not taint my creations (I am still far from perfect at this). Unless I am mistaken, this particular route to rational thinking is less than common here, but even coming from a different direction, I seem to have ended up in the same place. However, I don’t think it was storycrafting in itself that led me to question why I thought what I thought. I remember the very first time I Noticed My Confusion. It’s actually one of my very earliest memories, pre-kindergarten: I wanted to know how computers worked, and I had books for kids with big friendly titles like “How Computers Work” but they didn’t actually explain. I remember working myself up into quite a fit before my dad finally found an old textbook of his and used it to actually explain logic-gates and such to me. My artistic inclinations were with me that early, also, and I don’t know which developed first or if they’re even related. But that was the trend of my early life, at least until the public school system spent 12 years crushing my spirit and destroying my health. Today I happen to be male, 22 years old, sexually attracted to females, ambiguously pale, and of average height and weight. Also romantically bereft, socially frustrated, and professionally aimless, mostly due to my sleep disorder.
Officially, I’m unemployed. Unofficially, I’m being paid to house-sit here in California for my dad, who lives in Arizona. Beyond that, I am currently working on: planning a fantasy novel or two, planning a finite-length webcomic or three, producing machinima, learning 3D modeling, writing fanfic, and map-making in StarCraft 2. Also, keeping a log of how long it takes for my sleep cycle to lap the clock (20 days on average so far), ever since I discovered that my abnormal circadian rhythm was an actual recognized neurological condition and not just some bizarre psychological problem.
I value creativity and sexuality. I value other things as well, of course, but these are the pieces of the human puzzle that most intrigue me. On sexuality, I personally (since I try to modify myself to test my theories, lacking a more reliable experimental option) have what are likely to be very weird views. For instance, I’ve managed to get myself to honestly feel that it is morally reprehensible to be squicked out by anyone’s sexual attraction towards myself, regardless of my own reciprocal attraction or lack thereof. I’m also fairly confident that I’ve succeeded in completely decoupling my sense of identity from my gender. My pet theory is that a far greater portion of the human sociosexual dynamic than commonly thought is Nurture rather than Nature. Given how drastically I’ve been able to change my own sexual morals, I’ve come to have some confidence in the theory.
I don’t know if I do identify as a Rationalist yet (note the capital) because I’d rather not risk falling into the traps of Cheering or Attire.
I discovered LessWrong through TV Tropes. I did little more than glance at the site before diverting to read Harry Potter and the Methods of Rationality, which captivated me right from the first chapter, for having clearly not-Hollywood!Science and for the character of HJPEV, whom I related to instantly, and not just because we have the same sleep disorder. His lamentation of child prodigies who flash and fade hit particularly close to home, even though I never really considered myself a prodigy (I never specialized in just one thing enough to be good enough, and I never saw the point of working to better myself in unenjoyable ways since I’m just going to cease to exist several decades down the road anyway. Yes, I was still in the single digits when I first comprehended my own mortality). I also read Luminosity before finally coming back to LessWrong, which was awesome for being what Twilight should have been. I’ve read somewhat more than half of the Sequences so far. It’s very much engaging stuff, and it’s great to be able to put names to all the stuff that’s been going on in my head for a while now, and grow those seeds into more robust understandings.
Welcome!
That’s very cool.
Welcome!
Someone with the stamina to go through half the sequences should take a relatively brief detour and read Yvain’s posts. Finishing them isn’t as time consuming and the content is dense in value. Disease.
(That post assumes Eliezer’s sequence about words though.)
Hello there,
I am a 16-year-old high school student in Vancouver, Canada. I discovered Less Wrong several months ago through HP:MoR, which deeply captured my interest. After finishing the then released chapters, I knew I wanted to learn more. Upon reading the sequences, I felt enlightened. I discovered a new way of thinking, of making decisions that would benefit myself and others more. I delved through articles and eventually started to use Anki, learning fallacies and cognitive biases. As a result, I am more mentally organized, I am doing better in school (especially in being able to express and back up opinions), and generally feeling that life makes more sense.
Much of my thinking has already been affected by my father, a teacher of Philosophy and Western politics (he teaches in China). By that I mean I’ve been introduced to quite a few well-known moral problems and paradoxes (the Trolley Problem, Zeno, etc.). I feel that after discovering Less Wrong I have a better view of these problems.
What I am most interested in are the subjects of math, logic, and computer programming. One of my personal goals is to help others understand rationality as well. Despite this, I occasionally dabble in the Dark Arts, but only within class debates (where you are, of course, expected to choose a side).
From Less Wrong, I hope to further develop my thinking, make better choices for myself and others, and help others make better choices as well. From that, I hope to live a better life in general.
Just make sure to focus your effort on setting up opinions to reflect facts, not on making opinions appear convincing or on your side. In particular, lots of things are confusing, uncertain, and unstable under potential evidence; or offensive, or supportive of policies you believe are wrong, or “improper” for your “identity”. Reality doesn’t care, so you shouldn’t either.
Welcome!
Careful now.
Excellent.
Tangentially relevant. I think I used to overestimate the importance of this.
Hello Less Wrong.
I am 19 years old and have been interested in philosophy since I was 13. Today, I am interested in anything that has to do with intelligence, such as psychology and AI and rationality.
I believe in the possibility of the technological singularity and want to help make it happen.
I hope that the complex and unusual ways of thinking that I have taught myself over the last few years while philosophizing will allow me to tackle this problem from directions other people have not yet thought of, just as they enabled me to manipulate my own psyche in limited ways, such as turning off unwanted emotions.
I am currently studying computer science in the first semester with the goal of specializing in AI later.
I hope you’ll be reading more of this site—a lot of the point is that we don’t just want a technological singularity, we want a singularity that’s good for human beings.
I hope you’ll post more about the ways of thinking you’ve developed.
Hello. I’m Snowyowl, or Christopher if you’re interested in my real name. (Some people are.) I first discovered this site on Friday 14th August, when a friend of mine (who calls herself Kron) pointed me in the direction of the story “Harry Potter and the Methods Of Rationality”.
I don’t consider myself a rationalist, because that seems like a sure-fire way of feeling superior to 90% of the world. Also, I have realised in the past week that a lot of my beliefs and opinions are contradictory—in LessWrong lingo, my Bayesian network isn’t internally consistent. Of course, I had noticed that before now, but it didn’t seem an important problem before I read a few relevant blog posts. So no, I’m not a rationalist, and I hadn’t even heard the word until two weeks ago.
I’m a second-year mathematics undergrad at the time of writing; I had actually heard of Bayes’ Theorem years ago. I have also taken courses branching out into computing and physics. The techniques in your blog appeal to my way of thinking, since I enjoy mathematics and logic, and applying scientific methods to everyday life is a relatively new concept to me.
So hello, LessWrong! I look forward to many calm and reasonable debates!
Hi, I’ve been reading Less Wrong since about January this year. I got interested in the site because of the baby-eating aliens fiction, which someone recommended; before coming here I had read a few posts at Overcoming Bias.
At the time I read most of the Yudkowsky coming-of-age sequence and was also especially interested in the Luminosity sequence. I’ve recently started thinking about Timeless Decision Theory and reading with great interest this site’s take on the blind idiot god.
The thing I think this site helped me most with was to impress upon me how important the theoretical underpinnings of reasoning really are. It has also made me invest serious effort into studying game theory and Bayesian statistics, as well as reviewing information theory.
In RL I’m a Male Physics undergrad in my early 20s.
Now you have me wondering what the Female Physics classes are like. ;)
I hear the fluid mechanics course taught by Dr. Irigaray is really good.
I am an undergraduate mathematician currently headed towards a life of doing theoretical computer science research. Several unrelated friends mentioned LW to me at one point or another in my life, so I read an arbitrary well-liked post every so often for a while. Eventually I concluded that visiting the site somewhat regularly would make me happy (although I have thought enough about how I think, and am easily arrogant enough, to doubt that I will become a better person or learn too much about myself) and so here I am.
I am an (almost) Bayesian utility maximizer when I manage to do what I think I should. My utility is the expected quality of a uniformly random instant of conscious experience (although less flagrantly ill-defined than suggested by such a summary). In practice I am fairly selfish and lazy, but also good at accepting unpalatable arguments.
I am interested mostly in solving problems whose solutions I think would reduce suffering significantly compared to their difficulty, but I also spend a little time thinking about more philosophical issues and questioning my current decision making procedure. I guess a more precise picture of my interests will emerge as I make more comments, if I do, and will be irrelevant, if I don’t.
Hello rationalists (I’m tempted to shorten that word, but neither “rats” nor “rashes” is very complimentary),
I’m a sophomore in college, studying English. I’ve always been interested in getting smarter than the general population, and websites like this never fail to give me some productive reading/thinking material.
I’m very religious, which some would say is a serious fluke in an otherwise freethinking person. I disagree, but I won’t waste your time with my irrational arguments in favor of my own methods of worship.
I love intelligent argument. I think we can get further, sociologically and mentally, by defending and testing rational thought than by any other method.
I probably will never get enough points to be one of the rationalati here, but I’ve subscribed to the RSS and I’m looking forward to several mind-expanding thoughts.
I discovered this site through youarenotsosmart.com.
Good site! I didn’t know that it linked here—was it a comment on a post, link in a post...?
There was a link in the illusion of transparency post.
I wonder if the You Are Not So Smart Guy is one of our veterans, though the writing style isn’t one I recognize.
I shan’t press you any further on this because you don’t appear to want to go there, but you may wish to consider why this one part of your life apparently has its own independent epistemology.
People here tend to see rationality as globally applicable to all domains of knowledge, so a claim that one area of your life is off limits sounds to us like “numbers are good for counting apples, but not oranges.”
More candidates for cutesie short forms of “rationalist”: rashie, ratie (RAY-TEE, or more likely RAY-DEE given typical English pronunciation habits), rasho, nalist, ratnist, tionlist (SHUN-list), Rashomon.
I’d also vote you up for “rationalati”, even though it’s not shorter. :-)
I think I first came to this site via a link on another forum to the “Three Worlds Collide” story… or the “That Alien Message” one. And then I read more articles. I find rationality, cryonics and the singularity to be very interesting, and most of the articles I’ve seen so far are about those topics.
I’m in the UK, and I’ll be in sixth form in September; I will do maths, electronics, chemistry and physics.
I don’t yet feel I can identify as a rationalist, but I don’t think I’ll be able to assess this until I catch myself thinking irrationally in response to something, either before or after the fact. I’m not sure how I can even define “me as a rationalist”...
Hi all.
I have lurked on Less Wrong since Day 0. I found Overcoming Bias from economics blogs I used to follow closely (Marginal Revolution, &c.). I now have my toe in the water here, having been unable to resist joining the Jaynes “Probability Theory: The Logic of Science” study group.
I came to Rationalism firstly by way of Physics and Mathematics, secondly by way of Philosophy. In college I used to do my problem sets in the Philosophy section of the library and my break time was devoted to Plato and to Aristotle and to Hume and the rest of those dead white guys.
After college in California I moved to the Gulf Coast to do seismic work for the oil industry. I have been using AI algorithms, which have a large number of seismic applications, since 1992. If anybody is interested, I could point you to some references with presentations and source material that compare with any I have seen.
I am also interested in applications of AI for finance and for quantitative and technical analysis of asset and commodity prices. At this point I am close to being a complete ignoramus on this subject and am keen to listen to and learn from anybody with a similar interest.
My mentor in spirit is Richard Feynman and I am trying to follow his advice as closely as possible. First, solve easy problems. Then keep working and keep solving harder and harder problems. Eventually you may find you have solved a problem that nobody has solved yet!
Me too, but why would someone who knows something about AI applications for finance and quantitative analysis teach anyone else about the subject?
Teaching and learning do not have to be restricted to one direction. Two heads might be better than one! Have you ever heard a college course Teaching Assistant tell you he learned more from classes where he TA’d than from most classes where he was a student?
As the old Latin saying goes, Qui docet, discit. (“He who teaches, learns.”)
I don’t know how isomorphic the cases are, but Francis Spufford’s Red Plenty, fresh off the presses, is about the attempt by 60s-era Soviet reformers to implement cybernetic planning. While I haven’t read it, I’ve seen glowing reviews from both opponents and proponents of planning.
Hello!
I think I may have posted on a welcome thread before, but I still consider myself pretty new, so I’m saying hi again.
I’ve long thought rational thought is underrated. I find LW very interesting but quite difficult to get into.
Things I’d like to see:
Better introductory content.
Things I find particularly interesting:
Discussion of akrasia and strategies for avoiding it.
Buddhism—is it compatible with rationality? Personally I think some aspects yes, some aspects no.
Further comments, which I’m making in the safe haven of this topic rather than the wilds of the rest of LW:
I’m moderately sympathetic to all the cryonics / singularity stuff that’s often talked about here, but also suspicious. I haven’t come up with a properly argued response, (or even read all the very long posts about it!), but LW in general gives me a feeling of twisting things to fit already chosen conclusions on these topics.
Cryonics: I view it as a long-shot option with a possible big payoff. The part I have my doubts about is the feeling I get that it’s seen as a particularly good long-shot that’s important to focus on.
Singularity stuff: This has all very possibly been discussed at length in a long post I haven’t read, and I’m quite happy to get references. Two areas of this make me uncomfortable:
For me a key problem seems to be the rate at which people can adapt to new technologies. I’m sure I’ve seen this raised either in Marooned in Realtime (http://en.wikipedia.org/wiki/Marooned_in_Realtime) or in very standard commentary on it, so I’m sure this has been addressed somewhere. This seems likely to me to stop acceleration in technology once we reach the stage of significant change within a human lifetime.
Someone still has to do all the thinking. Assuming the singularity happens, and as-yet-undefined entities can solve major problems in short timespans, this will be because they are thinking very fast. They will be operating on a much faster time scale, and to them the apparent rate of progress won’t be much greater. The singularity will only appear to solve all our problems by handwaving from the point of view of the un-accelerated, which around here seems to be viewed as an unpleasant state of existence, to be escaped as soon as the technology is available.
I think it would be possible to dump the mystical elements of Buddhism, and combine the rest with Bayesianism. I could see the ideal of optimal enlightenment.
I see some very promising trends in some of the Western Zen stuff, e.g. Brad Warner (http://hardcorezen.blogspot.com/) (before anyone says it, I also see big problems with him!).
There’s a lot of dumping of mysticism, and some of the more unfortunate bits like gods and reincarnation.
And there are Buddha quotes like:
“Be lamps unto yourselves. Be refuges unto yourselves. Take yourself no external refuge. Hold fast to the truth as a lamp. Hold fast to the truth as a refuge. ”
(intermediate source http://www.sapphyr.net/buddhist/buddhist-quotes.htm, I’m pretty sure there are primary sources but too lazy to dig them up)
Which I think is very compatible with rationalism.
And a lot of Buddhism seems to me to make nice testable claims “do these things and you will experience a greater frequency of desirable mental states”, for example.
However, there’s also other stuff I’m somewhat sympathetic to, but have doubts about, which seems to suggest giving up on rational thought.
Hi all.
I’m 30, live in Sydney and work on image processing. I also have a wife and two beautiful daughters, currently nine months and two and a half years old.
I have a strong background in pure maths and an ongoing interest in philosophy. I’ve been a rationalist since before I even knew what one was. Discovering ET Jaynes’ “Probability Theory” was the closest thing I’ll probably ever have to a religious revelation.
I finally wrote down a large explanation of some quite fundamental philosophy I’d had in my head for quite a while and sent it to a couple of friends to get their opinion on it. This prompted one of them to point me here. Since then I’ve read quite a bit, although far from everything, and am enjoying almost every bit of it. I look forward to posting those very thoughts here some time soon, as they appear to still be both novel and consistent with the views here.
I thoroughly enjoy a good forum debate, and have a fairly high opinion (and at least some evidence to back it up) of my ability to think logically and write a well-structured (if sometimes overly wordy) argument. Which of course doesn’t mean I’m always right; and, as a good rationalist should, I like nothing more than having my argument torn to shreds by a superior one. I look forward to it happening in the near future.
Hello!
I’m 18 years old, American, and a sophomore in college.
I discovered this site through HPMoR in December of last year, but did not seriously start reading the Sequences and other posts until the past half year or so. This site played an instrumental role in de-converting me; I had grown up in the Midwest in a very fundamentalist Christian household. After becoming firm in my atheism (untheism + antitheism), I sadly stopped lurking on here, until I became interested in philosophy and rationality as espoused on LW.
I have always been considered “smart” in school, or to put it more specifically, I was well-optimized for succeeding in the United States’ public educational system. Like probably a non-trivial number of posters on here, I found that the U.S.’s approach to (public) education almost completely failed me; I’m not necessarily saying the system is broken, but it is/was broken for me individually. My high school taught to the lowest common denominator, and even after both skipping a grade and deciding to graduate a year early, I was never challenged in school. I never discovered my academic interests, never was intellectually stimulated, and in fact, was socially pressured into downplaying my intelligence whenever possible. This is not to say that I was blameless. I have always fallen prey to akrasia, and this combined with low standards in school contributed to me not exploring my intellectual boundaries and accepting the worldview I was brought up in.
Thankfully, because of a life-changing event (in summary: went halfway across the country to a top 15-ranked private college, accepted an Army ROTC full scholarship, partied too hard, realized I abhorred the military, decided not to contract with the Army, realized after almost failing first semester that my work ethic from high school was not enough, and transferred to my state’s flagship college for the second semester) I was forced to re-evaluate my worldview, confront any hidden assumptions, make my personal philosophy as coherent as possible, and really discover what I wanted to do with my time on this pale blue dot.
Currently I’m at my third educational institution (small, private liberal arts college) in two years and finally feeling simultaneously happy and intellectually stimulated. I’m looking forward to reading more insights on this blog and applying them to my life whenever possible. Perhaps I may even chime in if I’m feeling particularly courageous, but I’m a lurker by nature.
Just wanted to finally introduce myself and say thanks to all of you here for helping me turn my life around for the better!
Welcome here!
Hello everyone.
I live in Croatia, currently working as an IT consultant after working some years at the University. Along with software development I was always interested in psychology, particularly evolutionary psychology, social psychology and human rationality.
I guess I’ve been a rationalist for as long as I can remember. My interest in science and (oddly) my exposure to catechism at an early age—in a then socialist country—made me question people’s approach to knowledge and reasoning.
I hope to find ways to effectively communicate facts and ideas about human rationality to people, especially young people in my region of Europe. However, I’m still struggling to understand the laws and mechanisms of human reasoning, so I’m hoping my participation here will go a long way in helping me with that.
Welcome!
Hi, I’m a college student in Portland, and I’m planning to major in either Physics or Math and Physics. Although rationalism relates fairly obviously to those fields, that’s not where my interest stems from. I’m interested in rationalism because it can be used to explain things less obviously in its domain, such as politics and literature. Additionally, it provides a structure for interpreting knowledge about the physical world, which is not as self-evident as it sounds. I first heard about Less Wrong from HP:MoR and discovered it through a comment on Reddit.
I’m not sure if this is at all coherent, but I’m psyched to be here and be a part of this website.
Welcome!
I also found Less Wrong after reading the Harry Potter fanfiction. Becoming a more rational person is something that I like to think I have strived towards for most of my life, even if I wasn’t aware of what it was called a lot of the time.
A lot of people who surround me in life aren’t very rational, so I looked towards the internet for a place to discuss things where a rational viewpoint is considered the optimal viewpoint. This is because I am aware of my ignorance across many fields and of the world, and I am also aware of my tendency towards irrationality in many circumstances, and want to somehow lessen this ignorance and this irrationality. Spending some time on this site seems like a good way to do that.
Here are a few things that I currently like the sound of that seem to have some kind of relevancy within a rational viewpoint. I think Altruism sounds pretty good, and it also seems like this site would be a good place to discuss how to make a positive impact on the world, and indeed work out what a positive impact could be considered to be. I do want to become immortal; it seems that one normal human lifetime is not nearly enough to achieve many of the things I want to achieve, and the prospect of unlimited time in order to ensure that these things can happen seems like a good idea. Transhumanism sounds great based on what I know of it.
I hope that my time here will assist me in becoming... less wrong about everything. You know, this site is named very well.
Ok, I’ll go read the sequences now.
My name’s Joshua Bennett, and I also came here after reading the Harry Potter fanfiction. I made a commitment to pursuing rationality after reading Richard Mitchell’s book The Gift of Fire, and seeing even a fictional example of applied rational thinking got me excited. I know that, despite my best efforts, I am a terribly irrational person; I want to fix that.
In the past year or so I’ve thrown off (among other things) my fundamentalist Christian beliefs in pursuit of truth, and I now call myself an atheist and anti-theist. When people ask how I lost my faith, I tell them I didn’t lose it so much as cut it out and throw it away as one would a cancer. I know there are many other cancerous irrationalities lodged into my mind, and I hope that, by studying and conversing with the community here, I will begin to excise as much unreason as I can.
(By the way, I’m glad to see this community is atheist-friendly; I live in Texas and there don’t seem to be very many non-religious folk around.)
Well, you’ve got Steven Weinberg. Not to mention a number of people here on LW.
There was an atheist picnic at the park where I work. They were celebrating the rapture that was supposed to take place back in May (needless to say, they weren’t too surprised when the rapture was called off). I got to speak with a few people, but most of the meetup groups were rather far for me to drive to on a regular basis.
Thanks for the links. I’m located in the DFW metroplex, but I could make a drive to a meetup elsewhere once in a while.
I’m a 28-yo male in the SF area previously from NYC.
This site is intimidating, and I think there are many more just like me who are too intimidated to introduce themselves because they feel they might not be as articulate or smart as some of the people on this forum. There are some posts so well written that I couldn’t write them in 100 years. There is so much information that it seems overwhelming. I want to stop lurking and invite others to join too. I’m not a scientist and I didn’t study AI in college; I just want to meet good people, and so do you, so come out and say hello.
My fascination with rationality probably started with ideas of fairness. I was the guy who, if an argument broke out between teams while playing Charades, turned the hourglass sideways to stop the time, so that when the argument was resolved the actor would be allotted their fair time back. Not being fair bothered me a lot, because it didn’t seem rational.
What also helped push me along my path towards rationality is my interest in biases. After learning about biases in college, I thought the subject had absolutely profound consequences. I was made aware of my own biases and thought it was the greatest thing in the world: to become more self-aware, to know oneself better, is awesome. And with my newfound knowledge, I was quickly disappointed with people. I don’t let it bother me as much as before, but occasionally, whenever someone thinks they experience more utility with expensive vodka because of the quality and not at all the price, I die a little inside.
Starting around the time I graduated university (it’s hard to pinpoint an exact date or time frame), I shed religion and gradually started reading more about humanism and skepticism. It was nothing too deep, but enough for me to have a clear foundation for what I believed. I owe this all to the internet: it led me to atheist videos, TED, exposure to skepticism and the debunking of myths, Reddit, and finally Less Wrong.
Hello. Please call me Paul Watcher. Watcher is not my real name, but I do know someone named Watcher, and it is what I’ve been doing. I’m a medical student.
I’ve recently finished all the sequences (except the luminosity one still), and my head still hurts. I’m really happy I found them, though. It was painful, but I call myself better now.
I’m now relearning as much as I can. I’m trying to use divia’s Anki deck to memorize the sequences: basic things worth memorizing. I still have yet to actually understand a lot of what I read here, so I hope that helps.
I registered because I’m still confused about some things, which I hope will get answered in whatever general discussion thread I post them in. I don’t really anticipate participating much more (though I’m not too confident on that).
Nevertheless, I am pleased to meet you all.
Edit: I have a question. Let’s say that I’m confused about something in, say, Conservation of Expected Evidence. Should I ask my questions on it in comments of the article itself, or in the open thread of this month, or somewhere else?
If you have a question, and don’t particularly care if others after you see the answer, asking in the Open Thread probably will get more people looking at your question. On the other hand, people do look at the recent comment page, and try to answer questions, so I can’t say that’s a bad option. If it’s not time critical, I’d ask in the article, then, if no one answers, link to your question from the open thread.
It might also make sense to raise the question as a new topic in the discussion section.
I think it’s the way to bet that if you’re confused by something here (especially if it’s at all technical or about using the site), you aren’t the only one.
Thank you both for the answers. I don’t have much time right now to think about this, but I think I’ll comment in the article itself. It’s pretty specific.
Bonne journée.
Hello! You have another victim via MoR.
I am already a bit conflicted about the site—I am finding the content inspiring, useful and helpful, given that I am going through a bit of a life ‘directional re-evaluation’ at the moment, but it is also sucking away a lot of time that I could be devoting to actual analysis and practical action...
Oh, well, when I finish reading every post, I can carry on from there!
Related: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality. It’s also one of the reasons I (and other people) wish it were easier to download the site or portions of the site (like the Sequences plus comments) for offline reading.
Hi! I first came here a couple of months ago through MoR (through TV Tropes), which seems to have been a gateway drug of sorts for many of us here. Right now I’m reading my way through the sequences and other posts. I find it surprising how much difference it’s made in my thought processes in just the short time I’ve been reading to just have the Litany of Gendlin available and verbalized, or making my beliefs pay rent. I think I’ve always been very analytical, but the most helpful things I’ve read on Less Wrong so far have been ways to focus that analysis and make it useful. My biggest complaint so far has been finding my browser somehow full of unread but saved Less Wrong tabs every time I open it. How does that keep happening, I wonder...
I’m also one of the (presently, six) members of the Less Wrong Folding@Home team, in case you were wondering.
Meaning you work on Folding@Home or you contribute your cycles?
I contribute cycles, as part of team #186453 (Less Wrong).
Lured in by ciphergoth, who successfully irritated me into looking. Finally irritated into creating a login to comment on a post that wasn’t listing its sources.
I also write a lot on RationalWiki, with subjects of local interest being the cryonics and LessWrong articles. Please remember that we love you really, we’re just annoying about it.
Having given it some thought, I don’t label myself “rationalist”. “Whatever-works-ist” is probably more accurate. LessWrong’s ambit claim upon the word “rationalist” is very irritating.
LessWrong irritating me seems good for me. Or productive, anyway. This may not be the same thing.
We call “whatever-works-ism” instrumental rationality.
My name is Elizabeth, and I made my way here through “Harry Potter and the Methods of Rationality,” but quickly found myself fascinated. I’ve been reading intermittently for a few months, and would likely not be posting here today due to an unfortunate personal tendency towards lurking and the sheer daunting nature of the volume and intelligence of discussion, but when I was reading about narrowness I came across a comment I couldn’t help responding to, and decided my newfound positive karma score was worth overcoming my trepidation about permanent records.
I’ve most recently been reading about the nature of words and definitions, which is a topic of particular interest to me. I really like it when a post walks me through a set of ideas that I sort of half knew, but never really codified, and I like it even better when it’s something I had never thought of, or which changes how I think of things. Some of the posts about biases were particularly effective in that regard. I hope to be a productive part of the discussion.
Hi, I’m Rory O’Kane. I’ve been reading Less Wrong for a few months. I first came across it a year or two ago, when a Hacker News comment linked to the AI-in-a-box experiment description. I followed some links from that and liked each Less Wrong post I read. A few more times in the next months, I stumbled across a random comment or article pointing to a Less Wrong post that I also enjoyed, until I finally decided to read the About page and see just what Less Wrong was all about anyway. Every so often, I came to the site, read posts, and followed links from those posts. In this way I read most of the sequences, but not in the order listed on the wiki.
I have been programming computers since I was 7 and I like math too, so articles about how to think logically naturally interested me. I’ve been reading and loving Harry Potter and the Methods of Rationality – I’ve recently noticed many more of the stupid actions characters in stories do, and HPatMoR has helped satisfy my want for a story in which characters generally don’t do such obviously stupid actions. (There are examples of such actions in, for instance, the magical girl anime CardCaptor Sakura, in which the main character Sakura just accepts that she has to collect all of the magic cards without asking their magical guardian how the cards were created and who created them, or what the meaning behind a certain recurring dream is, or how magic works, or what this upcoming doom he keeps hinting about is.) I’m in college right now taking a Computer Science degree, about to start my sophomore year. I’m currently trying to figure out what the elements of the best possible programming language would be, hoping I can eventually write a language or tool to ease my frustration at the redundancy of C++, which we must program our assignments in.
A note about the welcome post:
I don’t really like the use of “we” here. I, too, am atheist, but I would guess that there are probably some people new to this site who are atheist but who have not yet really “given full consideration to theistic claims”. I would revise the sentence to “In general, this isn’t groupthink; most of us really, truly have given full consideration …”.
Hmm, fair point. Quick poll below:
Vote for me if you would prefer the post stay as is. (Karma balance below.)
Voting for original wording. In context, “we” clearly refers to the “core” of LW, which, just as clearly, is the collection of people whose atheism needs explanation to new readers.
Changing to “most of us” implies there is a notable subset of participants who haven’t given full consideration, and draws attention to that subset (“well, most of us have...[but there are a few people who haven’t]”).
There isn’t any need to weasel-word around the atheism here; it’s not anything we need to be apologetic about.
Vote for me if you would prefer the post edited as suggested. (Karma balance below.)
Karma balance. Vote me down to satisfy your sense of justice.
PHLOGISTON FOREVER!!!
Hi. I just joined the site yesterday to post a comment. I’ve been tracking the feed for about a week, having recently decided to re-engage with the Internet. I learned of the site about three months ago, by way of a blogger who was blogging about social issues. I disagreed with him very strongly on those issues, but I checked out his other posts and he mentioned a discussion over here (I think he’s a participant).
I think that the post that originally attracted my attention was something relating to the singularity idea. Being a geek myself, I’m kinda interested in the “geek rapture”, but haven’t gotten a good sense of how people approach it (I know there’s a book).
Anyway, I checked out the site: I liked the mission statement and the structure. Probably most importantly, the name stuck in my head. “Less Wrong” has a nice, calmly optimistic ring to it (kinda like Marginal Revolution, another blog I like). I really like how the site relies on user ratings. I’ve been a big fan of systems that have the community act as the gatekeeper, and have always jumped on board such projects (Wikipedia and Daily Kos, for example). I even once tried to set up a wiki for debates, but it was very clunky and never got critical mass.
I’ve been participating in on-line political debates for about 15 years now. I think I’ve learned a lot, but I often get sick of the debates, especially when they involve mainstream activists who just repeat the same tripe over and over again. I’ve also become rather cynical towards our political institutions. I don’t really think that it matters what I think about politics; if I’m not willing to make a career out of it, I’m not going to impact anything. I’ve decided to make my career as a scientist instead.
All of these futile political debates lead me to ask why people are so bad at thinking (or at least, expressing rational thoughts). I’ve always viewed politics as a means to an end—that end being human happiness—and I’m increasingly thinking that it is irrelevant to promoting that end. I’m thinking that the real issue is in how people think and solve problems. If people think right, the politics will sort itself out. So, I’m hoping that Less Wrong can provide a more productive discussion.
This is precisely how I feel. Sometimes I daydream about starting a political party that has no ideology apart from vague consequentialism, commitment to rationality & empirical testing of policy proposals. Call us the “Whatever the Hell Works” party.
Some niches might be opening up in US politics. Unfortunately, sensible people don’t seem to be rushing to fill those niches.
There are at least 3 things going on in “politics” though. 1) Public discussion about the problems facing society including possible solutions and value debates. 2) Getting the “right” people in the right places so that upcoming problems can be addressed well. 3) People making sure they and theirs get a “fair” share of the pie including making their living through politics.
Unfortunately, the “Whatever the Hell Works” party probably doesn’t do well on that third aspect which probably means it would have a hard time getting and keeping people working for it. Ride a tide of dissatisfaction into power, but then it is really tempting to become just the latest version of the same old politics.
Oh, I agree! It’s only a daydream. =P
Hello! I’ve been a reader of Less Wrong for several months, although I never bothered to actually create an account until now. I originally discovered LW from a link through some site called “The Mentat Wiki.” I consider myself an atheist and a skeptic. I’m entering my senior year of high school, and I plan on majoring in Physics at the best college I can get into!
Actually, I had come across EY’s writings a few months earlier while trying to find out who this “Bayes” was that I had seen mentioned on a couple of different blogs I read. That was a pleasant connection for me.
I had an interesting time testing Tversky and Kahneman’s Anchoring Bias for my end of the year project in my 11th grade Statistics class. On the plus side, we found a strong anchoring effect. On the minus side, it was a group project, and my groupmates were...not exactly rationalists. I had to kind of tiptoe around what LW actually was.
Since I’ve started reading Less Wrong, I think the best sign of my improvement as a rationalist is that a number of concepts here that I used to find penetrating or insightful now seem obvious or trivial. On the other hand, I think a red flag is that I haven’t really made any major revisions to my beliefs or worldview other than those coming directly from LW.
I look forward to learning as much as I can from Less Wrong, and perhaps commenting as well!
Particle Man, Particle Man, does whatever a particle can! What’s he like? It’s not important. Particle Man!
Sorry, couldn’t resist.
I’m sorry, I’m not.
Welcome! Would you happen to be the same Particleman from LDF/Stonehenge?
I’m sorry, I’m not.
Ah, okay. Welcome nevertheless. :)
My search began when I realized that I was confused. I was confused by what people did and what they said. I was confused by my responses to other people, how interacting with other people affected me. And I was confused about how I worked. Why I did the things I did, why I felt the way I did, why sometimes things were easy for me, and sometimes they were hard.
I learned very early in my life that I needed to critically analyze what other people told me. Not simply to identify truth or falsehood, but to identify useful messages in lies and harmful messages hidden in apparently truthful statements.
At the age of 11 I taught myself to program on a TRS-80, and in the process I discovered how to learn through play and exploration. Of course I had been learning in this way all along, but this was when I discovered the truth about how I learned. This realization has changed my approach to everything.
Computer programming confused me, so my search continued. By focusing on how I thought about programming, I quickly became very skilled. I learned how to explore problems and dissolve them into useful pieces. I learned how to design and express solutions in many programming languages and environments. I learned the theory of computation and how it is tied to philosophy, logic, mathematics and natural languages.
I worked in industry for 20 years, starting with internships. I’ve worked on large and small systems in low level and high level languages. I’ve done signal processing for engineering systems and developed web interfaces. I’ve worked alone, and in teams. I’ve run software teams launching companies.
Programming still confused me. I was frustrated and confused by how difficult it was to do programming well. In general it is very difficult to implement a simple idea, in a simple way that is simple to use. Even under ideal circumstances and in the best designed system, complexity grows faster than the code base. This dooms many projects to failure.
I am now coming to grips with the true nature of this problem, and with its solution. The problem rests in the nature of knowledge and meaning. The implications extend far beyond computer science and I intend to write articles on this topic for Less Wrong.
A core idea that I am exploring is the context principle. Traditionally, this states that a philosopher should always ask for a word’s meaning in terms of the context in which it is being used, not in isolation.
I’ve redefined this to make it more general: Context creates meaning and in its absence there is no meaning.
And I’ve added the corollary: Domains can only be connected if they have contexts in common. Common contexts provide shared meaning and open a path for communication between disparate domains.
Some examples: In programming, an argument or message can be passed only if sender and receiver agree on the datatype of the argument (i.e. on how the bits should be interpreted). In Bayesian inference, all probabilities are conditional on background knowledge. In natural deduction (logic), complex sentences in simple contexts are decomposed into simple sentences in complex contexts.
In all cases, there are rules for transferring information between context and “content”. But you can never completely eliminate the context. You are always left with a residual context which may take the form of assumed axioms, rules of inference, grammars, or alphabets. That is, the residual is our way of representing the simplest possible context. I think that it is an interesting research program to examine how more complex contexts can be specified using the same core machinery of axioms, alphabets, grammars, and rules.
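To make the programming example above concrete, here is a tiny illustrative sketch (my own, in Python; the byte values and variable names are invented for illustration, not taken from any real system). The same four bytes mean entirely different things depending on which datatype, i.e. which context, sender and receiver have agreed on:

    import struct

    payload = b"\x42\x61\x79\x73"  # four bytes arriving from a sender

    # Under the context "ASCII text", the bytes mean the word 'Bays':
    as_text = payload.decode("ascii")

    # Under the context "big-endian unsigned 32-bit integer", the same bytes
    # mean a single large number instead:
    as_int = struct.unpack(">I", payload)[0]

    # Without agreeing on one of these contexts, the bits carry no meaning at all.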
Absolutely. The interpretation of the evidence depends entirely on its meaning, within the context at hand. This is why different observers can come to different conclusions given the same evidence; they have adopted different contexts.
For example: “...humans are making decisions based on how we think the world works, if erroneous beliefs are held, it can result in behavior that looks distinctly irrational.”
So when we observe a person with behavior or beliefs that appear to be irrational, we are probably using a different context than they are. If we want to understand or to change this person’s beliefs, we need to establish a common context with them, creating a link between their context and ours. This is essentially the goal of Nonviolent Communication.
I also see ideas in Buddhism that can be phrased in terms of the context principle. Suffering (dukkha) is context-dependent. We may suffer under conditions that bring another joy. My wife, for example, dislikes most of the TV shows I watch. If she realizes that I am happy to put on headphones to spare her from exposure, she can experience gratitude instead of resentment.
This is a key insight. If you can split a system arbitrarily between context and content, how do you decide where to make the split? In programming, which part of the problem is represented in the program, and which part in the data?
This task can be arbitrarily hard. As I stated above: “In general it is very difficult to implement a simple idea, in a simple way that is simple to use.”
The Daily WTF contains many examples of simple ideas implemented poorly.
In computer science you can ground certain abstractions in terms of themselves. For example the XML Schema Definition Language can be used to define a schema for itself.
The observable universe appears to be our residual common context. If we want to come up with a TOE that explains this context, perhaps we need to look for one that can be defined in terms of itself.
This sounds similar to what I am working on. I am working on a methodology for creating a network of common contexts that can operate on each other to build new contexts. There is a core abstraction that all contexts can be projected into.
Key ideas for this approach come from Language-oriented programming and Aspect-oriented programming.
“The implications extend far beyond computer science”
In one way they do, in another they are very simple.
“The problem rests in the nature of knowledge and meaning”
Some things have simple answers, others are complex, but, if there is a mind to ask the question, then?
Hello, my name is Brett, and I am an undergraduate student at the University of North Texas, currently studying in the Department of Anthropology. In this semester, my classmates and I have been tasked with conducting an ethnographic study on an online community. After reading a few posts and the subsequent comments, LessWrong seemed like a great community on which to conduct an ethnography. The purpose of this study is to identify the composition of an online community, analyze communication channels and modes of interaction, and to glean any other information about unique aspects of the LessWrong community.
For this study I will be employing two information-gathering techniques. The first will be participant observation, where I will document my participation within the community in an attempt to accurately describe the ecosystem that comprises LessWrong. The second technique will be two interviews held with members of the community, where we will have a conversation about communication techniques within the community, the impact the community has had on the interviewees, and any other relevant aspects that may help to create a more coherent picture of the community.
It is at this point that I would like to ask for volunteers who would like to participate in the interview portion of the study. The interview will take from forty-five minutes to an hour and a half, and will be recorded using one of several applicable methods, such as audio recording or textual logs, depending on the medium of the interview. If there are any North Texas area members who would like to participate, I would like to specifically invite you to a face-to-face interview, as it would be most temporally convenient, though I am also available to communicate via Skype, any other voice-based online communication system, or the telephone.
If you are interested in participating, please send me a PM expressing your interest. If there are any questions or comments about the nature of the study, my experience with Anthropology, or anything else, please feel free to reply and create discourse. Thank you for your time.
Hi!
I’m a 3rd-year Economics undergrad student at the University of Glasgow. I found LessWrong by reading a profile on Peter Thiel. My interests are: economics (obviously; it used to be macro, but I’m now gearing towards more experimental areas), philosophy (mostly Stoic; not Seneca etc. but Aurelius’ “Meditations”), the history of maths and risk, and financial markets to an extent, though that’s not something I’m pursuing religiously. I have always been interested in self-development but thought that the literature would need to be seriously scrutinized, so I’m very happy that I found this place. Also the Singularity, from an economic point of view. Transhumanism combined with cognitive enhancement is something I find extremely interesting at the moment; I’m still mapping the territory of it.
Cheers / UngnsCobra
Welcome to Less Wrong! Your interests sound interesting. What does it mean to look at the Singularity from an economic point of view?
I’m fairly new to the singularity etc., but from what I have read so far: looking at the singularity as a what-if scenario through brain emulations (uploading), how would this affect the economy regarding employment, growth, etc.? So far I have found papers on the economics of the singularity from Robin Hanson. I’m struggling to find other sources, so I would be very grateful if someone would like to contribute.
I don’t really know of any myself. It’s hard to do economics about such divergent and unclear scenarios, and economists typically do them as jokes (eg. Paul Krugman’s paper on investing in a relativistic time travel framework). And there seem to be penalties—that Hanson paper from 2008 still has not been published 4 years later, for example.
For those who are interested.
(To gwern and Will_Newsome) Haha, that’s great; there’s a somewhat juvenile undertone in Krugman’s writing in this paper. That’s exactly the kind of paper I’m looking for: papers that are something of an outlier in the field of economics. If any other papers come to mind in the same direction, it would be appreciated.
Hello everyone, I’m a 27 year old graduate student pursuing a degree in optics from the University of Central Florida. I perform experimental research in optical sensing of biological and random materials. Though I enjoy my research, I’m more interested in the philosophy of science. By philosophy of science I mean the framework of logical structures that scientists use to identify problems and arrive at solutions. Most of my colleagues, myself included, received no formal education of this type; rather, our educations were limited to the theory and application of the hard sciences while it was assumed that we would develop a framework for rational thought as a consequence. However, I see many working scientists fail to employ rational thought, especially in the lab, and I believe the inclusion of this topic in engineering and science curricula would better prepare students for graduate and industrial work.
I feel that a brief history of how I came to understand rationality would help describe who I am. I first became attuned, so to speak, to rationalism when I read Nietzsche’s Genealogy of Morals in college. I was raised protestant but throughout my life had felt no affinity for the Christian world view. However, growing up in rural Ohio afforded me no other mode of thinking. GoM’s criticism of ascetics, along with increasingly frequent encounters with liberal thought in college, led me to embrace my skepticism for the first time.
I read Robert Pirsig’s Zen and the Art of Motorcycle Maintenance my first year in graduate school. I’ve since read it twice more and, while I still can’t claim to fully understand Pirsig’s message, mark it as a major influence on my thinking, especially on practical problem solving.
The most recent event in my maturation as a rationalist is the discovery of both this blog and Julia and Jesse Galef’s Measure of Doubt. Though it seems a bit silly now, I honestly didn’t realize that other people thought the same way I did. It’s quite refreshing to learn that whole communities of like-minded people exist when one has been more-or-less secluded from them their entire life.
Aside from my interests in philosophy and science, I find environmentalism fascinating and feel morally obligated to make environmentally conscious decisions. I like to travel, rock climb, bicycle, cook, and brew beer. I’m happy to share more and am looking forward to learning from others on this blog.
Do you do any photoacoustic tomography, or is your work purely optical? I’m a math grad student in that area.
I’m also from Ohio, the Cincinnati area. Hi!
No, I do not do any work in that area, though I am vaguely familiar with it, having attended a few talks on the subject. However, the mathematics of solving the associated inverse problem is extremely relevant to the type of work that I do.
It’s great to meet another Ohioan. I was just driving through Cincinnati a few days ago.
Cheers, -kmd
Hi Less Wrong! My name is Jonathan, I’m 43, from Vancouver, Canada, with a background in physics and philosophy (no longer professional), and with interests in the Anthropic Principle, philoscience, Tegmarkian metaphysics, multiverse theories, observer selection and assorted Bostromian subjects, and much else besides. I’ve been a proponent (shill) of the multiverse for many a year and am now gratified that it’s reaching mainstream acceptance.
Hi everyone!
My name is Felipe, from Argentina. I’ve been studying philosophy for the last five years or so, especially logic and philosophy of science, but this last year I also started learning web programming, and before that I was a very active editor on the Spanish Wikipedia.
I learned about Less Wrong because I had just finished an experimental website, and I posted it on the imageboard of science and mathematics /sci/ (which some of you probably know), and there someone mentioned that people on Less Wrong would probably like it. So I came here, and I must say that after browsing for a while, I will definitely join the community! I also read above that “If you’ve come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation.”, so I guess it wouldn’t be out of place to share my site here. Here goes:
http://formalforum.com/ is the address of FormalForum, a website designed to structure debates in a rational way. There are two basic types of posts you can submit (for now): propositions and objections. Propositions are things that may be true or false (like “There is no retroactive causation”), while objections are defined as a special kind of argument: an argument which concludes either that a certain proposition is false, or that a certain objection is invalid. For each type of post, there is only one rule governing its behavior:
Every proposition will be considered true, unless there is a sound objection to it.
Every objection will be considered valid, unless there is a sound objection to it.
A sound objection is a valid objection with true premises. As every premise is considered a separate proposition, rule 1 applies to each of them. Thus, an objection will be considered sound exactly when there are no sound objections to its validity, nor to any of its premises. Some consequences of these rules are:
New propositions will be considered true by default, as they start with no sound objections (indeed, with no objections at all).
New objections will be considered valid by default, as they start with no sound objections (indeed, with no objections at all).
Not every new objection will be considered sound by default, as it may have among its premises one or more old propositions that are currently considered false.
As the site grows, some propositions will tend to get re-used more than others, which will raise their importance, for the soundness of more and more objections will depend on them being true. Eventually, some propositions will come to light as being of key importance, while many others will sink into oblivion.
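For readers who think in code, here is a minimal sketch of how rules 1 and 2 could be evaluated. This is my own illustration, not FormalForum’s actual implementation; the class and method names are invented, and it assumes the network of objections contains no cycles, so the recursion terminates.

    class Proposition:
        def __init__(self, text):
            self.text = text
            self.objections = []  # objections claiming this proposition is false

        def is_true(self):
            # Rule 1: true unless there is a sound objection to it.
            return not any(o.is_sound() for o in self.objections)

    class Objection:
        def __init__(self, premises):
            self.premises = premises  # list of Proposition
            self.validity_objections = []  # objections claiming this objection is invalid

        def is_valid(self):
            # Rule 2: valid unless there is a sound objection to its validity.
            return not any(o.is_sound() for o in self.validity_objections)

        def is_sound(self):
            # Sound = valid with true premises; each premise is itself a
            # proposition governed by rule 1.
            return self.is_valid() and all(p.is_true() for p in self.premises)

    # A new proposition with no objections is true by default:
    p = Proposition("There is no retroactive causation")
    assert p.is_true()

    # An objection resting on a premise that is currently considered false is
    # valid but not sound, so the proposition it attacks stays true:
    doubted_premise = Proposition("a premise that already has a sound objection")
    doubted_premise.objections.append(Objection(premises=[]))  # unchallenged, hence sound
    p.objections.append(Objection(premises=[doubted_premise]))
    assert p.is_true()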
The system draws from the ideas of Austro-British philosopher Karl Popper (and others), who in his work The Logic of Scientific Discovery argued that our acceptance of propositions such as “that crow is black”, although “inspired” by experience, ultimately depends on a convention: the convention of accepting as true those propositions which nobody cares to doubt. When someone does doubt a proposition, then s/he will have to extract one or more consequences from it that can be tested empirically, and if any of those consequences does not occur (that is, if nobody doubts that one or more of those consequences does not occur), then the proposition is falsified and must be considered false. Else, the proposition remains true.
The website is new, and has many flaws and shortcomings, but the essence is there. I hope you find it interesting. In any case, I find Less Wrong very related to my interests, so formalforum or not, you will definitely see me around.
I just uploaded the evolution of FormalForum: http://ergoforum.org/
Any feedback appreciated!
Hello,
I was introduced to Less Wrong by a friend about a year ago. My first impression was of thoughts and opinions that I already had, or had half-thought, but expressed much more clearly. How could I not love it? I eventually read all of the sequences, finding novel but brilliant ideas. I now recommend them to almost everyone I meet. Coincidentally, after I’d started reading the sequences, I found HP:MOR, and had my mind blown when I found out most of them were written by the same person. Currently, I’m trying to read E.T. Jaynes’s “Probability Theory: The Logic of Science”, but I’m having some trouble, especially since I can’t seem to solve any of the examples. If anyone has a solutions guide, or some small hints, I’d greatly appreciate it.
Welcome!
Hi everybody! More than half a year ago, I came across LessWrong via Harry Potter and the Methods of Rationality, and have since read around half of the sequences. I’m so glad I found this site. I had a sense that more is possible, but I didn’t even know the word “philantrophy” existed before I got here, although that might be because that word is less common in German (which is my first language). At the few meetups I’ve been to, I’ve met some very awesome folks – I can’t remember feeling so comparatively uninteresting ever before. I hope my experience with this site continues to be this eye-opening.
It’s “philanthropy”, but “philantrophy” would be an awesome neologism for the chaos that results from well-intentioned but ill-conceived humanitarian aid.
Philentropy: (noun) measure of the decrease of the utility/dollar ratio as a function of distance to recipient.
Edit: Here I thought I just made this up whole cloth, and what does google tell me but that it’s the name of an album older than I am. Nothing new under the sun, etc and so on. Relevant.
Welcome to LW, cadac!
Hello all.
My name is Alerik. I’m a 29 year old Civil Engineering student and father of one (so far). I’m hoping to graduate within the next year. I’ve been in school forever, changing schools several times, and majors from naval architecture to physics to applied math and computer science to civil engineering. I’ve been a terrible student much of the time, and a poor organizer of my time much of the time. I was raised very religious, broke away from my church when my grandfather’s death revealed the enormous corruption within the church, and broke with theism and religion in general in my mid twenties after a lot of reading, especially at stardestroyer.net. I came to be introduced to Less Wrong through several links from stardestroyer.net on topics about artificial intelligence and epistemology.
After my deconversion I found I was able to make my way out of a decade of suicidal depression and constant internal rationalization processes trying to harmonize dogma and science. I was able to engage in functional adult relationships and move forward with reduced fear. Nevertheless, I am still riddled with irrational and self defeating behaviors that I was unable to consistently overcome even when I detected them in operation. Only recently have I been able to make much significant progress, and have only taken beginning steps. I have found the Litany of Gendlin to be of immense help. I have also joined a local freethinker group, but it has not yet become well organized, and the focus is still on the influences of religion and not on how to improve rationality in general. I Wish to Become Stronger; therefore I am here. I must cleanse myself of the cloudy emotions and habits that prevent me from seeing what I need to see or deciding what I wish to decide. I must move forward with choosing the best life possible for myself and my family. And I expect, and even hope, though I admit to occasional fear, that the resulting optimal path in life results in a world very different from what I have come to expect.
Hi, Alerik! Welcome to less wrong. Congratulations on the progress you’ve made, and good luck in your future endeavors.
Hi all, I’ve just started reading Less Wrong, having long seen links to it on utilitarian communities online and through philosopher friends in Oxford. If you want to know more about me you could read the ‘about me’ page on my http://www.philosofiles.com/ website, though I won’t bore you with the details here! I’m always more than happy to discuss my beliefs though, so I look forward to eventually engaging with the discussions here :)
Welcome here!
Beware of things that are fun to argue, and don’t forget to win!
Welcome!
Hello! I’m here because a reference to Less Wrong that Nancy Lebovitz made on another forum intrigued me, and I love the last line of the FAQ: there’s nothing in the laws of physics that prevents reality from sounding weird.
I disagree that perfectionism as described on the About page is always a good idea, but my imagination can easily come up with an ideal standard which no living person can actually meet. And stay alive. Usually because of slippery-slope arguments, but if an ideal cannot be taken to the extreme example, can it really be that ideal?
I do believe in God, but not as defined in the FAQ, and I usually feel it is more accurate to say that I am an agnostic.
I started a couple blogs in July, and I am an aspiring writer. Humor is where I feel most comfortable at the moment. http://claricaandthequestion.blogspot.com/
I suffer from depression, but while it demonstrably limits my activities, I find it much harder to identify its effects on my mood, which is usually cheerful. There seems to be a lot of stuff that’s interesting to read here, which is totally exciting.
Welcome!
Hi,
I’ve been lurking on LW/OB for a while but thought I’d sign up. I’m currently doing a philosophy degree which you might expect would make me feel unwelcome on LW (which is often fairly anti-philosophy) but it’s actually really great to come across a group with a similar view about how to do philosophy as me—I tend to come across more interesting philosophy ideas here than I do in classes.
Anyway, just thought I’d say hello.
Welcome to Less Wrong! Don’t worry too much about our being anti-philosophy. We’re more against the common views held and methodologies used by philosophers than against the field itself. That is, the areas philosophy investigates are worth investigating; the bashing is reserved for the way philosophers go about it.
Thanks for the welcome.
And yes, my opinions re: philosophy are much the same so it seems like I’ll fit in fine.
at which school?
My comments before weren’t intended to reflect poorly on philosophy at my specific university. From what I can tell, they’re a good university for philosophy, but I simply find a lot of philosophy to be of dubious value, and my views on which philosophy is useful align to a reasonable extent with views expressed on LW. So despite the ability of this specific department, I find that not all classes cover stuff that I find to be useful.
College in Australia = should be fun ;) Welcome to LW
Hello all !
I’m a twenty-seven-year-old student doing a PhD in vegetation dynamics. I’ve been interested in science since forever, in skepticism and rationality per se for the last few years, and I was linked to LessWrong only a few months ago and was blown away. I’m frankly disconcerted by how every single internet argument I’ve gotten into since has involved invoking rationality and using various bits of LessWrong vocabulary; I think the last time I absorbed a worldview that fast was from reading “How the Mind Works”, lo these many years ago. So I look forward to seeing how that pans out (no, I do not think I’m being a mindless sheep—I don’t agree with everything Steven Pinker said either. I’m just in the honeymoon “it all makes SENSE !” phase).
I’ve got to say, I’m really grateful for this great resource and to the internet for giving me access to it. Next time an old geezer tells me about how awesome the 50s and 60s were I’ll bonk them over the head. Metaphorically.
What do I value ? 1) being right and 2) being good, in no particular order. I’m afraid I’m much better at the first one than the second, but reading posts here has gotten me to think a bit on how to integrate both.
Welcome to LessWrong!
Hi. I’ve been a lurker since before Less Wrong existed, reading though the sequences as they existed on Overcoming Bias. I regularly read new posts on Less Wrong and have made it through a couple of the sequences, but have failed to internalize much.
I am very interested in the topics discussed here and have recently decided to take a more active role in the community as well as really learning the existing material.
A little about me personally: I’m a 23-year-old male computer programmer (‘software engineer’) who has essentially slacked off his entire life. I have extremely varied intellectual interests ranging from the arts, music, and design to computer programming, programming languages, mathematics, human intelligence augmentation, medicine, computer science, and artificial intelligence. It is rare that I find something that I am not in any way interested in. I have studied mathematics and computer science formally.
I can’t necessarily call myself a rationalist because I lack pretty much all instrumental rationality. I am generally very rational in thought, but not in action.
I think the materials available on Less Wrong are both awesome and intimidating. I feel like I have already learned a lot, but know that I have really only scraped the tip of the iceberg.
(Located in North Carolina)
Hey, that makes two of us. Where about?
Raleigh, what about you?
Durham, here, right by Duke University, though I’ll be moving to another part of the city early next year, with some luck.
I don’t drive, but if we scare up a couple more people nearby and start a meetup group, it’d be worth taking a cab to, I bet.
Having a meetup here would be great. I know a couple people that would attend, but I’m not too sure of the overall readership here.
Hi all, I’m John Bustard. A friend suggested this site to me and I’ve just started getting into it. I’m a PhD student in computer vision, with a basic need for intellectual discussions (nice food and good debates are pretty close to heaven for me). I’m also very keen on improving my knowledge of statistical learning, which I feel is the key to understanding truth (the formalisation of understanding). I’m a fan of the singularity with a preference for brain scanning and simulation as the triggering event. Above all, however, I’m attracted by the sense of community this site represents. I feel a great empathy with those whose posts reflect a dissatisfaction and frustration with the world around them. I have recently started being a bit more public about my own views, primarily in the hope of finding others who feel similarly. My posts on my own site tend to be more personal and much less rigorous, in part so that I can talk about ideas that are hard to be rigorous about, but also as an honest analysis of my own feelings. Please feel free to criticise them at the site. I’ll be much more thorough with the posts I make here. I hope I can contribute something interesting and look forward to reading your impressive catalogue of articles.
Hi. I’ve been reading and posting here for 3 weeks or so, and am working my way through the sequences, so it’s time to introduce myself.
My full nym is Perplexed in Peoria (PiP for short.) I am a retired computer engineer (software simulation of hardware designs). My checkered undergrad career included majors in chemistry, physics, poly sci, and finally economics. My recent reading interests include molecular biology, evolutionary biology, formal logic, philosophy, game theory, and abiogenesis. Currently I am reading Pearl on Causality, Wimsatt on philosophy, and Eliezer on whatever. I am of the opinion that WVO Quine has a lot to answer for. I recently bought a Mac and an iPad.
I hope to begin posting here within a few months on topics of rationality, decision theory, and game theory. My first posting is planned to be on an axiomatic/intuitive foundation for subjective probability which I hope is easier to understand and thus more convincing than Jaynes’s Chapter 2 using Cox’s theorem. I am currently fairly skeptical regarding the Singularity.
Edit and PS: Oh, I got here by way of a comment in a science blog—Jerry Coyne’s, I think. About a month ago, there was a flurry of discussion and wooly thinking about Free Will out there in the blogosphere, and someone left the comment that the problem had been dissolved here. So I checked, and found that I pretty much agreed with the (dis)solution.
You recently bought a Mac? (must control Linux and computer building evangelism...) Anyway, welcome. I look forward to your post, and seeing your reasons for doubting the possibility of singularity. With my limited research so far, I am nearly certain it is inevitable, if not imminent. Now I need to go rant on a computer hardware site to get expensive pre-built computers out of my system.…
Also bought a Ubuntu disk and book, intending to go dual-boot on the Mac, but haven’t installed it yet. Yeah, the Mac cost too much, but I bought it because I had never owned an Apple and I have worked with a variety of Unix systems. Currently, I am trying for nerdish breadth rather than depth. And having built an Altair, my computer build-it-yourself hunger is already satiated.
I recently published my FOOM-denialist rant as a comment on the “Why trust SIAI” thread. But that was two days ago. I don’t much agree with what I wrote there. The singularity seems to me to be much closer now.
An LW semi-tradition I try to encourage: When one changes one’s mind after a discussion, go back and add a note to the original post stating your new position and what led you to change it. Hopefully this will help us build a map of what arguments are correct and convincing.
I try to always upvote such things. Changing your mind should be a party.
I edited the rant adding my second thoughts.
Thx for reminding me to do so.
this would be very welcome, as I just read that chapter.
Hi, I’m Rahul. I’ve intermittently visited LW for more than a year, refraining from commenting as it seemed optimal to shut up and update my beliefs regarding ideas I wasn’t very well informed about. I feel I’m better prepared to contribute now.
I studied engineering and physics at school, moving on to work at trading floors of investment banks where I got a real, ringside view of decision making under uncertainty. Today, I work as a social venture capitalist looking to help disadvantaged micro-entrepreneurs rise out of poverty.
Despite my life’s digressions, I retain a strong interest in philosophy, mathematics and computer science. My interest in rationality was initially piqued in my undergraduate years by the work of Kahneman and Tversky. I am mostly an auto-didact in the things I really enjoy, but I must confess that at 25, I often feel old and intellectually left behind. LessWrong helps me catch up.
Hello! My name’s Adam. I’ve been reading LessWrong since April, but I think this might be my first comment. I usually feel like I don’t have much to add :)
I think my awakening as a rationalist can be traced back to reading Plato’s Republic when I was 15. While not the typical rationalist text, it did open my eyes to the world of philosophy and logic, and first gave me the hunger for truths.
I found Less Wrong when a rationalist friend of mine badgered me for ages to visit it. This was after a weekend I’d spent in a particularly foul mood because of the short-sightedness and irrationality of the people around me. And then I remembered that Less Wrong site he’d mentioned, and decided to check it out. Wow. I’d found a place where people shared my beliefs—and I realised it had taken me years to independently think of a lot of the ideas taken for granted here.
Less Wrong has been a large part of my life in the last half-year or so, and I can see myself here for a very long time.
Just how I felt. Like I had stumbled across the intellectual equivalent of Callahan’s Crosstime Saloon.
I was here a month or two ago, left for a while, and now I’m back. I found this site on a google search for an old AI project I was trying to research out of curiosity. I have been interested in AI since I was 13 and found this old dusty book at a library book sale titled simply “artificial intelligence”. I read it cover to cover several times, and that’s really how I got into all of this. Anyways, after finding this site, it really hooked me in, although I guess I was kind of resistant to the general opinion of the community here at first, which is how I got voted down so much. Now I have to wait 10 minutes to post this >=/
Don’t be discouraged. When I first started to post on this internet website, I was frequently voted down, usually to the point that I had to wait before submitting comments. However, by persisting, and making informative, reasoned comments, I was able to raise my Karma well above that needed to submit an article.
And this is despite significant disagreement with other Users!
Have been a long-time reader of Overcoming Bias, but hadn’t gone over to LW after the split.
I’ve been a rationalist as far back as I can remember, but I really became serious when I was 12. I grew up in Israel, and I was being prepared for my Bar Mitzvah by a Hasidic Rabbi. As Hasidim are prone to do, he was telling me some mystical story, wherein he mentioned that the Sun orbits the Earth. I corrected him offhand, saying that this must be wrong. He countered with what I now know to be a classic “Have you ever been to space yourself?” followed by the even more classic “Maimonides said so, you’re not saying you know better than the Rambam, are you?”. I knew so clearly he was wrong, I could explain roughly how it wouldn’t really make sense given what we know about gravity, etc., but I couldn’t really even convince myself how one might reach that conclusion from scratch. As a 12 year old I vowed to never be in such a position again. (Although my Bar Mitzvah went off flawlessly, I’m now an avowed non-theist in the presence of religious folks, atheist otherwise.)
My academic training was in Linguistics and Computer Science, and I’m currently working on a startup in Silicon Valley.
Ouch. So this is how “but not that particular proof” feels from the other side.
Very much so. I spent the next 10 minutes twisting myself up in knots: “Astronauts went up in space”, etc. Always getting “But you yourself never went in space!”. In my 12 year old naivete I replied that the mystical story he was just telling me was not witnessed by him in person. At which point he grabbed some old book that was nearby and mentioned that since it was written there it was true. That’s when I knew to give up.
Ah, the beloved ‘appeal for humility.’ It’s the gift that keeps on giving...
Welcome! I was in linguistics too, for a while.
Hi, I lurked on OB and, until recently, LW. I’ve since poked my head out a bit and asked a few questions to try to figure some things out. Like a lot of people here, my areas of interest are varied.
My main hope in starting to post on the site is that I might be able to provide some more introductory material for people new to LW—partly because I’m learning it myself, so I’d find writing such posts challenging, whereas many of the people who have been posting here for longer are excited by more complex things.
I’ve had the same experience- thanks for the introduction!
My name is not Stuart David. I use a pseudonym online as a means to completely sidestep the issue of being branded with a view I don’t necessarily hold but have simply argued for or posted about. I am also an extremely private person and wish to remain so.
I am in my mid-20s and I am still working on my B.S. in Physics; I have been on and off at university for the past few years. I have been involved in the promotion of reason, science and skepticism via CFI (Center for Inquiry) and I have personally pursued rationality for the past 10 years or so. Preferred activities in my life are learning, debate, philosophical inquiry, science, history, politics and chess.
I am a consequentialist morally with the fundamental value of well being/human flourishing to be maximized. I am deeply committed to science and reason and strive to build my life around this. Needless to say I do not believe in supernatural things. I am also a determinist and I am skeptical of a persistent self.
My aim in joining this site is to tap into what seems to be a remarkably brilliant brain pool and to post articles of my own so they can be destroyed if they can be. I have read some of the sequences on this site, but I had to put that on hold for various reasons. I intend over time to become more and more familiar with them and eventually to start a meetup group. I already have a weekly meetup group dedicated to philosophy, science, rationality and debate that has been going for over 5 years, so it would just be a matter of incorporating more and more Less Wrong content into our activities.
Thank you and I look forward to interacting with you.
Wow, that must be some kind of record.
Hi, my name is Krish Sharma. I am a record producer and recording engineer, with several small music-related businesses. I have degrees in economics and computer science, but as far as music goes I am self-taught. I feel a strong connection to the idea of the pursuit of human rationality, but many times feel I lack the processing power to really make sense of our environment on my own. In my ad hoc voyage through the information biosphere I have felt at times very discouraged by the general “triumph of irrationality”. For the most part my internal solution has been to point out inconsistencies in data or logic where I see them and also, especially in business dealings, to pay special attention to avarice-connected misrepresentations. Going forward, however, I hope to move on from this reactionary approach and develop my own set of paradigms and worldviews. Instead of merely understanding what I don’t believe, I want to understand what I do believe. I am hoping to achieve not only a clearer and more nuanced picture of the environment in which I live, but also a greater connection to it.
I hope to be a constructive addition to any discussions I participate in here.
Hi everyone, I’ve been following this site for a long time and I really feel like it’s had a huge impact on me, if only because I’ve discovered a huge community of people who seem to have the answers to the questions I’ve always been asking myself (or at least the cognitive apparatus for reaching them!)
Me
I’m a 20-year-old male from the UK and have been working for two years in a private hospital with the aged, terminally ill and cancer sufferers. The job requires me to work 12-14 hours a day with little human contact other than with patients and nursing staff, which gives me an enormous amount of time to just think about things and debate things through rationally by myself. I’m almost obsessive in my fascination over the mechanics of thought and why I think the way I think, or like the things I like, and am constantly asking myself whether I’m deceiving myself or whether I really believe what I think I believe. Finding so many people in this community who have constructed various models for analysing that way of thinking and expressed them so eloquently has given me such confidence and really renewed my enthusiasm for “staying in the desert” of thought that can sometimes turn into a very scary place.
Where’d I find this place?
You know, I cannot remember at all where I found LessWrong; I can only guess that an article I read somewhere on the internet mentioned it briefly, and that in the following moment the idea that my curiosity will always reward me proved itself true.
If I could add anything else it would be to say that I’m keen to learn from everyone here and hopefully one day meet your standards for living up to the virtues that I hold dear.
Anyway I hope my introduction didn’t make me sound too weird or anything...
Greetings fellow user & producer of thoughts!
My parents named me Jonathan, I’m 20 and born in Copenhagen. I’m honored to find such a high quantity, high concentration of high quality minds. My dad (not very generous with compliments) told me recently that I’ve always been weird, much more conscious about everything since very young. I’m also about the fastest learner I know of. Two major weaknesses would be that I’m mortal and my English is very unpracticed in terms of output. I value: Consciousness, Intelligence, Practicality, Good decision making, Well thought out ideals and sticking to them, Self-control—including the ability to control what I value, what feelings I have linked to which ideas, control of my mindsets and the ability to switch freely between them.
I woke up this morning after 3 hours of sleep (and no, aside from power naps I don’t practice polyphasic sleep, yet). I didn’t feel the slightest bit nervous about going to the math exam that, only two days earlier while tidying up my inbox, I had realized by chance I was registered for. The fact that I still hadn’t read half of the math book for the semester, which meant I would have to learn while being examined, made me focused, not nervous.
But I’m so super extremely fantastically pleased to learn of the existence of lesswrong.com, that just minutes ago I was nervous about writing this.
After my exam I had a talk with my friend about my recent progress and obstacles in context of my life purpose, which would be fitting to present now I reckon.
Three ways of naming it would be: The way to Universal Genius/The journey to becoming a 3rd millennium polymath/Self-development with no reason or intentions of limits on proportions.
It’s my first candidate for something I find fully valid as a meaningful purpose of my life. It both feels more right and enjoyable than anything, but I think that is because it is backed by my reasoning (or rational thought). I won’t go in depth with that unless there is interest (also since I’m assuming LW actually might be a place where others could’ve come to the same conclusion as me), but I’ll touch on my reasoning shortly.
Everything I do, I want to do optimally; my brain is my tool for doing so. I do not know the limits of either my own brain or the brain in general, and therefore see only disadvantages in setting them for myself. If (insert whatever), I’ll do that better with a better brain, so I had better train that brain.
So as not to make this a book-length comment: I told my friend that epistemology was my current main objective to worry about. That led him to suggest that I learn about Bayesian statistics, and he referred me to LW to start learning about it.
Let the learning commence!
Hi, Optimind. I’d suggest starting with either An Intuitive explanation of Bayes Theorem or An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem. After that all of the sequences (except maybe the quantum mechanics one) are worth reading.
Thanks! Bayes theorem seems very useful, though I haven’t gotten through it all yet. I’m not a good reader yet.
Have you got any idea how far my goal is from everybody else’s here?
FWIW, my own intuitive explanation of Bayes’ Theorem—which may be inaccurate and wrong—usually begins somewhat like this:
Let’s say that, one morning, you walk outside your front door, and immediately slip in a puddle of water and twist your ankle. Did CIA agents put the puddle there just to hurt you ? Well, according to the theorem,
a). That’s the wrong question to ask; a better question is, “how likely is it that CIA agents made that puddle ?”
b). To answer that question, you need to keep in mind that puddles can happen for all kinds of reasons (rain, sprinklers, etc.), not just due to the machinations of CIA agents.
Of course, no intuitive explanation is a substitute for math...
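To make the arithmetic behind that concrete, here is a minimal sketch in Python; the hypotheses and every number in it are invented placeholders for illustration, not anything from the explanation above:

```python
# Bayes' theorem applied to the puddle story: P(H | puddle) for each hypothesis H.
# All priors and likelihoods are made-up illustrative values.

def posterior(prior, likelihood, all_hypotheses):
    """P(H | E) = P(E | H) * P(H) / sum_i P(E | H_i) * P(H_i)."""
    evidence = sum(p * l for p, l in all_hypotheses)
    return prior * likelihood / evidence

hypotheses = {
    # name: (prior P(H), likelihood P(puddle | H))
    "CIA agents made the puddle": (1e-6, 0.9),
    "rain or a sprinkler":        (0.05, 0.8),
    "no puddle-forming cause":    (0.949999, 0.01),
}

pairs = list(hypotheses.values())
for name, (prior, likelihood) in hypotheses.items():
    print(f"P({name} | puddle) = {posterior(prior, likelihood, pairs):.2e}")
```

With numbers anything like these, the mundane causes dominate even though the CIA hypothesis also “predicts” the puddle, which is the point of asking “how likely” rather than “did they”.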
Hey everyone. I found out about Less Wrong via Common Sense Atheism a couple of months ago and I’ve been reading up on the Sequences and trying to learn more about Bayes’ Theorem so that I can think more like a Bayesian in everyday life. It was only recently that I decided to actually make an account and contribute a bit.
I’m a software engineer for the Army. I’m not uniformed military (I used to be, for the Air Force) but a government civilian. My hobbies include swing dancing, playing guitar (mostly metal), learning about religion and studying Koine Greek (I might try to get an MA and possibly even a PhD in religious studies eventually), working out, and of course studying rationalism.
Interesting combination. Coming via CSA, I’m guessing you’re the ‘understand the enemy to defeat it’ or ‘how could such a strange thing as religion work’ kind of atheist?
Yeah, I think it’s probably a combination of both. Maybe somewhere down the road I’ll be sneaking in rationalism while talking or teaching about religion? That’s the goal, anyway.
Being rational is a component of religion, though the hard core atheist rejects that fact.
I once lightly unpacked the story of Genesis and related it to more modern theories of cosmology and biology. Considering the resources available to the author of Genesis, it turns out to be quite effective. Sure it’s way off, don’t get me wrong, but how many thousand years ago was it written, and how’d they work it out? Thousands of years ago, and some of it is still congruent with modern science.
I’ll provide a link in a pm if you request.
Good afternoon, morning or night! I’m a graduate student in Epistemology. My research is about epistemic rationality, logic and AI. I’m actually investigating the general pattern of epistemic norms and their nature—whether these norms must actually be accessed by the cognitive agent to do their job or not; whether these norms in fact optimize the epistemic goal of having true beliefs and avoiding false ones, or rather just appear to do so; and still other questions. I was navigating through the web looking for web-based software to calculate probabilities, and that is how I found LW, and guess what! I started to read it and couldn’t stop—each link sounds exciting and interesting (bias, probability, belief, bayesianism...). So, I happily made an account, and I’m eager to discuss with you guys! Hope I can contribute to LW some way. We (me and my research partners) have a blog (https://fsopho.wordpress.com) on epistemology and reasoning. We’re all together in the search for knowledge, fighting bias and requiring evidence! see ya =]
WARNING: long post. I detail my entire intellectual development and how I came to be interested in LW. More posts on LW should have short summaries like this one (IMO).
Hello! I’m a 17-year-old high school student. I was raised a lukewarm Christian (I went to church maybe 5 times a year). Around 3rd grade I deduced Santa Claus could not exist. Around 9th grade I first HEARD the word atheism (and shortly thereafter agreed). I’ve always wanted to have some big impact on the planet. When I was younger (5th-8th grade), I thought I would try to become a professional basketball player (this is embarrassing to write).
I decided in 9th grade that intellectuals have far more impact on the world than basketball players and have been reading as much as possible ever since. Brave New World had a profound impact on me and was largely responsible for my turn away from basketball and more towards Utopian thinking. I know “Politics is the Mind Killer”, but I feel that watching the Zeitgeist films had an important impact on my early readings. It showed me how stupid everything that I’d been told before I could think critically was. I still want to create Utopias (Utilitarianism is the only ethical code that makes any sense). However, I think that after reading David Pearce’s “Hedonistic Imperative” I’ve focused less on things like the Zeitgeist movement and Occupy Wall Street and focused more on finding happiness independently of one’s external circumstances (Milton said that “the mind can make a heaven of hell and a hell of heaven”).
This first led me to Buddhism. However, the lack of philosophical rigor coupled with the hypocrisy of swamis who have been accused of sexual harassment has led me to turn away from Buddhism as a perfect formula for happiness and Utopia (I still meditate though. As Sam Harris has said (paraphrasing), Buddhists don’t have a monopoly on meditation). My researching Buddhism also coincided with me becoming depressed. I’ve certainly improved drastically since then, but I still suffer brief bouts of negative emotion (rest, exercise, nootropics, and a weekend of productivity reliably quell these feelings). During this period of reading about Buddhism, I read a bit about parapsychology and the statistical evidence for it.
But recently, I’ve decided that the evidence for and against parapsychology is relatively unimportant (wireheading is more conducive to Utopia than levitating). But, I am not satisfactorily convinced of the truth or falsity of parapsychology (keeping an open mind). I’m not quite sure when I plan to conclude whether it’s true or not. I’ve decided that I’m just going to keep up my meditation practice because if it were true, I’d want to be able to do it and the first step is to be able to meditate better regardless of whether I regard it as true or false. Also, the notion of enlightenment doesn’t really seem consistent (people mean a lot of different things when they say it, just like when they say god). Furthermore, I think “enlightenment” is something that is purely neurological (no reincarnation) (Wiki:God Helmet).
So, based on all the previous information, I’ve concluded that I want to see neuroscience advance to the point that we can create a neurological utopia like the one proposed in David Pearce’s abolitionist project. However, after doing a lot of research on nootropics, I’m concerned that our current state of understanding of the brain is very limited and that there is a lack of funding for the type of research that we need (nootropics for normal individuals and whole brain emulation). Thus, I’m torn between deciding to major in neuroscience and majoring in something that would be conducive to the restructuring of society so that more of the relevant neuroscience research can be done. I would try to restructure society by improving our educational system and creating seasteads (I was very excited to see that Patri Friedman is a member of this forum). Also, I came up with the idea behind debategraphs.org before I discovered that the site already existed. Either way, I realize that the contributions of any one individual are minimal (somebody else came up with the theory of evolution at the same time Darwin did).
So that’s my intellectual development thus far. I’m currently reading Bostrom’s “Roadmap to WBE” in order to gain a better idea of the neuroscience and feasibility of WBE and this should help me make a more informed decision on what to major in. Also, I’m going to read the “Fun Theory” sequence as soon as I get enough time. I’m also reading about hypnosis and the placebo effect in order to get an idea of how much control the mind can have over itself (this fits in with my earlier Buddhism research).
After reading around here for a little while, I feel that I have finally found a home. I am the only person I know personally who is interested in all of the topics I’ve listed above. I have a few friends with a minor interest in philosophy and seasteading, but they aren’t nearly as serious about learning as I am. I really love it that this community exists. I’m not used to feeling dumb (and I don’t plan on feeling that way for much longer). I want to go to the rationality boot camp and meet some of you in person. I’m still puzzling out why I want to create a Utopia and have a big impact on the planet. I don’t really know what I’d do without this goal in mind. It seems relatively silly given my view on the historical impact of any one individual. Yet, I don’t know what belief I would replace it with (and I may not be willing to give it up).
I need to read Bostrom’s “Roadmap to WBE” and figure out how I think the Fermi paradox most likely plays out. It may very well be that if WBE is not possible that I will return to taking a parapsychological and meditative approach to creating Utopia (though I think that I’d create seasteads, education reform, and do a lot of reading on LW about WBE before I made such a conclusion.). I realize it’s a little sad that I can sum up most of my intellectual development in one post. Random stuff: I’m very physically fit. I eat the healthiest diet possible and workout regularly. I enjoy a wide variety of music. I learned to read by playing pokemon on the gameboy color.
When I was younger (three years old), I thought I would try to become a helicopter.
You have no idea how hard I’m laughing.
We sound alike. I’m curious where are you from?
“On the other hand, I would have to take care of myself which would take a lot of time.” Borrow The 4-Hour Workweek (by Timothy Ferriss) from your local library; then that shouldn’t be a problem if you’re anywhere near as smart as you seem. Yes, the title sounds like a get-rich-quick scheme (he has even made fun of it himself later). But he’s actually very sensible and practical-minded, though not very brilliant philosophically.
Alike indeed.
I’ve decided I’m going to tackle the sequences one at a time. I’m going to create a folder on my desktop for each sequence. I’m going to have a word document with all the insights I’ve had relating to a particular topic within the sequence. I think I’m going to start with “the craft and the community”, “Yudkowsky’s coming of age”, and “fun theory” (These seem to directly answer my question of how I can help create a utopia).
One reason to post what one is going to do is to establish a form of accountability for oneself. That’s a good reason to post something like this; there are also other good reasons to post something like this. There are even bad reasons to post something like this. Do you mind sharing your reasons?
Not at all. First of all, it’s useful for me to write all this out because then I can see the driving force behind all the books I choose to read, whereas normally I don’t go through this entire thought process every time I choose something to read. Second, I did ask for some specific advice on how to navigate this forum; obviously I asked because I wanted to know the answer. Third, I want to learn, so if somebody has already read similar material for similar reasons, I want them to comment and give me some advice on which books to read and which ones not to read, and to tell me if they see any flawed reasoning in my post. Fourth, I’d love to make some friends on these forums. There are people here who are graduating early from high school (something I might do) and they could offer some advice when it comes time for me to make that decision. Fifth, I’ve been talking about how little I know for a while, but if there were any way I could help the forum or offer up some insight that hadn’t been thought of, I will do so.
One good way to set about learning something is to start with the specific sub-section you are most motivated to learn. It’s good you have identified those.
Nonetheless, there are tradeoffs involved—some things might build on others, for example, so all else equal there might be a best order to read things in.
I recommend the first five subsequences of How To Actually Change Your Mind, A Human’s Guide to Words, and Reductionism.
Thanks for the tip. I wish I weren’t in high school right now. So much busywork.
In my experience, it only gets worse.
I would think that if I were in college, I wouldn’t have to take classes that are incredibly slow paced. Also, I wouldn’t have to be physically present in the school for 8 hours. The classes would be more specific to my interests.
On the other hand, I would have to take care of myself which would take a lot of time.
Right, I forgot about college. If you do that right, it can be idyllic.
In college I’m taking it slow because I have the luxury of money and time and a wonderful environment in Silicon Valley. I feel like if I was taking as many units as I could and not just a comfortable amount above full-time status there would be a TON of busywork, but so far I’m greatly enjoying my idyllic experience :) I think it all depends what you wanna optimize for.
I found this list after finding and loving EY’s Harry Potter series.
I have a background in statistical pattern recognition, and quickly found that most of the writers I found real value in during graduate school were canon here—Jaynes, Pearl, Wolpert, Korzybski, to name a few. I’m hoping to pick up more, like Kahneman.
Way back in the early 90s, I was on the extropians list as well, and I think I’ve seen a familiar name or two. Quality discussion groups are hard to find, and I’m very happy to have found this place.
G’day friends,
who you are—that’s a question far too involved for a first post. My name is not who I am, nor my job, nor my place of birth, nor what I do for leisure, nor what I find interesting. If you could see me, you might consider that my body is who I am, and I’d agree with that… online I am Peacewise, and that’s a name I’m proud and respectful of, so you can expect that I’ll maintain the dignity of those who interact with me.
What you’re doing—I’m operating a business and studying a teaching degree. Along the way I’m being consistent with my belief that empowering people to see their lens is useful.
what you value—I value knowledge, I value sport and education, I value people and other living creatures.
how you came to identify as a rationalist—I’m not completely familiar with the word “rationalism” as it’s used on this site, but my inclination to answer that question is …by asking questions, by challenging assumptions and preconceptions, by noticing my own lens, by trying on answers and being willing to risk being wrong.
how you found us, a Quora user had a link on his profile page. I appreciate what Alex K Chen had to say in several Quora answers so I followed my intuition and ended up here.
I’m hungry for better thinking, I’ve spent enough time arguing with those who don’t know what they don’t know—yet claim they know it! I’ve had some success in opening a few minds along the way, but I’m tired, so weary of the overwhelming lack of decent thinking and indeed weary of the lack of motivation for decent thinking from too many.
I’d just like a place to be, where I don’t have to lead the horse to water and then watch it die from thirst as it drinks sand. That’s a flip of a saying my dad uses. “You can lead a horse to water but you can’t make it drink.” Quite true, but having led that horse to the water, having invested the time, to watch it then not drink the water, instead drinking sand and consequently die, well that’s hard… and I’m over it.
Hello all,
I’ve put off coming here for as long as I have been able (not due to not wanting to join the community, but due to the fact that my obligations make it so that I often have to drop communities, which I feel regret about) but I think I finally have time to be a quasi-active participant in the community here, so we’ll see what happens.
I first saw this site, following it from Harry Potter and the Methods of Rationality about a year or two ago, and followed that up with reading the sequences. (Which were instrumental in helping me push away a whole host of cached thoughts and poor patterns that I had developed over my life, although it took some time to do so, and I doubt I got them all.) At around the time I was reading through them, I was contacted by Adelene Dawner through a friend’s livejournal, who invited me to join the community. I think I finally might have some time to devote to being quasi-active, or at least following things and commenting occasionally.
Welcome to Less Wrong!
Hi all. Nothing really fancy to say about myself. I like writing webcode and dabble in the basics: PHP, CSS, HTML/XHTML, maybe a little JavaScript here and there. Lately I’ve been teaching myself Perl on account of its quick-and-dirty utility. I got pulled in to Less Wrong while reading Eliezer’s sequence on Quantum Physics. I wanted to see what this community was all about, so I created an account, read the introductory articles, and left this comment.
Welcome to Less Wrong!
Hi, as requested, here is my introduction: I ended up here thanks to HPMoR; I have a physics degree and frequent the relevant freenode channels. I have observed that scientists are not significantly more likely to behave rationally than anyone else, not even in their area of expertise, and this site appears to explain some of that. Ironically, it appears that this community is less wrong not much more often than an average person, either, though this might be just my initial impression. In any case, I hope to improve my personal rationality quotient, despite the overwhelming odds against it.
If there are indeed overwhelming odds against, you shouldn’t hope (and conversely).
Why do you think there are overwhelming odds against any significant improvement? After all, most people in the world aren’t even trying.
Hi everyone, I’m a 25-year-old Olfactory Psychology student, hopefully about to start my PhD soon. I have a blog myself at http://freeze43.wordpress.com/ that’s mostly about atheism and philosophy. I came here after a friend’s link pointed me to some stuff by Eliezer Yudkowsky, and I was really excited about it.
I got into rationality fairly early by enjoying religion and philosophy classes and being concerned with a desire to find truth. As I progressed through my Psych undergrad I found myself changing my career preferences as scientific understanding became far more convincing and powerful, to the point where it is perhaps the only truly implicit understanding we have. I was shocked to see humanist psychology et al. strutting around as if it was meaningful when compared to statistically verified information. What was worse is that this fluff stuff invaded the “psychology” section at bookstores, was made into teaching curriculum at schools and was thought to make meaningful predictions about people’s lives. So, here I am in the most scientifically-based psych discipline I could get into: studying the sense of smell.
Olfactory psychology is rewarding and has led me down some weird paths in understanding consciousness which is my big interest. However I feel I’ve neglected other studies of cognition and I really want to get a better insight.
Hey, I’m Jon. I’ve been reading Overcoming Bias for probably about 3 years and only recently discovered Less Wrong. (IIRC, OB was getting more and more into AI, then it split into OB and LW. I stayed with OB but never looked at LW.) I have a bachelor’s in Mathematics and Economics, and was getting a PhD in Economics before I dropped out after my second year. (I became severely disenchanted with the discipline and arrogant and hostile with my program.)
Some post (I don’t remember which) from OB led me here maybe two weeks ago, and I read “Generalizing From One Example.” I thought it was fantastic and have spent the past couple weeks devouring the LW archives when I could. I was particularly struck by “Intellectual Hipsters and Meta-Contrarianism” and have sheepishly accepted this bias in myself.
I’m not sure if I’m a “rationalist” per se (since I’m not sure what a “rationalist” is—more reading to do) but I hate being wrong and/or not understanding things and have historically been more interested in becoming right than proving I’m right. Through my study of economics, especially my focus on first principles (the idea of rational preferences, choice structures, and revealed preferences, etc) I’ve come to believe that rationality is actually more of a skill. Imagine my delight when I discovered a whole community who not only agrees with me, but who actively endeavors at further understanding and honing that skill.
I look forward to learning more here, and hope that eventually I might be able to offer something of value in return.
Hi. I live in Montreal, Quebec, Canada and am studying in the equivalent of college. Lazy is a word that describes me well. Seeking consonance and being lazy, most of the time I think and do not act. I seek to act free of pointless things, opinions, biases and ?
And it is difficult. LW is a breath of fresh air to my mind. I want it to help me change myself. I want to be more congruent and rational.
Discovering new possibilities makes me see my inadequacies and now I feel I have to do something about it.
This is a step.
Howdy. I’ve been reading this blog for several months, but I’m hoping that having an identity on this site will provide incentives to internalize its logic; I’ve found in the past that it’s easy for knowledge to fly away when you don’t have a short-term stake in understanding it. Of course, that introduces its own potential for bias, but you’ve got to start somewhere.
Demographically, I’m a software engineer in my mid-to-late twenties living in the SF Bay Area. I spent some time studying classical AI while I was working on my undergraduate degree, but I’ve recently developed an interest in nonclassical methods; I also have interests in game theory, economics, and game design. I’m additionally a fairly serious martial artist, which informs many aspects of my thinking.
I have a fairly strong aversion to calling myself an “-ist” of any kind, but I can label myself a reductive materialist without cringing.
My name’s Brian. I’m posting under a handle because I expect more people I’d encounter here to have associations attached to the handle than to my actual name.
Hello LW. I’m Phil, I’ve been reading Less Wrong for a little over a year now. One of my most prominent “ugh-fields” is that surrounding my (very low) content consumption/production ratio, and I, somewhat baselessly, hope that posting here will help me become a more thoughtful and disciplined writer.
Currently, I’m an undergraduate studying physics and computer science in Chicago. I am highly torn between pursuing a career in science or one in engineering. Several articles here have helped me understand the difference between the two better, but that hasn’t translated neatly into resolving my ambivalence.
During my high school years, I became an ardent atheist and libertarian (now somewhat tempered), and grew attached to transhumanism after reading The Singularity is Near.
My college experience thus far has really impressed upon me the need for rationality. Coming to interact with such a huge repository of previously unconsidered hypotheses has shattered some of the unwarranted certainty I built up from years of being in an environment which never challenged me. I hope this will be another (fruitful) step on that path.
I’m a student; I value education and intellectual freedom for all sentient entities. I was told I would enjoy the Sequences after asking someone “Do you think that any ‘good’ society is inherently hierarchical?” over drinks.
I’ve identified as a rationalist for as long as I can remember being conscious; I became a stated atheist at approximately age four, when I literally rejected the notion of a loving God along with the idea of Father Christmas and the Easter Bunny.
Good on you! I was raised what I call funeral-Christian. We would sort of half-assedly pray whenever anybody got sick or died, but my family was totally uninterested in religion otherwise. My sister asked if we were catholic at age 16 or so, to the amusement of all adults concerned. I sort of vaguely thought we were freemasons because I found my granddad’s old masonic junk in a drawer. Not sure why I never thought to just ask...
But I was a total moron about Santa. I actually managed to invent belief-in-belief in Santa (“maybe Santa doesn’t actually exist, but does that really fundamentally matter?”) at about age 7. So I’m working off a huge rationalist karmic debt.
Hi everyone,
I’m an undergraduate at the University of Minnesota majoring in Philosophy and Mathematics and minoring in Economic Theory. I’m most interested in logic-related subjects (mathematical logic, philosophy of logic, philosophy of math, etc.) and moral philosophy (including meta-ethics, ethical theory, and some issues in applied ethics), but I’m also interested in various issues in the philosophy of mind, decision theory, and epistemology. I’ve been participating in competitive debate since I started high school and I now coach my old team.
I found out about Less Wrong through a friend in the Transhumanist club at my university and have been lurking for a while. I’ve learned a lot from the site and have had a lot of fun browsing the articles, so I thought I should finally get involved in the discussions. As a utilitarian, Bayesian, atheist, rationalist, I tend to agree with a lot of the core views here, but I’m also a moral realist and a property dualist, so I’m looking forward to some healthy debate on the site.
An objective standard might be good here. I’d suggest something like ‘if your theist arguments aren’t roughly as sophisticated and carefully reasoned as those of Alvin Plantinga, you probably shouldn’t bring them up’.
I remember stumbling across Plantinga’s modal argument and going “what?” For convenience of onlookers, here it is in a more digestible form.
Premise 1: Besides our world, there are other “logically possible” worlds.
Premise 2: Some cheeseburgers are totally awesome.
Premise 3: To be totally awesome, a cheeseburger has to exist in all possible worlds, because being “logically necessary” sounds like a totally awesome quality to have.
Conclusion: Therefore, if a totally awesome cheeseburger is possible at all (exists in one possible world), then it exists in all possible worlds, including ours.
(facepalm happens here)
A.k.a., ontology with some bells and whistles.
This introductory philosophy class syllabus links to a statement of the ontological argument by Plantinga, if anyone wants to read the argument in the words of the proponent.
the entire enterprise of modal logic seems facepalm worthy to me
I understand that when folks say “modal logic” in this context, they’re generally referring to modal logics that implicitly quantify over poorly-defined spaces. However, that’s not what all modal logics are like, and so I hate to see them maligned with a broad brush.
Consider, say, dynamic logic, which I actually use as a tool in my research on program analysis. When my set of “actions” are statements in a well-defined programming language, I can mechanically translate any dynamic logic statement into a non-modal, first-order statement. I almost never do this, because the modal viewpoint is usually clearer and closer to the way we actually think about programs.
Equivalently: you can use whatever logical operators you like, if you can define the operator’s meaning without reference to the operator. It can help you say what you’re trying to say, rather than spending all of your time with low-level details. It’s like a higher-level programming language, but with math.
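For onlookers, the mechanical translation being described is presumably something like the standard relational encoding of the modal operators; this is a sketch, not necessarily the exact formulation used in that research:

```latex
[\alpha]\varphi \;\equiv\; \forall s'\,\bigl(R_\alpha(s,s') \rightarrow \varphi(s')\bigr)
\qquad\qquad
\langle\alpha\rangle\varphi \;\equiv\; \exists s'\,\bigl(R_\alpha(s,s') \wedge \varphi(s')\bigr)
```

Here R_α(s, s') is an ordinary first-order predicate saying “running program α from state s can end in state s'”. Once the programming language’s semantics pins down R_α, the boxes and diamonds are well-defined shorthand rather than black boxes.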
Consider my eyes opened.
This is my problem with the modal logics I have encountered—bad or unclear definitions of the modal operators.
Modal logic is actually quite useful. If modal realism turns you off you can just accept it as a language game (which any sort of formal logic is going to be.)
The non-sequitur in Plantinga’s argument, as presented by cousin it, is P3. (Plantinga’s own argument is a bit more subtle, and its ultimate error is in eliding between different meanings of the term “possible.” He successfully shows that under formal logic if possibly necessarily x then necessarily x, and then ascribes possible necessity to God because God is one of the few things that is often argued to be necessary, and because God seems like the sort of sufficiently abstract thing that it might be necessary. But this isn’t the sort of possibility that’s germane to formal logic.)
Haven’t read Plantinga and not going to, but ‘possibly necessarily P’ does not imply ‘necessarily P’ in all modal logics.
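For readers wondering how both of the above can be right, here is a hedged semantic sketch (standard Kripke semantics, not Plantinga’s own presentation): the step from “possibly necessarily P” to “necessarily P” goes through when the accessibility relation is strong enough, as in S5, and fails in weaker systems.

```latex
\textbf{S5} \text{ ($R$ an equivalence relation): } \Diamond\Box P \rightarrow \Box P. \\
\text{Suppose } w \models \Diamond\Box P:\ \exists v\,\bigl(R(w,v) \wedge \forall u\,(R(v,u) \rightarrow u \models P)\bigr). \\
\text{Take any } u \text{ with } R(w,u).\ \text{Symmetry gives } R(v,w); \text{ with } R(w,u), \text{ transitivity gives } R(v,u), \text{ so } u \models P. \\
\text{Hence } w \models \Box P. \\[4pt]
\text{In T ($R$ merely reflexive) this fails: worlds } \{w,v\},\ R = \{(w,w),(v,v),(w,v)\},\ P \text{ true only at } v. \\
\text{Then } v \models \Box P \text{, so } w \models \Diamond\Box P \text{, but } w \not\models \Box P.
```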
I agree with Eliezer’s critique of the value of modal logics: 1, 2.
Eh. He didn’t really show they’re not valuable, just that they haven’t reduced the notions they work with to something other than black boxes. Modal operators can mean all sorts of things, aside from “possibility” and “necessity”, and black boxes are fine as long as they work properly—if you need to know what their internals look like, that’s just a project for some other formalism.
Greetings, I’m Simon, 23. I’m studying for a BSc in Computer Games Technologies, currently focusing on rendering pipelines and AI. My scientific interests include physics, computer science, and 3D rendering techniques (C++ is my weapon of choice).
Cheers
-dxCUDA
A Google group has recently started for LW folks interested in making games: http://groups.google.com/group/lesswrong-gamemakers
Hi, I’m Alex. I study biochemistry at Rutgers University. I think I was linked to Three Worlds Collide through a TVTropes page. In the past few days I have been curious about
Kolmogorov complexity,
how to derive the formula “y = 1/x” by slicing a cone with a plane (a sketch of one route follows after this list),
and when it’s appropriate to generalize laboratory results in psychology to human interactions outside the laboratory. Like, the original result on Hold Off On Proposing Solutions was probably done with groups of strangers; is it still true of groups of friends or coworkers? I think so.
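On the cone question, a hedged sketch of one standard route (there are certainly others): a plane parallel to the cone’s axis cuts out a rectangular hyperbola, and rotating that plane’s coordinates by 45 degrees turns it into y = 1/x.

```latex
\text{Cone: } U^2 + V^2 = W^2, \qquad \text{cutting plane: } V = \sqrt{2}
\;\;\Rightarrow\;\; W^2 - U^2 = 2 \text{ in the plane's } (U, W) \text{ coordinates.} \\
\text{Rotate those coordinates by } 45^\circ:\quad
x = \tfrac{W + U}{\sqrt{2}}, \;\; y = \tfrac{W - U}{\sqrt{2}}
\;\;\Rightarrow\;\; xy = \tfrac{W^2 - U^2}{2} = 1, \text{ i.e. } y = \tfrac{1}{x}.
```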
Hi Alex! Welcome to Less Wrong. I’m pretty new here also, so if you want someone to work through Sequences with, let me know.
Three Worlds Collide is great! I also recommend Harry Potter and the Methods of Rationality, which was also written by Yudkowsky.
Hello folks! I am an 18-year-old Italian student who will start studying Mathematics in Germany this year. I have always been interested in the way of the rational/scientific method, and for as long as I can remember I have tried to use it to reason about almost everything.
A month ago some friends showed me HPMoR, which I read in about 3 days and really enjoyed. So finally I came here. I read some subsequences and various single topics, including a lot of the comments, which I found almost always very interesting.
This blog opened my eyes especially to cognitive biases. I had often noted in hindsight that I had made poor decisions or evaluated a situation badly, but I never really saw how this could happen. So I am very glad to learn the causes behind those mistakes in judgment, and I’ll hopefully be able to avoid them sometimes. I finally decided to register, so I might comment from time to time, when I think I have something to say.
Welcome!
Hello! tkocian is my name. Philosophy is my game. I am a former fundagelical minister who had a massive de-conversion ten years ago after balancing the house of cards that is faith for 6 years. I am 34. I am drawn to reasonable discourse because I want to be shown where I am wrong. I am on the side of the truth and follow the logic and evidence wherever they lead me. I have no dogmas and cross all paths in my pursuit of reality. I found LessWrong at the behest of a good friend, who, after I raved to him about the podcast Conversations From the Pale Blue Dot, said I should check this site out. I come here expecting to be stretched fully. Just poking around the site and articles made me realize that I should be careful. I might just disappear into this site for a good long while. Thanks for having me.
Welcome to lesswrong! At the risk of other-optimizing, I have decided to attack you with a tab explosion.
Here is the CPBD guy’s top-fifteen list of Yudkowsky’s posts, of which (now that these have been recommended by two people) I recommend this, this, and this.
It may be that writers other than Yudkowsky write most in consonance with how you think, in which case you might want to look at Yvain’s posts, of which this, this, and this are some of my favorites.
It may be best to start by reading the material that is easiest to digest, which may be in story format; similarly, dialogues are often relatively accessible.
Here are some links to some posts I think would make a good introduction.
Thanks, that should keep me busy for a bit.
Link for “very” is broken.
Fixed. It was two links. I had to add a word...the link sentence is no longer the finest specimen of prose! The whole sentence grew clumsily through this process, with me writing it with just a few links and words, then finding a link and inserting a word.
Hello! quinesie here. I discovered LessWrong after being linked to HP&MoR, enjoying it and then following the links back to the LessWrong site itself. I’ve been reading for a while, but, as a rule, I don’t sign up with a site unless I have something worth contributing. After reading Eliezer’s Hidden Complexity of Wishes post, I think I have that:
In the post, Eliezer describes a device called an Outcome Pump, which resets the universe repeatedly until the desired outcome occurs. He then goes on to describe why this is a bad idea, since it can’t understand what it is that you really want, in a way that is analogous to an unFriendly AI being programmed to naively maximize something (like paper clips) that humans say they want maximized, even when what they really want is something much more complex that they have trouble articulating well enough to describe to a machine.
My idea, then, is to take the Outcome Pump and make a 2.0 version that uses the same mechanism as the original Outcome Pump, but with a slightly different trigger mechanism: the Outcome Pump resets the universe whenever a set period of time passes without an “Accept Outcome” button being pressed to prevent the reset. To convert back to AI theory, the analogous AI would be one which simulates the world around it, reports the projected outcome to a human, and then waits for the result to be accepted or rejected. If accepted, it implements the solution. If rejected, it goes back to the drawing board and crunches numbers until it arrives at the next non-rejected solution.
This design could of course be improved upon by adding parameters to automatically reject outcomes which are obviously unsuitable, or which contain events that we would, ceteris paribus, prefer to avoid, just as with the standard Outcome Pump and its analogue in unFriendly AI. The chief difference between the two is that the failure mode for version 2.0 isn’t a catastrophic “tile the universe with paper clips/launch mother out of burning building with explosion” but rather the far more benign “submit utterly inane proposals until given more specific instructions or turned off”.
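In code, the trigger loop I have in mind is roughly the following sketch (generate_outcome, violates_constraints and human_accepts are just placeholder names for the search process, the automatic filter and the “Accept Outcome” button):

    def outcome_pump_v2(generate_outcome, violates_constraints, human_accepts):
        """Toy model of the 'propose, then wait for Accept Outcome' trigger."""
        while True:
            outcome = generate_outcome()       # next candidate from the search process
            if violates_constraints(outcome):  # automatic filter for obviously bad outcomes
                continue                       # rejected without ever being shown to a human
            if human_accepts(outcome):         # stands in for the "Accept Outcome" button
                return outcome                 # only an explicitly accepted outcome is implemented
            # otherwise, back to the drawing board for the next proposal

Nothing is implemented until the accept step succeeds, so the worst case is an endless stream of rejected proposals rather than a catastrophic action.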
This probably has some terrible flaw in it that I’m overlooking, of course, since I am not an expert in the field, but if there is one, it isn’t obvious enough for a layman to see. Or, just as likely, someone else came up with it first and published a paper describing exactly this. So I’m asking here.
This creates a universe where the Accept Outcome button gets pressed, not necessarily one that has a positive outcome. e.g. if the button was literally a button, something might fall on to it; or if it was a state in a computer, a cosmic ray might flip a bit.
True enough, but once we step outside of the thought experiment and look at the idea it is intended to represent, “button gets pressed” translates into “humanity gets convinced to accept the machine’s proposal”. Since the AI-analogue device has no motives or desires save to model the universe as perfectly as possible, P(a bit flips in the AI that leads to it convincing a human panel to do something bad) necessarily drops below P(a bit flips anywhere that leads to a human panel deciding to do something bad), and is discountable for the same reason we ignore hypotheses like “maybe a cosmic ray flipped a bit to make it do that?” when figuring out the source of computer errors in general.
P(A bit flips in the AI that leads to it convincing a human panel to do something bad) can never be greater than P(A bit flips anywhere that leads to a human panel deciding to do something bad), since the former event is a subset of the latter.
The point of the cosmic-ray statement is not so much that this might actually happen; it is just to demonstrate that the Outcome-Pump-2.0 universe doesn’t necessarily have a positive outcome, only that it is a universe in which the “Outcome” has been accepted, and that the Outcome being accepted doesn’t imply that the universe is one we like.
In this document from 2004 Yudkowsky describes a safeguard to be added “on top of” programming Friendliness, a Last Judge. The idea is that the FAI’s goal is initially only to compute what an FAI should do. Then the Last Judge looks at the FAI’s report, and decides whether or not to switch the AI’s goal system to implement the described world. The document should not be taken as representative of Yudkowsky’s current views, because it’s been marked obsolete, but I favor the idea of having a Last Judge check to make sure before anybody hits the red button.
Welcome!
So no more problem if it kills you! But what if it kills you and destroys itself in the process?
The answer to that depends on how the time machine inside works. If it’s based on a “reset unless a message from the future is received saying not to” sort of deal, then you’re fine. Otherwise, you die. And neither situation has an analogue in the related AI design.
I don’t think it prevents the wireheading scenario that many people consider undesirable. For instance, if an AI modifies everybody into drooling idiots who are made deliriously happy by pressing “Accept Outcome” as often and forcefully as possible, it wins.
Or more mundanely, if it achieves a button-press by other means, such as causing a building to collapse on you, with a brick landing on the button.
I’m a 19 year old college student (rising sophomore) who is studying political science and economics. Throughout my entire life that I can remember, I’ve been extensively interested in how people work, why they do the things they do, and how these things could be done better. This seems to make me a natural fit for the content of Less Wrong.
I’m personally involved in Political Science research, specifically dealing with the political psychology of how people acquire opinions, use them to make decisions, and update them with new information. Since encountering Less Wrong, I’ve learned that this is another thing that could be done better—the whole idea of rationality.
I’ve also been studying philosophy on my own time and writing on my blog (greatplay.net), which has borrowed heavily from the Less Wrong sequences. I’ve personally looked into philosophy of religion (ended up atheist), philosophy of language (Less Wrong seems the best for this), and philosophy of ethics (Less Wrong seems muddled here).
I personally value both curiosity and compassion, making your life better and seeking to spread this knowledge to others. I value many, if not all of the Twelve Virtues (to the degree that I understand them), because I want to know reality as it really is.
I’ve been a long time lurker of Less Wrong, and I’m still intimidated about joining the community—there seems to be a lot of smart people here and I’m slightly worried I won’t measure up. But the only way to truly become stronger yourself is to fight stronger people, so I need to get out of my little pond and converse with the heavyweights.
Greetings Lesswrong
My name is, well, my username. I’m sixteen, currently in my final year of high school, male, and living in Inverness, Scotland.
I first became interested in Less Wrong and its philosophy after reading some very good fiction here, although many of the references took me time to understand. After sporadically reading the occasional article for a few months, and doing some serious thinking about where I want to be in a few years, I decided a few weeks ago to register in order to network better. I haven’t made notable progress on the sequences yet, although I aim to finish the ones on the site this year, and hopefully to start attending meet-ups if any are near my university once I’ve started my course, if my family can afford university and are okay with it, of course. My main interest in the site is its possibilities for altruism: since we’re mostly aspiring consequentialists here, we’re less likely to be able to justify passive acceptance of the norms the way most people I meet seem to. I’ve met, and know, a fair few good people, but they are mostly what I once saw described here as ‘barbarians’, albeit extremely noble ones. They do their best for everyone around them, and try to help others, but in a disorganized way that I think could be improved with the right coordination and planning.
For now I hope to get a few questions answered, comment occasionally, and keep quiet until I really grasp the concepts most of the discussion on this site is based on. If there are any other rationalists or aspiring rationalists in the Scottish university system, it would be good to know where they are as well, of course.
I’m currently in neither Scotland nor its university system, but I spent a semester at the University of Edinburgh! Yay! Welcome.
I’m thinking Stirling University might suit me best for now: it’s outside town, has a good economics department, and is closest to where I live after Aberdeen. Still, I’m not set. Did Edinburgh have a decent economics faculty when you were there?
No idea, I was in philosophy. (...That, and I stopped showing up to classes partway through the semester. Turns out when nobody notices if I go to a thing or not, I rarely go to the thing.)
But how can you even be sure you really exist?! Maybe you’re just a figment of someone else’s imagination?
I’ve always wondered, what do people do in the philosophy course? Is it all ancient Greek poetry, or is that image outdated?
In Edinburgh, I was taking three classes: philosophy of language (we discussed Superman a lot), moral and political philosophy (I most clearly remember covering Rawls), and philosophy of mind (philosophers of mind love pain). I took one course my entire academic career that was entirely about something Greek (a graduate course on the Republic). We read it in English.
And lo, she did answer the question concisely and effectively! I doubt Bayesianism is available in any of the universities I might be going to though, so I think Economics, minoring in Sociology and Psychology would suit me best.
Who?
(I am a she.)
Ah, edited. Perhaps I shouldn’t have made the assumption.
Greetings.
I am 20 years old, male, and graduating college this fall with a BA in Philosophy. I am aiming to go to grad school, specializing in something to help bring about a friendly singularity. I have been reading Less Wrong for about two years and it has been of enormous use in regulating my thought patterns. Unfortunately, I am currently in a small Bible-Belt city, far away from any meetup groups. I am currently working on my studies in 20th-century philosophy, while trying to incorporate useful subjects which many contemporary philosophers ignore. (My Philosophy of Science textbook had a very poor chapter on Bayesianism, which is okay since the instructor ignored the subject anyway.) Aside from the 20th-century work, I am teaching myself Python, Bayesian theory, and basic psychology.
Greetings, Lessrs. Wrong.
By coincidence, my career led me to do some consulting for a company with which Robin Hanson was affiliated, and I discovered Overcoming Bias as a result and ultimately wound up here.
At this point, I’d describe myself as a long-time listener, occasional caller.
LessWrong community, I say hello to you at last!
I’m a first-year chemical engineering student in Canada. At some point I was linked to The AI-Box Experiment by Yudkowsky, probably 3-1/2 years ago; I’m not sure. The earliest record I have, from an old Firefox history file, is Wed Jun 25 20:19:56 ADT 2008. I guess that’s when I first encountered rationality, though it may have been back when I used IE (shudders). I read a lot of his site, and occasionally visited it and againstbias. I thought it was pretty complicated, and that I’d see more of that guy in my life. Years later, here I am.
One concern I have is whether or not I belong here. Sure, I like to learn on my own and do a lot of rationality-related stuff, but to accurately express how bad I am at rationality, I will compare my own abilities to most Republicans’ ability to understand science. On top of that, I don’t think I’m particularly smart. I argued with teachers and got a ~93% average in high school, though I like to think I understand things more than most students. I have not taken any formal IQ test, but I consistently score a mere 120 on online tests.
My motivation tends to be highly whimsical, and though I’m attempting to track myself on various fronts I keep failing. If I ever get addicted to a drug, I will never escape it. I have horrible dietary habits, though miraculously I have stayed lean enough. I don’t exercise and constantly fail to realize how most people around me could kick my ass.
I’ve read about half the sequences, and taken notes on maybe 15%. I think Gwern’s writing is not top-notch but always a pleasure to read. Methods of Rationality is a mediocre story by an author who isn’t. It’s not even in my top 20 fanfictions. Someday I’ll actually send him some feedback, but I think it would all be ignored because he’s trying to make fanfiction something it isn’t. To his credit, it worked much better than I thought it would. Three Worlds Collide demonstrates to me that most of you don’t understand the lack of ethics in this world—you should all accept that assimilation is the optimal solution.
On the other hand, I’d fight to the death and beyond to avoid it. I’m not ready to leave everything I am behind. I’m also not ready to sign up for cryonics, and I have definitely heard all the arguments for it. My pathetic refutations are that I don’t want to ruin my life trying to survive forever, that I’d rather live a good life now, and that I expect either that existence is such a cold, cruel place that civilization will fall soon, or that other life will preserve my existence anyway. Possibly with time travel. Or just through everything happening, as in Greg Egan’s novel Permutation City.
I think that’s about all I can write today. I hope I don’t make too many enemies here. Hope to get to know you all!
Welcome to LessWrong!
I would say that if you’re interested in rationality, you belong here. It doesn’t matter if you’re not that good at it yet, as long as you’re interested and want to improve then I would say this is where you should be.
Be careful of the priming effects of calling yourself bad at rationality, questioning your place here, saying you’ll never escape a drug addiction, etc. etc. The article on cached selves might be somewhat relevant.
This suggests to me that you don’t understand ethics.
While I’m occasionally convinced of the existence of akrasia, it would be odd to say that someone’s fighting to the death was caused by it.
I’d just like to point out that recently someone asked (doubtfully) whether anyone here still has strong feelings regarding Three Worlds Collide. It seems indeed to have a prominent place in the popular consciousness.
Well, I am new here, and I suppose it was slightly presumptuous of me to say that. I was just trying to introduce myself with a few of the thoughts I’ve had while reading here.
To attempt to clarify: I think this story is rather like the fable of the Dragon-Tyrant. To live a life with even the faintest hint of displeasure is a horrific crime, the thought goes. I am under the impression that most people here operate with some sort of utilitarian philosophy. To me this seems to imply that, unless one declares there is no objective state toward which utilitarianism is to be directed, humanity in this example is wrong. (In case someone is making the distinction between ethics and morals: as an engineer, it doesn’t strike me as important.)
On the issue of akrasia, I don’t see this as a case of it. My own judgement says that a life like theirs is vapid and devoid of meaning. Fighting to the death against one’s own best judgement probably isn’t rare either; I expect many, many soldiers have died fighting wars they despised, even when they had options other than fighting them. In effect, I feel like this is multiplying by zero and then adding infinity. You have more at the end; you’re just no longer the unique, complex individual you were, and I could not bear to submit to that.
The general thrust of the Superhappy segments of Three Worlds Collide seems to be that simple utilitarian schemas based on subjective happiness or pleasure are insufficient to describe human value systems or preferences as they’re expressed in the wild. Similar points are made in the Fun Theory sequence. Neither of these mean that utilitarianism generally is wrong; merely that the utility function we’re summing (or averaging over, or taking the minimum of, etc.) isn’t as simple as sometimes assumed.
Now, Fun Theory is probably one of the less well-developed sequences here (unfortunately, in my view; it’s a very deep question, and intimately related to human value structure and all its AI consequences), and you’re certainly free to prefer 3WC’s assimilation ending or to believe that the kind of soft wireheading the Superhappies embody really is optimal under some more or less objective criterion. That does seem to be implied in one form or another by several major schools of ethics, and any intuition pump I could deploy to convince you otherwise would probably end up looking a lot like the Assimilation Ending, which I gather you don’t find convincing.
Personally, though, I’m inclined to be sympathetic to the True Ending, and think more generally that pain and suffering tend to be wrongly conflated with moral evil when in fact there’s a considerably looser and more subtle relationship between the two. But I’m nowhere near a fully developed ethics, and while this seems to have something to do with the “complexity” you mentioned I feel like stopping there would be an unjustified handwave.
And you think that not being able to bear submitting to that is wrong?
Personally, I’m one of those who prefers the assimilation ending, there are quite a few of us, and I certainly wouldn’t be tempted to fight to the death or kill myself to avoid it. But for a person who would fight to the death to avoid it to say that assimilation is optimal and the True Ending is senseless seems to me to be incoherent.
I think the confusion comes from what you mean by “utilitarian.” The whole point of Three Worlds Collide (well, one of the points), is that human preferences are not for happiness alone; the things we value include a life that’s not “vapid and devoid of meaning”, even if it’s happy! That’s why (to the extent we have to pick labels) I am a preference utilitarian, which seems to be the most common ethical philosophy I’ve encountered here (we’ll know more when Yvain’s survey comes out). If you prefer not to be a Superhappy, then preference utilitarianism says you shouldn’t be one.
When you catch yourself saying “the right thing is X, but the world I’d actually want to live in is Y,” be careful—a world that’s actually optimal would probably be one you want to live in.
If you’re able to summarize what makes the superhappies’ lives vapid and devoid of meaning, I’d be interested.
TerminalAwareness’s words, not mine. I prefer the true ending but wouldn’t call the Superhappies’ lives meaningless.
(nods) Gotcha.
I know it’s been some time, but I wanted to thank you for the reply. I’ve thought considerably, and I still feel that I’m right. I’m going to try to explain again.
Sure, we all have our own utility functions. Now, if you’re trying to maximize utility for everyone, that’s no easy task, and you’ll end up with a relatively small amount of utility.
Would you condone someone forcing someone else to try chocolate, if that person believed it tasted bad but loved it as soon as they tried it? If someone mentally deranged set themselves on fire and asked you not to save them, would you? If someone is refusing cancer treatment because “Science is evil”, I at least would force the treatment on them. “Would you force transhumanity on everyone who refused it?” is probably a better question for LessWrong. I feel that, though I may violate others’ utility functions, we’re all mentally deranged, and so someone should save us. Someone should violate our utility preferences in order to change them, because that would bring an enormous amount of utility.
I’m struggling to reconcile respecting preferences with how much of society today works. Personally, I don’t think anyone should ever violate my utility preferences. But can you deny that there are people you think should have theirs changed? I’m inclined to think that a large part of this community is.
If you haven’t, you should read Yvain’s Consequentialism FAQ, which addresses some of these points in a little more detail.
Preference utilitarianism works well for any situation you’ll encounter in real life, but it’s possible to propose questions it doesn’t answer very well. A popular answer to the above question on LessWrong comes from the idea of coherent extrapolated volition (The paper itself may be outdated). Essentially, it asks what we would want were we better informed, more self-aware, and more the people we wished we were. In philosophy, this is called idealized preference theory. CEV probably says we shouldn’t force someone to eat chocolate, because their preference for autonomy outweighs their extrapolated preference for chocolate. It probably says we should save the person on fire, since their non-mentally-ill extrapolated volition would want them to live, and ditto the cancer patient.
Forcing transhumanity on people is a harder question, because I’m not sure that everyone’s preferences would converge in this case. In any event, I would not personally do it, because I don’t trust my own reasoning enough.
I think all people, to the extent that they can be said to have utility functions, are wrong about what they want at least sometimes. I don’t think we should change their utility function so much as implement their ideal preferences, not their stated ones.
is what? Is willing to change people’s utility functions?
What does this even mean? Forcing immortality on people is at least a coherent notion, although I’m pretty sure most users around here support an individual’s right to self-terminate. But if that was what was meant, calling it ‘transhumanism’ is a little off.
On the other hand, is this referring to something handled by the fun theoretic concept of a eudaimonic rate of intelligence increase?
Yes, I know, “jargon jargon jargon buzzword buzzword rationality,” but I couldn’t think of a better way to phrase that. Sorry.
You’re right. I don’t know what Terminal Awareness meant, but I was thinking of something like uploading someone who doesn’t want to be uploaded, or increasing their intelligence (even at a eudaimonic rate) if they insist they like their current intelligence level just fine.
If it actually is coherent to speak of a “eudaimonic rate” of doing something to someone who doesn’t want it done, I need to significantly revise my understanding of the word “eudaimonic”.
I’m thinking that a eudaimonic rate of intelligence increase is one which maximizes our opportunities for learning, new insights, enjoyment, and personal growth, as opposed to an immediate jump to superintelligence. But I can imagine an exceedingly stubborn person who insists that they don’t want their intelligence increased at all, even after being told that they will be happier and lead a more meaningful life. Once they get smarter, they’ll presumably be happier with it.
Even if we accept that Fun Theory as outlined by Eliezer really is the best thing possible for human beings, there are certainly some who would currently reject it, right?
It seems to me like you’re trying to enforce your values on others. You might think you’re just trying to help, or do something good. I’m just a bit skeptical of anyone trying to enforce values rather than inspire or suggest.
Quote:
:s/happy/intelligent
I’m not sure if we have a genuine disagreement, or if we’re disputing definitions. So without talking about eudaimonic anything, which of the following do you disagree with, if any?
What we want should be the basis for a better future, but the better future probably won’t look much like what we currently want.
CEV might point to something like uploading or dramatic intelligence enhancement that lots of people won’t currently want, though by definition it would be part of their extrapolated preferences.
A fair share of the population will probably, if polled, actively oppose what CEV says we really want.
It seems unlikely that the optimal intelligence level is the current one, but some people would probably oppose alteration to their intelligence. This isn’t a question of “Don’t I have to want to be as intelligent as possible?” so much as “Is what I currently want a good guide to my extrapolated volition?”
Most of these give me the heebie-jeebies, but I don’t really disagree with them.
But why would you want to live in a world where people are less happy than they could be? That sounds terribly evil.
I don’t think bland happiness is optimal. I’d prefer happiness along with an optimal mixture of pleasant quales.
Human values are complex; there’s no reason to think that our values reduce to happiness, and lots of evidence that they don’t.
Let’s imagine two possible futures for humanity: One, a drug is developed that offers unimaginable happiness, a thousand times better than heroin or whatever the drug that currently creates the most happiness is. Everyone is cured of aging and then hooked up to a machine that dispenses this drug until the heat death of the universe. The rest of our future light cone is converted into orgasmium. They are all maximally happy.
Two… I think an eternity of what we’ve got right now would be better than number one, but I imagine lots of people on LessWrong would disagree with that. The best future I can imagine would be one where we make our own choices and our own mistakes, where we learn more about the world around us, get smarter, and get stronger, a world happier than this one, but not cured of disappointment and heartbreak entirely… Eliezer’s written about this at some length.
Some people honestly prefer future 1, and that’s fine. But the original poster seemed to be saying he accepts future 1 is right but would hate it, which should be a red flag.
I don’t think a drug would be adequate. Bland happiness is not enough, I would prefer a future with an optimal mix of pleasurable quales. This is why I prefer the “wireheading” term.
I don’t understand how you could possibly prefer the status quo. Imagine everything was exactly the same but one single person was a little bit happier. Wouldn’t you prefer this future? If you prefer futures where people are happier as a rule then isn’t the best future the one where people are most happy?
I don’t understand how he could hate being happy. People enjoy being happy by definition.
Choosing a world where everything is the same except that one person is a bit happier suggests a preference for more happiness than there currently is, all else being equal. It doesn’t even remotely suggest a preference for happiness maximizing at any cost.
I would prefer to this one a world where everything is exactly the same except I have a bit more ice cream in my freezer than I currently do, but I don’t want the universe tiled with ice cream.
So you would prefer a world where everyone is maximally happy all the time but otherwise nothing is different?
Just like, making the ridiculous assumption that the marginal utility of more ice cream was constant, you would prefer a universe tiled with ice cream as long as it didn’t get in the way of anything else or use resources important for anything else?
I think this has way too many consequences to frame meaningfully as “but nothing otherwise is different.” Kind of like “everything is exactly the same except the polarity of gravity is reversed.” I can’t judge how much utility to assign to a world where everyone is maximally happy all the time but the world is otherwise just like ours, because I can’t even make sense of the notion.
If you assign constant marginal utility to increases in ice cream and assume that ice cream can be increased indefinitely while keeping everything else constant, then of course you can increase utility by continuing to add more ice cream, simply as a matter of basic math. But I would say that not only is it not a meaningful proposition, it’s not really illustrative of anything in particular save for how not to use mathematical models.
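To spell out the “basic math” under those same (deliberately unrealistic) assumptions: if each extra unit of ice cream adds a constant utility c > 0 and nothing else changes, then total utility is U(n) = U_0 + c \cdot n, which grows without bound as n grows. The conclusion “more ice cream is always better” does no work beyond restating the assumption of constant, context-free marginal utility.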
I would prefer “status quo plus one person is more happy” to “status quo”. I would not prefer “orgasmium” to “status quo”, because I honestly think orgasmium is nearly as undesirable as paperclips.
Doesn’t follow. I generally prefer futures where people are happier; I also generally prefer futures where they have greater autonomy, novel experiences, meaningful challenge… When these trade off, I sometimes choose one, sometimes another. The “best future” I can imagine is probably a balance of all of these.
Present-him presumably is very unhappy at the thought of becoming someone who will be happily a wirehead, just as present-me doesn’t want to try heroin though it would undoubtedly make me happy.
It really does seem like either you don’t really believe that the assimilation ending is optimal and you prefer the true ending, or you are suffering from akrasia by fighting against it despite believing that it is. You haven’t really explained why it could be anything else.
I’ve been posting here for a couple of months and haven’t introduced myself yet. Unconscionably rude. Anyway, I’m 29 years old and hoping to get my Ph.D. in a few months. I started out studying physics, then realized I was interested in more foundational questions than I’d be encouraged/allowed to work on as a young physicist, so I switched to philosophy. I guess I would characterize myself as a naturalistic metaphysician; I tackle traditional philosophical problems using modern physics (as opposed to the 17th-century physics still used by all too many metaphysicians). I’m also very interested in political theory, but I’ll refrain from elaborating on that. My username does not lie; I identify as a pragmatist in the tradition of Wittgenstein, Quine, Putnam and Rorty (gasp!).
I’ve been defending a broadly Jaynesian account of statistical physics for a while. A few years ago I was extolling the virtues of Jaynes and someone asked me if I read Overcoming Bias. I hadn’t heard of the site at the time, so I checked it out and liked what I saw. I’ve been checking back sporadically since then. I began spending more time on the site starting a few months ago, about the time I needed to start really focusing on finishing my dissertation (sigh).
ahoy.
I have much to say about myself, but I don’t consider it worth most people’s time, so I’ll spare most of it.
I am currently going through the sequences and had no intention of commenting on any post or writing anything until I had finished all of them. I have to admit, though, it’s really quite difficult to “stay on” the sequences. I have a hundred-something tabs of lesswrong open right now, and it has come to a point where I am understanding them all pretty fully.
EDIT: Issue is under control. All is going well.
I would have preferred a better welcome for myself, but this is acceptable!
This is unfortunately not me yet.
Nonetheless, I offer an entirely different way of explaining it. The explanation goes with the visual: http://www.freeimagehosting.net/4dd80 (color coding: A is red and B is blue, so when A and B are both the case this is represented in purple; when neither is the case, it is black).
The four possibilities are: A&B, -A&B, A&-B, and -A&-B. They are represented in the grid as 1, 2, 3, and 4. Added together they will sum to 100%.
What’s the probability that out of 1, 2, 3, and 4, scenario 1 (A&B) is the case? (1) / (1+2+3+4)
Whatever it is, it’s the same as the probability that out of 1, 2, 3, and 4, either 1 or 2 is the case, and it is 1 that is the case out of 1 and 2. ((1) / (1+2)) * ((1+2) / (1+2+3+4))
Likewise, it’s the same as the probability that out of 1, 2, 3, and 4, either 1 or 3 is the case, and it is 1 that is the case out of 1 and 3. ((1) / (1+3)) * ((1+3) / (1+2+3+4))
The numerators and the denominators cancel in the two above cases to amount to (1) / (1+2+3+4), just like in the first of the three ways of phrasing the probability of 1 out of 1, 2, 3, and 4. We see that they are equal and set them against each other. We won’t actually use the first part of the equation, just the last two.
We replace ((1) / (1+2)) with P(A|B) (probability that A is the case, given that B is the case), ((1+2) / (1+2+3+4)) with P(B) (probability that B is the case), ((1) / (1+3)) with P(B|A) (probability that B is the case, given that A is the case), and ((1+3) / (1+2+3+4)) with P(A) (probability that A is the case). (1) / (1+2+3+4) is P(A&B) (probability that A and B are the case), but this won’t be used in the usual way of writing the rule, which isolates one term by dividing both sides of the equation by P(B).
We divide by P(B) so that when we know the three terms on the right, we can solve for the one on the left:
P(A|B)=P(B|A)*P(A)/P(B)
So, suppose we know that P(A)=40%, P(B|A)=30%, P(B|-A)=10%, and we want to find P(A|B).
Looking at the grid shows us at a glance that P(A)=(1+3), P(B|A)=(1)/(1+3), P(B|-A)=(2)/(2+4), and P(A|B)=(1)/(1+2). Remember that Bayes’ Rule expresses P(A|B) as (1)/(1+3) * (1+3)/(1+2); the (1+3)’s cancel each other out, which shows why P(B|A) gets multiplied by P(A). We always know that (1+2+3+4)=100%.
So: (1) = 40% * 30% = 12%, (2) = 60% * 10% = 6%, (3) = 40% * 70% = 28%, and (4) = 60% * 90% = 54%, which sum to 100%.
Intuitively: out of every 100 cases, 18 are B-cases (12 with A and 6 without), so given B, the probability of A is 12/18, about 67%.
Or the long way: P(A|B) = P(B|A)*P(A)/P(B) = (30% * 40%) / 18% = 12% / 18%, about 67%.
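If it helps, here is the same grid arithmetic as a quick sketch in Python (the variable names are just for illustration):

    # Regions 1-4 of the grid, as fractions of the whole.
    p_A = 0.40              # P(A) = (1+3)
    p_B_given_A = 0.30      # P(B|A) = (1)/(1+3)
    p_B_given_not_A = 0.10  # P(B|-A) = (2)/(2+4)

    r1 = p_A * p_B_given_A                      # A & B   = 0.12
    r2 = (1 - p_A) * p_B_given_not_A            # -A & B  = 0.06
    r3 = p_A * (1 - p_B_given_A)                # A & -B  = 0.28
    r4 = (1 - p_A) * (1 - p_B_given_not_A)      # -A & -B = 0.54

    p_A_given_B = r1 / (r1 + r2)                # (1)/(1+2)
    print(p_A_given_B)                          # 0.666..., i.e. about 67%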
(I can’t help you in any useful way, but I can say “Welcome!”)
Welcome!
Hi everyone, my name is Wil and I live in the UK. I love sci-fi and how it inspires people to think differently or ahead of their time. I'm a member of the working class but spend a lot of time on benefits, due to the effects of being rather strongly bipolar, having a very faulty short-term memory, and having intermittent extremes of varying states of mind that at times lead to sleeplessness and agoraphobia. That then gives me LOTS of time to think, with very little else to do but read up online about the things I've found myself thinking about, which is what led me here. So I'm a big fan of the internet: at times it's all I have to still feel connected to the outside world and attempt to add to it in any useful way.
I’m not that well educated or well read, and I’m currently in a manic state where I have a sort of ‘everything is possible’ interpretation of the world. I’ve been getting some crazy ideas about quantum theory and its possible relationship to different theories of morphic resonance: whether the two could be argued to be associated in scientific terms as we currently understand them, such as through quantum entanglement, and whether various religious beliefs could also find a basis therein. I’ve been struggling to find anyone IRL who will still talk to me about this, especially when they’re trying to watch X Factor, so I thought I’d look about online for some possible discussion mates.
I can at times feel very slow and stupid, and at other times very intelligent and clever, but I am aware I can also be completely deluded about my current reality and my abilities within it. As is often said, insanity and genius can be a slim divide, so I thought there would be no harm, when I feel I’m making more sense than not, in trying a few ideas out online with some intellectual types and seeing if anything I think might actually work for others, and not just me!
I tend to just leap in feet first, but I am perfectly happy to be shot down for faulty reasoning; I’ve been made perfectly aware throughout my life that that’s what creation gave me to work with, and besides, being proven wrong still teaches me something. But sometimes I also feel I have an idea with some merit, if only I had some more knowledgeable people around me to flesh out the details and fill in the gaps in my knowledge. I go through stages of having a sort of ‘inherent sense of philosophical realism’ enabled, where I just FEEL whether something is right or wrong, come out with a fully-formed theory as to why that seems to appear from nowhere, and then find myself trying to work out WHY. It’s a very odd way of working, I know, but it’s how my mind is sometimes, and I guess if I never have a go at explaining it to myself and others, I’ll never know whether I might have something I can actually work with during these mad manic times.
Anyway, I’m going to try to resist starting any threads for a while yet, even when I can. I’m just going to read a lot (which I did before joining today anyway) and try a few comments on threads where I feel I can manage a reasonable attempt at providing some understanding, or perhaps lay out some other interesting associated question I’d like to lead the thread onto; in effect, dip a toe in and see where it gets me.
I’m still trying to understand quite how the site works; every site always seems to have its own unwritten rules too, so apologies if I’m running before walking or treading on any toes, but that’s how I tend to understand everything: I try it and learn from my mistakes. I’m afraid there will be days when I just won’t make any sense at all, for which I can only apologise, but hopefully people will make it clear, without being too cruel, that I should just shut up and try again another day!
Be kind!
Welcome!
Greetings. I apologise for possible oversecretiveness, but for the moment I prefer to remain in relative anonymity; this is a moniker used online and mostly kept from overlapping with my legal identity.
Though in a sense there is consistency of identity, for fairness I should likely note that my use of first-person pronouns may not always be entirely appropriate.
Personal interest in the Singularity can probably be ultimately traced back to the fiction Deus Ex, though I hope it would have reached it eventually even without it as a starting point; my experience with this location, including a number of the sequences, comes from having encountered Yudkowsky-related things when reading Singularity-related information online (following a large web of references all at once, more or less) some time ago.
Depending on my future activity in this location, I may reveal more details about my current or future state of existence, but for the moment I plan to take advantage of the new existence of this account to lightly (?) engage in discussion when there is something I find I want to say.
Please go easy on me. I look forward to getting to know you all.
Welcome to Less Wrong! Nobody minds if you keep your information secret; I keep my legal identity pretty separate from my Normal_Anomaly identity as well, and I’m not alone in this.
hello lesswrong!
I’m a 20 y.o. student two years in studying EE & physics, though I self-identify more as a scientist than an engineer.
currently I’m juggling about 3 ‘big’ goals—general education (in progress), lucid dreaming (more of a side project; might as well use those sleep-hours for something more fun than being unconscious), and rationality (which is why im here).
I found this site (and the concept and usefulness of rationality) via some of Eliezer’s writing as i was scouring the Internet in my eternal quest for vanquishing boredom. that was some time ago (1-2 years i think), back then it seemed like yet another interesting thing so i read a bit and then schedule restrictions had me put this on hold ‘for better times’.
fast forward to a few months ago; as part of my increasing interest in self-awareness i simply realized that if i won’t work on what interests me ‘now’ then i never will, so i picked up the projects that interested me and started them.
since then i’ve read the first sequence and quite a few other articles that caught my eye. as you can guess from my listing ‘rationality’ as one of my major goals, the ideas i encountered have made quite an impact.
now if past experience is any indication, i doubt i’ll become an active member of this society. still, i’ll read the sequences and will probably continue lurking around as long as i have Internet access.
that’s about it for who i am and why im here, now i have a few questions of the practical sort:
besides the sequences, is there any generally accepted recommended reading in the field of rationality, heuristics & biases and cognitive psychology? (and maybe something at beginner-level about AI, transhumanism and cryogenics) i already have a small list of books and i want to make sure that im covering all the basics, so suggestions are welcome.
and now for the big one; the target—i want to read and comprehend all the sequences. the problem—a few-months familiarity with tvtropes completely destroyed my ability to wiki-walk without a supercritical tab explosion. further details—reading the second sequence (words one) is moving at a pace of 10 posts/two weeks while reading for a few hours/day, and im currently at >90 tabs open, burn-out seems imminent without a change of strategy. the question—does anyone have a systematic way to read through all of the sequences (and interesting comments), which is optimized for comprehension, low risk of burning out and time efficiency? (i have some idea for this but its still an early draft and doesn’t ‘feel’ efficient)
Capital letters. Please use them.
BUT NOT EXCLUSIVELY!
Yes! This is the single most important reading, from which all others flow.
You miss out on comments, but reading them like a book is the way to go for this. Many LWers found this much easier. Here is the latest epub collection. As for comments—I think they are around an order of magnitude less important than the posts themselves, and so trading away the comments in order to, y’know, actually read the sequences is well worth it. My recollection is that important comments were addressed in later top-level posts, so you’ll get to read the most important ones anyway.
Oh, and welcome to LessWrong!
Thanks for crushing my last line of retreat, no more excuses to prevent me from (finally) reading the sequences.
As for books, funny how archive panic activates even when you expect it and have precommitted to overcoming it.
Will try.
Books
No tabs
Welcome! I read through the sequences by opening all tabs in order and reading through the comments by CTRL+F “Yudkowsky” and reading other comments when they interested me. Here was my advice to another person, it has links to some of my favorite posts. The OP there is relevant for advice in general.
Hi there!
I’m a 43-year-old software developer in New Zealand. I found this site through the Quantum Physics sequence, which has given me an enormous improvement in my understanding of the subject, so a huge thank you to Eliezer. (I’d like to know the detailed maths, but I don’t hold much hope of that happening.) I’ve since managed to do the double-slit experiment using a laser pointer, Blu-tack and staples, which was great fun. I’m currently trying to think through the Schrödinger’s cat experiment, which seems to me to be described slightly incorrectly. I may try to write up a page or so about that some time.
The Bayes’ Theorem stuff was also a great topic, although I’ve not been able to think of practical ways to apply it yet.
I’m a pessimist on the Singularity: I think that various resource, time and complexity constraints will flatten exponential curves into linear ones (and some curves will even decline).
I’ve always valued accuracy in the sense that we should try to find out what’s really happening and understand our evidence and assumptions. I find one of my main tools for thinking is the “level of confidence”, e.g. when people say “you can’t prove that” I like to re-state the issue in terms of “this evidence gives us an extremely high level of confidence”.
I’m currently reading the Methods of Rationality story and loving it.
Hi, welcome to LW!
Neat! Details?
The I.J. Good/Yudkowsky/Singularity Institute version, aka the “Intelligence Explosion,” doesn’t require Moore’s law. It requires enough understanding of intelligence and decision theory to write up a self-modifying algorithm of human intelligence or higher. This algorithm can then write better ones, a process which can be repeated up to some high level of intelligence. The main things one needs to believe to believe the Intelligence Explosion hypothesis are:
Artificial General Intelligence (a piece of software as intelligent as a person) is possible and will be invented
An AGI able to rewrite its own code can improve its intelligence, including its ability to find ways to improve itself
This process can be repeated enough times to result in a superintelligent AI
A superintelligent AI will be able to make major changes to the world to satisfy its goals
Obviously, this is a very brief summary. Try here for a better and more detailed explanation.
Here’s a picture of the double slit experiment http://imgur.com/a/2Uyux
I think achieving human-level intelligence is tough but doable. I suspect that self-improvement may be very difficult. But either way I strongly suspect that the power required to keep society ticking along will not be sustained. I think an AGI is 30 years away and that society does not have 30 years up its sleeve. I hope I am wrong.
“I think an AGI is 30 years away and that society does not have 30 years up its sleeve.”
The outside view, treating your prediction as an instance of the class of similar predictions made for centuries, suggests this is false. Do you have compelling reasons to override the outside view in this case?
The compelling reason is that this is what geologists believe, i.e. Peak Oil. Previous centuries of predictions are not relevant, as they do not relate to decline (or not) in the production rate of today’s dominant power sources.
Salutations, Less Wrong,
My name is—surprisingly enough—Joey Goldman. Well, at least that is the name I ask people to call me...but I digress. I am 17 years old and—for the next two weeks—a junior in high school. Despite the fact that I was born and raised in London town, I attend an American school.
I was raised in a quasi-Jewish family. As far as I could tell during my younger childhood, neither of my parents had strong ties to the Jewish faith. Nevertheless, we observed the High Holidays and Shabbat, and I was bar mitzvahed. Over time, however, I managed to wean us off any and all Jewish observance, save Pesach, which serves more as a family/friend reunion meal than anything else.
I never really had a Crisis of Faith; rather, I just began to realise that belief in some mythical god figure was not a notion that I ever truly held. I never even considered theism a genuine option: I have never had personal experience with people who really believed. Judaism was always, for us, a matter more of tradition than anything. This is, perhaps, due to the fact that my family’s community of friends comprises first-generation immigrants from South Africa, for whom the synagogue acts as a community centre.
My father was, at a time, a professor of philosophy and literary theory at a university in Israel, so I was always around thinking people. I was always a rationalist in training, but I only recently came to have a label for this feeling I had. Some time not so long ago, a friend shared Less Wrong with me and I found a real phrontistery: a hub of like-minded (in the not-groupthink way) people. I have not looked back since.
My main interests are broad. It would be hard for me to narrow them down beyond philosophy, maths, and (cognitive & physical) science!
I am still working my way through the early posts here. Hopefully I will begin contributing some time soon.
P.S. Looking forward to meeting some of my fellow Londoners this coming Sunday.
Hi Joey, and welcome to LW!
Hello all -
My name is Colin, and I am a long time lurker / RSS reader. Thanks for posting this welcome message, as it gave me motivation to finally get registered.
I stumbled onto LW from Eliezer Yudkowsky’s “An Intuitive Explanation of Bayes’ Theorem”, which I found when trying to explain to my mother what I was up to in graduate school, and why I was so excited about it. I have been interested in science and epistemology for as long as I can remember, so finding that there are principled ways to reason about uncertainty was pretty amazing to me.
I most enjoy the LW articles about the application of careful reasoning to personal decision making, as that is something I constantly struggle with. I enjoy being a Bayesian at work (sonar signal processing), but have more trouble at home. For example, I have a constant internal debate about riding my motorcycle, as it is simultaneously the most fun and dangerous of my activities. It is much harder to do the math when there aren’t numbers...
Thank you for all the interesting posts!
Hello, LW.
I’m almost finished with an undergrad degree in economics, and I’m currently trolling for actuary jobs. I used to write opinion columns for my school’s newspaper, and I look forward to being an LW contributor so I can keep the writing parts of my brain active.
Two years ago I finished thinking about religion, and a year ago I finished thinking about politics. I’m ready to learn some more things.
You finished thinking about them? What do you mean by that?
I’m fairly certain there is no god, and there’s no marginal benefit to learning more about the philosophy of religion.
No matter how much or little I think about politics, the chances of me being the marginal vote are negligible. There are better uses for my time than that mind-killer.
b1shop:
That’s true as far as voting goes, but politics is about much more than voting. It is rational to ignore politics only assuming that the situation will remain stable and tolerable where you live. If more interesting times come to pass, then the ability to recognize early signs of trouble and plan accordingly will be extremely valuable (which I can confirm from personal experience). Now of course, you may believe that this is highly unlikely, but to have any certainty about it, you must have a certain level of knowledge about politics and keep track of political developments to at least some minimal extent. So in any case, complete cessation of thinking about politics cannot be rational.
I would be interested, Vladimir, in what developments would increase your probability that it is time for American LWers to exit the United States. In particular, how sharply would increases in racial tension and racial conflict increase that probability?
ADDED. I ask the second question because I tend to believe that ethnic conflict was a major cause of the extremely-bad time in former Yugoslavia.
Honestly, this is one of the most difficult questions I’ve ever been asked! From my own experience, I can say that the scariest thing about outbreaks of mass violence is how hard it is to realize how bad the situation is getting until you’re already in big trouble. I will try to answer your question to the best of my knowledge, though. (Since your question got strongly upvoted, I trust that my answer won’t be condemned for dealing with an overly political topic.)
For start, ethnic tensions and incidents are by themselves not necessarily a sign of impending social breakdown, even if there are significant local outbreaks of violence and mayhem. In the U.S., in particular, there have been periods of intense racial tensions and conflicts, some which caused fairly large casualties and wide-area devastation (like e.g. the 1967 riots in Detroit, Newark, and elsewhere, or the 1992 riots in LA). However, as bad as these were locally, they didn’t lead to a larger-scale conflict and societal collapse on a nation-wide scale, since the higher levels of government (state/federal) have remained stable and in control.
For things to get really out of control, one of two things must happen: (1) a high level of government is taken over, legally or not, by people willing to start a civil, ethnic, or religious war or mass persecution, or (2) the authority of the government collapses, and the vacuum is filled by the strongest violent organizations that happen to be around (which will then typically proceed to go to war with each other and persecute whomever they don’t like). It seems to me that neither possibility is likely with the U.S. in the foreseeable future (even though close things have happened with some of its local governments, which led to the aforementioned incidents). I’ll give the lists of some reasons why I believe this is so, and I’ll do this by way of contrast with the situation in ex-Yugoslavia:
Restraints on democracy. In ex-Yugoslavia, the post-communist elections offered genuine choice, in the sense that the collapse of the Communist Party’s authority created a situation where anyone was free to run on any platform whatsoever, and the winners, with popular support, would really have the power to steer things in whatever direction they wanted. In contrast, in the present U.S. system, elected politicians have little to no practical control over almost any area of policy, since whatever measures they want to undertake must pass through impenetrably thick layers of bureaucracy and over high obstacles of judicial review.
Of course, (1) is true only as long as the bureaucracies and the judiciary have real authority. However, I don’t see any signs of the state and (especially) federal authority in the U.S. weakening—on the contrary, having some government agency, especially a federal one, get seriously angry at you for whatever reason is a frightful prospect for any individual or organization, and contempt of courts is unthinkable. In contrast, in ex-Yugoslavia in the late 1980s, it was evident that the communist authorities were starting to be seen as laughably impotent.
Political culture and tradition. In places where radical (and typically violent) regime changes are within living memory, government institutions are typically far less stable than in places where they reach far beyond that. The U.S. is certainly in the latter category, even if you count the Civil War as a radical regime change; in contrast, in ex-Yugoslavia, the regime was only 45 years old, with lots of people who were never truly reconciled to it and held (and perpetuated) grudges against it all along. This gives the U.S. government far more slack for blunders and mismanagement before its authority might start to get seriously eroded.
Ideological uniformity. U.S. politics may seem ravaged by countless bitter controversies, but from a wider perspective, there is a remarkable ideological consensus with a very narrow (though, on most issues, slowly but constantly moving) Overton window. Only a small percentage of the population, and virtually nobody in the mainstream media, elite academia, and other influential sources of public opinion, hold any positions outside of it. In ex-Yugoslavia, the problem was primarily the ethnic rather than ideological conflict, so a better historical example of a country torn by truly deep ideological rifts might be the Weimar Republic. Where such deep ideological rifts exist, of course, it’s hard to prevent political violence from becoming a regular part of the political struggle, and it’s unlikely that both the winners and the losers of political contests will accept their results peacefully.
Ethnic/religious identity politics. I wanted to compose a long paragraph about this very important issue, but then I realized it can’t be done without giving a lot of very controversial statements. So I’ll just make the general observation that in the U.S., there still exists a strong taboo against violence-threatening forms of identity politics at the higher levels of government, and in most places also at the local level. (The local exceptions to this rule have indeed led to instances of local violent societal collapse, as in the cities that were left ravaged and devastated by the ethnic/race riots and breakdown of public order some decades ago.)
(Besides these considerations, coups by security forces led by renegade elements in the government are another common source of violent political instability, but these are highly unlikely in the U.S., with its extremely strong tradition of lawful control over the armed forces.)
So, on the whole, I would start to get worried if I saw the following signs in the U.S.:
Weakening bureaucratic/judicial authority of the state and federal governments, especially the latter, which would enable elected politicians to exercise direct authority.
Loss of faith in the political institutions. By this I don’t mean the usual cynical and critical attitudes towards politics and politicians, i.e. when people think that they fall short of the official ideal, but a real loss of respect for that official ideal, thus opening the way for radical alternatives.
Erosion of the ideological uniformity, with radical positions starting to get taken seriously in the mainstream discourse, instead of being seen as loathsome extremism or charming but hopelessly naive idealism.
The principal lines of opposition in mainstream politics acquiring an ethnic dimension. By this I mean contests for public office where the candidates are primarily seen as representatives of conflicting ethnic groups, and such contests becoming the rule rather than an occasional local exception.
All of this still seems rather far-fetched in the present-day U.S., so on the whole, I don’t think exiting the U.S. for fear of violent social breakdown will be a rational step in the foreseeable future.
On the other hand, my view of the general direction in which the U.S. is moving is quite pessimistic, although I see a slow decay rather than a violent breakdown as the most likely course of events. Here I mean a continuing slow degradation of the quality of government, a gradual worsening of the present economic malaise, the life for most people getting uglier, more dysfunctional, and less dignified, the public intellectual life getting more mendacious and detached from reality, and so on. With this in mind, it may well be rational for many people to leave the U.S. (or move to a different place within the U.S.) in search of better opportunities. However, these are complicated issues, which would get us right into the middle of numerous controversies.
Thanks, Vladimir M, for this long and valuable reply to my question.
As well as non-American-specific analysis.
To paraphrase Trotsky: “You may not be interested in politics, but politics is interested in you.”
In the U.S. many important functions are handled by the individual states. For example, most legal matters, including murder trials, are handled in “state court”. Also, the organizations most involved in taking licenses away from bad doctors, dentists and lawyers are state-based rather than Federal-based (and heavily intertwined with state politics). Of course the effectiveness of these organizations has a big effect on a resident’s quality of life.
So, in addition to the question of whether it is a good idea to exit the U.S., another relevant political question for U.S. residents is whether it is time to exit his or her state of residence. At least one highly-rational person I know has decided to end a long residence in California. (He moved to Florida.)
It is also rational to ignore politics assuming that it’s not possible for you to “recognize early signs of trouble and plan accordingly” easily and reliably enough.
Here you run into an ethical dilemma. Do you think that it’s generally better for voter turnout to be high than low?
If not, that would seem to be inconsistent with a desire for the US to continue being fairly politically stable (if you don’t have such a desire, please explain). [Note: previous statement about stability withdrawn, but I still think there’s a remaining point here in the following sentence] Among other advantages, the threat of being voted out of office is a significant check against politicians doing more than a certain amount of visibly bad things.
If so, then it’s unethical for you not to vote yourself, because you’d be contributing to a tragedy of the commons that you don’t want. It would be like being against music piracy but torrenting songs anyway since, after all, your individual downloads will have near-zero impact by themselves. Even if you ignore broader ethical principles like behaving as you would want people in general to behave, you specifically not voting has knock-on effects on those around you, and in diminished amounts on those around them.
This is a non-sequitur, despite its status as cached wisdom.
I see no reason to expect that higher voter turnout implies greater political stability. In fact, my intuition is exactly the opposite: assuming a genuine freedom to vote, low turnout is a marker of stability, since it signals that voters don’t much care who wins, which suggests that not much of importance depends on the outcome. You wouldn’t want to live in a country where it really, truly mattered who won an election.
Furthermore, the political class has a transparent interest in spreading the meme that high voter turnout is good, since a faction that wins an election with high turnout has a greater mandate to assume more power.
This is a really good point, so I’m withdrawing my statement about stability.
To pick another standard meme, what about popular involvement in the political process as a way of promoting just policies over unjust? That is, by unjust policies I mean policies that provide insufficient benefits to people who have little power. This is a separate question from stability, as a stable government can still have extremely unjust policies (or vice versa, though I can’t think of examples as easily).
I, sometimes proudly, do ignore the broad “ethical principle” of behaving as I would like others to behave. I don’t hold that as a moral belief.
Also, you can’t win this argument by appealing to negative consequences, because there are none. Yes, you did list some alleged benefits to democracy, but these benefits don’t go away for the nation if I (or even I plus all my friends) stop participating. I don’t have any fantasies about the marginal effect of my personal participation.
(Note: I didn’t vote you down.)
Well, then to go back to the basics of ethics: if you were in the market for a bicycle, and had an opportunity to steal a really nice one from a stranger without any possibility of getting caught, would you steal it?
I’m saying I don’t always act a certain way. Producing a counterexample where I do act that way doesn’t disprove my position.
I used to have a reasoned moral code that favored consistency, but I slowly dropped it when I moved into the real world and witnessed lots of people not following my precious moral system. There’s no point cooperating if others don’t cooperate, too. For iterated games, tit-for-tat >= always cooperate.
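(A minimal sketch of the iterated-game claim above, assuming the standard prisoner’s dilemma payoffs 3/3 for mutual cooperation, 1/1 for mutual defection, and 5/0 for defecting against a cooperator; the payoff numbers and strategy code are illustrative assumptions, not anything from the comment. The simulation just compares both strategies against an unconditional cooperator and an unconditional defector:)

```python
# Toy iterated prisoner's dilemma: compare tit-for-tat with always-cooperate.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(history):
    return 'C'

def always_defect(history):
    return 'D'

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return history[-1] if history else 'C'

def play(strategy, opponent, rounds=100):
    """Return (strategy_score, opponent_score) over the iterated game."""
    my_hist, opp_hist = [], []
    my_score = opp_score = 0
    for _ in range(rounds):
        my_move = strategy(opp_hist)   # each player sees the other's history
        opp_move = opponent(my_hist)
        a, b = PAYOFF[(my_move, opp_move)]
        my_score += a
        opp_score += b
        my_hist.append(my_move)
        opp_hist.append(opp_move)
    return my_score, opp_score

for opp_name, opp in [('always-cooperate', always_cooperate),
                      ('always-defect', always_defect)]:
    tft, _ = play(tit_for_tat, opp)
    coop, _ = play(always_cooperate, opp)
    print(f"vs {opp_name}: tit-for-tat={tft}, always-cooperate={coop}")
# vs always-cooperate: both score 300; vs always-defect: tit-for-tat scores 99, always-cooperate 0.
```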
There are some moral beliefs (e.g. don’t steal or lie) I usually feel a compulsion to follow regardless of the utility. I blame/thank evolution. In small circles, I lean more towards the golden rule (e.g. don’t overbill). But in larger circles, playing the cooperate card because you would want others to is not a strategy I endorse.
What do you think causes the difference between your behavior in small groups vs. in large groups? Perhaps if voting had small-group consequences you’d be more likely to. For example, suppose it were easy on social networks to see an overall “political participation score” for any given person, based on how many of the elections available to them they voted in.
There’s already big signalling benefits to voting. I think it explains why most people do it. However, it feels dirty for me to do something out of concern for my image, so I abstain.
Back when Elinor Ostrom won the Nobel prize, I remember reading a summary of her work that says self-management of the commons is possible when communities are a certain size. I forget the magic number, but I think it was something like 120,000.
You can say the same about astronomy, biology, chemistry, history and just about any part of human knowledge that does not interfere with your daily life.
Imagine someone who could not find his country on a map, does not know who the president or PM is, does not know how his government functions, and does not vote because he does not understand what elections are.
Is such a person worthy of admiration or respect? I do not think so.
I’m concerned with margins, not extremes. I can find my country on the map. I have an idea of how close my country is to revolution. I can come up with impressive-sounding political theories to discuss with others that signal the personality traits I value.
But I think I’d benefit more by studying the details of physics than the details of politics.
Why? You cannot change the laws of physics and they have no impact on your daily life either.
Hello. I’m Michael. I’m an English major, still in undergrad, but my passion is library science. I’m not as big on research as I am on systems of research and information exchange. I prefer work to thought but I don’t like mindless labor either, so I’ve tried to squeeze into the narrows of academic librarianship, hoping for a happy medium where I can do something that helps people learn and keeps me from growing unread and “set in my ways.”
I’m only stopping here because it’s the next interesting, “honest” place I’ve come across and I want to extract what I can from it and enjoy the community for a while. I used to be a southern Christian who did not really respond to reality in any way. I knew that one was supposed to “believe” in this truth or that truth, but the idea of belief corresponding to some reality never occurred to me until I ran afoul of a Kent Hovind seminar and was forced to reconcile beliefs to reality.
I’ve lurked and read here and there on Less Wrong and Overcoming Bias. I know already it’s a community of extreme intensity which I like. I’ve no predictions so I’m just going to enjoy it while I can.
Welcome to LessWrong.
If you have anything you’d like to talk about, may I suggest the current open thread.
Hi, I am Dean
I am a software developer with many years’ experience in web-based design and development. I think of myself as an ideas person and am deeply interested in AGI. My only real experience is research (reading and YouTube), so I am something of an armchair AI investigator. I have a few ideas of my own and hope to contribute one day.
Look forward to interacting with you all in a positive way.
Regards, Dean
Wow. It scares me that the internet has now been around long enough that one can speak of “many years’ experience” in such domains.
Hello, I am Alexander, and also a number of variations on Chalybs Levitas (depending on the screenname parameters of the site I’m signing up to).
I don’t consider myself a rationalist, yet. I still have a lot to learn, but I’ve started working my way through the Sequences, and I’ve started my walk through the other articles, by opening a new tab at each new link.
I value language, and I am practicing my craft as a writer (I’m terrible) as well as studying Japanese (also terrible there).
I chose Japanese as the foreign language to study first in part because I want to move to Japan, and I’ve signed up to the site because one of the things I’ve learned through reading the articles and Mr. Yudkowsky’s fiction is that people are not pessimistic enough in preparing their plans. I tried to apply pessimism to my current plan to live in Japan, and I don’t think I got it right. I’m hoping to learn more, and to work out answers I would not have found on my own, by talking with the community here.
Phew.
Nice meeting you all, ~Alexander
doublepost how’d I miss that :(
Anyone care to be my chavruta? I think this thread is a good place for finding people of similar ability levels (considering how recently we’ve found this site, not our education levels).
http://lesswrong.com/lw/6j1/find_yourself_a_worthy_opponent_a_chavruta/
Why don’t you write a bit more about yourself? This is an introduction thread, after all! :)
I might be interested in exploring and discussing this site. I often feel like I missed the boat on being able to engage in the discussions of the sequences. I generally just don’t bother commenting on them, even when I have something to say, since it seems like the discussions on them are pretty much dead. I am doing the sequence re-read threads, but they only post about 1-2 a day. I wouldn’t mind someone to go through them faster with, and actually have discussions about!
Either way, welcome to Less Wrong!
I have introduced myself already.
Sounds like a plan! I’m going to have to catch up with where they are in the sequence reruns, but I can start in medias res.
I came to Less Wrong in 2009 because of posts I noticed on the Doomsday Argument, which I have written about in the peer-reviewed literature. Recently, I self-published a short e-book which addresses the subject along with other subjects that I think would be of interest to this community. But the book is not free—the price is $4—and I am concerned that I might be violating etiquette if I self-promote it in a Discussion post. (I do have four karma points.)
In a post I have drafted (but not submitted) for Less Wrong, I summarize part of my book; I also invite professional scholars and educators to email me to request a complimentary evaluation copy, and I extend the same offer to the first ten Less Wrong members with a Karma Score of 100 or greater who email me. Would such an offer be in keeping with Less Wrong’s etiquette? I am open to other suggestions. However, partly in order to avoid potential conflicts with the original publisher of an article of mine that this book expands upon, I do not want to make the book free to everyone.
My own personal take is that if the summary is enough to grapple with and fairly evaluate, so that people who don’t buy the book or get it for free can still get something out of that post (something comparable to your average Discussion post), then it’s fine. Good material is good material wherever it comes from—Gary Drescher’s Good and Real was not available for free but discussing it here with him was still fine because the material was very good.
(The last author to get bitten here was promoting a book that the LWer who read it described as extremely fluffy, uncomprehensive, and a good example of bad business books.)
Hello all.
New user here, so far extremely gratified at what I have encountered. I’ve had a sort of a fetish for feedback/self-referential systems ever since reading Hofstadter’s “Gödel, Escher, Bach” as a kid. While I no longer really agree with much in this work, at the time it was mind-blowing. I remember clearly the vision I had of the all-but-unlimited power of feedback loops and iterated functions.
What I’m trying to get at is that, while all online forums, etc., are intrinsically self-referential, this one seems to be so in a special sense. Regardless of the content of a forum, perusal of the material initiates certain thought processes which lead to further discussion, i.e., addition to and alteration of the content. This is the trivial sense. From what I have seen, the content here is in the main devoted to examination and refinement of these thought processes (and also improving explication), for the purpose of engendering more fruitful discussion.
Not sure if I’m getting my point across, or if it’s worth making at all, but that’s what I’m here to explore!
Welcome to Less Wrong!
I don’t know that the forum itself is self-referential, but you’ll certainly find other fans of self-referentiality here, myself included.
Lurker for many years, I’ve decided to join up to be more involved in what up until now has been one of the more interesting RSS feeds. These days I’m onto my second career as a cognitive scientist and work for a research organisation. In a previous life I have worked as an engineer (mechanical, electrical and robotics), as well as setting up a dotcom which did some work in the VR area. I have worked with computers daily for over 30 years in various forms, although these days I don’t do any development or programming.

The change in careers was brought about by an increasing interest in human perception and decision making, and I was lucky enough to go back to uni in my 40s and complete a Masters majoring in cognitive science without going insane (and without an undergraduate degree in psychology). I did this while being the manager for a virtual reality research lab. Doing a degree a second time around is so much easier when it’s more like a graded hobby and the reading and assignment material is basically what you would be reading anyway!

My research these days is centered around decision making and cognitive biases, and I work on researching how these fields affect my group’s clients. This involves developing judgment and decision making analysis tools, investigating the impact of cognitive biases on planning processes and some work on human factors with regards to software. Unfortunately I can’t talk in detail about much of my day work due to who my employer is; however, when it comes to the general field there’s no problem there. I have also been involved in the skeptics community for a few decades, although these days I tend to channel my energy here into part-time volunteer teaching of science, creative and critical thinking in primary schools.
Kind regards to all.
Long-time reader, only occasional commenter. I’ve been following LW since it was on Overcoming Bias, which I found via Marginal Revolution, which I found via the Freakonomics Blog, which I found when I read and was fascinated by Freakonomics in high school. Reading the sequences, it all clicked and struck me as intuitively true. Although my “mistrust intuition” instinct is a little uncomfortable with that, it all seems to hold up so far.
In the spirit of keeping my identity small I don’t strongly identify with too many groups or adjectives. However, I’ve always self-identified as “smart” (whatever that means). If you were modeling my utility function using one variable, I’m most motivated by a desire to learn and know more (like Tsuyoku Naritai, except without the fetish for unnecessary Japanese). I’ve spent most of my life alternately trying to become the smartest person in the room and looking for a smarter room.
I just graduated from college and am starting work at a consulting firm in Chicago soon, which I anticipate will be the next step in my search for a smarter room. My degree is in economics, a discipline I enjoy because it is pretty good at translating incorrect premises into useful conclusions. I also dabbled fairly widely, realizing spring of my senior year that I should have started taking computer science earlier.
I’ve been a competitive debater since high school, which has helped me develop many useful skills (public speaking, analyzing arguments, brainstorming pros/cons rapidly, etc.). I was also exposed to some bad habits (you can believe whatever you want if no one can beat your arguments, the tendency to come to genuinely believe that your arbitrarily assigned side is correct). Reading some of the posts here, especially your strength as a rationalist, helped me crystallize some of these downsides, though I still rate the experience as strongly positive.
I am a male and a non-theist, although I’ve grown up in an area where many of my family members and acquaintances have real and powerful Christian beliefs (not belief in belief, the real deal). This has left me with a measure of reverence for the psychological and rhetorical power of religion. I don’t have particularly strong feelings on cryonics or the singularity, probably because I just don’t find them that interesting. Perhaps I should care about them more, given how important they could be, but I haven’t displayed any effort to do so thus far. It makes me wonder if “interestingness bias” is a real phenomenon.
My participation here over the years has been limited to reading, lurking, and an infrequent comment here and there. I’ve had a couple ideas for top level posts (including one on my half-baked notion that “rationalists” should consider following virtue ethics), but I have not yet overcome my akrasia and written them. Just recently, I have started using Anki to really learn the sequences. I am also using it to memorize basically useless facts that I can pull out in pub trivia contests, which I enjoy probably more than I should.
deleted
Hi, chimera, and welcome to LW!
I think you are on the right track! The most important question is “What do I believe and why do I believe it?” Being a rationalist is essentially answering that question correctly.
Hello Less Wrong.
I am a philosopher who is apparently concerned with precisely your mission statement. To improve the art of human rationality, I am here to help and be helped towards that aim.
Welcome.
Can you be more specific about what you’d like to be doing differently?
There are a couple of things. I would like to apply my informationalist ontology to the vast variety of issues that are being considered here; I think it would be of great help, but I won’t do that till I have some massive karma. I think it’s a novel ontology and I would like for it to be used by others. I also hope to see if I can’t use some Dennett to help out all of the qualiaphiles that seem to hang out here. I love Yudkowsky, but I can’t help but feel like he’s a little naive about modern philosophy’s successes. I think Quine and Davidson could definitely present some useful positions on human cognition and reasoning, even stronger ones than Dennett. But I doubt that they would really factor into this side of the debate, which is unfortunate. They were both hard naturalists, and both even considered themselves to be just a special sort of scientist.

The problem with philosophy is clear: it lacks a method of hard inference by which to systematically dissolve competing hypotheses. But that problem is not universal throughout the entire field; there are certainly schools of philosophers that do have agreed-upon formal methods by which to decide which hypotheses to eliminate. The problem is simply that they are not cross-disciplinary methods. You can’t convince Zizek the same way you convince Pinker, and that is truly no surprise if you have ever read the two, but it is a problem that philosophers must overcome if they ever plan to become a serious field of knowledge. I think the standard view of philosophers is of them as not considering the issue of peer-reviewed verifiability in philosophy important, and that is not true. We have made a lot of progress as philosophers and logicians towards figuring out ways of classifying deductive and inductive arguments, and formalizing our competing hypotheses into deductive systems. The only problem is that that stuff isn’t popular because it’s formal; but let us not forget that it was a philosopher who made Principia Mathematica and a philosopher who proved its incompleteness.
Schools, plural. You can solve the Agrippa trilemma by appeal to arbitrary rules—but whose arbitrary rules? Maybe you can justify non-arbitrary hypothesis-selection rules—but how? Circularly? With regress? There are reasons why philosophy remains “unsolved”.
So, I’m way past due for posting in this thread. But I’m here now, and that’s what’s important!
I found LW via a link to the Methods of Rationality, and hopping around the wiki I didn’t see anything mind-blowing (at first!), but I was delighted to see that there were other people who thought about this stuff.
I live in Worcester right now, working on my Master’s degree, but I’m a Cantabrigian originally. I’m studying social factors for (non-fooming) AIs, and shoring up my CS skills.
To be highly topical, I score extremely low on those autism tests that have been going around, despite being a nerdy mathematician (I got my BS in math) - like, 3-7 on a scale where 20 is “normal” and 35 is officially autistic. I’m generally considered pretty charismatic, and I love to do theater. So in those ways I might provide a new voice to our chorus.
This was sorta aimless and rambling. Feel free to ask questions about whatever, though! :D
What are social factors for AIs? My top two guesses are “social acceptance of machine intelligence taking more and larger roles in everyday life,” and “data mining social graphs.”
Well, in this case what I mean by that is “how can a current-technology AI agent semi-competently navigate social situations”, and in particular “how should interactions with such an agent change over time / with the development of a relationship”. It is related to social acceptance, from the direction of AIs being their own advocates.
Hi, everyone;
I saw this invitation and decided that it was finally time for me to register and say hello. I was led here by reading Eliezer’s excellent website, and have since really gotten quite a wake-up call from Less Wrong and the Sequences. So, thank you to all of you who have participated in this community and the craft. I hope that I will be able to learn a lot and contribute a little.
My focus and interest is artificial intelligence, but I’ve always known that I just don’t know enough to make an attempt yet. Thus, I’ve been studying other complicated systems such as weather, molecular biology, and neurology. At the same time, I am buffing up my math skills via correspondence courses. It’s intense, but I like intense, and it provides me a good distraction from the unpleasantness of living in an oil rush town such as I do.
Just a little background. I’ve done a bit of reading into akrasia and how to beat it (one of my big enemies), and hope to eventually have something interesting to say about it, but I’m content to continue research for now.
Thanks for all the wonderful material, everyone.
Hi!
I’m Balofsky (keeping first name blank), and I am a 24 year old undergraduate student in St. Paul, Minnesota. Interests include anything liberal art-ish, Judaism, politics and memorizing random facts I’ll probably never need in real life.
Have you tried Anki?
Interesting, I’ll look into it. I didn’t meant to retract my introduction, by the way- hit the wrong button.
It happens.
Re beoShaffer’s mention of Anki, if you haven’t heard of it before, it’s a suggestion to use spaced repetition.
Hi, I’m Laiste. I have been on an inner search to find myself, my meaning, my life... my happiness. I have found so many people speak in ways that make no sense to me—overly positive, self-deceptive, and the list goes on and on. In my search for understanding, I happened along this site, and for the first time, what I am reading, I GET. I do not hold any degrees or formal education, but my mind is my greatest asset. I am very much interested in many of the articles as it all relates to life’s journeys, but what brought me here was “The Science of Winning at Life”.
Hello,
I am Yissar, living and working out of the UK. I assert that the human condition has many flaws due to biases: cognitive, cultural, emotional, biological, behavioural, ethical.
I think and believe that dealing with the biases is the only way to solve the human condition and create a mind fit for the future. It is time for guided evolution.
Hello LessWrong,
I’ve been reading the website for at least the past two years. I like the site, I admire the community, and I figured I should start commenting.
I like to think of myself as a rationalist. LW, along with other sources (Bertrand Russell, Richard Dawkins) has contributed heavily (and positively) to my mental models. Still, I have a lot of work to do.
I like to learn. I like to discuss. I used to like to engage in heated debates, but this seems to have lost some of its appeal recently—either someone is wrong or isn’t, and I prefer to figure out which it is (and how much), point out the error in either my or his thoughts, and move on.
Procrastination is a major problem for me. Risk-aversion too. I’ve lost many dollars to them. I’m working on it, although not as hard as I should (read: desperately hard). I’ve been having a lot of fun, in fact, ever since I realised that just because you’re aware of your biases doesn’t mean you’re no longer subject to them. :-|
There are a few areas where, after I do my due diligence, I will ask the LW community for help. How to properly learn (spaced repetition and [memorising better](http://lesswrong.com/lw/52x/i_want_a_better_memory/) are of particular interest to me) and how to convince others of your perspective are two topics of particular concern.
In closing, I’d like to say I was very glad there was a Zurich LW meetup recently (even though I couldn’t attend) and there should be more Europe meet-ups. Preferably on the mainland because trains are moar better than planes.
Apteris
I’m a relative novice to rationality studies, but have become fascinated with this site. I’d like to take a part in the discussions here to explore my own views and open my mind to what others have discovered. I have many friends who have read Harry Potter and the Methods of Rationality, but I haven’t looked at it yet. Maybe I will, after seeing how many people here found this site through it. My academic background is mathematics, with a focus on logic and set theory, so rationality seems a logical next step in my personal growth. I look forward to interacting with all of you!
welcome to lesswrong!
Hi LW,
I joined this site not too long ago, but I missed this page and its request for an introduction. Better late than never, I guess.
I am 24, a Jr. Software Developer, and I live in Portland, OR. I was raised in a Baptist family, and left the Church during my junior year in high school over their stance on the Oregon Gay marriage bill. Once outside of the daily Sunday indoctrination, it took only a few short weeks to reason my way to atheism. I only wish I could have seen the truth sooner. I spent the next year or so on forums gaining a real variety of philosophical knowledge, and engaging in as many debates as I could. This made me stupid; I learned how to tear apart many arguments, and defend my own, skillfully.
I started reading Less Wrong at work, during down time, and it quickly devoured several weeks. I have always been drawn to science and rationality (though I used to have another name for it) and have found this community to be a fantastic resource. I have learned how to say oops, and to update quickly. I have learned how to see bias in my own thinking. I have started to learn (though still fail to grasp intuitively) Bayesian probability. This community has had a significant impact on me.
PS: How do you pronounce Eliezer?
Edit: Spelling
In his Bloggingheads videos, he says “Eh-lee-eh-zer”.
Hello, I’m John Lindberg a 26 year old Computer Scientist from Stockholm, Sweden. I’ve been reading this site for almost a year now. I’m a lurker by habit, preferring to listen rather than writing, but I really like this community, so I’ll see if I can fit myself in.
I originally found Less Wrong through StumbleUpon to the Ureshiku Naritai article, liked it enough to browse the front page and found that I liked almost all of the articles there. That set off a long bout of tab explosion as I followed links from those to other articles. I eventually started reading the sequences, but haven’t got through them all.
Much of the content here is stuff I’ve thought about on my own before, so I have not really taken on any groundbreaking new ideas, just a lot of refinement and applications for them. Many things have different names here, of course. What is called Map vs. Territory here, I called Story vs. Life (forming a trinity along with Game for people’s intentions), for instance. But I’ve internalized most of the local terms by now.
Automating processes has always been of almost intrinsic value to me, for which I consider FAI the ultimate goal: automating the thought process itself. I considered “the Dark Arts” (by which I mean trying to influence people with anything but logic) an evil, to the extent that I thought far less of the ancient Greeks once I learned that they were the ones to develop the art of Rhetoric. I’ve relented on that in recent years, seeing it as a tool instead, which derives most of its value from how it’s used. Still carries a slight tinge of evil though. That also made me learn a lot about biases, and I tend to get very defensive when I recognize one being used against me.
I suffer some from Akrasia, but mostly because of not having found any goals I really want to achieve. Three years ago I finally decided on self improvement as my new goal. The goal being to be able to end each day and come out of any situation feeling like I’d done a good job of it. (Discounting wireheading and averaging over time.) Still a long way to go, but I’m happy with the results so far: mostly getting rid of a few diseased memes which were holding me back, such as the “influencing people is evil” one mentioned above. Getting started with lifelogging will probably be the next step.
Hope to get involved.
Welcome to LessWrong!
Is it a coincidence if this reminds me of Gamist, Narrativist and Simulationist roleplaying?
Thanks.
I wasn’t consciously inspired by it, but was aware of the terms. I’d still discount it as a coincidence though. (Or being a natural division to make.)
The terms kind of grew on me from using them separately in expressions: “That’s just story” when discounting some evidence, “Life is” when being stoical (or “Life is beautiful” when being happy), and “What’s his game?” and similar when reasoning about entities (people, organizations, myself). Then they just fitted as words to partition the world around.
I am 29, self-employed, and a single high school graduate, but I am also in the teaching business. I am from Ethiopia, East Africa. If anyone is interested in getting me any legal job in Australia, please do so!
Hello.
I’ve only been checking this site for a short while and after reading all these interesting thoughts I posted something myself.
I’m interested in objective, rational thoughts about the ultimate reality of our existence (and Existence itself) and coming from a religious family—I also try to rationalize the notions I have about God.
I see that modal realism and Plantinga’s ontological argument don’t go down well in here and I concur—by themselves they are underwhelmingly weak.
But what if you combine these two views, based on one assumption alone—that Existence (whatever exactly it entails) has to be past eternal.
It’s not an irrational belief—it’s even possible by some theories. I posted something in that line (shouldn’t be hard to find—there aren’t many posts about God here) and I would very much appreciate any valid comments.
It’s a simple theory, but I would very much appreciate some feedback. I have no idea if I’m talking rubbish or if it does make for a coherent logic.
Thanks in advance.
Saladin from Slovenia.
Yep, looks like rubbish. Sorry.
In general, looking to justify your existing beliefs doesn’t work. Say this to yourself: “If God exists, I want to believe that God exists. If God doesn’t exist, I want to believe that God doesn’t exist.”
Well, it’s not that I believe in a Posthuman God—but I do believe in a past eternal universe (multiverse, Existence,..).
“Believing” just in that is IMO a rational belief (until proven otherwise, of course).
Past eternity necessarily leads to a kind of modal realism—all possible worlds are (or have been) real worlds.
If there is a possible world that allows for a God (to evolve), then it is necessarily true.
So the only question left is “is there a possible universe where a God(-like entity) can evolve”?
That’s complicated—but I noted one oversimplified idea that “might” show such a possibility.
I’d like to discuss this in more detail.
Bad epistemology.
If a completely trustworthy person rolled a normal six-sided die, and told you the result is an even number—is it “rational” to believe that the result was 6 ? After all, it hasn’t been proven otherwise. No, the ONLY rational belief in that situation is assigning an equal probability to 2, 4 and 6.
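(A worked version of the arithmetic in the die example above; this is just the ordinary conditional-probability rule applied to what the comment already states:)

```latex
P(X = 6 \mid X \text{ even}) = \frac{P(X = 6)}{P(X \text{ even})} = \frac{1/6}{1/2} = \frac{1}{3},
\qquad
P(X = 2 \mid X \text{ even}) = P(X = 4 \mid X \text{ even}) = \frac{1}{3} \text{ likewise.}
```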
If you go around asking “am I allowed to believe this?” for things you want to believe, and “am I forced to believe this?” for things you don’t, you’re shooting yourself in the foot.
I cannot imagine what evidence you could have for such a belief.
“I cannot imagine what evidence you could have for such a belief.”
From the 3 possibilities you can imagine regarding the origin of Existence itself (Creatio Ex Nihilo, Primus Movens or Eternal Existence), only the latter is fully compatible with all the basic physical laws/mathematics/logic that we know of (and can imagine), while at the same time being the simplest possibility.
That is—if by Creatio Ex Nihilo you mean one single event that spawned exactly one universe in all of eternity: ours.
You might think that Creatio Ex Nihilo (and I mean True Nihilo: from Nothing, from Non-Existence) reduces the importance and complexity of everything to a minimum—but it does the opposite—it makes our existence a totally unique, singular and extremely special event.
Why only once? Why only ours? Why only like this?
However you turn it—past eternity (or Eternal Existence) is (and has been for a long time) the most plausible option. It stays in the realm of the knowable, calculable, everyday laws and physics of our universe and its possibilities.
Ex Nihilo (and to a lesser extent Primus Movens) incorporates unknowable origins with unknowable laws, mathematics and logic that are clearly in violation of our known ones. If you have a simpler, knowable solution to a problem, then by Occam it’s the preferred one.
Unknowable isn’t less in Occam’s eyes: it’s always more.
I’m afraid I don’t think you’re ready for discussion on this website yet. Start by reading the Sequences, especially Mysterious Answers to Mysterious Questions.
I’m quite sure I’m not ready for such a discussion. I don’t have the education and the critical/analytic approach needed to state complex sets of axioms, to give formulaic approaches, to adapt physical theories etc. My sloppy English and writing in overly simplified terms doesn’t help much either.
But I think I know the layman’s basics of the main physical theories and I have a general idea where the main problems lie.
Ignoring the problems, loopholes, paradoxes, etc., while good for solving localized problems and questions, is not good practice or science if it doesn’t give a big, coherent picture of things (the result being, for example, the Copenhagen interpretation).
Lets start out simple:
Is it logically true, that by any known logic and in accordance with known physics a past eternal Existence (which is and/or includes our universe) requires the use of modal logic and its realisation as a type of modal realism?
Meaning: a past eternal Existence “must” include the realisation of all its physical/logical possibilities (at minimum all the possibilities our universe physically/logically allows for)?
Is this correct?
No, it isn’t.
Infinite time doesn’t mean that everything physically possible happened. Maybe the same things kept happening over and over.
Doesn’t quantum indeterminism (edit: quantum uncertainty) prevent that?
Any kind of quantum fluctuation which “could” have had a macroscopic, relativistic effect must have had such an effect (e.g., in an early universe).
Either you accept indeterminism or a nonlocal hidden variable—my guess is indeterminism is far more acceptable.
I would be far more careful using quantum physics in informal “philosophical” arguments. In most instances, people summon quantum effects to create a feeling of an answered question, while in fact the answer is confused or, worse, not an answer at all. The general rule is: every philosophical argument using the word quantum is bogus. (Take with a grain of salt, of course.)
More concretely, closed quantum systems (i.e. when no measurement is done) evolve deterministically, and their evolution can be periodic.
I thought that in a closed quantum system there are only probabilities of a truly indeterministic nature—and the only deterministic part is at the collapse of the wave function (where the positions, speed, etc. are truly determined—but impossible to measure correctly).
Still, the fact remains that our universe holds observers, and even if there is only one solution to past eternity—that of a cyclic universe of the same kind and the same parameters of the big bang—the futures of the universe would be determined by the acts of those observers. Different acts of observing—different universes in series (but strictly with the same physical constants).
All the consequences of observing in those universes would so have to be realized.
Mostly the opposite. In a closed quantum system, there are no probabilities, just the unitary, deterministic evolution of the wavefunction. On a measurement (which is a particular type of interaction with something outside the system), the collapse happens, and it is at this point that probabilities and nondeterminism are both introduced. Whatever property is being observed sets an eigenbasis for the measurement. Each eigenspace is assigned a probability of being chosen proportional to the norm—the sum of the squares of the lengths. This probability is the probability that the wavefunction is replaced by the renormalized projection of that wavefunction into the chosen eigenspace.
(This is the simplest version—it only covers von Neumann measurements in the Schrodinger picture applied to pure states.)
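(A minimal numerical sketch of the measurement rule described above, i.e. a von Neumann measurement on a pure state. The observable and the state below are made-up toy choices for illustration, not anything from the comment; NumPy is assumed:)

```python
# Toy von Neumann measurement: project a pure state onto the eigenspaces of
# an observable; each outcome's probability is the squared norm of the
# projection, and the post-measurement state is the renormalized projection.
import numpy as np

# Illustrative observable (assumption): Pauli-Z, eigenvalues +1 and -1.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
eigvals, eigvecs = np.linalg.eigh(Z)

# Illustrative pure state (assumption): |psi> = sqrt(0.8)|0> + sqrt(0.2)|1>.
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)

for value, vec in zip(eigvals, eigvecs.T):
    projection = vec * np.vdot(vec, psi)          # project psi onto this eigenvector
    prob = np.vdot(projection, projection).real   # squared norm = outcome probability
    post_state = projection / np.sqrt(prob)       # renormalized projected state
    print(f"outcome {value:+.0f}: probability {prob:.2f}, post-state {np.round(post_state, 3)}")
```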
That’s not very “MWI” of you! “Collapse” currently has the status of a fantasy which is unsupported by any evidence.
Agreed—MWI (many-worlds interpretation) does not have any “collapse”: Instead parts of the wavefunction merely become decoherent with each other which might have the appearance of a collapse locally to observers. I know this is controversial, but I think the evidence is overwhelmingly in favor of MWI because it is much more parsimonious than competing models in the sense that really matters—and the only sense in which the parsimony of a model could really be coherently described. (It is kind of funny that both sides of the MWI or !MWI debate tend to refer to parsimony.)
I find it somewhat strange that people who have problems with “all those huge numbers of worlds in MWI” don’t have much of a problem with “all those huge numbers of stars and galaxies” in our conventional view of the cosmos—and it doesn’t cause them to reach for a theory which has a more complicated basic description but gets rid of all that huge amount of stuff. When did any of us last meet anyone who claimed that “the backs of objects don’t exist, except those being observed directly or indirectly by humans because it is more parsimonious not to have them there, even if you need a contrived theory to do away with them”? That’s the problem with arguing against MWI: To reduce the “amount of stuff in reality”—which never normally bothers us with theories, and shouldn’t now, you have to introduce contrivance where it is really a bad idea—into the basic theory itself—by introducing some mechanism for “collapse”.
Somehow, with all this, there is some kind of cognitive illusion going on. As I don’t experience it, I can’t identify with it and have no idea what it is.
My problem with MWI is not the massive number of worlds—but how they are created.
How do you reconcile MWI with the 1st law of thermodynamics?
And my problem is that questions like this are heavily downvoted. This isn’t a bad question per se, even if it may be a little bit confused. As I understand, only a minority of people here are physicists, and quite a lot of people on LW don’t have a technical understanding of quantum theory. So the parent comment can’t be perceived as ignorant of some already shared standard of rationality. Also, MWI is still not a broad scientific consensus today, even if some portray it as such. So why does the parent stand at −5? Do we punish questioning the MWI? If so, why?
Now on topic. MWI doesn’t violate thermodynamics any more than the Copenhagen interpretation. In the CI one can have a superposition of states of different energy collapsing into one of the involved energies; the estimated (mean) energy of the state is not conserved through the measurement.
The energy is conserved in two senses: first, it is conserved during the evolution of a closed system (without measurement), and second, it is conserved completely when using statistical mixed states to model the system—in this case, the collapse puts the system into a mixed state, and the mean value of any observable survives the collapse without change. Of course, the energy conservation requires time-independent dynamics (it means time-independence of the laws governing the system and all physical constants) in both cases.
An important technical point is that measurements always transfer energy to the apparatus, and therefore it makes little sense to demand conservation of energy of the measured system during a measurement. To model a realistic measurement, the apparatus has to be described by a non-self-adjoint Hamiltonian to effectively describe dissipation, or at least it has to have a time-dependent Hamiltonian, or both; else, the apparatus will not remember the results. In both cases, energy conservation is trivially broken.
As for the (implicit) first question of how the worlds are created: There is one Hilbert space consisting of all possible state vectors of the world. The state of the world can be, in a rough idealisation, decomposed into a tensor-product of smaller states of individual observers and non-observer subsystems (whether a subsystem is or isn’t an observer is not particularly important, and it is probably related to the problem of consciousness). In a subspace of a particular observer, some states are specific, while most of the states aren’t. The specific states correspond to certain thoughts. In an idealisation of an observer who cares only about one particular physical system, the observer’s specific states all correspond to states of the system, which are said to have a sharp value of certain observables.
Now, in the Schrödinger picture, all state vectors evolve. Interaction between the observer and the observed system takes the state vectors into correlation. After that, the overall state vector of the observer+system compound can’t be written as a tensor product of an observer-vector and a system-vector, and thus talking about the state of the observer alone doesn’t make sense any more.
The consciousness of the observer works in such a way that it decomposes the state of the observer+system into a sum of vectors, each of which can be written as a tensor product of an observer-vector and a system-vector (although the entire sum can’t), and lives a separate instance on each summand. Each of these instances forms what is called a world in the MWI jargon.
These worlds thus aren’t created from void by some physical action. It’s perhaps better to say that they are “interpreted into existence” by individual observers’ consciousnesses. The division of the whole universe into individual worlds is observer dependent.
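(A small numerical illustration of the “can’t be written as a tensor product” point above, assuming a toy two-qubit model of the observer+system compound. The states and the Schmidt-rank test via SVD are illustrative choices, not part of the original comment:)

```python
# After an interaction, the observer+system state is generally entangled:
# its Schmidt rank is > 1, so it cannot be factored into
# (observer vector) tensor (system vector).
import numpy as np

def schmidt_rank(state, dim_a, dim_b, tol=1e-12):
    """Number of nonzero Schmidt coefficients of a bipartite pure state."""
    coeffs = np.linalg.svd(state.reshape(dim_a, dim_b), compute_uv=False)
    return int(np.sum(coeffs > tol))

# Product state (assumption): observer "ready" state, system in a superposition.
ready = np.array([1, 0], dtype=complex)
system = np.array([1, 1], dtype=complex) / np.sqrt(2)
product = np.kron(ready, system)

# Post-interaction state (assumption): observer record correlated with the
# system, as in |saw 0>|0> + |saw 1>|1>.
entangled = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

print(schmidt_rank(product, 2, 2))    # 1 -> factorizable, a single branch
print(schmidt_rank(entangled, 2, 2))  # 2 -> not factorizable, two branches
```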
These worlds aren’t being “created out of nowhere” as people imagine it. They are only called worlds because they are regions of the wavefunction which don’t interact with other regions. It is the same wavefunction, and it is just being “sliced more thinly”. To an observer, able to look at this from outside, there would just be the wavefunction, with parts that have decohered from each other, and that is it. To put it another way, when a world “splits” into two worlds, it makes sense to think of it as meaning that the “stuff” (actually the wavefunction) making up that world is divided up and used to make two new, slightly different worlds. There is no new “stuff” being created. Both worlds actually co-exist in the same space even: It is only their decoherence from each other that prevents interaction. You said that your problem is “how they (the worlds) are created” but there isn’t really anything new being created. Rather, parts of reality are ceasing interaction with each other and there is no mystery about why this should be the case: Decoherence causes it.
Do you think the number of worlds is a definite and objective fact, or that it depends on how you slice the wavefunction?
Well, it isn’t really about what I think, but about what MWI is understood to say.
According to MWI, the worlds are being “sliced more thinly” in the sense that the total energy of each depends on its probability measure, and when a world splits its probability measure, and therefore energy, is shared out among the worlds into which it splits. The answer to your question is a “sort of yes” but I will qualify that shortly.
For practical purposes, it is a definite and objective fact. When two parts of the wavefunction have become decoherent from each other there is no interaction and each part is regarded as a separate world.
Now, to qualify this: Branches may actually interfere with each other in ways that aren’t really meaningful, so there isn’t really a point where you get total decoherence. You do get to a stage though where decoherence has occurred for practical purposes.
To all intents and purposes, it should be regarded as definite and objective.
Please check your sources on MWI. I think you must be misreading them.
So in reality, decoherence is a matter of degree. But I thought that the existence of one world or many worlds depended on whether decoherence had occurred. Is there a threshold value, a special amount of decoherence which marks the transition?
It sounds like you might have issues with what looks like a violation of conservation of energy over a single universe’s history. If a world splits, the energy of each split-off world would have to be less than that of the original world. That doesn’t change the fact that conservation of energy appears to apply in each world: Observers in a world aren’t directly measuring the energy of the wavefunction, but instead they are measuring the energy of things like particles which appear to exist as a result of the wavefunction.
Advocates of MWI generally say that a split has occurred when a measurement is performed, indicating that an observation has been made. It should also be noted that when it is said that “interference has stopped occurring” it really means “meaningful” interference—the interference still occurs but is just random noise, so you can’t notice it. (To use an extreme example, that’s supposed to be why you can’t see anyone in a world where the Nazis won WWII: That part of the wavefunction is so decoherent from yours that any interference is just random noise and there is therefore no meaningful interference. This should answer the question: As decoherence increases, the interaction gets more and more towards randomness and eventually becomes of no relevance to you.)
I suggest these resources.
Orzel, C., 2008. Many-Worlds and Decoherence: There Are No Other Universes. [Online] ScienceBlogs. Available at: http://scienceblogs.com/principles/2008/11/manyworlds_and_decoherence.php [Accessed 22 August 2010].
Price, M. C., 1995. The Everett FAQ. [Online] The Hedonistic Imperative. Available at: http://www.hedweb.com/manworld.htm [Accessed 22 August 2010].
No, you are misunderstanding the argument. I am a MWI opponent but I know you are getting this wrong. If we switch to orthodox QM for a moment, and ask what the energy of a generic superposition is, the closest thing to an answer is to talk about the expectation value of the energy observable for that wavefunction. This is a weighted average of the energy eigenvalues appearing in the superposition. For example, for the superposition 1/sqrt(2) |E=E1> + 1/sqrt(2) |E=E2>, the expectation value is E1/2 + E2/2. What Q22 in the Everett FAQ is saying is that the expectation value won’t apriori increase, even if new worlds are being created within the wavefunction, because the expectation value is the weighted average of the energies of the individual worlds; and in fact the expectation value will not change at all (something you can prove in a variety of ways).
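(Spelling out the weighted average for the two-term superposition used above; this is just the standard expectation-value formula applied to the example already in the comment:)

```latex
\langle E \rangle
= \sum_k |c_k|^2 E_k
= \left|\tfrac{1}{\sqrt{2}}\right|^2 E_1 + \left|\tfrac{1}{\sqrt{2}}\right|^2 E_2
= \frac{E_1}{2} + \frac{E_2}{2}.
```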
Well, this is another issue where, if I was talking to a skilled MWI advocate, I might be able to ask some probing questions, because there is a potential inconsistency in the application of these concepts. Usually when we talk about interference between branches of the wavefunction, it means that there are two regions in (say) configuration space, each of which has some amplitude, and there is some flow of probability amplitude from one region into the other. But this flow does not exist at the level of configurations, it only occurs at the level of configuration amplitudes. So if “my world”, “this world”, where the Nazis lost, is one configuration, and the world where the Nazis won is another configuration, there is no way for our configuration to suddenly resemble the other configuration on account of such a flow—that is a confusion of levels.
For me to observe interference phenomena, I have to be outside the superposition. But I wasn’t even born when WWII was decided, so I am intrinsically stuck in one branch. Maybe this is a quibble; we could talk about something that happened after my birth, like the 2000 US election. I live in a world where Bush won; but in principle could I see interference from a world where Gore won? I still don’t think it makes sense; the fact that I remember Bush winning means that I’m in that branch; I would have to lose the memory for the probability flow here to come into contact with the probability flow in a branch where Gore won. More importantly, the whole world configuration would have to morph until it came to resemble a world where Gore won, for some portion of the probability flow “here” to combine with the probability flow there.
I’ll try to explain what I’m talking about. The wavefunction consists of a complex-valued function defined throughout configuration space. Configuration space consists of static total configurations of the universe. Change exists only at the level of the complex numbers; where they are large, you have a “peak” in the wavefunction, and these peaks move around in configuration space, split and join, and so on. So really, it ought to be a mistake to think of configurations per se as the worlds; instead, you should perhaps be thinking about the “peaks”, the local wavepackets in configuration space, as worlds. Except, a peak can have a spread in configuration space. A single peak can be more like a “ridge” stretching between configurations which are classically inconsistent. This already poses problems of interpretation, as does the lack of clear boundaries to a peak… Are we going to say that a world consists of any portion of the wavefunction centered on a peak—a local maximum—and bounded by regions where the gradient is flat??
But here I can only throw up my hands and express my chronic exasperation with the fuzzy thinking behind many worlds. It is impossible to intelligently critique an idea when the exponent of the idea hasn’t finished specifying it and doesn’t even realize that they need to do more work. And then you have laypeople who take up the unfinished idea and advocate it, who are even more oblivious to the problems, and certainly incapable of answering them.
Paul, if I could convey to you one perspective on MWI, it would be as follows: Most people who talk about MWI do not have an exact definition of what a world is. Instead, it’s really an ideology, or a way of speaking: QM has superpositions in it, and the slogan is that everything in the superposition is real. But if this is to be an actual theory of the world, and not just an idea for a theory, you have to be more concrete. You have to say exactly what parts of a wavefunction are the worlds. And when you do this, you face new problems, e.g. to do with relativity and probability. The exact nature of the problems depends on how the MWI idea is concretized. But if you give me any concrete, detailed version of MWI, I can tell you what’s wrong with it.
First, let me say that was a beautifully clear explanation of what MWI is, and especially of what questions it needs to answer.
I don’t think this is any more unreasonable than talking about firing two separate localized wave-packets at each other and watching them interfere, even if we don’t have a specific fixed idea of what in full generality counts as a “wave-packet”. Typically, of course, for linear wave equations we’d use Gaussians as models, but I don’t think that’s more than a mathematically convenient exemplar. For non-linear models, (e.g. KdV) we have soliton solutions that have rather different properties, such as being self-focusing, rather than spreading out. I guess I don’t see why it matters whether you have an exact definition for “world” or not—so long as you can plausibly exhibit them. The question in my mind is whether evolution on configuration space preserves wave-packet localization, or under what conditions they could develop. I find it hard to even formalize this, but given that we have a linear wave-equation, I would tend to doubt they do.
Of course relativity will be an issue. QM is not Einsteinian relativistic, only Galilean (relabeling phases properly gives a Galilean boost), and that’s baked into the standard operators and evolution.
I do admit to over-generalizing in saying that when a world splits, the split-off worlds each HAVE to have lower energy than the "original world". If we measure the energy associated with the wavefunction for individual worlds, then on average, of course, this would have to be the case, due to the proliferation of worlds. However, I do understand, and should have stated, that all that matters is that the total energy for the system remains constant over time, and that probabilities matter.
Regarding the second issue, defining what a world is, I actually do understand your point; I think you believe I understand less about this than I actually do. Nevertheless, I would say that getting rid of the need for collapse does mean a lot and removes a lot of issues: more than are added by the "what constitutes a world" issue. However, we probably do need a "more-skilled MWI advocate" to deal with that.
Let me see if I am understanding you. You’re now saying that the average energy-per-world goes down, “due to the proliferation of worlds”? Because that still isn’t right.
The simplest proof that the average energy is conserved is that energy eigenstates are stationary states: subjected to Hamiltonian evolution, they don’t change except for a phase factor. So if your evolving wavefunction is Psi(t), expressed in a basis of energy eigenstates it becomes sum_k c_k exp(-i . E_k . t) |E_k>. I.e. the time dependence is only in the coefficients of the energy eigenstates, and there’s no variation in their norm (since the time dependence is only in the phase factor), so the probability weightings of the energy eigenstates also don’t change. Therefore, the expectation value of the energy is a constant.
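Spelled out (a minimal restatement, assuming a time-independent Hamiltonian and units with hbar = 1), the argument is just

\[
\Psi(t) = \sum_k c_k \, e^{-i E_k t} \, |E_k\rangle
\quad\Longrightarrow\quad
\langle E \rangle(t) = \sum_k \left| c_k e^{-i E_k t} \right|^2 E_k = \sum_k |c_k|^2 E_k ,
\]

which is manifestly independent of t.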
There ought to be a “local” proof of energy conservation as well (at least, if we were working with a field theory), and it might be possible to insightfully connect that with decoherence in some way—that is, in a way which made clear that decoherence, the process which is supposed to be giving rise to world-splits, also conserves energy however you look at it—but that would require a bit more thought on my part.
ETA: Dammit, how do you do subscripts in markdown? :-)
ETA 2: Found the answer.
No, I think you are misunderstanding me here. I wasn't claiming that proliferation of worlds CAUSES average energy per world to go down. It wouldn't make much sense to do that, because it is far from certain that the concept of a world is absolutely defined (a point you seem to have been arguing). I was saying that the total energy of the wavefunction remains constant (which isn't really unreasonable, because it is merely a wave developing over time—we should expect that), and that a CONSEQUENCE of this is that we should expect, on average, the energy associated with each world to decrease, since we have a constant amount of energy in the wavefunction and the number of worlds is increasing. If you have some way of defining worlds, and you have n worlds, and then later have one billion times n worlds, and you have some way of allocating energy to a world, then this would have to happen to maintain conservation of energy. Also, I'm not claiming that the issue is best dealt with in terms of "energy per world" either.
Now you are saying what I first thought you might have meant. :-) Namely, you are talking about the energy of the wavefunction as if it were itself a field. In a way, this brings out some of the difficulties with MWI and the common assertion that MWI results from taking the Schrodinger equation literally.
It's a little technical, but possibly the essence of what I'm talking about is to be found by thinking about Noether's theorem. This is the theorem which says that symmetries lead to conserved quantities such as energy. But the theorem is really built for classical physics. Ward identities are the quantum counterpart, but they work quite differently, because (normally) the wavefunction is not treated as if it is a field; it is treated as a quasiprobability distribution on the physical configuration space. In effect, you are talking about the energy of the wavefunction as if the classical approach, Noether's theorem, were the appropriate way to do so.
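For reference, the classical statement being leaned on is the textbook one (nothing specific to this discussion): if the Lagrangian has no explicit time dependence, then along solutions of the equations of motion

\[
E = \sum_i \dot q_i \frac{\partial L}{\partial \dot q_i} - L ,
\qquad
\frac{dE}{dt} = -\frac{\partial L}{\partial t} = 0 ,
\]

i.e. time-translation symmetry yields a conserved energy built directly out of the dynamical variables. The question is whether it is legitimate to run that construction with the wavefunction on configuration space playing the role of those variables.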
There are definitely deep issues here because quantum field theory is arguably built on the formal possibility of treating a wavefunction as a field. The Dirac equation was meant to be the wavefunction of a single particle, but to deal with the negative-energy states it was instead treated as a field which itself had to be quantized (this is called “second quantization”). Thus was born quantum field theory and the notion of particles as field quanta.
MWI seems to be saying, let’s treat configuration space as a real physical space, and regard the second-quantized Schrodinger equation as defining a field in that space. If you could apply Noether’s theorem to that field in the normal way (ignoring the peculiarity that configuration space is infinite-dimensional), and somehow derive the Ward identities from that, that would be a successful derivation of orthodox quantum field theory from the MWI postulate. But skeptical as I am, I think this might instead be a way to illuminate from yet another angle why MWI is so problematic or even unviable. Right away, for example, MWI’s problem with relativity will come up.
Anyway, that’s all rather esoteric, but the bottom line is that you don’t use this “Noetherian configuration-space energy” in quantum mechanics, you use a concept of energy which says that energy is a property of the individual configurations. And this is why there’s no issue of “allocating energy to a world” from a trans-world store of energy embodied in the wavefunction.
A better question—how does the observed 1st law of thermodynamics arise from the laws of physics underpinning the many worlds?
Why do you see a conflict? You seem to be assuming both that the total energy of the universe is positive (not known!), and that each universe has the same total energy (i.e. that energy is not arbitrarily scalable). Why not assume that a universe with 100 joules of energy splits into two universes—one with 100 zorkmids of energy and the other with 100 arkbarks of energy (where it is understood that 1 zorkmid + 1 arkbark = 1 joule).
Erm, I can tell you less about physics than the creationist museum can about evolution, but I don't think it applies to open systems.
Anyway, for some interesting information about thermodynamics go here:
I don’t understand the argument in that post, even now re-reading it over a year later. Do you? If not, why are you citing it?
The post shows that even intellectual high-fliers like EY and Hanson can err on this topic (if it is to be believed that they were wrong). In other words, I wouldn’t recommend asking questions made up of terms that are themselves poorly understood even by the best minds within this community.
Well, no, that language is not. But it’s the standard language. Of all the interpretations, MWI makes the most sense to me, but quantum mechanics really is “merely” a very good effective model. (See the conflict between SR and QM. QFT neatly dodges some obstacles, but has even more horrendous interpretational issues. And we can only barely torture answers out of it in some limited cases in curved spacetime.)
Even so—there is a nondeterministic variable in our universe.
Even if a cyclic model is true, which allows for only one kind of parameter setup for the big bang, the nondeterministic variables would certainly lead to different outcomes in different cycles.
Hence, all the scenarios that arise from these nondeterministic possibilities would have to be realized.
“Physical” and “logical” are not the same thing. Even if all physical possibilities are instantiated (as Tegmark’s Level IV Multiverse implies, I believe), there are logical systems that do not describe any part of reality.
I always say "physical/logical" to denote the known laws of physics of our universe and the logic that describes it.
If you say only "physical", then you limit yourself to that which is directly observable, testable and foreseeable. And that hinders the more relaxed approach to discussing "far-out" possibilities that such cases require.
Point being: IMO the only valid physical/logical speculations are those that relate to the physics and logic we know of (or a variation of it in an indeterministic universe).
Only Past Eternity stays completely (or mostly) within such a physical/logical frame. Creatio Ex Nihilo, on the other hand, is completely outside it, with no hypothetical and (not to mention) no observational evidence offered.
It's the most illogical thing ever conceived: no theory explains it—yet it has the "same" probability as any other option in the physically unknown.
If you "can" put the known logic and laws into the physically unknown and make it into a coherent, workable, testable theory—then any such theory is "more" probable than others without it.
Minimalism and reductionism, which are the main reasons for allowing/preferring Creatio Ex Nihilo, break down after some scrutiny. If you talk about one singular event in all eternity (or non-existence), which just happens to be a universe capable of intelligent life, then you need to offer some theory—any kind of theory—that explains just that (and the logic of it). How can "Non-Existence" allow for any kind of Existence, and why in all eternity just once?
If we talk about multiple Creatio Ex Nihilos for completely separated spatial/temporal universes, then their number can easily exceed the number of universes that happened in an Eternal Existence.
Not just that—universes born out of Ex Nihilo would allow not just all the possibilities that Eternal Existence allows for (based on known physical laws and logic); they would also allow for universes with laws and logics completely unknown to us (illogical to us—just as Creatio Ex Nihilo is illogical to us).
So when you think about it, Past Eternity is the simpler and more logical solution, and as such a valid starting point for further speculations.
As far as I know, the big bang hypothesis is in accord with known physics.
Seconding ciphergoth’s suggestion. It’s very unlikely that you can make a positive contribution here until/unless you study more. We do have respected members who hold theistic beliefs, but their comments sound noticeably more rational than yours.
Tentatively offered—check out Spinoza. He came to the conclusions that God is completely identical with everything that exists, and that everything is determined.
To put it mildly, Spinoza’s God isn’t what most people are looking for when they want a God.
You don’t fight confusion with confusion.
You shouldn’t fight fire with fire either, but humans seem to use the term anyway...
Begone!
Don't bring up your religious beliefs here or you will be voted to hell, like me. Just saying, as I am sure this comment will cost me a few more votes X(
Yes, there is a lot of hostility to religion here. Folks here are into "rationality", and they have somehow gotten the impression that much religious thinking is irrational. "Somehow gotten the impression". OK, let's be honest here. They got that impression because a lot of religious thinking really is irrational. You will have a tough job convincing folks here that your own religious thinking is any different. So, I think that "Just lay low" is pretty good advice. There is a lot to be learned here, stuff about how to think clearly and about why we don't always think as clearly as we would like to. So, I bet it will do you, Saladin, some good to stick around. But I don't think you will get much useful feedback regarding your thinking about a deity or eternal first cause. There are probably better places on the web for that.
That’s an interesting question, actually. I would have been inclined to agree—I agree that a lot of religious thinking is irrational—but when I tried to think of places to send people, most of them are communities like the FRDB. These are not precisely dispassionate.
Did you have an Internet community in mind?
No, I don’t, though Googling is always worth a try. Using search strings containing words like discussion, theology, agnostic, first cause, and apologetics, I found a variety of resources and communities in which at least the spelling, grammar, and punctuation were tolerable.
In trying to work your way through these kinds of questions, you obviously need to avoid sites where a consensus exists that "The truth is already known". But I suspect that you also need to avoid getting too deeply immersed in communities like this one, where the consensus is that "The way to the truth is known". In my experience, people who believe they know the way are even more passionate, evangelical, and just plain impolite than are the self-satisfied folk who think they have already arrived at the truth. Which, of course, is not to say that passionate impolite evangelists are not worth listening to occasionally.
I would recommend totally eliminating your impressions of “the kind of people who think X” from your considerations about X, unless the X-ites are actually torturing babies.
By paying attention to their personal characteristics, you’re essentially guaranteeing that your opinions will be hijacked by how socially comfortable you feel with their group, which has nothing to do with truth. New agers are great people to hang out with, very… undogmatic, but I wouldn’t recommend swallowing any of their truth claims.
If LW thinks it knows the Way to the Truth, then the thing to evaluate is what exactly our way is, and why we think it leads to the truth.
Oh, I agree. I am busy evaluating exactly that. But I will point out that a large fraction of the techniques taught here have to do with how to communicate clearly, rather than simply how to think clearly. One presumes that the reason we wish to communicate is that we wish to be understood. If certain “personal characteristics” (I mentioned passion and etiquette) either promote or interfere with successful communication, then I think that both sender and receiver have some responsibility to make adjustments. In fact, in a broadcast model, with one sender and many receivers, the onus of adjustment lies mainly on the sender. [Edit: spelling]
Aha. Agreed, in that case; the onus is on us.
Really? A quick survey of recent posts suggests that we care a lot more about thinking than communication.
To the extent that communicating clearly affects one’s explicit verbal reasoning with oneself, the two are not at odds. Understanding why using words with excessively strong connotations is a cheap move in an argument will also help you understand why it’s a bad mode of thinking.
I was raised a believer and I never thought about it being irrational or not until I met the creationist crowd. After debating enough of them, mainly over the internet, I was appalled at their ignorance and butchering of science for some IDiotic predetermined conclusion. I still believe, but I certainly respect the atheists for trying to be rational. I have heard some pretty convincing evidence of stuff in the Bible, but after meeting the creationists I had to think twice as to whether that is objective or not. I was going to go do some research on it and never got around to it because I'm lazy.
Most people in a crisis of faith find themselves especially lazy when it comes to seeking information that contradicts their (preferred) beliefs, and surprisingly diligent when it comes to seeking evidence that reaffirms them.
(This isn’t just about religion, but it happens pretty clearly there. A religious friend of mine recently went through a crisis of faith, decided that he needed to study more to decide on the truth of Christianity, and only read books by traditional Christians until I convinced him to add a few more, only one of which he read. I believe you can guess as easily as I did how his crisis turned out.)
Hi All,
I’m here for the most part because of my interest in the idea of singularity and the mechanical relation of creating consciousness in a non-traditionally-organic form. I can’t list here all of the books I’ve read on the subject, though I might be able to add a few to the list before I’m done, such as Piers Anthony’s Macroscope (haven’t checked the list yet).
I would not call myself an atheist, but a sub-proselytized human with autodidactic qualities. I do not deny religion offhand, because of its correlation with the development of science, but one of my main arguments is that humans are born without religion, and science may be instinctual. (Again, I haven't read everything.) I think that the idea of "God" is just taking abductive reasoning to an extreme.
I see Less Wrong as a form of Peckham Experiment of which I am a participant and an observer.
That said, I can tend to be short in my posts, but I will work on that.
I hope that if I say anything extraordinary, or, seem like I am delivering a failing interpretation of another work, that I will be checked out for it. (I can hear the Yoda voice now, “You Will Be”...)
I read this paragraph twice, and I know all the words, but I still have no idea what you’re saying here. Could you elaborate on any of this, or give links to a web site that does?
I’m sorry, I cannot give one link that would explain this well.
I think the key may be with the idea of abductive reasoning, that the mind can relate multiple sensory observations and come up with a correlation (a derivative if you will) of the experiential world.
I’m being short...
Hello everyone. Erik here. A while back I realized that there's a lot I don't know. I just want to know more, though I'm not sure what I want to know more of. I just want to learn more.
I am an animator, writer, comic book maker by trade.
However, I have a deep interest in psychology, mental illness and the brain, as these themes surround the art I make, and so I am interested in engaging in discussion to learn current scientific theories about these issues.
You can see the type of animation I make here: http://vimeo.com/26954632
Look forward to participating in good discussions soon.
Gwynn.D.Earl
Hi, I’m Alison. I was once a professional tarot reader and astrologer in spite of having a science degree. I recovered from that over 15 years ago and feel it would be valuable for more people to understand how I came to do it and how I changed my mind. I’m also a 45 year old woman, which makes me feel in a tiny minority on LW—though I may be mistaken.
I spent some time trying and failing to fit into a skeptic community and have been reading large chunks of the sequences for the last year or so, as well as books like Risk: The Science and Politics of Fear (and been thoroughly sucked into HPMOR).
Topics I'm particularly interested in include tackling global warming, rationality from the perspective of people with mental health issues, and tackling irrationality while maintaining polite and less arrogant discourse.
Hi, everyone! I'm Filipe, 21, from Rio de Janeiro, Brazil. I dropped out of Chemical Engineering in the 4th semester, and restarted college after one year off, with Mathematics, from scratch. I thought redoing the basic subjects, if I worked hard through them, would be a good idea. It probably would have been, but so far I've studied those subjects with the same sloppiness as before, heheh. Now I've been away from college for six semesters, due to depression, obsessive thoughts, and some suicidal tendencies. Some of this is related to a deconversion from Christianity at the age of 18: I was really devout, and lived for the religion. My father is a pastor and all my family continues to be serious about Christianity, and I'
So what if anything is the standard lesswrong approach to Nelson Goodman’s grue problem? If there is any paradox I could imagine someone posing against LW, I would imagine it would be the Grue problem.
(damn downvoters edit): Not that I think it would pose any real threat. Just curious; I'm sure LW has a brilliant solution, and if not, it can definitely be made by assembling the bits of other posts. I would really like to know why this got downvoted.
There’s a fair bit of discussion here, but I wouldn’t say it’s the standard approach to the problem. If you haven’t read Occam’s Razor or some of the stuff on hypothesis complexity, reading that might help.
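If it helps, here is a very rough sketch of the complexity intuition that material leans on. Everything in it (the predicates, the switch year T0, the 2^-length prior) is an illustrative invention, and Goodman's real bite is that the count depends on which predicates you treat as primitive, so treat this as the intuition rather than a resolution:

```python
# Illustrative only: in a language whose primitives are "green" and "blue",
# the grue hypothesis needs an extra parameter (a switch time T0), so its
# description is longer and a simplicity prior penalizes it.
T0 = 2100  # hypothetical year at which "grue" things start looking blue

def green(year):
    return "green"

def grue(year):
    return "green" if year < T0 else "blue"

for year in (2025, 2200):
    print(year, green(year), grue(year))

# Crude description lengths in primitive symbols: "grue" pays for T0.
description_length = {"green": 1, "grue": 2}

def simplicity_prior(name):
    # A 2^-length prior, as in minimum-description-length style arguments.
    return 2.0 ** -description_length[name]

for name in ("green", "grue"):
    print(name, simplicity_prior(name))
```

In a language whose primitives were grue and bleen instead, the penalty would flip, which is why discussions of the problem end up being about what makes one prior over hypotheses more natural than another.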
Hi all,
I found out about LessWrong thanks to the MoR stories. I am interested in AI and becoming rational. Have been reading the site on and off for years and enjoying it and learning a lot!
Hi. I made an account here because I wanted to make a post. Subsequently, I found this thread. This is my second post here. I may or may not make many posts on this site at all.
I identify more with Christianity than with rationalism. I have already heard that this community is going to jump on me if I say too much, so I plan on limiting my comments accordingly. As far as I know, I have an internally consistent belief structure inside the context of Christianity, and I see my ideal belief structure as the one presented in the Bible. I enjoy this site occasionally because I appreciate having clearer thought processes, and these thought processes are able to assist my understanding of God. I do wish more of the posts on this site were focused on presenting new thought processes rather than advocating the specific beliefs which those thought processes lead to. I have to skip over articles that mention evolutionary psychology because I believe that the human mind (and soul) is constructed differently.
I am visiting this site presently because I have a question best suited to this audience of thinkers. I was originally going to post on open thread, but I saw that this thread was also a space for questions, so I am including it here. I am going to avoid specifics because it involves God, so I’m going to speak in as general terms as I can manage while maintaining clarity. I am coming to you all because I seek knowledge on how to think, and I would appreciate it if you all do your best to be aware of any anti-theism biases which would label my own logical difficulties as a result of my theism.
I have a particular belief which I am fairly sure is true. This belief entered my mind about a year and a half ago. In the time since I have held that belief, I have extended it into many other beliefs which, at the time, I believed followed from it. Now, a year and a half later, I have collected a separate framework of beliefs. I am presently trying to integrate the two systems, and in light of my newer framework of beliefs, I find that the extensions of my old core belief are all incorrect—those beliefs are worth dissolving—partly because that core belief extended itself on top of other beliefs which I have since dissolved. I have currently decided that my solution is to salvage the old core belief, integrate it into my new belief structure, and weed out the obsolete extensions of that core belief.
However, I still have a problem. Integrating the old core belief into my new belief structure is awkward. I value the expectations associated with my new belief structure, and I do not value the expectations associated with the old core belief. If I snuff out the expectations of the old core belief, then I essentially snuff out the belief itself. My problem is that I believe that the old core belief is true, and even if I eliminate the old core belief, I will never eliminate my belief-in-belief about it, and resolving my expectations will potentially force me to lie to myself. I do not value walking around with a contradiction inside my head, especially if it’s a contradiction I know is there. I would not particularly like to force myself to avoid the contradiction because the belief regards a particularly important subject, and I would also not like to have a “semantic stop-sign” attached to the idea. I seek truth.
Thoughts?
Welcome to Less Wrong! I’m going to restate your dilemma in my own words to make sure I’m understanding you before I advise: you are aware of two different mutually exclusive things you could believe. You believe that the belief you currently hold is true, but you also believe that the other one makes better predictions. Is that correct?
Can’t use it
Can’t use what?
I’m having trouble with formatting. Here is what I was trying to write, less my attempts to include links:
Greetings, LessWrong.
I’m a 21 y/o Physics undergrad at the University of Waterloo. I’m currently finishing a coop work-term at the Grand River Regional Cancer Centre. I’m also trying to build a satellite www.WatSat.ca.
My girlfriend recommended that I read HPMoR—which I find delightful—but I thought LessWrong a strange penname. I followed the links back here, and spent a month or so skimming the site. I’m happy to find a place on the internet where people are happy to provide constructive criticism in support of self-optimization. I’m also particularly intrigued by this Bayesian Conspiracy you guys have going.
I tend to lurk on sites like this, rather than actually joining the community. However, I discovered a call for a meetup in Waterloo http://lesswrong.com/r/discussion/lw/790/are_there_any_lesswrongers_in_the_waterloo/, and I couldn’t help myself.
It sounds like it’s an unfamiliar system to you so it’s no problem. There should be a little help link to the bottom right of the comment box, and there is a more extensive article on the wiki.
A note: Some of us disagree about the degree of acceptability of asking for explanations for downvotes, so your requests for explanations might also get downvoted.
Maybe especially so if they consist only of the interjection “WTF”.
I’d hazard that a request for a downvote explanation has a better chance of being answered satisfactorily if it is framed nicely, and perhaps an even better chance if you first think about why the comment might have been downvoted in the first place and offer a hypothesis.
And I’d strongly recommend not downvoting anyone who answers a request for a downvote explanation. Think about how that comes across.
That was an edit to an existing comment about something else.
The grandparent of that comment contained a parenthetical, polite discussion of a previous downvote. A sense of beleaguerment should have been understandable in that context.
EDIT: That edit has now been removed. Anyone still think −2 is an appropriate score for that comment?
I stumbled over here from Scott Aaronson’s blog, which was recommended by a friend. Actually, LessWrong was also recommended, but unfortunately it took a while for me to make it over here.
As far as my descent into rationality goes, I suppose I've always been curious and skeptical, but I never really gave much direction to my curiosity or my skepticism until the age of 17.
I always had intellectual interests. In 3rd and 4th grade I taught myself algebra. I ceased to pursue mathematics not too long after that, due to the disappointment I felt towards the public school system's treatment of mathematics.
After my foray into mathematics, I took a very strong interest in cosmology and astronomy. I still remember being 11 or 12 and first coming to realize that we are composed of highly organized cosmic dust. That was a powerful image to me at that time.
At this point in time I distinctly remember my father returning to the church after his mother and sister had passed away. The first church we went to was supposedly moderate. I was made to attend Sunday school there. I did not fare so well in Sunday school. During the second session I attended, the subject of evolution was brought up. Now, I had a fascination with prehistoric animals and had several books that explained evolution at a basic level accessible to young adults, so when the teacher challenged evolution and told me that the concept of God was not compatible with it, I told her that she must be wrong about God (this was really an appeal to authority, since I considered anyone who had written a book to be more authoritative than anyone who hadn't). Well, she didn't take that well and sent me to stand in the corner. My parents didn't take well to that (both of them being fairly rational and open to science, and my mom not being religious at all, but rather trying to support my dad). And so was born my first religion-science conflict!
Once I entered high school, my artistic interests came to the foreground and pushed science and mathematics into the background. I developed my skill as a visual artist and as a guitarist. I studied music theory and color theory and played. It was enjoyable work and I took it to the point of obsession. My guitar playing especially, which I would practice for hours every night.
Eventually I decided that I wasn't happy with making art; I wanted to explore something I felt was much deeper and more meaningful. Thus began a period of self-reflection and a search for personal meaning. I decided that I wanted to explore my childhood interests, and so I began to study calculus and mechanics during my senior year of high school. It was also at that point that I read Crime and Punishment, Steppenwolf, The Stranger and Beyond Good and Evil.
Soon I found my way to Kant and Russell. They in turn led me to Frege, Wittgenstein and Quine. My desire to understand myself soon extended to a desire to understand the world around me. Shortly after turning 18, I read Quine's Methods of Logic and was surprised by how natural it felt to me (up until the undecidability part, which threw me for a loop at the time).
By that time, I had begun my major in mathematics. I took every (read every seemingly interesting) course I could to get as broad a view as I could as quickly as possible. This past year (my junior year of college) I took my first few graduate courses. The first was theory of computation. I had no prior experience with the material, everything was new. It was a somewhat transformative experience and I have to say that it was probably the most enjoyable class I’ve ever taken. I also took a graduate sequence in mathematical logic and learned the famed incompleteness theorems.
I am interested in fighting ignorance in myself and in others and I find that I like the premise of this blog. My current interests include Bayesian Probability (thanks to this site and Eliezer, and to some extent the friend who recommended it to me as well), the game of GO, physics (I am woefully ignorant of real physics, and have decided that I need to read up on it), mathematical logic, Fluid Concepts & Creative Analogies (Hofstadter), cognitive science, music, history and programming. It is not hard to get me interested in something, so the list is much more extensive than that and is highly subject to change.
Well, I feel like I’ve rambled up a storm here.
I wonder if this should list contact people for those areas, especially the ones besides SF and NYC. (I can serve for Pittsburgh.)
Hiya, thanks to everybody here for making this such a welcoming and fun community.
I've identified as a skeptic and an atheist for a few years now, but I was intrigued by the way that the Less Wrong articles I saw seemed to kick it up a notch further. "Weapons-grade rationality", I think I saw one article put it.
I’m (as of the moment) somewhat skeptical of singularity theory, but as an activist I’m interested in helping to raise the rationality waterline. My education and professional experience are in computer programming. Currently I’m serving as a Peace Corps volunteer in Jamaica.
Tom the Folksinger. My basic theory is “Everything is true in context”. I’m still sorting context. Myspace.com/tomloud
Hello Sirs and Mademoiselles. One of my many pseudonyms is Elijah Jakobi, and I have been directed toward this system of beautiful postings through (I believe) one of its members, certainly one of its followers, and due to my surplus of spare time (no redundancy intended), I have decided to become a member and speak what I see to those who are interested in listening (including myself). I suppose you may be interested in who I am, and what I may write about. I apologize if I may come off as rather vague or incomprehensible, but every single action that I take is refined, so to speak, through me, and I have thus no intent of being incomprehensible, although my use of language may indeed be rather sketchy at times, so bare through with me. I really don’t care if you want to know who I am. This site is not a networking site for social horseplay, not for fun and games and inattentiveness, so I deduce my actual person is not an issue. I prefer that you pay attention to what I mean rather than what I say, and with that quality, perhaps we can come off as more of friends rather than bickering debaters. It seems to me that being sensitive is far more important than seeking out through manual probing some sort of fault, because it is through manual effort that we are so biased. The method we use is different, not our energy. Am I becoming rather vague? Perhaps I said exactly what I should have, which is not, it seems to me, something that should be over-complicated with the hypothetical constructs. Do I sound like some sort of particular group? I may, but I have arrived where I am on my own, not through some external guide. I have not studied any theory, nor have I subscribed to any. I do not identify as a rationalist. I identify, if at all, as human, I suppose, and without any sort of governmentally official education in any pool of theories. I shall not yet give out much personal information, my age and gender included. I am, however, open to guesses at it, as long as they are serious and well executed, the term “well” being the quality of mind over which the execution took place. Not very many people live well, it seems. I am living in the Pacific North-West, at a lantern lit cherry wood desk covered in the scrawl of yellowed paper note-taking and many pages of rather healthily done calligraphy of the most Chinese variety. も
Welcome to LessWrong!
If you don’t want all your posts to be downvoted to oblivion, you may want to switch to a less self-centered, ornate and verbose writing style. As a rule of thumb, nobody on the internet cares about you (generic “you”) until given a reason to.
Not to make you feel unwelcome, but paragraphs are your friends. A little thing to remember if you want people to read what you say.
How can we know what you mean other than through what you say?
Welcome to Less Wrong!
Is English not your first language? You are having some difficulties with it. Studying writing will help you make yourself understood.
Hi there, I am saeidmhdv , I really like your weblog Mr. Norman Vincent ! and I would like to get benefit from it and contribute to it. I have already started and have spent time Translating at least 1000 best quotations gathered and given by Mr. Norman Vicent, Mr. Jim Rohn, Mr. Zig Zigular, and Mr. Brian Tracy in good, accurate, and fluent persian language and Would like to Share it with you. Please Let me know if you are intrested how I can do it?
I also want to share with you that I am in the Course of writing a book titled:
HUMAN RELIGION ENVIRONMENT : AND THE ABUSE OF SOCIAL POWER
in which I intend to look comprehensively into litretures, theories, and recent researches on the subject matter from a hosletic point of views including social psychology, ecology, sociology, history, education,...etc. I would like to seek contribution more making this work better and more valable for the man kind. If any body should care and be interested I would like to ask to establish a line of contact with me through
my weblog: saeidmahdavi.blogfa.com. my e- mail: saeidmhdv@yahoo.com. and my facebook.
I appreciate your possible interest in this and remain.
I honestly can’t tell if you are a spambot. You sound mostly like a standard smart person with a pet project and an intermediate level of English fluency, but you appear to believe this is the personal blog of a Norman Vincent (Peale? Roslyn?). Are you human?
Edit: do the downvoters believe that saeidmhdv is a bot, or do they believe that he is human but not fun to have around?
Having actually looked at the ‘weblog’, I still can’t tell if it’s a bot, human spammer, or very confused human.
Bot or human spammer or very confused human, but this post seems very much like comment spam.