Welcome to Less Wrong! (6th thread, July 2013)
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don’t let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the “send message” link on the upper right of their user page). Either put the text of the post in the PM, or just say that you’d like English help and you’ll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It’s worth saying that we might think religion is off-topic in some places where you think it’s on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren’t interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it’s absolutely OK to mention that you’re religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don’t require any previous reading:
The Allais Paradox (with two followups)
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes <180 seconds.)
If there’s anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone who helped write this post via its predecessors!
I’m Nate. I’m 23. My road here was a winding one.
I grew up as one of those “mathematically gifted” kids in a tiny rural town. I turned away from mathematics towards computer science (which I loved) and economics (which I decided I needed to understand if I wanted to save the world). I went on to become a software engineer at Google.
At the intersection of computer science and economics I developed a strong belief that the world is broken and that we could do far better if we redesigned social structure from scratch, now that we have so much more knowledge & technology than we did when we created these antiquated governments. I despaired that most people think progress means playing the political tug of war instead of building a better system. I spent a long time refining my ideas.
In the interim I missed a number of opportunities to discover this site. In 2008 I stumbled across the Quantum Physics sequence on Overcoming Bias. I read it up to the point where it was still being written, then moved on. In 2010, I found HPMoR. I read it, noticed the links to this site, and poked around a little. Nothing came of it. I caught up to where HPMoR was being written, then put it out of my mind. I had more important things to do. I had big ideas to express, and I started writing them down.
At some point along the way I realized I needed more math. To my horror, I found that the math I had been so good at as a kid was largely memorized, not deeply understood. I knew how to manipulate symbols like nobody’s business, but I wouldn’t have been able to re-invent the things I “knew” if you erased them from my mind. (In LW terms, I had memorized many passwords.) I started going back through what I thought I knew and grokking it.
During my journey, sometime early in 2012, I stumbled across the Quantum Physics sequence on LessWrong. From the summaries, it seemed like a good way to quickly evaluate how much of my QM knowledge was cached passwords and how much I had really learned. I started reading it and experienced a strong sense of déjà vu. I figured out that LW was seeded by Overcoming Bias, experienced some nostalgia, put the feeling to rest, and moved on.
Relearning math and learning to write morphed into a more general quest to promote clear thinking and better methods of deduction with a long-term goal of bridging my pet inferential gap. As I researched and wrote, this one site kept popping up in my search results—LessWrong.
Around the same time (late 2012) I heard about updates to HPMoR. I hadn’t been following it for years, but I was suddenly reminded why the site felt so familiar. I’m not exactly sure how everything fell into place, but some combination of LessWrong showing up in my research, a recollection that HPMoR was associated, and the remembered nostalgia from the Quantum Physics sequence all came together. I finally decided to see what this site was all about.
The rest is history. I tore through the sequences. Much of it was extremely validating: Mysterious Answers and Politics is the Mindkiller expressed much of what I had set out to say. I’ve always planned to cheat death. I attempted a similar dissolution of “free will” a few years back. The rest of it was largely epiphany porn.
The strongest epiphany came when I was introduced to the idea of UFAI. From my vantage point between economics and computer science, everything clicked. Hard.
I’d taken AI courses, but AI was a “centuries in the future” sort of abstraction. My primary concern was with finding a way to “refactor” governments (and create meta-governments, as I do not claim to know the best way to run a society). To me, that was The Way To Save The World™ -- until I actually thought about UFAI.
I didn’t need any convincing. I simply… hadn’t considered it before. Upon first reflection, the scope of the problem became clear. I experienced panic, and not because UFAI is scary: overnight, my Way To Save The World was eclipsed by a threat that darkens the entire future.
It’s hard to overstate how much my ideals motivate me. The AI problem shook me to my core. I’d ostensibly been trying to save the world; how could I miss something as obvious as UFAI? How could I take my ideals seriously if I’d misunderstood the problem so badly that I hadn’t considered existential threats? In light of this new information, what should I really be doing to ensure a bright future?
I went into philosophical-panic reevaluate-everything mode. That was a few months ago. I’ve done a lot of reflection. I’m still a bit shaken. I have grand ideas about how we can get to a better social structure from here and a lot of inertial passion along those lines. I don’t know nearly enough math. I feel like I’m late to the party, passionate but impotent. I’m trying to find a way to help beyond donating to MIRI. I feel outclassed here, which is probably a good thing. I’m working on getting stronger. I have a lot to do.
Hello!
...
We need to talk more.
Let’s. I’m on the east coast until Aug 11. Perhaps we can meet up after work on the week of the 12th.
(Context for others: The two of us met briefly at a meetup in June and exchanged usernames, but haven’t spoken much.)
Do you have a recommendation for how to pronounce ‘So8res’?
There’s no canonical pronunciation; I enjoy the ambiguity. My surname (Soares) is pronounced “SOAR-ees” by my family, if that helps any.
So like how a Canadian would pronounce multiple apologies? I like it.
Hello. My name is Alex. I am the 10-year-old son of LessWrong user James_Miller.
I am very good at math for my age. I have read several of the books on rationality that my dad owns, and he convinced me to join this community. I like the idea of everyone in a community being honest, because I often get into trouble at school for saying honest things that people don’t like and talking back to adults (which seems like it’s defined as not doing exactly what you’re told).
My favorite subject in school is math. At home, my interests are playing the video game Minecraft and doing origami, but I also like to read and play soccer.
I have much to learn in the art of rationality, such as finding more ways to be in flow. My dad tells me that there are a lot of people on this site who were like me as children, and I would love advice on how to be less bored in school, control my emotions, and improve myself in general.
My name is Avi, and I’m 19.
I was similar in some aspects to you when I was a kid, in particular being good at math (did calculus and programming at 12-13), getting in trouble, being bored in school, reading a lot, and having trouble with emotions.
I didn’t have an explicitly rational upbringing, and only got into it recently (9 months or so ago), after a chance encounter with HPMOR.
I’ll try to give advice on the things you asked. Bear in mind that I didn’t actually try any of this when I was in school, it’s mostly what I would advise my younger self if I had to do it over.
So, you mention being bored in school. There are at least three possible scenarios for that, which should be solved differently:
1. You have trouble concentrating or generating the will to concentrate on material that you don’t know, but think is important.
2. You think the material being taught is unimportant and therefore don’t care about paying attention.
3. You already know all or some of the material that is being taught.
I don’t really have anything for 1 aside from the standard “force yourself to pay attention”, maybe others can help.
For 3, you could consider asking (or having your parents ask) to skip a grade, or to be excused from class, if you really know everything that is being taught. (I haven’t taken any real math classes since sometime around 7th grade. I’d take out books from the library and just go through them. Also someone gave me a bunch of old Martin Gardner books about math, which are quite interesting if you can find them.)
If you absolutely must be in a class where you already know what’s being taught, try finding math questions to think about that you can memorize, so you can work on them without looking like you’re doing something else. Try http://brilliant.org/, and find your level. You should be able to easily memorize a few questions each day, and work them out mentally throughout the day, perhaps writing down the answers during recess or something. I’ve done this myself sometimes, when I had to wait for a bus and it would be awkward to read something while waiting.
For 2, you should carefully consider how likely it is that you already know, at 10, what kinds of things are likely to be important, better than whoever planned your curriculum. If you really feel that way, respond and I’ll come up with something for that, but I do think it’s unlikely.
Thanks! The 3rd scenario applies in my case, and I joined that Brilliant website. It seems to be helpful so far. I do have to participate in classes where I know everything, so what I’ll end up doing most of the time is having my dad send me to school with special math worksheets at my level that I can do during math class.
I already have some Martin Gardner books, and will be ordering more, as you are not the only person who recommended him.
Hey Alex!
When I think back to when I was your age, I really wish I had gotten more involved in math competitions. Does your school have any programs like MATHCOUNTS, AMC8, etc.? I didn’t compete in any academic competitions until high school, and I really wish I had known about them earlier on. It makes getting ahead in math so much fun and it helps lay some really important foundations for the more complicated stuff.
Anyway, keep up the good work!
Also anything by Martin Gardner, because his books are so much fun and help to spark your imagination.
At a young age, one of the most important things to develop is a habit of perseverance: not giving up when trying to solve a problem, and avoiding developing areas of learned blankness. You should develop an unfaltering confidence in using your own head when trying to solve problems. Sharpening mental capabilities and developing good mental habits and attitudes seems to be more important than learning more things (for example, the author of many AoPS books, Richard Rusczyk, thinks that it is better for kids to sharpen their minds solving olympiad problems than to learn calculus), although the desire to learn more, to build your own understanding, is also important.

And it is not necessary that the problems be mathematical in nature. For example, if you read Richard Feynman’s “Surely You’re Joking, Mr. Feynman!”, you will notice that as a young boy he loved to fix things, and everybody brought their broken radios to him. He would then fix them, seeing each one as a challenge, as a problem to solve. He had to find a way to fix it, no matter how non-obvious the problem was. I think this helped him to sharpen his mind and instilled a good habit of seeing interesting problems everywhere. If you have to think for yourself, you lessen the risk of developing learned blankness. Try to think for yourself, even if it takes much more time than simply finding the solution on the internet. In the long run, developing good mental habits is probably the most important thing.
Also check out the Art of Problem Solving books. They’ve also got some interesting resources on their website.
Also Journey through Genius by William Dunham and The Art and Craft of Problem Solving by Paul Zeitz.
I’m 29 now, but I was a lot like you at age 10. I think you’ll like it here—you might find some material too advanced, but then I still do sometimes, so don’t be too worried. You’ll pick it up as you go along.
I can tell you stories of what I was doing at your age, but frankly I don’t think it’d help much (since I did a lot of things wrong myself). The one piece of advice I’ll give you that I think might actually help is this essay: http://www.paulgraham.com/nerds.html — more than anything else, it’s what I wish I’d been able to read when I was your age. It does get better, and more quickly than you might expect.
Also, to a lesser extent, the ever-interesting Yvain posted this bit on his blog, which might help explain why what teachers do bugs you so much:
My elementary school (I’m 28 by the way, so this is some two decades ago) actually had a program for students like that; one day a week, you would be pulled out of normal class for an alternative class where the material was taught through projects and discussions, logic was explicitly both encouraged in thinking and taught as a skill, and there was basically no rote memorization. We learned games like chess and Magic: the Gathering (I had no idea how huge that game would go on to become; I wonder if the teacher still has those first-edition decks?) during our breaks from “actual” instruction, and there were basically no tests.
It was a ton of fun, but I only stayed in it for one year; the other four days a week were still boring me out of my skull. After the year in that pull-out program, I transferred to another school that had a fully accelerated / “gifted” curriculum. That was less boring—the material and pacing were both better, but I was still the top math student in the class and frequently bored there waiting for others to catch up, for example—but I missed the one-day-a-week program from the old school.
As for what I did during the mind-numbing classes, I read. Fiction mostly, but some non-fiction—I really loved “The Way Things Work” books when I was about Alex’s age—and I usually tried to make it not-entirely-obvious what I was doing. The teachers knew, of course, but as long as I didn’t flaunt what I was doing and kept my scores up, they didn’t generally care. I was bad at the participation / stupid games stuff in those classes, but I learned to read stuff way “above my level” and got way more benefit out of it than I would have from listening to the teacher drone on about how to do long division or whatever.
My school board did something similar—I did the full-time gifted class, my brother did the one day a week.
I also got accelerated to a rather extreme degree—I skipped 3 grades, and started high school at age 10. It was a mixed blessing, frankly—it got me past the “kids are pure evil” years, and turned me from the obnoxiously nerdy kid into a curiosity, which got me picked on a lot less. The material didn’t get much more interesting—once you catch up, it’s being taught at the same pace. And on the downside, it made me a lot more awkward in my high school years than I probably would have been otherwise, because the age gap meant that the usual diversions of dating and drinking didn’t open up for me until years after they had for everyone else (and when everyone else is years more experienced than you, self-consciousness sets in with dating, and slows you down even further—I didn’t even ask a girl out until I was about 18-19).
Hi, Alex!
I pretend to be named Ilzolende, and I’m 16, which puts me closer to you in age than the majority of commenters here. I’d suggest learning about common cognitive biases for general self-improvement. In terms of academic boredom, it may help to find a secondary activity that you can perform that does not interfere with your ability to absorb spoken information. Small, quiet things for you to play with in your hands without looking, like Silly Putty, are useful options.
This doesn’t always help, but trying to figure out why you feel a certain way can dampen some emotions. When I’m really angry at someone, but I don’t want to be, sometimes telling myself “my body is having an anger reaction, but that doesn’t mean I have to be upset at that person” is useful, as is directing feelings of aggression to an inanimate object. (Don’t actually attack the object, just replace any images you have of you hurting someone with you hitting (for example) a drum set.)
If you realize that you have no good reason you can think of for having an emotion, you may want to treat it as a physical problem. If I’m sad, but not due to actual external phenomena, then sometimes just reading something nice for half an hour works.
I don’t know how well this generalizes, and there may be some negative costs to playing with Silly Putty in class, so take this with a grain of salt.
Hi, I’m Amanda. I’m interning at MIRI right now. I found HP:MoR 3 years ago, and started reading the Sequences shortly after. After 2 years of high school, I dropped out, and started at the University of Kansas. Reading the Sequences probably contributed a lot to this; I was tired of feeling like I wasn’t doing anything important. Likewise, after a year at a state school, and now experiencing 5 weeks in the Bay Area, I’m motivated to get out of Kansas and back here.
I’m studying computer science, and I just finished my freshman year. I also do computer science research during the year. My advisor had me work with genetic algorithms, which, looking back now, was mainly to get me programming. My only experience was one high school class, which was predictably bad.
Anyway, I programmed a web project, and realized that I actually enjoy programming! My parents are both software engineers, so I had initially seen it as a boring 9-5 cubicle job. Later, I viewed it as a tool, useful enough to devote my studies to, but not particularly enjoyable. After working on the web app, I remember thinking, “Why didn’t anyone tell me how cool coding could be?”
I decided to intern at MIRI to help narrow down what I want to do: either working directly on FAI research, or going into startups to tackle another problem while earning to give. (I’m leaning toward the startup route now.) I’ve had a great time so far. I have a few days left at MIRI, then I’ll go to the other end of the office to volunteer with CFAR for a week, and finally I’ll end my stay in Berkeley by attending a CFAR workshop.
I decided to end my lurking in order to post some of the things I’ve been working on for MIRI. More on that to come.
Welcome to LessWrong! Sounds like you’ll have some interesting things to share. Glad to have you.
It’s not like your username sounds obviously feminine either, so how confident are you about whether a given user (except the obvious ones, say lukeprog or NancyLebovitz) is male or female?
But yes, according to the last survey, only around 10% of the people here are women, and even fewer among the most prolific contributors.
I don’t think LWers collaborate to write the survey (correct me if I’m wrong, though)... please don’t generalize the decisions of a small group to the entire community.
Edit: Oh, sorry, didn’t realize you were the OP. lol. So you wouldn’t know... and I’m not sure either.
Well, given that LW is/was* predominantly appealing to STEM-types, with a focus on computer science-y topics (artificial intelligence), decision theory etc., it’s no wonder that the gender gap here reflects the gender gap in e.g. computer science colleges:
Edit: * “was” because Harry Potter!
Welcome!
Have you tried out Vibrams? I have found them to be a delightful shoe replacement.
That feeling will fade as you read and do more. I do want to call back to something you said earlier, though:
This is where you want to end up; it’s one thing to talk a good game about biases, and another to understand them on the five second level. While reading through the sequences, it’s helpful to try to turn the epiphanies into actions or reactions, rather than just abstract knowledge.
If you are interested in putting your programming skills to work on rationality education, you might want to get to know some people at CFAR; there are a number of useful things that could exist but don’t yet because no one has programmed them. (Here’s an example of one of the useful things that does exist.)
Sort of. The main thing is identifying a situation that will trigger a behavior. For example, whenever I notice I’m the least bit confused, I say out loud “I notice I am confused.” This is an atomic action that I can do out of habit, and which will make me much more likely to follow up on the confusion. Oftentimes, this will be something like saying “event is on Saturday the 25th,” and then noticing that Saturday isn’t the 25th. This is something I really ought to get to the bottom of, because thinking the event is on the wrong day will lead to missing the event, which is totally preventable at this point if I notice my confusion.
Most people have defaults against noticing this sort of thing, though (I know I definitely did, even knowing a lot about decision science and biases). Having a specific plan of action makes it way easier to react the right way in the moment, and having a workaround for one bias is better than knowing about twenty biases.
This is a better approach, I think, but I’m leery of recommending it because enough people have trouble reading through the sequences one time that suggesting it two times seems like asking too much.
I know this isn’t true for everyone, but for me, Eliezer’s writing is really fun to read; I’ve reread many of his posts just on that basis. The Sequences do have some dense parts, but for the most part, I couldn’t tear myself away.
I applaud your pragmatic response to ridiculous social pressure.
I also prefer bare feet, though to a lesser extent. I hate wearing just socks, but I don’t mind wearing worn tennis shoes that bend easily.
Welcome to Less Wrong!
I don’t have much else to say, except that several of your “traits that normal people find weird” are ones I share:
I’ve been approaching that view myself, more and more, but I don’t think I’ve seen this talked about much here (not directly, anyway; a lot of the “Dark Arts” / manipulation discussions are applicable, though). I think it would be cool if you wrote a post or two about your thoughts on this issue. (And/or linked to any related blog posts you might have, if you’re willing.)
Agreed.
Also agreed. This view, I think many people here share.
Yes, my family has a similar reaction to the idea of not voting.
Click me!
Welcome to LW. :)
Note: the post talks about priming research. I made the following comment there:
In general, a lot of research on priming is statistically dubious. There are a few robust findings, but there’s also a lot of stuff that doesn’t hold up under closer examination.
Thanks!
Hm, well, it seems that I agree with the recommendations in the post; I use AdBlock (and get rather angry when certain websites try to guilt-trip me about doing so), and I don’t watch commercials on TV (by not watching shows on TV at all). (Here’s a question: does anyone know of a way to get rid of ads in YouTube videos?)
Of course, living in a city, it’s difficult to avoid advertisements entirely. Billboards are all over the place.
What I’d like to see are discussions about the ethics of advertisement — that is, is it just unethical for companies to use these techniques? (And if so, what forms of advertisement are ok?) Is it unethical to advertise at all? My intuitions say “yes” to the former and “no” to the latter, but I haven’t examined said intuitions very deeply.
Aha — it seems the extension you suggested is Adblock Plus (lowercase b), whereas I had been using an unrelated one called AdBlock (capital B, no “Plus”). I’ve now switched and the YouTube ads seem to be gone!
I’m sure many do; I agree with both statements. But I would caution against caching, or worse, identifying with, the belief that voting in general is pointless or otherwise not to be done.
As to my agreement with the beliefs stated: political identification is certainly a mind-killer, so it’s a good idea not to identify internally as a member of a political party. Also, the existing major parties, and their leaders, are inevitably badly flawed, but using your single plurality vote (the only one you get in most English-speaking countries) to support a third party candidate isn’t going to accomplish anything.
But I’d still encourage people to vote.
I have an ulterior motive for saying this. Personally, I feel the need to have some amount of not-entirely-rational hope to keep me going. I find some of that hope in voting system reform (which is also a gratifyingly interesting hobby). This sort of structural reform has little chance of succeeding if all the people who are unhappy with the current system become identified with not voting.
But even if you do not share my interest in this reform, I think there are times when participating in politics (which generally includes voting as one of the most basic steps) is a sensible and useful thing to do. The major parties will always be very flawed, but there are times when one of the choices on the ballot is clearly more flawed and when the power of participating is significant.
Would you caution this more strongly than you might caution against caching, or identifying with, any other comparably-specific belief?
Let’s say we agree that “participating in politics” is a sensible and useful thing to do (I don’t, for many nontrivial meanings of the phrase, but this is for the sake of argument). Is voting actually a meaningful, or effective, or necessary way to go about doing so? If so, why and how?
Are there many instances when one choice is clearly more flawed, such that you can see this in advance, and you also have a nontrivial chance of affecting the outcome with your participation?
For example, let’s say it’s 2012, and I think Obama is horrible, just horrible, and that him being re-elected would be a disaster (and I also somehow know that Romney will be a good president). I am in New York. What would you say, roughly, is the chance that with my vote, Romney takes NY, but without my vote, Obama takes NY?
Depends on what you mean by “comparably-specific”. The belief I spoke of was a generalization: that because a certain set of elections was not worth worrying about, all future elections will not be. A notable feature of elections is their variability; it is clearly the case that results vary.
A single vote is massively unlikely to affect anything important. Political campaigns, however, can have a reasonable probability of doing so. Campaigns are about convincing large numbers of people to vote in a certain way. The messages you put out about whether or not you intend to vote affect your friends. A 2012 study using a Facebook button showed that by voting themselves, individuals could bring 4.5 other voters to the polls. Obviously the specific circumstances of that study are not likely to repeat, but the overall message, that it’s about more than just your one vote, is likely to be applicable more generally. If you intend to canvass or phonebank, of course, this is even more relevant; it is likely that voting yourself is a better investment than trying to lie effectively about whether you believe individual votes matter.
Again, we’d have to define the terms, but if you have a significant altruistic term in your utility function I think it’s a good bet.
Your choices are to be a habitual voter, a habitual nonvoter, or an occasional voter based on individual calculations of the expected value of each election. Whichever choice you make is leaky; if you have friends, they will be influenced by your decision. In this circumstance, being an occasional voter seems unlikely to be rational; your outlay on calculating the expected value, and the reduced contagion of your voting decision even when you do find that a specific election is worth it, probably overwhelm the trivial effort you save by not voting.
So the question is, is it worth a few hours a year to be a habitual voter? It would be easy to overestimate the cost, but remember, this should be compared not against the most effective possible use of those hours, but against the average effectiveness of your non-work hours. In dollar terms, this is probably a lifetime cost in the high four or low five figures. There is at least 10 times that money at stake in even the most trivial local election. You have to discount that by the weight of the altruism term in your utility function and by the average difference in quality between frontrunners, but for me those terms together shrink it by less than half an order of magnitude, so I’ll ignore them.
So if there’s better than a 10-30% chance that you will participate in an election with a margin of under around 5 votes (your vote plus the net margin of your social penumbra divided by two) in your lifetime, then voting is worth it. At 4 small local elections a year for 50 years, that means that if average margins are less than about 600-2000 votes on those elections, then it’s likely to be worth it, without accounting for any intrinsic values (such as the feeling of having participated). That’s in the right ballpark.
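If it helps to see that arithmetic laid out end-to-end, here’s the back-of-envelope version as a quick script. Every input is an assumption (my rough figures from above, plus a guessed per-election pivot probability), so treat it as a sketch of the argument’s shape rather than a real estimate:

```python
# Back-of-envelope sketch of the voting expected-value argument above.
# Every figure is an assumption, loosely taken from the rough numbers in the comment.

hours_per_year = 2                 # "a few hours a year" as a habitual voter
years = 50
dollars_per_hour = 100             # assumed value of an average non-work hour
lifetime_cost = hours_per_year * years * dollars_per_hour  # ~$10k: "high four or low five figures"

stakes = 10 * lifetime_cost        # "at least 10 times that money at stake"
discount = 1 / 3                   # altruism weight x candidate-quality gap:
                                   # "less than half an order of magnitude", so ~1/3

elections = 4 * years              # 4 small local elections a year
p_pivot_each = 0.001               # assumed chance your ~5-vote penumbra decides one election

# Chance of deciding at least one election in a lifetime
p_pivot_lifetime = 1 - (1 - p_pivot_each) ** elections

expected_benefit = p_pivot_lifetime * stakes * discount
print(f"lifetime chance of deciding an election: {p_pivot_lifetime:.0%}")     # ~18%
print(f"expected benefit: ${expected_benefit:,.0f} vs. cost: ${lifetime_cost:,.0f}")
# With these inputs the result lands in the break-even neighborhood, which is
# where the 10-30% threshold above comes from (30% with the discount, 10% without).
```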
Roughly zero. And you’d multiply that by the chances that the national election swung on NY, which are also small. So great, you’ve found an example where voting wasn’t worth it. Do you think it’s safe to generalize from that example?
As I argued above, the main value of being a habitual voter is in convincing your friends to vote in small local elections; and yet you will probably spend more time talking with them about Obama and Romney than about your local sheriff or school board or judge or public transit administrator. That’s not logical, but that’s how people are.
I barely have 4.5 people that I ever discuss politics with, and all of their political views are at least as established as mine. I would be surprised if my voting brought so much as one other voter to the polls.
Good god, no!
This is contrary to my experience.
Am I really likely to spend more effort on deciding whether to vote than on deciding whom to vote for? Especially in local elections?
The problem is not that deciding to vote is itself some difficult, complex decision. The problem (well, a problem, anyway) is that in any election where I’m even remotely likely to influence the outcome (i.e. local elections), I have to spend a tremendous effort to even get enough relevant information about the candidates to make an informed decision, much less consider and analyze said information. And this isn’t even factoring in the effort required to have a sufficient understanding of “the issues”, and the political process, etc., all of which are crucial in figuring out what the effects of your vote will be.
One of my friends engages in political advocacy, votes, canvasses, researches candidates, and all that stuff. I see how much of her time it takes up. Personally, I think it’s a colossal waste of her intelligence and talents. She could be writing, for example (which she does also, to be fair, but she could be writing more), or doing something else far more interesting and productive.
Also:
How do you figure this? Why aren’t we comparing to work hours? And why are we valuing non-work hours only in money earned?
I think we’ve mostly said what we have to say, and this is off-topic.
My numbers showed that at best voting is instrumentally a break-even proposition. I do it because I find it hedonically rational; for instance, I don’t have to lie to my family about it. Part of what makes it a net plus for me hedonically is that I have a vision and a plan for a world where a better voting system (such as approval voting or SODA voting) is used and so I am not doomed to eternally pick the lesser of two evils. I can understand if Crystal makes a different decision for her own hedonic reasons.
I also suspect that metarational considerations such as timeless decision theory would argue in favor of it, because free riding on other people’s voting effort is akin to betrayal in a massively-multiplayer prisoners’ dilemma. I have not worked out the math on that, but my mathematical intuition tends to be pretty good.
Your description of your friend’s advocacy suggests you are attached to the idea that politics is a waste of time, not just for you, but for others. I suspect that belief of yours is not making you or anyone else happier. I recognize that you could probably make the converse criticism of me, but I am happy to prefer a world where aspiring rationalists vote to one where they don’t (even when their vote would probably be negatively correlated with mine, as I suspect yours would be).
I waffle about this a lot.
Sure, one effect—perhaps even the overwhelmingly primary effect—of my vote is to influence which candidate gets elected, and to use that power responsibly I have to know enough to decide which candidate would be better to elect, which requires tremendous effort. (Of course, that’s only an argument for not-voting if responsibly using my power to not-vote doesn’t require equal knowledge/effort, but either way that’s beside my point.)
But another effect is to reward or punish campaigns, which has an effect on the kind of campaigns that get run in the future, and it often seems to me that this is worth doing and requires less knowledge to do usefully.
Of course, the magnitudes of the effects in question are so minuscule that it’s hard to care very much in either case.
I think most of your points here are well made, but
Most people do not have the option to add more hours of work and thereby receive more money at the same rate. If you work a salaried 9-5, it’s misleading to calculate the value of your time as if your hours not already committed to work could be converted to money at the same rate, and even if you do work at a job that allows you to work overtime hours, you’ll generally only have the choice of whether to make that tradeoff for specific hours out of your week, not any hour as-desired.
If you’re typically employed, your work hours are already committed, so for the most part you only need to evaluate the tradeoffs on your remaining hours.
Well, all of that is actually false for me, as I can work my hours whenever I feel like, but that’s moot; I feel like your comment addresses a point other than the one I made.
What I meant was — are we stipulating that voting necessarily takes place during hours when I can’t work? Why? That seems unwarranted.
Also, I repeat this part of my question, which none of the above reasoning touches at all:
Let’s say I work a salaried 9-5, have no option to work more, and vote after I leave work.
There’s still some opportunity cost. Maybe I miss my favorite TV show or my WoW raid or whatever. Maybe I don’t get to spend as much time with my family. Maybe I get less sleep. Why should we ignore such costs?
I agree that it’s not wise to ignore the associated opportunity costs, but it’s a rather common fallacy (at least, one that’s popped up quite often here) that one’s time is fungible for money at the rate one is compensated for work.
On the other hand, for many individuals there are also likely to be associated gains, such as the fact that voting tends to be widely viewed as an effective signal of conscientiousness. Personally, whatever my feelings about the likelihood of my vote having a meaningful effect on the course of an election, I would prefer most of my acquaintances to think of me as the sort of person who votes.
I, on the other hand, would really rather not be thought of as the sort of person who votes.
Who are your acquaintances that they view voting as an effective signal of conscientiousness? Like… normal people, or something? Because that’s weird.
For someone who lives in New York? Yes. Yes it is.
(will respond to rest of your post later)
Hello. I’m Ouri Maler, or “sun tzu” on some other forums; turning 29 in August.
I don’t exactly remember when I started thinking of myself as a rationalist, but I know the core of my pro-science, pro-logic worldview was formed between the age of 8 and 10. For many years, I planned to be a physicist. In college, I studied to become a roboticist. And since that hasn’t entirely panned out, I’m currently struggling to get employed as a programmer. I also write as a hobby, and I do try to reconstruct rationalism in my current urban fantasy story, “Saga of Soul”.
Less Wrong has been on my “to check one of these days” list for a few years. It came to my attention again recently when Mr. Yudkowsky recommended Saga of Soul on Facebook, prompting me to marathon HPMoR over the past few days. I finished yesterday, and figured it was time to join the community and see what’ll come of it.
Oh hey, I have encountered this thing in the past and I think you have interacted with one of my beta readers and you promoted my friend Emily’s Kickstarter. Hi!
Hello! Unless I’m mistaken, you’re the author of Hi to Tsuki to Hoshi no Tama? I used to read that.
I am, yes, but I now consider all the webcomics I used to do embarrassing and would rather steer you towards my more recent prose, like Luminosity.
Speaking of your recent prose, what’s the update schedule on Goldmage?
Goldmage is stalled due to a plot hole. (Basically, I thought I could write about goldmagic without doing any math, and this doesn’t seem to be the case.) I don’t have an ETA on fixing it. Elcenia is not suffering from that specific problem, but my life in general is being eaten by a freeform roleplay thing I am doing that leaves me with this tendency to open story files, stare at them, and then close them.
Damn, that’s too bad. I really thought it was a clever idea. And to end on a cliffhanger! Sigh.
I haven’t actually decided to abandon the story, it just needs math to happen and a significant part of my brain wants the math to happen via magic.
I… understand? A significant part of my brain always wants math to happen via magic.
Sometimes it does! Sort of.
Well, it’s your call. But for what it’s worth, I enjoyed HtTtHnT when it was running (particularly how the protagonists handled the loss of their secret identities).
Luminosity sounds like an interesting idea, though I’ll confess I’ve never read any of the Twilight books...
Well, you could always try reading the first few chapters and stop if you don’t like it >:D
Luminosity requires no knowledge of nor affection for canon Twilight.
Oh hey, welcome! Any magical girl who takes the time to view the Earth from space has my vote, but you already know that.
Thank you! And thanks again for the link—I got around 250% as many unique views in the 48 following hours as I had in the entire preceding month.
Hello, Less Wrong! I’m Wes W., which username I’ve chosen as a compromise between anonymity and real-life-usability, since I do intend/hope to get involved in meatspace once my schedule permits.
I’ve been lurking here and working my way through the Sequences for a couple months now. I’m intentionally pacing myself, so I can process things sufficiently. (Also, it’s mildly alarming to finish reading a post and find that my brain has already vented all previous opinions on the topic and replaced them with the writer’s.) I don’t really know anymore how I found this site, because I’ve been aware of its existence for a couple years, but only recently realized both the full extent of the material here, and that I wanted to be involved in it.
I’ve been an atheist for several years, following another several years of diminishing faith in my native Mormonism, but it wasn’t until I started reading Eliezer that this felt like a good thing, rather than a loss.
I currently have a job as a math tutor, which I originally got as just a college summer job, but turned into an “oh, this is what I want to do with my life” thing, so I’m now working on becoming a teacher. So clarity of thought is especially helpful to me, since I have to know something backwards and forwards in my sleep before I can do much to help a student understand. Ideas like “guessing the teacher’s password” and “how could I regenerate this knowledge, if I lost it” have been directly useful to me, and I also hope to get better at overcoming akrasia.
I know what you mean about the author’s views replacing your own! I think it’s good to sit on your thoughts for a few days afterwards and let your excitement simmer down so your rationality can kick in and pull it apart and put it back together again, although I have a feeling that with most posts you’ll still end up conceding that your (new) view is on par with the author’s!
Hello everyone, I’m Nicholas Rutherford! I’m a 21 year old undergraduate student at the University of Saskatchewan studying pure math.
My original start to rationality is due to OkCupid (hooray for online dating!). After being fed up with the lack of people in my area, I decided to see who my top worldwide match was. (It turns out that this ‘top’ person will actually change, so I guess I lucked out.) This person’s profile was written in a very clear, well thought out manner, and the answers to their questions showed that they had a fantastic decision-making process. After chatting with them, they told me the secret to their knowledge was Less Wrong.
From there I started making my way through The Sequences (currently about 40% of the way through), reading HPMOR and lurking the general discussion board here. I also had the pleasure of attending the July 2013 CFAR workshop, which has really inspired me to focus on improving my rationality and actually being a part of the community (and not just a lurker).
This community is awesome and I can’t wait to improve it in any way I can! I mean, it is the least I can do after all I’ve gained from it :)
Hello again. I’ve been posting for a while as ModusPonies. As much as I like the old name, it’s time to retire it. More and more, I’m interacting with the community in meatspace and via email. I’m switching to my real name so that people who know me in one context will recognize me in another.
A bit late to say this, but: best username ever.
My name is Anders. I have been lurking for a long time, and have attended meetups in Boston for the last three years. I recently began commenting more frequently. This is a new account; after discussing Ben’s name change with him at the meetup today, I decided to switch to something closer to my real name, sacrificing my 20 karma points in the process.
I am 31 years old. I am a doctoral candidate in Epidemiology at the Harvard School of Public Health, where I work on some new implementations of causal models for comparative effectiveness research, particularly for screening interventions. I am originally from Norway. I attended medical school in Ireland, and worked for 18 months as a junior doctor in western Norway before moving to Boston.
On Less Wrong, I am particularly interested in the material on causality and decision theory. I am also interested in epistemic rationality and cognitive bias in general, and in the extent to which our actions are explained by signaling. In terms of mainstream philosophy, I see myself as formalist, falsificationist and prioritarian consequentialist. The “formalist” part is due to spending a year as an undergraduate student in mathematics; 12 years later, the only thing I retain from that year is a persistent belief that mainstream philosophy is underrating the importance of David Hilbert.
Hello, I’m Erin. I am currently in high school, so perhaps a little younger than the typical reader.
I’m fascinated by the thoughts here. This is the first community I’ve found that makes an effort to think about its own opinions, and is self-aware enough to examine its own thought processes.
But (and this might not be the place for this) I’m struggling to understand anything technical on this website. I’ve enjoyed reading the sequences, and they have given me a lot to think about. Still, I’ve read the introduction to Bayes’ theorem multiple times, and I simply can’t grasp it. Even starting at the very beginning of the sequences, I quickly get lost because there are references to programming and cognitive science which I simply do not understand.
I recently returned to this site after taking a statistics course, which has helped slightly. But I still feel rather lost.
Do you have any tips for how you utilized rationality when you were starting? How did you first incorporate it into your thought processes? Can you recommend any background material which might help me to understand the sequences better?
You could just try reading the posts even if you don’t understand all the jargon: over time, as you get more exposed to the terms that people use, I’d expect it to get easier to understand what the examples mean. And you might get a rough idea of the main point of a post even if you don’t get all the details. Eric Drexler actually argues that if you want to learn a bit of everything, this is the way to do it.
If you don’t understand some post at all, you could always ask for a summary in plain English. Many of the posts in the Sequences are old and don’t get much traffic, so they might not be the best places to ask, but you could do it in an Open Thread… and now that I think of it, I suspect that a lot of others are in the same position as you. So I created a new thread for asking for such explanations, to encourage people to ask! Here it is.
Thank you for the link and for starting the thread. The article made me realize that I am going about trying to understand rationality as if I have a major exam in a couple months. Reading many of the articles on here for a second time, I’m grasping them a lot better than I did before. The new thread seems like it will be immensely useful. I really appreciate you taking the time to answer my question!
Glad I could help. :)
Welcome, Erin! As Adele said, even if math is not your passion, you can still learn a lot about your own thinking from what Eliezer and others wrote. For a look back by one notable LWer, see http://slatestarcodex.com/2014/03/13/five-years-and-one-week-of-less-wrong/ . Be sure to check out Scott’s other blog entries, they are almost universally eloquently written, well-researched, charitable, insightful and thought-provoking.
Thank you for the link. I’m very pleased to find another source of such interesting ideas. I anticipate the day when I too will read the sequences and be able to say “everything in them seems so obvious.”
Hi Erin, I’m Adele! It’s good to see young rationalists here. I think you might really like Thinking, Fast and Slow by Daniel Kahneman. Daniel Kahneman is a well-known psychologist, and winner of the 2002 Nobel prize in Economics. In this book, he goes through different thinking processes that humans often use, and how they are often wrong. It is not very technical, and is a pretty easy read IMO. It might also help with some of the cognitive science stuff in the sequences.
It’s okay to not understand Bayes’ theorem for now, knowing the math doesn’t really make you that much better at being rational—there are easier things to do with larger gains. If you want to get the programming references, it might be worth learning to program. There are some online courses which make it relatively easy to get started. It’s also a good skill to have for when you are looking for employment.
One thing that has helped me a lot in being more rational is having friends who can point out when I am being irrational. Another good place to look at (and go if you can) is CFAR, whose point is basically to help you get better at being rational.
Thank you for the resources! Kahneman’s book looks very interesting, and luckily my library has it. I’ll check it out as soon as possible. I am planning on taking a Java Programming class next year. Does Java have the same set up/structure/foundation as the languages that are referenced on here? What would you say is the programming language that is most relevant to rationality (even if it isn’t a good beginning language)?
I definitely recommend learning to program in a different language before you take your Java class. Java makes things more complicated than they need to be for a beginner, so it’s good to have a conceptual foundation in a simpler language. If all you care about is being able to reason abstractly about recursion and that sort of thing, Scheme is a language that’s good for beginners and will teach you to do that. (You could download this and read this free book or this free book.) If you want to focus more on kicking butt in your Java class and building games/web applications/scripts for automating your computer, I recommend learning Python (I like this guide; here’s another free book). These are both great choices compared to the languages people typically start learning to program with. I would lean towards Python because the resources for teaching it to yourself are better (there’s a Udacity class, the community online is bigger, etc.) and it will still give you most or all of the rationality-related benefits of learning to program. Search on Google or talk to me if you run into problems (teaching yourself is tough).
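If you want a tiny taste of the kind of “reasoning abstractly about recursion” I mentioned, here is a sketch in Python (purely illustrative; the same idea looks nearly identical in Scheme):

```python
# A tiny taste of reasoning about recursion: a function defined in terms
# of a smaller instance of the same problem.

def factorial(n):
    """Compute n! = n * (n-1) * ... * 1, with factorial(0) defined as 1."""
    if n == 0:
        return 1                    # base case: nothing left to multiply
    return n * factorial(n - 1)     # recursive case: shrink the problem by one

print(factorial(5))  # prints 120
```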
Awesome! Pretty much any language will give you enough background to understand the programming references here. I agree with John that Scheme and Python are good languages to start with. The most rational language to use depends a lot on what exactly you are trying to do, what you already know, and your personal style, so don’t worry about that too much.
Hello and welcome!
I don’t know about the sequences in general, but for Bayes’ Theorem you could try Luke’s An Intuitive Explanation of Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem.
I’ll throw in a couple more explanations as well. (It’s hard to know in advance which one might make the idea click neatly into place!)
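And in case raw numbers click faster than prose, here is the classic mammography example worked through as a tiny script. (The percentages are the stock illustrative figures used in the intuitive-explanation essays, not real medical data.)

```python
# Bayes' theorem on the classic mammography example; illustrative numbers only.

p_cancer = 0.01               # 1% of women in the screened group have cancer
p_pos_given_cancer = 0.8      # the test catches 80% of actual cancers...
p_pos_given_healthy = 0.096   # ...but also flags 9.6% of healthy women

# P(positive) = weighted sum of the two ways to get a positive result
p_pos = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_healthy

# Bayes: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(cancer | positive test) = {p_cancer_given_pos:.1%}")  # about 7.8%
```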
Thank you both! Just from starting to go through those explanations, Bayes’ theorem is making a lot more sense, and I’m also starting to see why everyone is excited about it.
Hi! HPMOR brought me here. I now spend about as much time telling people to read it as I do discussing the weather with them. I’ve read about half of the sequences. I lurked for a long time because I often find that getting involved in discussions blurs my ability to think objectively. Right now I’m working on a Litany Against Non-Participation, as well as taking gradual steps towards participating more, in an attempt to remedy this. I’m very interested in learning how to ask better questions.
I’m entering my fourth year of an interdisciplinary-or-is-it-multidisciplinary program at McMaster University in Hamilton, Ontario. Basically, I’ve chosen to focus my formal education on skill development (reasoning, writing, researching, etc.) instead of specialized content acquisition (that’s for my spare time).
For at least the last five years, I’ve been a philosophy-based thinker. Most of my courses were non-philosophy, but I took them to aid with my philosophical education. Sort of like how a guitar player might learn piano to improve their music theory and develop new musical ideas. I have a (very idealistic) vision for philosophy, one in which philosophy is the ‘highest’ discipline that makes space for only the most educated and able. In most cases, I think that philosophers should embrace scientific knowledge and methodology, and stop the pointless quibbling over matters that they are not qualified to address. For instance, I’m quite frustrated by the lack of understanding of modern social psychology and sociology in political philosophy and ethics.
I’ve recently concluded that completing an undergraduate education in philosophy is not worth my time, and I totally agree with lukeprog’s diagnosis. Moving forward, I am going to attempt to transition into a science-based thinker. I’ll learn the same material, but to a different end. Maybe I’ll save philosophy later.
I’m very grateful to LW. I’m a better thinker than I was a year ago, and I’ve finally been able to shed some of the old beliefs that have been holding me back from reaching my potential as a rationalist. Feels good. Thanks y’all!
That’s definitely true. But there is an advantage to posting. Often, I’ll have an idea and start to write it out. But then, I realize that it’s not quite up to my internal “less wrong standards.” So, I’ll start refining the idea, and end up with a much better one than I started with.
Or I’ll find out that the idea isn’t as good as I thought it was, and end up not posting.
My name’s Noah Caldwell, I am a lesser being who currently resides in rationalist Hell. That is, I am a minor (17 years) and I live in Tennessee (not by choice (it’s not THAT bad here, though)).
I was in a program called TAG (Talented and Gifted) in elementary school, and my mother once said I have a genius IQ, which, despite meaning little because you can’t represent intelligence numerically, remains highly flattering. It may have contributed to a very, very minuscule ego (or so I like to think), but it’s made me believe I can do better in anything: Tsuyoku naritai! Whenever I have an interest, I pursue it; I’ve been like that for a long time. So the net gain was, I think, worth it, even if her statement may have been untrue.
I am currently trying to do well in school while shoving as much coding, science, math, language, music theory, and history into my head as I can. I plan on getting a ham radio license very soon. I’m also trying to cleanse myself of bias now. My dream college would be MIT, but that is one heck of a reach school, no matter who you are. I also need to figure out how to insert my little segues into my monologue without parentheses, because wow does that look weird. Maybe I’m just being self-conscious. (But that’s a GOOD THING!)
The traditional recreational activities I partake of include reading, piano, backpacking, and videogames (I’m digging into the original Deus Ex with delight right now). I also need to read the sequences; I’ve only sampled bits and pieces like an anorexic at a chocolate buffet.
If you come to visit MIT, and you happen to be around campus on a Sunday, we’d love to have you at one of the Boston meetups. Also, if you want to talk to some MIT students or alumni, let me know and I’ll see if I can put you in touch.
I sometimes forget how much untapped potential in terms of networking opportunities Less Wrong holds.
I didn’t realize it at the time, but that’s further incentive to attend MIT: I can actually go to LW meetups!
I don’t see myself touring the school any time soon (I’ve done plenty of research via the admissions blogs and other testimonials, and plane tickets happen to be expensive), but I would love to discuss any peculiarities you don’t learn about until you’re a student, or anything else I should know before applying.
I might also take you up on that offer if you are willing. I’ve been considering MIT as a university since I heard that it has an insanely good Bio (and everything else) program. I’m currently getting my citizenship, reporting as a birth abroad (I’m 17 and have all the necessary qualifications), and I want to do better than attending the ULeth Bio program; while it is decent, it’s nowhere near as good as MIT or any of the good universities in the States. Sorry if I seem overeager; it’s just that things are a little stressful for me at the moment, having to pick a university. Sigh. According to my friends I am insanely lucky, but I want to do better than chance.
Hi. I’m a software engineer and history enthusiast. Been reading for years, and just recently got around to making an account. Still building up the courage to dive in, but this place has done wonders for reducing sloppy thinking on my part.
Hi, Antiochus. What areas of history are you interested in? I’m similarly interested in history—particularly paleontology and archaeology, the history of urban civilizations (rise and collapse and reemergence), and the history of technology. I kind of lose interest after World War II, though. You?
Any and all! Though I have a lot of interest in military history in particular, which led me to wargaming, with some specialized interest in the Hellenistic period and the ancient world in general, medieval martial arts, and the black powder era of linear battles.
Sad to say, my only experience with wargaming was playing Risk in high school. I’m not sure that counts.
Hello, LessWrong. I’m an 18-year-old recent high school graduate with an interest in computers and science and nerdery-in-general. A summary of your-life-until-Lesswrong seems to be the norm in this thread, so I suppose that’s what I’ll do.
I was born and raised Mormon. About as Mormon as they come, really- nearly all of my relatives practice the religion, and all of the norms and rituals were expectations for me- everything the church said was presented as fact, and everything the church did was something my family participated in, right up to the five-in-the-morning seminary classes in high school and the obligatory two years of preaching about the church (for the boys, at least, and I was one). My social group was almost entirely composed of members of the church as well, which meant I was almost never exposed to ideas that wouldn’t be discussed either in a church or by public school teachers. All this to say that I managed to really, truly believe it- right up until I was around 14, which is when I got my hands on a means of unsupervised internet access. I was honestly surprised by how normal things seemed outside that bubble in which I had grown up. Everything seemed strange and terrifying at first (and still does on some level), but… the people didn’t seem all that different. Which wasn’t necessarily a good thing- talking to them wasn’t any more appealing a prospect than talking to anyone I’d grown up with- but still.
I was one of those ‘gifted’ kids in elementary school- the ones with the college-level reading skills in fourth grade. I took pride in that- my ability to memorize things, my ability to understand how they worked before anyone else did. I spent a long time poring over scripture and religious texts, trying to find explanations for how it all worked- souls, miracles, the world-in-general, (I guess my brain really does have a rationality-shaped hole) but I never found anything. The adults told me to pray, but that didn’t even seem like it should have worked (I tried it anyway, of course- nothing ever happened).
Once I started reading things that hadn’t been filtered through the church, though, I started to think that maybe they didn’t have any answers- and then stopped. I couldn’t let myself think that, thoughts like that were bad, thoughts like that were questioning and I had been explicitly warned against that many times- thinking about doing something was almost as bad as doing it, after all, and everyone who stepped up to the podium talked about how they “knew” the church was true, how there was “not a shadow of a doubt” in their minds. My mind had more than shadows. I couldn’t outright lie about that of course, that would be even worse, but I could say I believed- I couldn’t say I knew, I didn’t have sufficient evidence to know, I’d never seen a miracle- but belief was different, right?
I’m not sure when that period ended- I think it was more a gradual transition, but by the time I turned 16, I was full-on agnostic. I didn’t tell my parents this until a half-year later, of course; I was terrified of what they’d do, but it was progress nonetheless. When I finally did tell them, I was surprised by how calmly they took it- judging from the conversation afterwards, I don’t think my dad ever really believed it- he told me that he knew of no barrier to continued participation even if I didn’t believe, and that no, there wasn’t enough evidence, but religion wasn’t about that. I wasn’t sure what it was supposed to be about in that case, but whatever.
I spent my free time the year or so after that thinking about things, because there were so many new things I was allowed to think about and question! I didn’t even realize some of those things had questions you could ask about them! Like gender. Questions about that… Turned out to have inconvenient answers, which I need to get around to dealing with, but whatever. (Edited-to-add that I meant this in the ‘whoops I’m a girl apparently’ sense) I also spent a lot of that time angsting about how I didn’t have an afterlife to look forward to, and how I wasn’t going to live long enough to see even one exoplanet, and maybe it’d be preferable to die now instead of dealing with all that.
This continued until about the time I stumbled across HPMoR, which succeeded in kicking me from agnosticism to atheism, and hitting me in the face with the realization that I was allowed to want to live forever. My problem was then that I didn’t see a practical means of achieving that, but then I ended up at Lesswrong a few months later and concluded that working on AI was probably the way to go.
And now I have read all the major sequences, which was interesting- I had a sort of hazy, intuitive-level understanding of a lot of the concepts, and as I read they sort of sharpened to the point that I could think about them explicitly. A lot of them introduced completely new ideas though, like Alicorn’s Luminosity sequence- the idea of getting better models of myself just hadn’t occurred to me, and has proved very useful- figuring out what causes me to feel boredom, for example, managed to get my brain to sneeze out something resembling an actual work ethic, which might be the single most valuable thing I’ve gotten out of Lesswrong so far, really.
I’ve started reading some of the recommended literature, like Thinking, Fast and Slow and QED and… That’s about where I am now. I have run out of other things to do, so I figure I’ll try and start participating, and see where that takes me.
So, since it seems like welcome-thread posts should have a greater density of hellos than one per thousand words, Hello!
(tl;dr I tried to introduce myself but instead of a long introduction I ended up with a short autobiography, sorry)
(Wow this was melodramatic, I apologize)
Hello and welcome to Lesswrong!
That’s quite the journey! You’ve come a long way under your own sailing power, it seems, and trust me, you aren’t alone here. You’ll find plenty of others who’ve made similar trips out of unquestioning dogma into exploration and experimentation. We each have a different life and learning, certainly. But many here share similar backgrounds (religious cultures, advanced at a young age, high intelligence compared to their peers) and many share similar resources (the Internet as a connection tool, HPMoR as a gateway to the community). We are certainly glad to have you join and add your unique view to the conversation.
Glad to see you’ve already dove headfirst into some of the resources. I usually try to suggest the Sequences to new people, but I see you’ve beaten me to the punch! Yes, it’s not uncommon to come away from some of the posts thinking “I KNEW that. I just didn’t know how to frame it.” That intuitiveness helps with introducing some of the harder concepts that get discussed here, and can encourage people to experiment with ideas and expand on them. After all, we aren’t here to talk about how smart Yudkowsky or Yvain or Alicorn were when they wrote this or that. We’re here to do better.
This is certainly a place where questions are welcomed! Living forever, gender, boredom, we’ll discuss it all. Politics, of course, tends to be handled like unexploded ordnance, but as long as the conversation is well reasoned and beneficial, we welcome it. You probably already know the site layout, but if you’d like to start contributing to the conversation with your own posts, visit the latest Open Thread. It’s a good place to start posting because it will let you get a feel for the standards and norms of the community and the types of conversations we have. Also, posting comments on other posts is a good start as well. Once you get settled into the milieu, you can start branching out, posting articles of your own, starting larger discussions, contributing more if you desire.
Oh, and, depending on where you live, you may want to look for a local LessWrong meetup group. These are great places to meet fellow LWers, talk shop, engage in rationality workshops, heated discussions, or just fun activities. If you live in a place without a meetup group and you’re feeling particularly driven, you can also start up one of your own. Starting up a new group’s a fun, exciting activity and, who knows, could be the start of a big movement in your own community.
Anyway, enough spiel from me. Definitely glad to have you join the community, and thanks a lot for sharing your story. I hope to hear some of your questions (and answers) very soon.
Glad to have you with us!
Hello, I am Jay, a 16-year-old incoming high school senior (I skipped a grade, if anyone cares). The way I came across this site was through reading an article about a certain thought experiment I don’t want to mention, because I don’t want to piss anyone off in my first post. (If anyone knows what I’m talking about: is mentioning that thought experiment on Less Wrong still banned? I do find it very interesting.) Anyway, what drew me to this site was the quest for answers. I have been seeking and contemplating the answers to life, the universe, and everything in between for a while now. Have I been doing this in a logical or rational way? No, I have simply been walking through the everyday motions of life in an autopilot state, with no real purpose or goals, wondering what the hell I should be doing with my life. Lately, I have realized that if I want to find meaning in my life I will actually have to strive to find it. I can’t sit around waiting for answers to come to me. That is, for the most part, why I have come to this site. I want to learn, to see if I can find out the purpose of living in this strange universe, and to pick up some interesting things along the way. That is all. If anybody has recommendations as to what I should start out reading on this site, that would be greatly appreciated. Thank you.
Hello, and welcome to LessWrong! If improving is important to you, as it sounds, then I’m sure you will find this site quite useful.
First off, I’m pretty sure you’re speaking of Roko’s Basilisk. As far as I am aware, the ban on the basilisk has diminished/dissolved in light of (a) the Streisand effect, which made further attempts to ban it just more fuel for the fire, and (b) the fact that the issue is quite thoroughly solved and no longer very dangerous except in terms of misconceptions (see Streisand effect above). It is still a sore issue, partly because of the bad ways in which it was handled by different parties, but also because people are just tired of hearing about it. No one’s going to shoot you for mentioning it or asking about it, but do be aware that the topic has been pretty well hashed out. It’s not some minotaur lurking in the labyrinth. We’re just tired of revisiting it.
As for recommendations, the Sequences are a good place to start. I don’t know how much you know about the culture around here, so, to briefly explain: the Sequences are mostly written by Eliezer Yudkowsky, whom many around here regard as one of the major (if not the major) spokespersons for LessWrong’s central ideals and concepts. The Sequences are an organized listing of some of Yudkowsky’s writings, analyzing different topics of LW interest.
They are long. I just finished the Sequences myself and it took about five months with several breaks in between and various reading speeds. As iarwin1 mentions, there are other versions of the Sequences that can help ease you in without being overwhelming. You might also check out the LessWrong References and Resources page for other sources of LW materials.
Given how long the Sequences are, I’d honestly suggest against just diving head first into them unless you already have a strong desire to read them all. You’ll get burned out. Instead, look through the topics and related materials, find the things that interest you, and just check them out. You mentioned you’re interested in improving yourself? Read a little of Mysterious Answers to Mysterious Questions. This is a good beginner’s sequence for learning some of the key concepts of rationality. If you want some help in making your own life better or figuring yourself out, check out The Science of Winning at Life or Living Luminously. Don’t try to learn everything at once. Find the things that interest you, take them one at a time, enjoy learning and improving on what you find.
And, finally, definitely get involved! You’ve already taken step one, so don’t feel you have to stop at saying “hi.” The Discussion board is a great place to see the day-to-day conversations that go on here. Check out the latest Open Thread to see what sort of casual conversations we have. Don’t be afraid to be part of the conversation. The site’s karma system sometimes gives new visitors a fright: they think of something they said getting downvoted and they shrivel up. But remember, unless you’re the victim of a downvoting troll (note: quite the rare event, and more a cause for laughter than for tears), getting downvoted is just an opportunity to learn and improve, not a personal attack.
I don’t know how much you know about LW and its culture (though I’ve obviously assumed quite a bit given the length of this post!), but the best suggestions I have are: find what interests you, read it, and, when you feel comfortable, add to it.
Wow, thank you for the awesome reply. If all the people in the Less Wrong community are as friendly and as knowledgeable as you are, then I have obviously joined the right site. You were right, I was talking about Roko’s Basilisk, and since it is okay to mention it, here is the article that introduced me to this site, if anyone is interested. I will definitely check out the Sequences in addition to the articles you suggested. There is so much interesting information on this site that it is hard to know where to start. One question I do have is: what exactly is the importance of decision theories? That is another thing that I am interested in. Are they applicable in real-life situations, or only in thought experiments? What is the importance of finding a perfect decision theory? I know the basics of Causal and Evidential Decision Theory, but I am baffled by Timeless Decision Theory. If you could point me in the direction of articles on these issues, that would be greatly appreciated. Thank you again for the thoughtful and useful reply; it helped a lot.
Edit: I started reading Mysterious Answers to Mysterious Questions today and found it so engaging that I didn’t stop reading until I finished it. It was definitely a mind opening experience for me as I was exposed to a plethora of ideas and biases that I had no idea existed. I am definitely going to try reading the rest of the Sequences now.
Three motivations are common around here:
* Building a Friendly AI that is based on decision theory.
* Understanding what ideal rationality looks like, so we have a better idea of what to aim for as far as improving our own rationality.
* Curiosity. If we knew what the perfect decision theory was, many philosophical questions may be answered or would be closer to being answered.
For some relevant posts, see 1 and 2.
Thank you for the clear and informative reply.
If you want to get a handle on the “Less Wrong” approach to decision theory, I’d recommend starting with Wei Dai’s Updateless Decision Theory (UDT) rather than with Timeless Decision Theory (TDT). The basic mathematical outline of UDT is more straightforward, so you will be up and running quicker.
Wei’s posts introducing UDT are here and here. I wrote a brief write-up that just gives a precise description of UDT without any motivation, justification, or examples.
Just wanted to say you’re off to a great start posting to LW—asking very good questions!
(Also, please break posts like this into more than one paragraph.)
Thank you, I’m just trying to learn all I can.
One of the main functions of a good decision theory is to bridge the territory-map divide: by solving problems in your head, it shows you how to solve problems in the real world. You can identify a good decision theory when it works in theory and in practice. If a decision theory seems to work in practice, but is not describable in a precise language (e.g. “do what feels good”), it actually hasn’t been well thought out and puts you at risk of being paralyzed when a very serious and very complex situation arises. On the other hand, if it only works in theory but is impracticable (e.g. “pray to Minerva for an omen”), it will be a waste of storage space in your head. In short, a decision theory should serve as a tool for you to manage your life.
TDT just augments CDT by saying that running two copies of the same algorithm with the same input will always yield the same result.
What? No it doesn’t. That’s not remotely what TDT says. That isn’t even a claim with particular relevance to decision theory.
Hmm. It does capture most of the essence of TDT, doesn’t it? See for example the last paragraph of chapter 12 and the last two paragraphs of chapter 13.3 in the TDT paper. I disagree with the “just” in the grandparent, but given e.g. “mostly”? Maybe I’m reading too much into the one-sentence description, though.
No. Most of the interesting applications of TDT are about producing the same (or complementary) outputs with different inputs. Moreover, that description doesn’t even imply making a correct decision on Newcomblike problems (the motivation for producing TDT in the first place). In fact, CDT augmented by the assumption that two copies of the same algorithm with the same input will always yield the same result yields CDT.
To get closer to an (oversimplified) ‘essence’ of TDT I’d instead suggest building from the title. CDT augmented by not caring about which point on the time dimension you are in.
Although neither of these articles is on LessWrong, they reflect the core moral values of many LW members.
Astronomical Waste
Consequentialism FAQ
Thank you for the reply. I will be sure to read these articles.
Welcome! I don’t know so much about reading materials for finding purpose, but as an intro to rationality:
I happen to like Benito’s version of how to read the Sequences, but other people like other formats, and some don’t like the Sequences much at all (the writing style doesn’t work for some).
CFAR’s reading list, and maybe their videos; you can also see if you can get into SPARC.
Thank you for the recommendations I will be sure to check them out.
Oh yes, and check out hpmor.com.
I’m a 17 year old female student in Singapore, currently in my last semester in high school. I’ve been lurking around this site for at least the past year, and have made my way through some of the beginning sequences. However, what really made me want to stick around was lukeprog’s post on How To Be Happy. Funnily enough, I don’t think I’ve deliberately taken up any of the suggestions, though I have realised that my slow path to extroversion over the past few years contributed significantly to my baseline happiness increasing, as has my recent focus on writing. I guess one could say that my focus when reading this site is instrumental rationality, or basically what can I glean from here to make my life the way I want it to be.
Recently, however, I’ve been unable to focus as much, because a small part of my mind seems constantly devoted to panicking about college. I’m planning on studying computer engineering in university, and I’m fully confident that I will get into the two local universities of my choice. I’m aiming for US universities too, and getting into them is very important to me, because I’m gay. I’m well aware of Singapore’s active scene in that regard; it’s just that staying here for university means I’ll be living in my parents’ house for at least four more years, and actively lying to them and hiding what I do stresses me out greatly.
I’ve always been able to succeed academically even with this kind of stress, but trying to write college essays while panicking over the possibility of being stuck in this house trying to pretend that I’m not gay or an atheist is not very productive. Neither is the panic over whether my stats are good enough to get into the kind of universities that would justify my parents letting me go to the US.
I suppose one would note I’ve written very little about epistemic rationality, mostly because as fascinating and illuminating as I find it, I’ve often used reading about it as a distraction from my panic and doing work. I’m keeping my efforts focused on ‘winning’ right now. I’m not really sure I identify as a rationalist, as I don’t feel competent enough to claim such a title. Right now, my goals are getting into university and trying to decrease my risk-aversion, as the latter has often prevented me engaging in social events that would improve my mood and/or stretch my social skills.
You sound pretty rational to me. I think that if you identify yourself as a rationalist, then you are one. Yes, there is a lot of effort in becoming really good at it and overcoming human irrational biases, but if you think it’s a worthwhile objective, then you are a rationalist, I think. I wish you luck in getting into a US university so you don’t need to suffer the stress of hiding your sexual orientation from your parents for four years. Of course you’ll eventually have to tell them (well, ideally), but I assume it will be easier when you are four years older and more self-sufficient. Best of luck to you. I think you’ll find this site a pretty objective outlet for your feelings on the matter, as long as they are well thought out. I’m not gay myself, so I don’t presume to know the challenges that you face, but I’m willing to listen to what you have to say and give my honest thoughts in a helpful way. I think that most here are pretty open-minded, because they are rationalists. I don’t presume to speak for the community here, but that is the impression I get. As long as you support your assertions with good reasons and are willing to explain your feelings and engage in argument, I think you’ll find support here (again, not a promise, as I don’t speak for others, but my impression of the way things work here).
Hello all, my name is Glen and I am a fairly long-time lurker here. I first found this site through the Sword of Good short story, filed it in my “List of things I want to read but will never actually get around to,” and largely forgot about it until I recognized the name while reading HPMOR. I’ve read most, but not all, of the sequences and am currently going through Quantum Mechanics. I’m Chicago-based and work as a programmer for an advertising company. I consider myself a low-to-mid-level rationalist and am working at getting better.
I run or play in a wide range of tabletop games, where I’m known as being a GM-Friendly Munchkin. That is to say, I like finding exploits and unusual combinations, but then I talk to the person running the game about them and usually explain why I shouldn’t be allowed to do that. It lets me have fun breaking the system without actually making the game less fun. I’ve also used basic information theory to great effect, unless the GM tells me to knock it off. Currently in love with Exalted. Been burned by Shadowrun in the past, but I just can’t stay mad at her.
We’re curious how you’ve used information theory in RPGs. It sounds like there are some interesting stories there.
The most interesting stories come from a power in Exalted called “Wise Choice”. Basically, you give it a situation and a finite list of actions you could take, and it tells you the one that will have the best outcome for you within the next month. It also requires a moderate expenditure of mana, so it can’t be used over and over without cost. When I read what the charm did, I thought of Harry’s time-experiment with prime numbers. It was immediately obvious that Wise Choice could factorize any number easily, although perhaps not cheaply if it has a large number of factors.

From there, it also expanded to finding literally anything in the world, either with one big question (if low on mana) or a quick series of smaller ones (if low on time), by dividing the world into a grid and either listing every square or doing a basic binary search via asking the power, “Given that I’m going to keep dividing the world in half and asking a similar question to this one, which half of the world should I focus on to get within 10 feet of Item/Person X’s location at exactly 7 PM tomorrow evening?”

I also figured out that you can beat the one-month time limit by pre-committing to asking the same question in 27 days, and having someone else promise to give you a reward if you state the same thing each month, with the caveat that you have to give it all back if you’re proven wrong in the end or change your answer. This can be shown to work (assuming I haven’t made a mistake) by taking a simple case of there being two boxes, one containing ten million dollars and the other being empty. By choosing a box now, it will be opened in six months and you will be given what is inside. Without the trick, Wise Choice looks forward one month, sees no difference, and tells you “it doesn’t matter.” With the trick, Wise Choice looks forward a month and tells you to say what it sees future-you saying, even though it doesn’t “understand” why. However, future-you can see an additional month forward, and uses it to see future-you+2, etc. Therefore, the first instance gives you the true box, even though it can’t see to when the box opens.
Of course, it’s possible that I’ve missed a possible case that makes those tricks invalid. I don’t have access to an actual infinite-knowledge superpower to check my work, but I figure telling other people about it so they can see things I missed is almost as good.
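(For the curious, here’s a minimal toy sketch of the binary-search half of the trick in Python. The wise_choice function is a hypothetical stand-in for the in-game oracle; since we have no actual magic to call, it cheats by peeking at the target, which is what an omniscient power would simply know:)

```python
# Toy model of the "find anything in ~log2(n) oracle queries" trick.
# wise_choice() stands in for the in-game power: given a list of
# options, it returns whichever one leads to the best outcome.

def wise_choice(options, target, lo, hi):
    # Stand-in oracle: cheats by peeking at the target.
    mid = (lo + hi) // 2
    return options[0] if target < mid else options[1]

def locate(target, lo=0, hi=1_000_000):
    """Halve the search interval once per oracle query."""
    queries = 0
    while hi - lo > 1:
        answer = wise_choice(["lower half", "upper half"], target, lo, hi)
        queries += 1
        mid = (lo + hi) // 2
        if answer == "lower half":
            hi = mid
        else:
            lo = mid
    return lo, queries

print(locate(314159))  # -> (314159, 20): ~log2(1,000,000) queries
```

Twenty queries pin down one grid square among a million, versus a single enormous question listing every square.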
If only I didn’t promise Eliezer to refrain from commenting on this...
Promise violated.
Can someone link to this? I’m new. Also I see what he did there.
http://lesswrong.com/lw/h3p/welcome_to_less_wrong_5th_thread_march_2013/8ood
http://lesswrong.com/lw/r5/the_quantum_physics_sequence/
This is the root level of the sequence, and it links to all of the posts, I believe.
Hello. I’m Leor Fishman, and I also go by ‘avret’ on both reddit and ffn. I am currently 16. The path I took to get here isn’t as...dramatic as some of the others I’ve seen, but I may as well record it: For as long as I can remember, I’ve been logically minded, preferring to base hypotheses on evidence rather than to rest them on blind faith. However, for the majority of my life, that instinct was unguided and more often than not led to rationalizations rather than belief-updating. A few years back, I discovered MoR during a stumbleupon binge. I took to it like a fish to water, finishing up to the update point in a matter of days before hungrily rereading to attempt to catch whatever plot points I could glean from hints and asides in earlier chapters. However, I still read it almost purely for story-enjoyment, noting the rationality techniques as interesting asides if I noticed them.
About a year later, I followed the link on the MoR website to LW, and began reading the sequences. They were...well, transformative doesn’t quite fit. Perhaps massively map-modifying might be a better term. How to Actually Change Your Mind specifically gave me the techniques I needed to update on rather many beliefs, and still does. Both Reductionism and the QM sequence, while not quite as revolutionary as HtACYM for me, explained what I had previously understood of science in a way that just...well, fit seems to be the only word that works to describe it, though it doesn’t fully carry the connotation I’m trying to express. Now, I’m endeavoring to learn what I can. I’m rereading the sequences, trying to internalize the techniques I’ll need and make them reflexive, and attempting to apply them as often as possible. I’ve gone pretty far—looking back at things I said and thought before makes that clear. On the other hand, I’ve still got one heck of a ways to go. Tsuyoku Naritai
Welcome! I’m also 16. Welcome to the group of people who answer “no” to the “were you alive 20 years ago” question on a technicality. It’s really great to know about risk assessment errors and whatnot when we’re still teenagers, just because the bugs in our brains are even more dangerous when ignored than normal.
Not only that—the greater degree of neuroplasticity that I think 16-year-olds still have (if I’m wrong about this, someone please correct me) makes it a good deal easier to learn skills and ingrain rationality techniques.
As a fellow 16-year-old (there really seem to be a lot of us popping up around here recently), I concur. With that said, rationality skills are difficult for anyone to learn, because the human brain did not evolve to be rational, but rather to succeed socially. I would add that a good deal of rationality potential is ingrained in those who find themselves attracted to LW at a young age, particularly since surveys have shown that LW users tend to have a higher incidence of Asperger Syndrome, the symptoms of which include social awkwardness. This suggests to me that rational thinking comes more easily to people with certain personality types, which is arguably genetic. As a single data point, I suppose I’ll add that I myself was diagnosed with Asperger’s when I was younger, although with how trigger-happy American doctors are with their diagnoses these days, that’s not really saying much.
That’s an interesting correlation, but I’m curious about the causal link: is it that a certain type of neural architecture causes both a predisposition to rationality and Asperger’s, or does the social awkwardness added on to the neural architecture create the predisposition? I.e., I’m curious to see how much being social affects rationality. I shall need to look into this more closely.
On the subject of potential causal linkages:
I think that at least part of the reason us diagnosed autistic/Asperger’s people are more prevalent on LessWrong is that those of us diagnosed as children spend a lot of time with adults who think that something’s wrong with our mental processes, often without telling us why.
I know that I picked up on this, and then when I heard about cognitive biases, I jumped to the conclusion “These are what’s wrong with me, but if I read more about them, then I can try and correct for them.” Then, I looked up cognitive biases, found the Overcoming Bias blog, decided it was more economics than I could handle, and then I ended up here, because it had less real-world economics.
Test: See if more LWs were incorrectly given a psychiatric diagnosis as children than members of the general population were.
Sounds useful. A survey, perhaps, or maybe a poll?
We could try and get Yvain to include this question in next year’s survey, which is the best obvious way to get an unbiased sample. However, it does involve waiting months for data, so if you’re in a hurry, you could poll the forums now.
Oh how I wish I had access to this kind of material when I was 16.
Welcome, Leor! I’m also a 16 year old new member.
Nice to meet you—it’s rather reassuring to see another member at my age.
Hello, thank you for this post. I am a criminal law attorney, and what attracts me to learning more about rational decision-making is the practical experience that juries, clients, and many attorneys make what seem to be irrational, or at least counter-intuitive, decisions all the time. I am in the very early stages of trying to learn what’s on the site and how to fix my own thought processes, but I also have irrationally high hopes that there’s achievable progress to be made by bringing the LW tools to bear on my profession and the legal regime. I look forward to talking it through with you all.
Hi, jackal_esq. As someone involved in criminal justice, you might find the following interesting, if you haven’t seen them already:
Evidence under Bayes theorem, Wikipedia
R v Adams, Wikipedia
Sally Clark, Wikipedia
Amanda Knox case, Less Wrong (followup post linked at bottom)
A formula for justice, Guardian
Bayesian analysis under threat in British courts, Less Wrong
Aside from that, welcome to Less Wrong!
Ekke Ekke Ekke Ekke Ptangya Zoooooooom Boing Ni!
I’ll be going by Regex. I stumbled upon this site due to a side story from the MLP:FIM fanfiction Friendship is Optimal: http://www.fimfiction.net/story/62074/friendship-is-optimal which is a bit weird, but I guess I’m weird. Yes, I like small candy colored equines. Ponies are my lifeblood.
My life history in a nutshell: High school was spent mostly figuring out how terrible middle school was and realizing my ability to control my environment. Learned basic coding, drawing, and organization skills. Found a path in life due to the launch of the Curiosity rover. Robots were cool. Installed Linux.
I am currently a college sophomore pursuing mechanical engineering: I’ve been inspired to create robots. Despite going for an ME degree, I have more computer knowledge. My preferred OS is Linux, but I’m not skilled enough with it yet to do much beyond what I can do with Windows.
I am quite interested in personal development, hence why I am here. A lot of the thought processes here seem to mirror my own far more than I’ve seen elsewhere, so there was kind of a “these are my people” moment. I have been lightly reading the site, but there is just so much that I’ve been doing it in bits and pieces, digesting it as I read.
I am also an artist of sorts, but I can’t do much beyond basic line art and sketching. Drawing from my imagination is much more difficult than copying something in front of me. I’ve had better luck with using programming to make patterns, but I like the ability to produce an arbitrary image by hand. Getting there.
I am very good at organizing information (depending on the need), but I often fail to actually progress beyond that point and do anything with the knowledge gained. This is paralleled by the fact that I have a habit of hoarding media rather than watching it.
I sing aloud to myself as I walk down the street. Whatever comes to mind. Very fun. I probably am a little inconsiderate of those around me when doing so, but I like to think I am adding a bit of mystery to their day. They’ll ask “what was all that singing about?” and never know. I also like smashing piano keys in whatever order sounds pleasing at that moment. More fun.
I am interested in polyphasic sleep, and can currently fall asleep for a short nap basically whenever I please and reduce “core sleep” (i.e., the long 8-hour sleep block becomes a 4-hour block if I take 2-3 twenty- to thirty-minute naps during the day), but I haven’t gone all the way to removing the core sleep.
I consider myself “smarter than average,” but now try very very hard not to judge people based on their intellect. I recall once uttering the statement “better to be intelligent than a skateboarder” (I was convinced one cannot be both) in middle school to someone who later became a friend of sorts. This was because I had (still have?) a bit of a superiority complex, but also because I failed to understand where he was coming from and I had (have?) a tendency to misrepresent others in my mind to a significant degree. I have no doubt that those tendencies still lurk out of view.
Regardless of how high or low I might compare to others I want to become better than I am.
I’ll be around.
Hi there Regex,
Welcome to LessWrong! Yay!
If you liked Iceman’s Friendship is Optimal and other conversion bureau stories, you might enjoy Chatoyance’s 27 Ounces and Caelum est Conterrens. As far as personal development goes, I feel like I personally learned a lot about how to make better predictions about the world from CFAR’s Credence Game, though, um, you might prefer reading through the core sequences to playing the calibration game. I have been told that Mysterious Answers to Mysterious Questions is a good place to start reading through the sequences, though I personally read through most of the sequences in no particular order, as, at the time, that approach suited me more than a structured approach to reading the sequences would have.
In any case, it is great to have a new friend join us; I hope you feel welcome here.
More fanfiction? Served up by a butter yellow pegasus? Don’t mind if I do. (I spent a whole month of my summer reading 5 million words of fanfiction. It wasn’t enough, but after a solid month it is really hard to justify reading more...)
I’ll definitely give Mysterious Answers another look, and also see what that Credence Game is all about. My current methodology has been similar to browsing TVTropes: click the first article that catches my attention, then click all of the new links. I then save the links for later after I’ve browsed enough for the day. It is like a human-based web-crawling algorithm.
Thank you. I suspect I’ll like it here.
Hello and welcome to LessWrong!
Well you’ve certainly come to the right place if self-improvement and overcoming bias are what interest you. As Fluttershy pointed out, the Sequences are a great place to dive into the culture and conversation of LW. If you’re looking for sequences specifically about self-improvement, check out Living Luminously and The Science of Winning at Life.
You aren’t the only one here to try out habit changes and “life hacks,” so feel free to share your personal improvements or experiments. We have quite a sizeable demographic of people experimenting with things like Soylent and MealSquares and other ideas. So it’s always good to have another voice willing to strive for optimization.
I don’t know what college you attend, but consider checking out if you have a local LW meetup in your area. Meetups are great places to get acquainted and have some real conversation with fellow rationalists (and to just hang out). They’re a great place to start getting your feet wet.
Glad to have you join the conversation! Hope to see you around.
I actually lived for a whole month off of DIY soylent I made, but I eventually stopped because the process was actually slightly more time-consuming (although a third the cost) than my regular methods. (While probably healthier, I didn’t really notice any difference.) I suspect there are probably easier ways than the one I had been using, though.
One thing I’ve noticed is that LW doesn’t seem to be sending me email notifications when I get a reply. I see that I can tick a thing to get notifications of other people’s specific comments, but in my own there is what appears to be a deletion button. I would then assume it is supposed to be automatically notifying me. Fortunately I noticed the recent comments box.
Definitely going to be liking it here. Thanks.
Welcome!
Hi LW!
I’ve read LW on and off for quite some time, mostly just whenever I’ve gotten linked to it and found myself idly browsing. I used to not post very much on forums, just read around, but I decided to sign up for a few and give posting a try. So here I am!
My name is Sean, I’m 20 and I live in Florida. I’m an undergraduate student studying Cell and Molecular Biology with a minor in Mathematics. I enjoy a lot of things—reading, learning, hiking, discussing, exploring. My interests are pretty wide—I’ve done a lot of computer programming, but mostly hobby stuff, I do a lot of hiking, a little bit of gardening, I read a lot from a wide variety of topics (though, more often than not, it’s either fantasy in my downtime, or research in my work time, lol), and when I have the time I play games and hang out on forums now apparently.
I don’t really have an extraordinary story about how I ended up here. I just like to discuss things, and due to my interests, I find myself in places like this a lot.
I like to be in places where I can either learn, or I can help educate. I’ve had a good bit of experience with teaching and tutoring professionally, and I think one of my strongest qualities is my ability to break things down and explain them to people. I like being in places where I have something relevant to say, and there’s something relevant to learn. I think this seems like a great place to be for me. I’m very interested in science, naturally, though my interests especially lie in biology, plant biology, ecology, mathematics, and a bit of computer science. I’m no stranger to philosophy, history, and the humanities—but those are topics where I’m fairly sure I’ll be doing a whole lot of learning, and very little sharing, hah.
Anyways, hope to see everyone around on the forums. :)
Hello and welcome to LessWrong!
Sounds like you’ve been exposed to quite a variety of fields. Very admirable! It never hurts to have a wide background, and that exposure to all those different hobbies and areas can improve your work in your central field of interest.
No need for some great story to join. Having an interest in learning is good enough! If you want to read some LW material to give you an idea of the type of writings you’ll see and the type of topics we discuss, feel free to read the Sequences, which collect a large number of LW posts from over the years. It’s something of a crash course on a variety of topics and issues. Quite heavy reading, but very useful.
If you want to join the conversation, check out the Discussion board. This is where the day-to-day conversations on LW take place. It’s a good place to get a feel for the conversation standards of the community before you start contributing your own ideas. Also, definitely check out the latest Open Thread. It’s a bit more laid back than the Discussion board as a whole, but still a good place to talk, ask questions, and engage fellow LWers.
Also, I don’t know where you live in Florida, but if meeting up and chatting with fellow LWers in physical space interests you, Florida has two LW meetups: one in Fort Lauderdale, one in Coral Gables. LW meetups are great places to get acquainted with your fellow rationalists, discuss different topics, and just to have fun.
Glad to have you with us! Look forward to seeing you around the forums soon.
I am a long time LessWronger (under an anonymous pseudonym), but recently I’ve decided that it is finally time to bite the bullet, abandon my few thousand karma, and just move over to my real name already.
Back in the day, when I joined LessWrong for the first time, I followed my general policy of anonymity on the Internet. Now, I’m involved with the Less Wrong community enough that I find this anonymity holding me back. Thus the new account.
Edit: For my first post on this new account, I posted a few of my thoughts on logical uncertainty.
Hi! I’ve been lurking non-intensely for a while. I’m currently reading the sequences, and they’ve given me a lot of food for thought. I have a couple of rationalist friends (including RobbBB) who have gotten me interested in rationalism. I’m also a big fan of HPMOR, which is by far the best fanfic I’ve ever read.
Anyway, I’m trying to become a research scientist in linguistics, so it seems best that for professional development, in addition to personal development, I learn how to think and recognize why I think I know the things I think I know etc. So far, I’ve mostly been squirming in embarrassment over the fallacious reasoning I’ve been engaged in my whole life, but I hope that I can move forward to more productive thinking.
Hello, I’m Jennifer.
I’m here to get better at accomplishing my goals. I’d also like to get better at figuring out what my goals are, but I don’t know if LW will help with that.
I don’t identify as an aspiring rationalist. I try to be rational, but I am generally leery of identifying as much of anything. Labels are a useful layer of abstraction for dealing with people you don’t really know well enough to consider as individuals, but I don’t see much benefit in internally applying labels to oneself. If you do find it useful to think of yourself as an aspiring rationalist, I’d like to know what benefits you’re seeing.
I have not so much lurked as sporadically encountered LW over the past several years. I don’t recall how I first found the site, but I have followed links here on several separate occasions.
My historical usage pattern:
* Follow a link to LW
* Open a half dozen tabs (much like I do on TVTropes)
* Read the tabs (usually from the sequences)
* Realize that I’ve hit mental saturation
* Close LW until the next time I stumble across a link
I became more interested in LW as a community when I got to know a community member in RL, but I still didn’t register because I have an aversion to opening myself up to potentially hurtful comments on the internet, and LW seems particularly prone to the type of comment which I find most difficult to deal with. Then I decided to improve my criticism handling skills, so I registered.
Hi, everyone. My name is Teresa, and I came to Less Wrong by way of HPMOR.
I read the first dozen chapters of HPMOR without having read or seen the Harry Potter canon, but once I was hooked on the former, it became necessary to see all the movies and then read all the books in order to get the HPMOR jokes. JK Rowling actually earned royalties she would never have received otherwise thanks to HPMOR.
I don’t actually identify as a pure rationalist, although I started out that way many, many years ago. What I am committed to today is SANITY. I learned the hard way that, in my case at least, it is the body that keeps the mind sane. Without embodiment to ground meaning, you get into problems of unsearchable infinite regress, and you can easily hypothesize internally consistent worlds that are nevertheless not the real world the body lives in. This can lead to religions and other serious delusions.
That said, however, I find a lot of utility in thinking through the material on this site. I discovered Bayesian decision theory in high school, but the texts I read at the time either didn’t explain the whole theory or else I didn’t catch it all at age 14. Either way, it was just a cute trick for calculating compound utility scores based on guesses of likelihood for various contingencies. The greatest service the Less Wrong site has done for me is to connect the utility calculation method to EMPIRICAL prior probabilities! Like, duh! A hugely useful tool, that is.
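(For concreteness, here’s a minimal sketch of that kind of calculation, with entirely made-up utilities and a weather-records figure standing in for the empirical prior:)

```python
# Minimal expected-utility sketch with made-up numbers: weight each
# outcome's (hypothetical) utility by an empirically grounded
# probability estimate, then pick the option with the higher score.

options = {
    "carry umbrella": {"rain": 5, "no rain": -1},
    "leave it home":  {"rain": -10, "no rain": 2},
}
p_rain = 0.3  # empirical prior, e.g. estimated from local weather records

for option, u in options.items():
    eu = p_rain * u["rain"] + (1 - p_rain) * u["no rain"]
    print(f"{option}: expected utility = {eu:+.2f}")

# carry umbrella: 0.3*5 + 0.7*(-1) = +0.80
# leave it home:  0.3*(-10) + 0.7*2 = -1.60
```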
As a professional writer in my day job and student of applied linguistics research otherwise, I have some reservations about those of the Sequences that reference the philosophy of language. I completely agree that Searle believes in magic (aka “intentionality”), which is not useful. But this does not mean the Chinese Room problem isn’t real.
When you study human language use empirically in natural contexts (through frame-by-frame analysis of video recordings), it turns out that what we think we do with language and what we actually do are rather divergent. The body and places in the world and other agents in the interaction all play a much bigger role in the real-time construction of meaning than you would expect from introspection. Egocentric bias has a HUGE impact on what we imagine about our own utterances. I’ve come to the conclusion that Stevan Harnad is absolutely correct, and that machine language understanding will require an AI ROBOT, not a disembodied algorithmic system.
As for HPMOR, I hereby predict that Harrymort is going to go back in time to the primal event in Godric’s Hollow and change the entire universe to canon in his quest to, er, spoilers, can’t say.
Cheers.
The chief deficiency of embodiment philosophy-of-mind, at least among AIers and cognitivists, is that they constantly say “embodiment” when they should say “experience of embodiment”. And when you put it that way, most of the magic leaches away and you’re left facing the same old hard problem of consciousness. Meaning, understanding, intentionality are all aspects of consciousness. And various studies can show that body awareness is surprisingly important in the genesis and constitution of those things. But just having a material object governed by a hierarchy of feedback loops does not explain why there should be anyone home in that object—why there should be any form of awareness in, or around, or otherwise associated with that object.
I sort of agree with you: if the “hard problem of consciousness” is indeed a coherent problem that needs to be solved, then what you say makes perfect sense. But I am not convinced that it’s a problem worth solving. I don’t care whether Mitchell_Porter is an entity that really, truly experiences consciousness, or whether it’s only a “material object governed by a hierarchy of feedback loops”, so long as Mitchell_Porter has interesting things to say, and can hold up his/her/its own end of the conversation.
Is there any reason why I should care?
Let’s distinguish between superficial and fundamental ignorance. If you flip a coin, you may not know which way it came up until you look. This typifies what I will call superficial ignorance. The mechanics of a flat disk of metal, sent spinning in a certain way, is not an especially mysterious subject. Your ignorance of whether the coin shows head or tails does not imply ignorance of the essence of what just happened.
Fundamental ignorance is where you really don’t know what’s going on. The sun goes up and down in the sky and you don’t know why, for a third of each day you’re in some other reality where you don’t remember the usual one, and so on. The situation with respect to consciousness is in this category.
It could be argued that you should care about any instance of fundamental ignorance, because its implications are unknown in a way that the implications of superficial ignorance are not. Who knows what further wonderful, terrible, or important facts it obscures? Then again, it could be argued that there’s fundamental ignorance beneath every instance of superficial ignorance. Consider the spinning coin: we have a physical mechanics that can describe its motion; but why does that mechanics work?
Conversely, in the case of consciousness, there’s an argument for complacency: I may not understand why brains are conscious, but human beings pretty consistently act in the ways that I tentatively regard as indicative of consciousness, and (I could say) in my dealing with them, it’s how they behave which matters.
There are a few further reasons why someone may end up caring whether other people/beings are truly conscious or not. One is morality. I may consider it important to know (if only I could know) whether they really are happy or suffering, or whether they are just automata pantomiming the behaviors of happiness and suffering. Another is intellectual curiosity. Perhaps you just decide that you want to know, not because of the argument from the unknown significance of fundamental ignorance, but on a whim, or because of the cool satisfaction of grasping something abstract.
But perhaps the number-one reason that someone from this community should want to know, is that many people here anticipate that they personally will undergo transformations such as mind uploading. If you at least value your own consciousness, and not just your behaviors, then you have an interest in understanding whether a given transformation preserves consciousness or not.
I think that you are unintentionally conflating two very different questions:
1). What is the mechanism that causes us to perceive certain entities, including humans, as possessing consciousness?
2). Let’s assume that there’s a hidden factor, called “consciousness”, that is sufficient but not necessary to cause us to perceive humans as being conscious. How can we test for the presence or absence of this factor?
Answering (2) may help you answer (1), but (2) is unanswerable if the assumption you are making in it is wrong.
I personally see no reason to postulate the presence of some hidden, undetectable factor that causes humans to be conscious. I would love to know how is it exactly that human brains produce the phenomenon we perceive as “consciousness”, but I’m not convinced that such a feature could only have a single possible implementation.
This is indeed important with respect to morality:
If the presence of consciousness is unfalsifiable, then you can’t know, and you’re obligated to treat all entities that appear to be happy or suffering equally (for the purposes of making your moral decisions, that is). On the other hand, if the presence of consciousness is falsifiable, then tell me how I can falsify it. If you hand-wave the answer by saying, “oh, it’s a hard problem”, then you don’t have a useful model; you’ve got something akin to Vitalism. It’d be like saying,
“Some suns are powered by fusion, and others are powered by undetectable sun-goblins that make it look like the sun is powered by fusion. Our own sun is powered by goblins. You can’t ever detect them, but trust me, they’re there”.
Would it be appropriate to say that superficial ignorance is factual (one does not know the particular inputs to the equations which govern the coin’s movement) where fundamental ignorance is conceptual (one does not have a concept that the coin is governed by equations of motion)?
I don’t know.
You defect in the Prisoner’s Dilemma against a rock with “defect” written on it, defect in the PD against a rock with “cooperate” written on it, and cooperate in the PD against a copy of yourself. So, if you’re ever playing PD against Mitchell_Porter, you want to know whether he’s more like a rock or like yourself.
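(A toy sketch of that decision rule in Python; the opponent “types” are hypothetical labels invented for this example:)

```python
# Toy sketch of the decision rule described above: cooperate only
# when the opponent's move is logically correlated with your own.

def pd_move(opponent_type):
    if opponent_type == "defect_rock":
        return "defect"      # it defects no matter what you do
    if opponent_type == "cooperate_rock":
        return "defect"      # it cooperates unconditionally; defecting pays more
    if opponent_type == "copy_of_me":
        return "cooperate"   # its output necessarily mirrors yours
    return "defect"          # no known correlation: treat it like a rock

for o in ["defect_rock", "cooperate_rock", "copy_of_me", "unknown"]:
    print(o, "->", pd_move(o))
```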
Right, but in order to figure out whether to cooperate with or defect against Mitchell_Porter, all I need to know is what strategy he is most likely to pursue. I don’t need to know whether he’s a “material object governed by a hierarchy of feedback loops” or a biological human possessed of “consciousness” or an animatronic garden gnome; I just need to know enough to find out which button he’ll press.
I am not familiar with Stevan Harnad, but this sounds counterintuitive to me (though it’s very likely that I’m misunderstanding your point). I am currently reading your words on the screen. I can’t hear you or see your body language. And yet, I can still understand what you wrote (not fully, perhaps, but enough to ask you questions about it). In our current situation, I’m not too different from a software program that is receiving the text via some input stream, so I don’t see an a priori reason why such a program could not understand the text as well as I do.
I assume telms is referring to embodied cognition, the idea that your ability to communicate with her, and achieve mutual understanding of any sort, is made possible by shared concepts and mental structures which can only arise in an “embodied” mind.
I am rather skeptical about this thesis as far as artificial minds go; somewhat less skeptical about it if applied only to “natural” (i.e., evolved) minds — although in that case it’s almost trivial; but in any case don’t know enough about it to have a fully informed opinion.
Oh, ok, that makes more sense. As far as I understand, the idea behind embodied cognition is that intelligent minds must have a physical body with a rich set of sensors and effectors in order to develop; but once they’re done with their development, they can read text off of the screen instead of talking.
That definitely makes sense in the case of us biological humans, but just like you, I’m skeptical that the thesis applies to all possible minds at all times.
Some representative papers of Stevan Harnad are:
The symbol grounding problem
Other bodies, other minds: A machine incarnation of an old philosophical problem
I skimmed both papers, and found them unconvincing. Granted, I am not a philosopher, so it’s likely that I’m missing something, but still:
In the first paper, Harnad argues that rule-based expert systems cannot be used to build a Strong AI; I completely agree. He further argues that merely building a system out of neural networks does not guarantee that it will grow to be a Strong AI either; again, we’re on the same page so far. He further points out that, currently, nothing even resembling Strong AI exists anywhere. No argument there.
Harnad totally loses me, however, when he begins talking about “meaning” as though that were some separate entity to which “symbols” are attached. He keeps contrasting mere “symbol manipulation” with true understanding of “meaning”, but he never explains how we could tell one from the other.
In the second paper, Harnad basically falls into the same trap as Searle. He lampoons the “System Reply” by calling it things like “a predictable piece of hand-waving”—but that’s just name-calling, not an argument. Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese? Sure, the man inside doesn’t understand Chinese, but that’s like saying that a car cannot drive uphill at 70 mph because no human driver can run uphill that fast.
The rest of his paper amounts to moving the goalposts. Harnad is basically saying, “Ok, let’s say we have an AI that can pass the TT via teletype. But that’s not enough! It also needs to pass the TTT! And if it passes that, then the TTTT! And then maybe the TTTTT!” Meanwhile, Harnad himself is reading articles off his screen which were published by other philosophers, and somehow he never requires them to pass the TTTT before he takes their writings seriously.
Don’t get me wrong, it is entirely possible that the only way to develop a Strong AI is to embody it in the physical world, and that no simulation, no matter how realistic, will suffice. I am open to being convinced, but the papers you linked are not convincing. I’m not interested in figuring out whether any given person who appears to speak English really, truly understands English; or whether this person is merely mimicking a perfect understanding of English. I’d rather listen to what such a person has to say.
Haven’t read the Harnad paper yet, but the reason Searle’s convinced seems obvious to me: he just doesn’t take his own scenario seriously — seriously enough to really imagine it, rather than just treating it as a piece of absurd fantasy. In other words, he does what Dennett calls “mistaking a failure of imagination for an insight into necessity”.
In The Mind’s I, Dennett and Hofstadter give the Chinese Room scenario a much more serious fictional treatment, and show in great detail what elements of it trigger Searle’s intuitions on the matter, as well as how to tweak those intuitions in various ways. Sadly but predictably, Searle has never (to my knowledge) responded to their dissection of his views.
I like the expression and can think of times when I have looked for something that expresses this all-too-common practice simply.
Having now read the second linked Harnad paper, my evaluation is similar to yours. Some more specific comments follow.
Harnad talks a lot about whether a body “has a mind”: whether a Turing Test could show if a body “has a mind”, how we know a body “has a mind”, etc.
What on earth does he mean by “mind”? Not… the same thing that most of us here at LessWrong mean by it, I should think.
He also refers to artificial intelligence as “computer models”. Either he is using “model” quite strangely as well… or he has some… very confused ideas about AI. (Actually, very confused ideas about computers in general is, in my experience, endemic among the philosopher population. It’s really rather distressing.)
This has surely got to be one of the most ludicrous pronouncements I’ve ever seen a philosopher make.
One of these things is not like the others...
Well, maybe our chess-playing module is not autonomous, but as we have seen, we can certainly build a chess-playing module that has absolutely no capacity to see, move, manipulate, or speak.
Most of the rest of the paper is nonsensical, groundless handwaving, in the vein of Searle but worse. I am unimpressed.
Yeah, I think that’s the main problem with pretty much the entire Searle camp. As far as I can tell, if they do mean anything by the word “mind”, then it’s “you know, that thing that makes us different from machines”. So, we are different from AIs because we are different from AIs. It’s obvious when you put it that way!
Well, I certainly agree that there are important aspects of human languages that come out of our experience of being embodied in particular ways, and that without some sort of model that embeds the results of that kind of experience we’re not going to get very far in automating the understanding of human language.
But it sounds like you’re suggesting that it’s not possible to construct such a model within a “disembodied” algorithmic system, and I’m not sure why that should be true.
Then again, I’m not really sure what precisely is meant here by “disembodied algorithmic system” or “ROBOT”.
For example, is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)? How would I tell, for a given computer, which kind of thing it was (if either)?
An emulated body in an emulated environment is a disembodied algorithmic system in my terminology. The classic example is Terry Winograd’s SHRDLU, which made significant advances in machine language understanding by adding an emulated body (arm) and an emulated world (a cartoon blocks world, but nevertheless a world that could be manipulated) to text-oriented language processing algorithms. However, Winograd himself concluded that language understanding algorithms plus emulated bodies plus emulated worlds aren’t sufficient to achieve natural language understanding.
Every emulation necessarily makes simplifying assumptions about both the world and the body that are subject to errors, bugs, and munchkin effects. A physical robot body, on the other hand, is constrained by real-world physics to that which can be built. And the interaction of a physical body with a physical environment necessarily complies with that which can actually happen in the real world. You don’t have to know everything about the world in advance, as you would for a realistic world emulation. With a robot body in a physical environment, the world acts as its own model and constrains the universe of computation to a tractable size.
The other thing you get from a physical robot body is the implicit analog computation tools that come with it. A robot arm can be used as a ruler, for example. The torque on a motor can be used as an analog for effort. On these analog systems, world-grounded metaphors can be created using symbolic labels that point to (among other things) the arm-ruler or torque-effort systems. These metaphors can serve as the terminal point of a recursive meaning builder—and the physics of the world ensures that the results are good enough models of reality for communication to succeed or for thinking to be assessed for truth-with-a-small-t.
OK, thanks for clarifying.
I certainly agree that a physical robot body is subject to constraints that an emulated body may not be subject to; it is possible to design an emulated body that we are unable to build, or even a body that cannot be built even in principle, or a body that interacts with its environment in ways that can’t happen in the real world.
And I similarly agree that physical systems demonstrate relationships, like that between torque and effort, which provide data, and that an emulated body doesn’t necessarily demonstrate the same relationships that a robot body does (or even that it can in principle). And those aren’t unrelated, of course; it’s precisely the constraints on the system that cause certain parts of that system to vary in correlated ways.
And I agree that a robot body is automatically subject to those constraints, whereas if I want to build an emulated software body that is subject to the same constraints that a particular robot body would be subject to, I need to know a lot more.
Of course, a robot body is not subject to the same constraints that a human body is subject to, any more than an emulated software body is; to the extent that a shared ability to understand language depends on a shared set of constraints, rather than on simply having some constraints, a robot can’t understand human language until it is physically equivalent to a human. (Similar reasoning tells us that paraplegics don’t understand language the same way as people with legs do.)
And if understanding one another’s language doesn’t depend on a shared set of constraints, such that a human with two legs, a human with no legs, and a not-perfectly-humanlike robot can all communicate with one another, it may turn out that an emulated software body can communicate with all three of them.
The latter seems more likely to me, but ultimately it’s an empirical question.
You make a very important point that I would like to emphasize: incommensurate bodies very likely will lead to misunderstanding. It’s not just a matter of shared or disjunct body isomorphism. It’s also a matter of embodied interaction in a real world.
Let’s take the very fundamental function of pointing. Every human language is rife with words called deictics that anchor the flow of utterance to specific pieces of the immediate environment. English examples are words like “this”, “that”, “near”, “far”, “soon”, “late”, the positional prepositions, pronominals like “me” and “you”—the meaning of these terms is grounded dynamically by the speakers and hearers in the time and place of utterance, the placement and salience of surrounding objects and structures, and the particular speaker and hearers and overhearers of the utterance. Human pointing—with the fingers, hands, eyes, chin, head tilt, elbow, whatever—has been shown to perform much the same functions as deictic speech in utterance. (See the work of Sotaro Kita if you’re interested in the data). A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.
Then there are the cultural conventions that regulate pointing words and gestures alike. For example, spatial meanings tend to be either speaker-relative or landmark-relative or absolute (that is, embedded in a spatial frame of cardinal directions) in a given culture, and whichever of these options the culture chooses is used in both physical pointing and linguistic pointing through deictics. A robot with no cultural reference won’t be able to disambiguate “there” (relative to me here now) versus “there” (relative to the river/mountain/rising sun), even if physical pointing is integrated into the attempt to figure out what “there” is. And the problem may not be detected due to the illusion of double transparency.
This gets even more complicated when the world of discourse shifts from the immediate environment to other places, other times, or abstract ideas. People don’t stop inhabiting the real world when they talk about abstract ideas. And what you see in conversation videos is people mapping the world of discourse metaphorically to physical locations or objects in their immediate environment. The space behind me becomes yesterday’s events and the space beyond my reach in front of me becomes tomorrow’s plan. Or I always point to the left when I’m talking about George and to the right when I’m talking about Fred.
This is all very much an empirical question, as you say. I guess my point is that the data has been accumulating for several decades now that embodiment matters a great deal. Where and how it matters is just beginning to be sorted out.
If I am talking to you on the telephone, I have no mechanism for pointing and no sensory apparatus for detecting your pointing gestures, yet we can communicate just fine.
The whole embodied cognition thing is a massive, elementary mistake as bad as all the ones that Eliezer has analysed in the Sequences. It’s an instant fail.
Can you expand on this just a bit? I am leaning, slowly, in the same direction, and I’d like a bit of a sanity check on this claim.
Firstly, I have no problem with the “embodied cognition” idea so far as it relates to human beings (or animals, for that matter). Yes, people think also with their bodies, store memories in the environment, point at things, and so on. This seems to me both true and unremarkable. So unremarkable as to hardly be worth the amount of thought that apparently goes into it. While it may be interesting to trace out all the ways in which it happens, I see no philosophical importance in the details.
Where it goes wrong is the application to AGI that says that because people do this, it is an essential part of how an intelligence of any sort must operate, and therefore a man-made intelligent machine must be given a body. The argument mistakes a superficial fact about observed intelligences for a fact about the mechanism whereby an intelligence of any sort must operate. There is a large and expanding body of work on making ever more elaborate robot puppets like the Nao, explicitly following a research programme of developing “embodied cognition”.
I cannot see these projects as being of any interest. I would be a lot more interested in seeing someone build a human-sized robot that can run unsupported on two legs (Boston Dynamics’ ATLAS is getting there), especially if it can run faster than a man while carrying a full military pack and isn’t tethered to a power cable (not yet done). However, nothing like that is a prerequisite to AGI. I do hold a personal opinion, which I’m not going to argue for here, that if someone developed a simple method of solving the control problems of an all-terrain running robot, they might get from that some insight into how to get farther, such as an all-terrain running robot that can hunt down humans trying to avoid it. Of course, the Unfriendly directions in which that might lead are obvious, as are the military motivations for building such machines, or inviting people to come up with designs. Of course, these powers will only be used for Good.
Since the embodied approach has been around in strength since the 1980s, and can be found in Turing in 1950, I think it fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.
The deaf communicate without sound, the blind without sight, and the limbless without pointing hands. On the internet people communicate without any of these. It doesn’t seem to hold anyone up, except in the mere matter of speed in the case of Stephen Hawking communicating by twitching cheek muscles.
Ah, no, the magic ingredient must be society! Cognition always takes place within society. Feral children are developmentally disabled for want of society. The evidence is clear: we must develop societies of AIs before they can be intelligent.
No, it’s language they must have! An AGI’s cognition must be based on a language. So if we design the perfect language, AGI will be a snap.
No, it’s upbringing they must have! So we’ll design a robot to be initially like a newborn baby and teach it through experience!
No, it’s....
No. The general form of all these arguments is broken.
This is where you lose me. Isn’t that an equally effective argument against AGI in general?
“AGI in general” is a thing of unlimited broadness, about which lack of success so far implies nothing more than lack of success so far. Cf. flying machines, which weren’t made until they were. Embodied cognition, on the other hand, is a definite thing, a specific approach that is at least 30 years old, and I don’t think it’s even made a contribution to narrow AI yet. It is only mentioned in Russell and Norvig in their concluding section on the philosophy of Strong AI, not in any of the practical chapters.
I took RichardKennaway’s post to mean something like the following:
“Birds fly by flapping their wings, but that’s not the only way to fly; we have built airplanes, dirigibles and rockets that fly differently. Humans acquire intelligence (and language) by interacting with their physical environment using a specific set of sensors and effectors, but that’s not the only way to acquire intelligence. Tomorrow, we may build an AI that does so differently.”
But since that idea has been around in strength since the 1980s, and can be found in Turing in 1950, apparently it’s fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.
I think that we have seen it by now, we just don’t call it “AI”. Even in Turing’s day, we had radar systems that could automatically lock on to enemy planes and shoot them down. Today, we have search engines that can provide answers (with a significant degree of success) to textual or verbal queries; mapping software that can plot the best path through a network of roadways; chess programs that can consistently defeat humans; cars that drive themselves; planes that fly themselves; plus a host of other things like that. Sure, none of these projects are Strong AI, but neither are they toys.
This depends on the definition of ‘toy projects’ that you use. For the sort of broad definition you are using, where ‘toy projects’ refers literally to toys, Richard Kennaway’s original claim that the embodied approach had only produced toys is factually incorrect. For the definition of ‘toy projects’ that both Richard Kennaway and Document are using, in which ‘toy projects’ is more closely related to ‘toy models’ (i.e. attempts at a simplified version of Strong AI), this is an argument against AGI in general.
I see what you mean, but I’m having trouble understanding what “a simplified version of Strong AI” would look like.
For example, can we consider a natural language processing system that’s connected to a modern search engine to be “a simplified version of Strong AI”? Such a system is obviously not generally intelligent, but it does perform several important functions—such as natural language processing—that would pretty much be a requirement for any AGI. However, the implementation of such a system is most likely not generalizable to an AGI (if it were, we’d have AGI by now). So, can we consider it to be a “toy project”, or not?
The “magic ingredient” may be a bridging of intuitions: an embodied AI which you can more naturally interact with offers more intuitive metrics for progress; milestones which can be used to attract funding since they make more sense intuitively.
Obviously you can build an AGI using only Lego bricks. And you can build an AGI “purely” as software (i.e. with variable hardware substrates). The steelman for pursuing embodied cognition would not be “embodiment is strictly necessary to build AGIs” (boring!), but rather that “given humans with a goal of building an AGI, going the embodiment route may be a viable approach”.
I well remember that early morning in the CS lab, the better part of a decade ago, when I stumbled—still half asleep—into a side room to turn on the lights, only to stare into the eye of Eccerobot (in an earlier incarnation), which was visiting our lab. Shudder.
I used to joke that my goal in life would be to build the successor creature, and to be judged by it (humankind and me both). To be judged and to be found unworthy in its (in this case single) eye, and to be smitten. After all, what better emotional proof to have created something of worth is there than your creation judging you to be unworthy? Take my atoms, Adambot!
Are misunderstandings more common over the telephone for things like negotiation?
I don’t know, but I doubt that the communication medium makes much difference beyond the individual skills of the people using it. People can use multiple modalities to communicate, and in a situation where some are missing, one varies one’s use of the others to accomplish the goal.
In adversarial negotiations one might even find it an advantage not to be seen, to avoid accidentally revealing things one wishes to keep secret. Of course, that applies to both parties, and it will come down to a matter of who is more skilled at using the means available.
People even manage to communicate in writing!
Sure, I agree that we make use of all kinds of contextual cues to interpret speech, and a system lacking awareness of that context will have trouble interpreting speech. For example, if I say “Do you like that?” to Sam, when Sam can’t see the thing I’m gesturing to indicate or doesn’t share the cultural context that lets them interpret that gesture, Sam won’t be able to interpret or engage with me successfully. Absolutely agreed. And this applies to all kinds of things, including (as you say) but hardly limited to pointing.
And, sure, the system may not even be aware of that trouble… illusions of transparency abound. Sam might go along secure in the belief that they know what I’m asking about and be completely wrong. Absolutely agreed.
And sure, I agree that we rely heavily on physical metaphors when discussing abstract ideas, and that a system incapable of processing my metaphors will have difficulty engaging with me successfully. Absolutely agreed.
All of that said, what I have trouble with is your apparent insistence that only a humanoid system is capable of perceiving or interpreting human contextual cues, metaphors, etc. That doesn’t seem likely to me at all, any more than it seems likely that a blind person (or one on the other end of a text-only link) is incapable of understanding human speech.
Are you really claiming that ability to understand the very concept of indexicality, and concepts like “soon”, “late”, “far”, etc., relies on humanlike fingers? That seems like an extraordinary claim, to put it lightly.
Also:
“Detecting pointing gestures” would be the function of a perception algorithm, not a sensory apparatus (unless what you mean is “a robot with no ability to perceive positions/orientations/etc. of objects in its environment”, which… wouldn’t be very useful). So it’s a matter of what we do with sense data, not what sorts of body we have; that is, software, not hardware.
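To illustrate: here is a minimal sketch (Python with NumPy; every name is hypothetical, not from any real gesture-recognition library) of how “detecting pointing” reduces to geometry over whatever position data the sensors happen to supply, i.e. software rather than hardware:

    # Given 3D positions (from any sensor at all), "detecting pointing" is
    # just geometry: extend the shoulder-to-wrist ray and pick the target
    # that lies closest to it. Toy values throughout.
    import numpy as np

    def pointed_at(shoulder, wrist, targets):
        direction = wrist - shoulder
        direction /= np.linalg.norm(direction)
        best, best_angle = None, np.inf
        for name, pos in targets.items():
            to_target = pos - wrist
            to_target /= np.linalg.norm(to_target)
            angle = np.arccos(np.clip(np.dot(direction, to_target), -1.0, 1.0))
            if angle < best_angle:
                best, best_angle = name, angle
        return best

    targets = {"ball": np.array([2.0, 0.0, 1.0]),
               "door": np.array([-1.0, 3.0, 1.5])}
    print(pointed_at(np.array([0.0, 0.0, 1.4]),
                     np.array([0.3, 0.0, 1.3]), targets))  # ball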
More generally, a lot of what you’re saying (and — this is my very tentative impression — a lot of the ideas of embodied cognition in general) seems to be based on an idea that we might create some general-intelligent AI or robot, but have it start at some “undeveloped” state and then proceed to “learn” or “evolve”, gathering concepts about the world, growing in understanding, until it achieves some desired level of intellectual development. The concern then arises that without the kind of embodiment that we humans enjoy, this AI will not develop the concepts necessary for it to understand us and vice versa.
Ok. But is anyone working in AI these days actually suggesting that this is how we should go about doing things? Is everyone working in AI these days suggesting that? Isn’t this entire line of reasoning inapplicable to whole broad swaths of possible approaches to AI design?
P.S. What does “there, relative to the river” mean?
Yeah, I am advancing the hypothesis that, in humans, the comprehension of indexicality relies on embodied pointing at its core—though not just with fingers, which are not universally used for pointing in all human cultures. Sotaro Kita has the most data on this subject for language, but the embodied basis of mathematics is discussed in Where Mathematics Comes From, by George Lakoff and Rafael Núñez. Whether all possible minds must rely on such a mechanism, I couldn’t possibly guess. But I am persuaded humans do (a lot of) it with their bodies.
In most European cultures, we use speaker-relative deictics. If I point to the southeast while facing south and say “there”, I mean “generally to my front and left”. But if I turn around and face north, I will point to the northwest and say “there” to mean the same thing, i.e., “generally to my front and left.” The fact that the physical direction of my pointing gesture is different is irrelevant in English; it’s my body position that’s used as a landmark for finding the target of “there”. (Unless I’m pointing at something in particular here and now, of course; in which case the target of the pointing action becomes its own landmark.)
In a number of Native American languages, the pointing is always to a cardinal direction. If the orientation of my body changes when I say “there”, I might point over my shoulder rather than to my front and left. The landmark for finding the target of “there” is a direction relative to the trajectory of the sun.
But many cultures use a dominant feature of the landscape, like the Amazon or the Mississippi or the Nile rivers, or a major mountain range like the Rockies, or a sacred city like Mecca, as the orientation landmark, and in some cultures this gets encoded in the deictics of the language and the conventions for pointing. “Up” might not mean up vertically, but rather “upriver”, while “down” would be “downriver”. In a steep river valley in New Guinea, “down” could mean “toward the river” and “up” could mean “away from the river”. And “here” could mean “at the river” while “there” could mean “not at the river”.
The cultural variability and place-specificity of language were not widely known to Western linguists until about ten years ago. For a long time, it was assumed that person-relative orientation was a biological constraint on meaning. This turns out to be not quite accurate. So I guess I should be more nuanced in the way I present the notion of embodied cognition. How’s this: “Embodied action in the world with a cultural twist on top” is the grounding point at the bottom of the symbol expansion for human meanings, linguistic and otherwise.
I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. In addition, I suspect that, while you were typing your paragraph, you weren’t physically pointing at things. The fact that we can do this looks to me like evidence against your main thesis.
Ah, but you’re assuming that this particular interaction stands on its own. I’ll bet you were able to visualize the described gestures just fine by invoking memories of past interactions with bodies in the world.
Two points. First, I don’t contest the existence of verbal labels that merely refer—or even just register as being invoked without referring at all. As long as some labels are directly grounded to body/world, or refer to other labels that do get grounded in the body/world historically, we generally get by in routine situations. And all cultures have error detection and repair norms for conversation so that we can usually recover without social disaster.
However, the fact that verbal labels can be used without grounding them in the body/world is a problem. It is frequently the case that speakers and hearers alike don’t bother to connect words to reality, and this is a major source of misunderstanding, error, and nonsense. In our own case here and now, we are actually failing to understand each other fully because I can’t show you actual videotapes of what I’m talking about. You are rightly skeptical because words alone aren’t good enough evidence. And that is itself evidence.
Second, humans have a developmental trajectory and history, and memories of that history. We’re a time-binding animal in Korzybski’s terminology. I would suggest that an enculturated adult native speaker of a language will have what amount to “muscle memory” tics that can be invoked as needed to create referents. Mere memory of a motion or a perception is probably sufficient.
“Oh, look, it’s an invisible gesture!” is not at all convincing, I realize, so let me summarize several lines of evidence for it.
Developmentally, there’s quite a lot of research on language acquisition in infants and young children that suggests shared attention management—through indexical pointing, and shared gaze, and physical coercion of the body, and noises that trigger attention shift—is a critical building block for constructing “aboutness” in human language. We also start out with some shared, built-in cries and facial expressions linked to emotional states. At this level of development, communication largely fails unless there is a lot of embodied scaffolding for the interaction, much of it provided by the caregiver but a large part of it provided by the physical context of the interaction. There is also some evidence from the gestural communication of apes that attests to the importance of embodied attention management in communication.
Also, co-speech gesture turns out to be a human universal. Congenitally blind children do it, having never seen gesture by anyone else. Congenitally deaf children who spend time in groups together will invent entire gestural languages complete with formal syntax, as recently happened in Nicaragua. And adults speaking on the telephone will gesture even knowing they cannot be seen. Granted, people gesture in private at a significantly lower rate than they do face-to-face, but the fact that they do it at all is a bit of a puzzle, since the gestures can’t be serving a communicative function in these contexts. Does the gesturing help the speakers actually think, or at least make meaning more clear to themselves? Susan Goldin-Meadow and her colleagues think so.
We also know from video conversation data that adults spontaneously invent new gestures all the time in conversation, then reuse them. Interestingly, though, each reuse becomes more attenuated, simplified, and stylized with repetition. Similar effects are seen in the development of sign languages and in written scripts.
But just how embodied can a label be when gesture (and other embodied experience) is just a memory, and is so internalized that it is externally invisible? This has actually been tested experimentally. The Stroop effect has been known for decades, for example: when the word “red” is presented in blue text, it is read or acted on more slowly than when the word “red” is presented in red text—or in socially neutral black text. That’s on the embodied perception side of things. But more recent psychophysical experiments have demonstrated a similar psychomotor Stroop-like effect when spatial and motion stimulus sentences are semantically congruent with the direction of the required response action. This effect holds even for metaphorical words like “give”, which tests as motor-congruent with motion away from oneself, and “take”, which tests as motor-congruent with motion toward oneself.
I understand how counterintuitive this stuff can be when you first encounter it—especially to intelligent folks who work with codes or words or models a great deal. I expect the two of us will never reach a consensus on this without looking at a lot of original data—and who has the time to analyze all the data that exists on all the interesting problems in the world? I’d be pleased if you could just note for future reference that a body of empirical evidence exists for the claim. That’s all.
What do you mean by “fully”? I believe I understand you well enough for all practical purposes. I don’t agree with you, but agreement and understanding are two different things.
I’m not sure what you mean by “merely refer”, but keep in mind that we humans are able to communicate concepts which have no physical analogues that would be immediately accessible to our senses. For example, we can talk about things like “O(N)”, or “ribosome”, or “a^n + b^n = c^n”. We can also talk about entirely imaginary worlds, such as the world where Mario, the turtle-crushing plumber, lives. And we can do this without having any “physical context” for the interaction, too.
All that is beside the point, however. In the rest of your post, you bring up a lot of evidence in support of your model of human development. That’s great, but your original claim was that any type of intelligence at all will require a physical body in order to develop; and nothing you’ve said so far is relevant to this claim. True, human intelligence is the only kind we know of so far, but then, at one point birds and insects were the only self-propelled flyers in existence—and that’s not the case anymore.
Furthermore, you also claimed that no simulation, no matter how realistic, will serve to replace the physical world for the purposes of human development, and I’m still not convinced that this is true, either. As I’d said before, we humans do not have perfect senses; if physical coordinates of real objects were snapped to a 0.01mm grid, no human child would ever notice. And in fact, there are plenty of humans who grow up and develop language just fine without the ability to see colors, or to move some of their limbs in order to point at things.
Just to drive the point home: even if I granted all of your arguments regarding humans, you would still need to demonstrate that human intelligence is the only possible kind of intelligence; that growing up in a human body is the only possible way to develop human intelligence; and that no simulation could in principle suffice, and the body must be physical. These are all very strong claims, and so far you have provided no evidence for any of them.
Let me refer you to Computation and Human Experience, by Philip E. Agre, and to Understanding Computers and Cognition, by Terry Winograd and Fernando Flores.
Can you summarize the salient parts?
But wait; whether all possible minds must rely on such a mechanism is the entire question at hand! Humans implement this feature in some particular way? Fine; but this thread started by discussing what AIs and robots must do to implement the same feature. If implementation-specific details in humans don’t tell us anything interesting about implementation constraints in other minds, especially artificial minds which we are in theory free to place anywhere in mind design space, then the entire topic is almost completely irrelevant to an AI discussion (except possibly as an example of “well, here is one way you could do it”).
Er, what? I thought I was a member of a European culture, but I don’t think this is how I use the word “there”. If I point to some direction while facing somewhere, and say “there”, I mean… “in the direction I am pointing”.
The only situation when I’d use “there” in the way you describe is if I were describing some scenario involving myself located somewhere other than my current location, such that absolute directions in the story/scenario would not be the same as absolute directions in my current location.
If this is accurate, then why on earth would we map this word in this language to the English “there”? It clearly does not remotely resemble how we use the word “there”, so this seems to be a case of poor translation rather than an example of cultural differences.
Yeah, actually, this research I was aware of. As I recall, the Native Americans in question had some difficulty understanding the Westerners’ concepts of speaker-relative indexicals. But note: if we can have such different concepts of indexicality, despite sharing the same pointing digits and whatnot… it seems premature, at best, to suggest that said hardware plays such a key role in our concept formation, much less in the possibility of having such concepts at all.
Ultimately, the interesting aspect of this entire discussion (imo, of course) is what these human-specific implementation details can tell us about other parts of mind design space. I remain skeptical that the answer is anything other than “not much”. (Incidentally, if you know of papers/books that address this aspect specifically, I would be interested.)
Ok, but is this the correct conclusion? It’s pretty obvious that a SHRDLU-style simulation is not sufficient to achieve natural language understanding, but can you generalize that to saying that no conceivable simulation is sufficient? As far as I can tell, you would make such a generalization because,
While this is true, it is also true that our human senses cannot perceive the reality around us with infinite fidelity. A child who is still learning his native tongue can’t tell a rock that is 5 cm in diameter from a rock that’s 5.000001 cm in diameter. This would lead me to believe that your simulation does not need 7 significant figures of precision in order to produce a language-speaking mind.
In fact, a colorblind child can’t tell a red-colored ball from a green-colored ball, and yet colorblind adults can speak a variety of languages, so it’s possible that your simulation could be monochrome and still achieve the desired result.
I agree that Searle believes in magic, but “intentionality” is not magic (see: almost anything Dennett has written).
This sounds interesting. Could you expand on this?
A list of references can be found in an earlier post in this thread.
Welcome!
Yeah. This, and the “existential angst” thing, seem to be common problems on LW, and I’ve never been sure why. I think that keeping yourself busy doing practical stuff prevents it from becoming an issue.
That’s fascinating! What research has been done on this? I would totally be interested in reading more about it.
Jurgen Streeck’s book Gesturecraft: The manu-facture of meaning is a good summary of Streeck’s cross-linguistic research on the interaction of gesture and speech in meaning creation. The book is pre-theoretical, for the most part, but Streeck does make an important claim: that the biological covariation in a speaker or hearer across the somatosensory modes of gesture, vision, audition, and speech does the work of abstraction—which is an unsolved problem in my book.
Streeck’s claim happens to converge with Eric Kandel’s hypothesis that abstraction happens when neurological activity covaries across different somatosensory modes. After all, the only things that CAN covary across, say, musical tone changes in the ear and dance moves in the arms, legs, trunk, and head, are abstract relations. Temporal synchronicity and sequence, say.
Another interesting book is Cognition in the Wild by Edwin Hutchins. Hutchins goes rather too far in the direction of externalizing cognition from the participants in the act of knowing, but he does make it clear that cultures build tools into the environment that offload thinking function and effort, to the general benefit of all concerned. Those tools get included by their users in the manufacture of online meaning, to the point that the online meaning can’t be reconstructed from the words alone.
The whole field of conversation analysis goes into the micro-organization of interactive utterances from a linguistic point of view rather than a cognitive perspective. The focus is on the social and communicative functions of empirically attested language structures as demonstrated by the speakers themselves to one another. Anything written by John Heritage in that vein is worth reading, IMO.
EDIT: Revised, consolidated, and expanded bibliography on interactive construction of meaning:
LINGUISTICS
Philosophy in the Flesh, by George Lakoff and Mark Johnson
Women, Fire and Dangerous Things, by George Lakoff
The Singing Neanderthals, by Steven Mithen
CONVERSATION ANALYSIS & GESTURE RESEARCH
Handbook of Conversation Analysis, by Jack Sidnell & Tanya Stivers
Gesturecraft: The Manu-facture of Meaning, by Jurgen Streeck
Pointing: Where Language, Culture, and Cognition Meet, by Sotaro Kita
Gesture: Visible Action as Utterance, by Adam Kendon
Hearing Gesture: How Our Hands Help Us Think, by Susan Goldin-Meadow
Hand and Mind: What Gestures Reveal about Thought, by David McNeill
COGNITIVE PSYCHOLOGY
Symbols and Embodiment, edited by Manuel de Vega, Arthur M Glenberg, & Arthur C Graesser
Cognition in the Wild, by Edwin Hutchins
Thanks! Neat.
Hi! I first saw LW as a node on a map of neoreactionary web sites. Which I guess is a pretty weird way to find it, since I’m not myself a neoreactionary and LW doesn’t seem to fit the map. You have to stretch pretty far to connect some of those nodes.
Fortunately, I took a look at the Less Wrong community, and it’s been really interesting to explore. I figured I should introduce myself, since I posted in another thread. I’m in my early 30s and I’m studying in the life sciences at the postgraduate level. I’m a Christian. I’m also a married father, and a veteran. So. Probably somewhat atypical (I peeked at the survey results.)
I’m excited by several of the big problems that seem to animate LW: minimizing cognitive bias day-to-day, optimizing philanthropy, and working through received ideology. I know zip about AI, but addressing existential risk is really interesting to me indirectly, as it relates to forecasting and mitigating mere catastrophes*, a challenge for wonks and technocrats and scientists (and everybody, of course). In fact, if anybody knows of LW’ers or other rationalists interested in policy problems of that nature I’d be super grateful for a pointer or a link.
In conclusion, I read ZeroHedge far too much, sometimes wear Vibrams, and am thrilled to meet all of you.
*is there a better word? My jargon is level 0.
That brings up some interesting questions. The last survey placed self-identified neoreactionaries as a very small percentage of LW readership (scroll down to “Alternate Politics Question”). Progressivism appears to be the most popular political philosophy around here, with libertarianism a strong competitor; nothing else is in the running.
That’s not the first time I’ve heard LW referred to as a neoreactionary site, though; once might be coincidence, but twice needs explanation. With the survey in mind it’s clearly not a matter of explicitly endorsed philosophy, so I’m left to assume that we’re propagating ideas or cultural artifacts that’re popular in neoreactionary circles. I’m not sure what those might be, though. It might just be our general skepticism of academically dominant narratives, but that seems like too glib an explanation to me.
Could this be explained by the base rates?
Imagine a society with 10 neoreactionaries and 10000 liberals (or any other mainstream political group). Let’s suppose that 5 of the neoreactionaries and 500 of the liberals read LessWrong.
In this society, neoreactionaries would consider LessWrong one of “their” websites, because half of them are reading it. Yet the LessWrong survey would show that neoreactionaries are just a tiny minority of its readers.
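To spell out the arithmetic in those made-up numbers (a trivial sketch, using only the figures above):

    # Worked arithmetic for the toy numbers above (illustrative only).
    neoreactionaries = 10
    nrx_on_lw, lib_on_lw = 5, 500

    print(nrx_on_lw / neoreactionaries)         # 0.5     -> "half of us read LW"
    print(nrx_on_lw / (nrx_on_lw + lib_on_lw))  # ~0.0099 -> survey: ~1% of readers

Both perceptions are consistent: 50% penetration among a tiny group still leaves that group a rounding error in the overall readership.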
That’s a heck of a coincidence, but it would explain a perception among neoreactionaries. It wouldn’t, however, explain perceptions among (to use your example) liberals; unless the latter spend a lot of time reading blogs from the former, they’re probably going to be using an outside view, which would give them the same ratios we see in the survey. Out in the wild, I’ve seen the characterization coming from both sides.
Although the graph in the ancestor is from a neoreactionary blog.
While I’m not sure what “neoreactionary” refers to specifically, there are lots of reasons that certain types of liberals see LessWrong as reactionary:
A somewhat strong libertarian component
Belief in evolutionary psychology
Anti-religious (or generally the belief that beliefs can be right or wrong)
LessWrong’s more technical understanding of evidence is incompatible with standpoint theory and similar epistemic frameworks favored by some groups of liberals.
Those older discussions around PUA where it’s presented in a pretty positive light
Glorification of the Enlightenment.
Viliam’s explanation seems like a strong one to me, but doesn’t explain the historical accident of (to use his made up numbers) half of neoreactionaries reading LW.
I suspect that LW has a vibe of “actually think through everything, question your implicit assumptions, and follow logic to its conclusion.” The neoreactionary believes that doing so ends up at the neoreactionary position- even if that is true for only 1% of people, that leads to a 10X higher concentration of neoreactionaries at LW. At the very least, it seems that LW has a strong tendency to destroy strong political leanings, and especially affection for popular government-supporting narratives.
The impression I got from looking at their graph is that a strong libertarian component is enough by itself. It wouldn’t be the first time I’ve seen people consider libertarianism inherently very regressive.
Edit: Originally I assumed that it was accusing Less Wrong of being neoreactionary, but looking a bit around the site it looks like they might be praising it.
I don’t think that’s a powerful enough explanation. Setting aside the differences between libertarianism and neoreaction, there are far more libertarian-leaning blogs than that graph can account for, and many of the missing ones are more popular than we are.
I agree.
It might be worth noting that in this thread, the other thread where we just crossed paths, there are two different posters who blog at other nodes in that graph.
Hey everyone, I’m 26, and a PhD candidate in theoretical physics (four years in, maybe two left). I’ve been reading LessWrong for years on and off, but I put off participating for a long time, mainly because at first there was a lot of rationality-specific lingo I didn’t understand, and I didn’t want to waste anyone’s time until I understood more of it.
I had always felt that things in life are just systems, and for most systems there are more and less efficient ways to do the same things. To me, that is what rationality is: first seeing the system for what it actually is, and then tweaking your actions to better align with its actual rules. So I began looking to see what other people thought about rationality, and eventually ended up here. I lurked for years, and finally made the first step towards involvement during the LW study hall, which I participated in for several weeks as not_a_test5 during my working hours.
I was accepted last year into one of the CFAR workshops with an offer of about a 50% reduction in fees, but unfortunately, as a graduate student, it was still difficult for me to justify the cost when I am on a fixed income for the next few years and often spend exactly what I make each month. I would still like to attend in the future though, so hopefully once I graduate I will have the money and time. It will also help if some of the workshops are held on the east coast (where I live).
I’ve actually never read the quantum physics sequence; since I deal with quantum physics on a daily basis, I didn’t think I had much to gain. But as I look for places where I could contribute something to this site, I think that could be one place where I have an advantage over others, if there is further interest in the development of physics-based sequences.
(Unimportant edit: the name pan is a reference to the Greek god, particularly in the book Jitterbug Perfume, in case anyone has read it.)
CFAR is holding a workshop in New York on November 1-4 (Friday through Monday).
Just wondering what your area of research is.
Eliezer’s point is that his QM sequence, resulting in proclaiming MWI the one true Bayesian interpretation, is an essential part of epistemic rationality (or something like that), and that physicists are irrational in ignoring this. Not surprisingly, trained physicists, including yours truly, tend to be highly skeptical of this sweeping assertion. So I wonder if you ever give any thought to Many Worlds or stick with the usual “shut up and calculate”?
My research is in quantum optics and information, more specifically macroscopic tests of Bell’s inequality and applications to quantum cryptography through things like the Ekert protocol.
I didn’t realize that the quantum mechanics sequence here drew such conclusions; thanks for pointing that out. Maybe I’ll check it out to see what he says. I’ve given some thought to many worlds, but not enough to be an expert, as my work doesn’t necessitate it. From what I know, I’m not so convinced that many worlds is the correct interpretation; I think answers to the meaning of the wave function collapse will come more from decoherence mechanisms giving the appearance of a collapse.
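For anyone unfamiliar with the Bell-test work mentioned above, a back-of-the-envelope sketch (Python; illustrative only, not anyone’s lab code) of why those experiments matter: the singlet-state polarization correlations push the CHSH quantity past the bound of 2 that any local hidden-variable model must satisfy.

    # CHSH value for singlet-state polarization correlations,
    # E(a, b) = -cos(2(a - b)), at the standard analyzer settings.
    import math

    def E(a, b):
        return -math.cos(2 * (a - b))  # quantum prediction for the singlet state

    a, a2 = 0.0, math.pi / 4              # Alice: 0 and 45 degrees
    b, b2 = math.pi / 8, 3 * math.pi / 8  # Bob: 22.5 and 67.5 degrees

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))  # ~2.828 = 2*sqrt(2), above the classical bound of 2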
Forgive my ignorance, but isn’t that the official many-worlds position—that decoherence provides each “you” with the appearance of collapse?
Decoherence is a measurable physical effect and is interpretation-agnostic. “Each you” only appears in the MWI ontology. pan did not state anything about there being more than one copy of the observer as a result of decoherence.
That makes sense; are you a physicist, too?
Trained, not practicing.
Hi, I’m Chris Barnett.
I encountered HPMOR when I met Christopher Olah at Chez JJ, Mountain View in April 2012 during a networking expedition to Silicon Valley. I read for approximately 3 days straight. HPMOR took the place of Ender’s Game, which I’d only read a few weeks before, as my favourite fiction.
I joined the Melbourne LessWrong community in early 2013 and finished reading the sequences soon after. My favourite sequences are Epistemology, Quantum Physics and Words.
I started the first rationalist sharehouse in Melbourne with Brayden McLean, Thomas Eliot and Allison Rea in June 2013, completed the first Melbourne CFAR workshop in February 2014 and moved to Berkeley CA at 1pm on March 6th 2014 (via timezone teleportation :P).
I’m in the process of deciding where my time would best be spent to maximize the expected goodness of the future. I still have much confusion about how to read the output of my utility function for far future scenarios involving AI, brain upload, mind copying and consciousness-containing simulations, but I have a few heuristics such as less suffering is better, more exploration of possibility space is better, retention of human values in general (such as freedom, love, curiosity) is better. I’m strongly considering accepting a programming job with Rev, primarily for skill attainment and income, with the interestingness of their long term vision being a significant motivational bonus. I’m also exploring working at Leverage as a possibility and plan to network with people in crypto-tech and social choice theory. I’ve spent hundreds of hours designing a distributed reputation system which I plan to publish in the form of 1 or more white papers and a series of blog posts, the first of which is here: https://zuthan.wordpress.com/2014/03/08/reputation-is-powerful.
To respond to Jennifer’s request, I self-identify as an aspiring rationalist. I see this as prescriptive rather than descriptive: I aspire to be rational. I too use a general heuristic of not using labels on myself because most of them come associated with arbitrary baggage. Aspiring Rationalist seems well enough defined to be useful, though.
I’m Katy, I’m 26, I have a 7 month old baby (I feel that’s important because it heavily affects my current ability to think/sleep/eat/do anything) and a husband and … well, I never really thought about rationality until I came across Less Wrong.
I grew up always … wanting more. I believed in god, for a while, until I realised I was just talking to myself. I suffered from bipolar disorder (mainly depressive) from my early teens until … well, until I became pregnant, actually, when it mysteriously disappeared. I wanted to meet people who understood, who thought deeper, who questioned, who wondered. I came across Terry Pratchett, and I found his ideas within stories to be so wonderful, but met few people who had read (or enjoyed) his writing, and even fewer who ever found the concepts of “how” and “why” as intensely interesting as I did.
I studied a lot of different things at university—English, history, Antarctic Studies (I live in Australia so there was a course down in Tasmania), maths, physics, business … but most of my learning has been alone, through books or the internet or waking up at 2am and thinking “I wonder why that happens” and then going on an hours-long adventure through the internet.
When I got married, I got two lovely step-daughters in the package, aged 6 and 10, and introducing them to science and maths has really reignited my interest in learning again. Unfortunately this is slightly challenged by their mother, who is a bit of an unpleasant dullard (when the girls learnt the entire periodic table from a song I showed them on youtube, her response was “science is boring”). My husband and I also hope to home-school our daughter, and I want to be able to give her as much support as possible in whatever areas interest her, and ignite the love of knowledge that her father and I have.
I came across LW a few days ago and just instantly got drawn in—the form of the posts, the replies, the flow of logic and reason … it’s not only very educational, but inspires me to do better in my daily life. Sure, you don’t have to be particularly rational to change a nappy or feed a baby, but (for example) I was considering getting contents insurance and, after reading a thread here I thought “maybe I should approach this rationally, instead of just thinking that it seems like a good idea”, and went on to do some rough calculations and probabilities and approach it that way.
I don’t think I’ll be posting on any other threads any time soon—I’d rather read and learn and get a feel for the community than post a half-decent comment that doesn’t contribute much—but I figured it would be worth posting here to start with.
Welcome! You can probably contribute more than you realize.
Thanks! I hope so, in time—I just think it’s wiser to watch and learn so that I can understand how LW works and what specific terms and concepts mean before jumping in with what I think I understand!
Hello! I’m Alex. I’m an undergrad currently studying economics and finance in the Bay Area. I think I first heard about Less Wrong on TVTropes, of all places, which led me to HPMOR and then here. I bookmarked the site and forgot about it until pretty recently, when I came back and started reading articles and comments. I’m currently reading through the Major Sequences.
I’m very interested in economics and game theory, which definitely have a lot of overlap with rationality and behavioral science. Recently I’ve been learning computer programming as well. I guess I started to identify as a rationalist a few years ago, but there was never one set moment for me—it’s something I think I’ve always valued. I love to learn and read, and I suppose ideas involving rationality and cognition were just something that stuck out to me as interesting.
Other than that, I’m a big fan of Major League Baseball, and lately I’ve been attempting to write and record music. I’m definitely glad I found LW and am looking forward to reading more and hopefully being an active community member.
Also, I’m noticing quite a few similarities between the commenting and profile system here and the system on Reddit… anyone know if that was intentional?
Hi Alex, I’m Alex!
Less Wrong’s code is based off of Reddit’s system. Reddit made their code base open-source in June of 2008; Less Wrong then forked it.
Hi there!
I found HPMoR via TVTropes and then found LessWrong via HPMoR. I decided to hang around after reading the explanation of Bayes’ Theorem on Eliezer’s personal site and finding it quite nice. Also, it matched up with how I already thought of Bayes’ theorem. You could say that I got attracted to LW by confirmation bias. :)
On a more useful note, I got interested in rationality/etc. through a somewhat convoluted path. I got introduced to Bayes’ Theorem via Paul Graham when I built a website filter for a science fair project.
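(For anyone curious, the kind of filter Graham described in “A Plan for Spam” boils down to per-word spam probabilities combined through Bayes’ Theorem. A stripped-down sketch with toy counts, hypothetical data rather than Graham’s actual code:)

    # Naive-Bayes text filter in miniature: toy counts, hypothetical data.
    import math
    from collections import Counter

    spam_words = Counter("click free offer click winner".split())
    ham_words  = Counter("meeting notes lunch project notes".split())
    n_spam = n_ham = 5  # pretend we saw 5 messages of each kind

    def spamminess(word):
        # P(spam | word) under equal priors, with +1 smoothing
        p_w_spam = (spam_words[word] + 1) / (n_spam + 2)
        p_w_ham  = (ham_words[word] + 1) / (n_ham + 2)
        return p_w_spam / (p_w_spam + p_w_ham)

    def classify(message):
        # combine per-word evidence as summed log-odds
        score = sum(math.log(spamminess(w) / (1 - spamminess(w)))
                    for w in message.split())
        return "spam" if score > 0 else "ham"

    print(classify("free offer"))       # spam
    print(classify("project meeting"))  # ham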
My reading material also contributed heavily. I’ve also always been a fast and constant reader, so discovering the (FREE!) interlibrary loan offered by the University of California was a boon. Major nonfiction influences were cognitive science books (especially Dan Ariely) and books on how things/processes/systems work. I distinctly recall re-re-re-checking out a book on landfills and waste management in elementary school because it was long enough to be somewhat thorough and had enough photos to be interesting. Major fiction influences include books by Thornton Burgess, the Redwall series, and David Brin. I got introduced to the concept of fanfiction by the Redwall Online Community and spent many years in related activities, so it wasn’t too much of a leap for me to take HPMoR seriously. Getting keyword matches between Ariely and HPMoR kept me hooked, never mind the bit about arbitraging gold and silver, which I can’t believe Harry hasn’t tried doing by now.
Another thing that helped me take the ideas on Less Wrong seriously was my constant desire to re-examine my beliefs. For example, I’ve always been interested in the ideas in Christian apologetics.
As for where I started at LW, I can’t really say. I know I read stuff that confirmed what I already knew, like the posts about the Planning Fallacy. The first bit of new material was probably Mysterious Answers (and the other posts in its sequence).
Hello, Less Wrong users. My internet handle is Jen, and I’m here because the conversations are interesting and this feels like the natural next step to reading the sequences (still in progress, but I’m getting through them alright) and HPMOR (caught up).
I’m a seventeen-year-old high school senior in the Southern California area. My most notable interests are anime, economics, evolutionary psychology, math, airsoft (and real guns), and possibly something important that I’m forgetting but that should be mentioned. I grew up speaking Spanish and English, but the latter is the only one I’m fluent in. I’m currently in my fourth year of Japanese, and I know enough for conversation, but my Spanish is still better because of early acquisition and the like. One thing I should mention ahead of time is that my ADHD makes it difficult for me to focus on writing for long periods, so I often stop mid-post to do something else, and thus what appears below may seem somewhat fragmented.
I learned about this community through a friend on another website, and when I learned about HPMOR a couple of months ago, I read through it in about two weeks, which says something when you learn that I had not read fiction (outside of what was required in school) in over a year prior to this fan fiction. About a month ago, I started to read through the sequences, which intimidated me at first since Bayes’ Theorem is tossed at you right away, but once I got through my initiation, the rest was (or is, so far) not quite so overwhelming. Some posts feel obvious to me, others are in the category of “I’ve thought about this before but I could never articulate it,” and then there are the fun ones where I have an “Ah ha!” moment upon learning something new and genuinely interesting. I’m going through the Sequences as they are listed on the Sequences’ wiki page and am currently at the beginning of the Overly Convenient Excuses subsequence of How To Actually Change Your Mind.
As mentioned above, I’m in my senior year of high school, and since it’s the fall semester I’m currently focused on college applications and the like, so I can’t spend quite as much time reading and discussing things online as I’d like to, but I’m nonetheless trying to finish the Sequences, and after that I may start to read the copy of Thinking, Fast & Slow that’s been sitting on my bookshelf for the last three months, among other things.
I’m not a very poetic person, so I can’t provide a beautiful, elegant, graceful explanation of how rationality feels to me in my heart of hearts. I’m interested in rationality because I like being correct, and because there are systematic errors in my thinking that prevent me from being correct.
強くなりたい (“I want to become stronger”) and all that.
Hello and welcome to LessWrong!
No need to apologize for your writing. It seems clear and succinct to me. Glad to see you’ve been enjoying the literature so far. Maybe you’ll have a little of your own to contribute eventually. And yes, while Bayes’ Theorem is used somewhat as a “gatekeeper,” the Sequences are still highly relatable and not as intimidating as some people make them out to be.
Since you live in Southern California, you’re in prime LW territory; the Bay Area, a bit farther north, is a particular hive of LW activity. Since you’re still in high school and under 18, I don’t know how your age affects your ability to participate, but in a year or so, you might consider checking out your local LessWrong meetup or a CFAR workshop. They’re both good fun, great learning experiences, and fine ways to socialize with fellow rationalists.
Glad to have another polyglot on board. Our range of languages can sometimes be a little drab.
Anyway, glad to have you join the conversation! Hope to see you around.
Hi LW, my name’s Olivier. I’m a 37-year-old Canadian currently living in Ottawa. My background is varied: I have a BA in Communication Studies and an MPhil in Japanese Studies, but also a DEC (a special Quebec credential equivalent to the last year of high school and the first year of university in the rest of Canada and the USA) in Natural Sciences. I’ve owned a business, worked in cultural media, and am now a public servant working in immigration.
I’ve been interested in AI, existential risks, intelligence explosion, and the like for a number of years, probably since finding Bostrom’s paper on simulated reality.
I’m not 100% sure how I found LW, but it probably was while browsing for one of the topics above.
I’ve considered myself a rationalist for as long as I can remember, though I’ve long (rather naively?) called it “realism”. Also being an existentialist, I try to put these beliefs/convictions into practice in my work and in how I raise my children (we’ll see how that turns out!)
Browsing here, I’m glad to have found a community that sits somewhere between rigid academia and sensationalist media.
Anyhow, I’ll most likely lurk a lot more than I post. Having three young kids leaves me with little time, and a sleep-addled, rather incoherent brain.
Thanks for reading!
Hello and welcome to LessWrong!
Wow! That’s quite the background. Sounds like you enjoy dipping into every field. A useful virtue to have. You’ll find plenty of people here whose interests extend to every field they can devour. I’m sure you’ll have an interesting perspective to bring to the conversation!
AI, existential risks, and intelligence explosion are definitely central topics around LW. We’re something of a sister organization to MIRI, the Machine Intelligence Research Institute. I don’t know how familiar you are with them, but if AI interests you, I’d highly suggest giving them a look-see. Quite a few active LWers have worked with or at MIRI before, so cross-pollination is frequent.
Sounds like you’ve already started the work of applying rational techniques in your life. Good on ya! Many of us here are always working to improve what some call “the martial art of rationality” and make our own lives a little better planned, a little better executed. We’d love to hear some of your experiences. Especially with kids! Now that’s a problem that never gets solved!
We’re certainly glad to have you, and if you feel like joining the conversation, hop right in! You might check out the latest Open Thread for some casual talk. It’s a good place to start posting so you can get a feel for the community and its standards, and a great place to ask questions. Even though it’s an open thread, the conversation is serious and can even get pretty heated. If you’re interested in a little (lie: a LOT of) reading material, you can check out the Sequences, the main collection of LW posts covering and analyzing some of the most important topics on LW.
Whatever you do next, we’re glad to have you!
Welcome! Just in case you haven’t noticed yet, there’s a Less Wrong meetup in Ottawa.
Sequences rec seconded, they’re what formed the initial kernel of the Less Wrong community. There are many of them, so take them at a comfortable pace.
Thanks guys! A meetup would be great—I’m new to the area and don’t know too many people here.
I’ll try and slowly go through the sequences as recommended… Definitely looks interesting. I’m halfway through Bostrom’s Superintelligence right now (like most of the planet, it would seem!), so I’ll need more material soon!
Rationality with kids… It works and it doesn’t. A recent example: my son (5) is somehow afraid of zombies. I’ve been trying to have him look at this rationally: has he ever seen zombies in real life? Does he know anyone who has? Zombies often appear in stories with other mythical creatures: are those real? If they’re unreal and only appear in his dreams, what could he do about it? Maybe tell himself zombies don’t exist, so he must be dreaming? I am proud to say he has applied that last technique and told me that when they showed up, he knew they weren’t real. Problem solved? Partly. I still need to go through that same reasoning every night...
Skyler here, a 21-year-old technology student. Born and raised in the backwoods of Vermont to, ahem, philosophically diverse parents, I was encouraged to read pretty much every philosophical book the library had except for Ayn Rand. So naturally I gravitated towards that as soon as I became enough of a teenager, but apparently completely missed the antagonism towards non-geniuses and couldn’t for the life of me figure out why I seriously disliked every Objectivist I met.
About two years ago, I had a professor who introduced me to HPMoR, which I enjoyed immensely. It took me around a month to move to the sequences. They seem to have had the curious property of seeming perfectly obvious, like someone simply expressing what I already knew, just in better words; and while a lot of them do fall close in broad subject to things I’d written about before, the only use I’d had for Bayesian statistics prior to reading them was spam filters. (And then the author’s notes pointed me to Worm, which consumed a month or two.)
A couple of weeks ago, however, I encountered a post on SlateStarCodex (which I’d been reading after stumbling upon it through unrelated browsing) about trans people, and somehow around the same time got linked to Alicorn’s Polyhacking article. My positions were previously similar to the authors’ (I thought of both being transgender and polyamory as mildly wrong and not understandable), and both made a solid argument that actually changed my mind. This was not the “Oh, of course I knew that” of the sequences, but a “Huh. I thought that was wrong, but they have good points. Let me think for five minutes and see if there are any more arguments for or against I can think of now.” By the end of the respective days, I had a different opinion than I previously had, and was beginning to make changes in how I conducted myself because of one of them. In addition, they both seemed like interesting people I could relate to, and a community of such people could be really fun. (As opposed to Eliezer Y. That is, I can imagine having a conversation with these people, whereas if I were in a conversation with Eliezer Y. I would feel compelled to take notes.)
So yeah. I’m here to see how many other topics require me to change my mind, and to hopefully have cool conversations with interesting people. Any recommendations on where to start?
Also, I don’t know if “Typical mind and gender identity” is the blog post that you stumbled across, but I am very glad to have read it, and especially to have read many of the comments. I think I had run into related ideas before (thank you, Internet subcultures!), but that made the idea that gender identity has a strength as well as a direction much clearer.
A combination of that post and What universal human experiences are you missing without realizing it? actually. I would say that I am strongly typed as male, strong enough that occasionally I’ve been known to get annoyed at my body not being male enough. (Larger muscle groups, more body hair, darker beard, etc.) Probably influencing this are the facts that Skyler is the feminine form of my name, and that puberty was downright cruel to me. As you say, it’s not common to think of being strongly or weakly identified with your own sex, rather than just a binary “fits/doesn’t fit” check.
I’m afraid I haven’t been active online recently, but if you live in an area with a regular in-person meetup, those can be seriously awesome. :)
Meatspace meetups sound like a good deal of fun, and possibly a faster route to being part of the community than commenting on articles that I think I have something to add. Downside is, I’m currently in Rochester New York, and unless I’m misusing the meetups page somehow, looks like the closest regular meetup is in Albany. That’s a long bike ride. :) If anybody is in Rochester, by all means let me know!
Hi. I’m Baisius. I came here, like most, through HPMOR. I’ve read a lot of the sequences and they’ve helped me reanalyze the things I believe and why I believe them. I’ve been lurking here for a while, but I’ve never really felt I had anything to add to the site, content-wise. That’s changed, however—I just launched a blog. The blog is generally LW-themed, so I thought it appropriate. I wouldn’t ordinarily advertise for it, but I would particularly like some help on one of the problems I explored in my first post. (See footnote 3.)
One of the things that’s bothered me about PredictionBook, and one of the reasons I don’t use it much, is that its analysis seems a bit… lacking. In the post, I tried to come up with a rigorous way of comparing sets of predictions to see which are more accurate. I did this by looking at the distribution of residuals (outcome minus predicted probability) for a set of predictions. The odd thing was that when I looked at the variance, the inverse of the variance showed some very odd patterns. It’s all there in the post, but if anyone who knows a bit more math than I do could explain it, I’d really appreciate it.
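(To give a rough idea of the approach, here is a minimal Python sketch; the predictions are invented for illustration, and this is not the actual code from the post.)

```python
import numpy as np

# Each prediction is (probability assigned, whether the event happened).
# These five predictions are invented purely for illustration.
predictions = [(0.9, True), (0.7, False), (0.2, False), (0.6, True), (0.8, True)]

# Residual = outcome (1 or 0) minus the predicted probability.
residuals = np.array([float(outcome) - p for p, outcome in predictions])

print("mean residual:", residuals.mean())     # near 0 for a well-calibrated set
print("residual variance:", residuals.var())  # the quantity whose inverse behaved oddly
```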
Welcome!
For assessing prediction accuracy, are you familiar with scoring rules?
I wasn’t, thanks. I’ll try to read that sometime when I get a chance. At first glance, though, I’m unsure why you would want it to be logarithmic. I thought about doing it that way too, but then you lose the meaning associated with average error, which I think is undesirable.
So, let’s say you want a scoring rule with two properties.
You want it to be local: that is to say, all that matters is the probability you assigned to the actual outcome. This is in contrast to rules like the quadratic scoring rule, where your score is different depending on how the outcomes that didn’t happen are grouped. Based on this assumption, I’m going to write the scoring rule as S(p), where S(p) is the score you get when you assign a probability p to the true outcome.
You also want it to play nicely with combining separate events. That is to say, if you estimate 10% of it being cloudy when it actually is, and 10% of it being warm outside when it actually is, you want your score to be the same as if you had assigned 1% to the correct proposition that it is warm and cloudy outside. More succinctly: S(p)+S(q)=S(pq).
If you add the caveat that not every score is 0 (plus a mild regularity condition such as continuity), then the above requirements force a logarithmic scoring rule, S(p) = c·log(p). Interestingly, you don’t need to include the requirement that it be a proper scoring rule, although the logarithmic scoring rule is proper.
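Here is a minimal sketch (in Python, with arbitrary numbers) of that combining property for the logarithmic rule S(p) = log p:

```python
import math

# Logarithmic scoring rule: the score for assigning probability p
# to the outcome that actually happened.
def log_score(p):
    return math.log(p)

# Score two independent events separately...
separate = log_score(0.1) + log_score(0.1)

# ...or score the joint outcome, to which you implicitly assigned 1%.
combined = log_score(0.1 * 0.1)

# Equal up to floating-point rounding: S(p) + S(q) == S(p*q).
print(separate, combined)
```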
I’m Anthony. I found out about Less Wrong from Overcoming Bias, and I found out about Overcoming Bias about 2 years ago when Abnormal Returns, which is like a sampler of all kinds of posts from the econ-blogosphere, linked to Overcoming Bias.
I had previously decided that the singularitarians were crazily optimistic. I thought they were all about the future being unimaginable goodness all the time. I guess that was my interpretation of Kurzweil. I thought they were unrealistic about the nature of reality. I don’t believe that the singularity will hit in a few decades, or at least I don’t understand the arguments well enough to think that yet, but it is an interesting topic.
I used to be part of an Objectivist campus club at CU Denver. And then an Objectivist magazine promoted the idea of nuking Afghanistan in response to 9/11. And also I discovered Michael Shermer’s “Why People Believe Weird Things”, especially the chapter calling out Objectivism as a cult. I fought against the idea of Objectivism being a cult for a long time, but then I started to be convinced, and I eventually abandoned Objectivism completely.
But reading HPMOR, the sequences, and some of the other posts here has been really informative and fun. I especially liked the Quantum Mechanics sequence; it really cleared up some of my fogginess on the subject, and made me want to know more. I am now working through “The Structure and Interpretation of Quantum Mechanics”. Just the linear algebra in the latter half of Chapter 1 goes way beyond anything I learned in college, so it is still slow going, but I have learned a lot about linear algebra (projection operators, how to take the norm of a complex-valued vector, etc.)
I live in the northern Lower Peninsula of Michigan. It’s pretty rural up here. There aren’t many jobs in IT around here, but I have one of them. It’s a lot less specialized than I’m sure most IT jobs are. I do purchasing, PC support, in-house app programming, printer support, and on and on. I’m in the middle of a difficult programming project that’s taken 2 years, because I am the only programmer here and I can’t spend full time on the project.
I see that there was recently a meetup in Detroit. I might have to make the drive south for the next one, if there is another one.
Anyway, I decided it was time to get more involved and learn more actively. So I registered rather than continuing to lurk.
Good for you. Checking multiple sources is very rational :) If you get stuck, the Freenode ##physics IRC channel often has physics undergrad and grad students around to help with the technical stuff, though discussing interpretations is generally not encouraged.
I will definitely check that out. Thanks.
My other thought is to also get a linear algebra book that covers infinite-dimensional vector spaces.
This is useful for, say, the hydrogen atom or the simple harmonic oscillator, but you can learn a lot just from spin-1/2 quantum mechanics, which is quite finite-dimensional. It is sufficient for all of quantum information, EPR, Bell inequalities, etc. If you are interested in “quantum epistemology”, Scott Aaronson’s Quantum Computing since Democritus is an excellent read and would not overtax your math skills.
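(To give a sense of how small “finite-dimensional” is here, a rough numpy sketch of the spin-1/2 basics fits in a few lines. This is my own illustration, not something from either book; it also shows the complex-vector norm mentioned above.)

```python
import numpy as np

# Pauli z matrix: a single spin-1/2 observable with eigenvalues +1 and -1.
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet Bell state (|01> - |10>)/sqrt(2), the entangled pair behind EPR/Bell tests.
ket01 = np.kron([1, 0], [0, 1]).astype(complex)
ket10 = np.kron([0, 1], [1, 0]).astype(complex)
bell = (ket01 - ket10) / np.sqrt(2)

# Norm of a complex vector: sqrt of the conjugating dot product <v|v>.
print(np.sqrt(np.vdot(bell, bell).real))           # 1.0

# Perfect anticorrelation: expectation of sz (tensor) sz in the singlet is -1.
print(np.vdot(bell, np.kron(sz, sz) @ bell).real)  # -1.0
```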
I’m Tom, a 23-year-old uni dropout (existential apathy is a killer); I majored in Bioscience, for what it’s worth. I saw the name of this site while browsing TVTropes and was instantly intrigued, as “less wrong” has always been something of a mantra for me. I lurked for a while and sampled the sequences, and was pleased to note that many of the points raised were ideas that had already occurred to me.
It’s good to find a community devoted to reason, one that actually thinks where most people are content not to. I’m looking forward to suckling off the collective wisdom of this community, and to hopefully making a valuable contribution or two of my own.
Hello and welcome to LessWrong!
We have something of a cross-pollination with TVTropes as well as a few other sites. The similar “archive diving” structures probably don’t hurt.
Glad you decided to join in! The site can always use some bioscience to balance our large computer science population. I look forward to seeing your contributions.
Hello, I’m Ary: 24 going on 25, mostly agender, female-presenting, asexual. I’ve been doing a lot of self-improvement and ‘soul’-searching over the past few years and finally stumbled across HPMOR while burning my way through HP fanfiction. From there, it was only a matter of looking at its author page to find links here to LessWrong.com. For the last three weeks I’ve been reading my way through the Sequences, starting with the Core Sequences.
Late last week I managed to start on the How to Actually Change Your Mind sequence, which is proving to be an interesting and challenging read. Today I reached the Belief in Self-Deception post and started to feel my mind beginning to really spin. Having continued past that, still thinking, it seems that for far too long I’ve been professing my beliefs without believing. It may take a bit before I manage to shuck the habits brought on by that line of thinking, but that’s the point of reading these—breaking bad mental habits and learning to think better and stronger.
A lot of that desire comes from having read and re-read (multiple times) HPMOR and developing a need to be more like Harry. Reading the Sequences is also helping me regain my sense of curiosity, which had been terribly stunted thanks to years of formal education and a closed-minded living environment.
Education-wise I’ve dabbled in computer science, astrophysics, genetics, and theatre, and completed degrees in criminology/criminal justice and cybersecurity (which is my intended career field). I’m always running out of reading material, so I’m always reading and picking up new things to study.
My username is in reference to the Animorphs series of books, which I loved dearly as a child, and still hold dear despite the now-blatant tedium of some of the books.
Hi. I’ve actually been lurking here for a couple months now, but I’ve recently started actually making comments, so I figure this is probably the right time to introduce myself. (Also, I only discovered this post this morning.)
Since I’m not great at expressing my thoughts in an aesthetically-pleasing fashion without the use of lists, I suppose from here I’ll just go down the list of suggested topics of introduction from the beginning of the post.
Who I am: The name I generally go by online is Mister Tulip. I’m sixteen years old, but getting older at a rate of approximately one year per year. Thanks to the conveniences of homeschooling, I have far more free time than seems to be typical for my age-range, which I expend on a large-feeling collection of time-sinks which isn’t actually particularly large whenever I write it down.
What I’m doing: Receiving a general education from my father, attending an introductory psychology course at the nearest community college once per week, and spending my exorbitant amounts of free time on anything which interests me enough to occupy it. Among my time-sinks are keeping track of two large fandoms (My Little Pony: Friendship is Magic and Homestuck), playing video games, watching movies (which used to be much less of a time-sink, but has become one since my family signed up for Netflix), reading TVTropes, and as of about a month ago, reading stuff on Less Wrong.
What I value: I’m not really sure. My values are inconsistent in weird ways which make it hard to actually analyze them (for instance, viewing Total Utilitarianism as a Good Thing, but not aspiring to actually follow its principles). I suppose the strongest statement I can make about my values with any confidence is that I’m some sort of consequentialist, but that’s not a narrow enough category to be of much use to me.
How I came to identify as a rationalist: I started reading Harry Potter and the Methods of Rationality last year (or, if you want to get technical, one year and sixteen days ago), and realized, “Hey, this is basically an idealized model of how I wish I could think!” (Off-topic question: What’s the general consensus on how to do punctuation at the end of inline quotes like that? I’ve never quite figured it out). I didn’t immediately identify as a rationalist after that, since I didn’t feel like I really thought that way, but it made it easy enough to slip under the metaphorical umbrella that I didn’t even notice the point at which it happened.
How I found Less Wrong: As mentioned in the above paragraph, I read HPMOR about a year ago, and it made me aware that LW existed, although I didn’t really read much on it, and eventually re-forgot about it. I then read Luminosity about two months ago, and it once again linked to Less Wrong; that time, I stuck around for long enough to read a few posts (chiefly the luminosity sequence), then forgot about it. The final straw was reading Friendship is Optimal about one month ago, which linked back to LW again; that time, the pages I read managed to convince me that the site was interesting enough to be worth investing larger portions of time into. And here I am now, investing over an hour of writing time on a single comment here.
I’m glad Luminosity was a stepping-stone on your meander here :)
Hi! I’ve been lurking around on the blog, and I look forward to engaging actively from now on. Generally, I’m strongly interested in AI research, rationality in general, Bayesian statistics, and decision problems. I hope that I will keep on learning a lot and will also contribute useful insights to this community, because what people here are trying to do is very valuable! So, see you on the “battlefield”. Hi to everyone!
Hi, I’ve been lurking for a while. I haven’t yet read most of the sequences, since I find the style not so much to my liking. I prefer textbooks, so I’ll probably go out and get the textbooks on this list or this one instead. I read somewhere on this site that Thinking and Deciding is pretty much the sequences in book form. I did read HP:MOR though—brilliant!
In the meantime, I’ve read a decent amount on LW-related subjects, including the following books on rationality:
Thinking, Fast and Slow by Daniel Kahneman
Everything Is Obvious Once You Know the Answer by Duncan Watts
The Righteous Mind by Jonathan Haidt
The Signal and the Noise by Nate Silver
How to Lie With Statistics by Darrell Huff
Thinking Statistically by Uri Bram
Another interest is futurism, on which I’ve read the following:
The Singularity Is Near by Ray Kurzweil
Abundance by Peter Diamandis
The Future by Al Gore
The New Digital Age by Eric Schmidt and Jared Cohen
Big Data by Viktor Mayer-Schönberger
Approaching the Future by Ben Hammersley
Radical Abundance by Eric Drexler
I’m also very interested in positive psychology and behavioral change. Good books I’ve read on this include:
Flourish by Martin Seligman
Happiness by Ed Diener
The How of Happiness by Sonja Lyubomirsky
The Happiness Hypothesis by Jonathan Haidt
Stumbling on Happiness by Daniel Gilbert
Pretty much everything by Gallup, especially the books by Tom Rath
The Power of Habit by Charles Duhigg
Self-Directed Behavior by David Watson and Roland Tharp
Finally, I’ve read quite a bit about business, including about half of the excellent Personal MBA reading list.
So, my review of Thinking and Deciding claims that T&D is a good introduction to rationality. One of the comments there is a link to Eliezer’s comment that Good and Real is basically the Sequences in book form.
The two are about different topics: T&D is about the meat of rationality (what is thinking, biases, hypothesis generation and testing, values, decision-making under certainty and uncertainty), whereas G&R is about the philosophy of reductionism, focusing on various paradoxes, like Newcomb’s Problem. For reasons that I have difficulty articulating, I found G&R painful to read, but I appear to be atypical in that reaction. (I liked the Sequences, and so if you disliked the Sequences my pain might be a recommendation for G&R!)
A primary value of the Sequences, in my opinion, is the resulting philosophical foundation: many people come away from the Sequences with the feeling that their views haven’t changed significantly, but that they have been clarified significantly. I don’t think one gets that from T&D (whereas I do think that T&D is much more effective at training executive nature / facility with decision-making than the Sequences).
Thanks. I already had Good & Real on my reading list, but based on this I think I’ll bump it up to higher priority.
On second thought, I might as well post my career deliberations here, and if it generates a lot of comments (I hope) then I’ll move it to a new post as recommended. Not sure it’s correct protocol to reply to my own comment, but I’ll do it anyway.
So here are my career thoughts:
As I said, I’m currently working for a small company in a business development capacity. It’s not really the type of work I enjoy, so I’m considering going back to school to follow my dream of becoming a researcher. However, I’m very concerned about the time commitment involved.
My current work allows me lots of wonderful free time to spend on family, friends, hobbies, and leisure activities. This kind of lifestyle is very important to me, and if becoming a researcher means giving it up then I’d rather stay where I am or look for a secondary alternative. Anything more than a standard 40-hour week is pretty much off-limits to me. (OK, maybe 45 hours if absolutely necessary, but definitely not more than that.) That includes all studying time, all online or offline networking time, and all other time related to study or work.
On the other hand, I’m willing to work hard and my current financial situation allows me to work for relatively low pay (30-40K, maybe even a drop less). Also, I’m willing to push off earning any money at all while I go back to school to earn my degree. I’m also willing to take out loans if necessary—and it’ll probably be necessary, since I don’t have more than a couple of introductory college classes under my belt.
The standard research career seems to involve getting a PhD and then moving into an academic position or joining an independent research institute. I’ve been told contradictory things about how much time commitment is required for academic jobs of this type. The general consensus on the internet seems to be that a research career is pretty much all-consuming and the work will take up at least 50-60 hours per week. Some of the academics I know concurred with this. They implied it might get more manageable at some point down the road, but that’s a big might and a long way down the road.
Other academics told me that if I’m good about it it’ll only be crazy for a year or two while getting my PhD, but after that it’s much more manageable. That’s not ideal for me, but I think I can handle 1-2 years of crazy schedule. Still others told me that if I’m really good about it and stand my ground, I’ll probably be able to get away with doing a regular 40-hour workweek.
There seem to be a lot of LW PhDs—what’s your opinion about this?
There are two other research alternatives I’ve considered:
(1) Teach the material to myself. This would allow me to set my own schedule and I wouldn’t even have to take out loans to go to college. On the other hand, it might be very hard for me to get a job down the line, and I wouldn’t even be getting the standard grad student stipend in the meantime.
(2) Switch to a part-time job and study the material by myself on the side. This might seem to be the best idea, but I’m concerned that it’ll take me a really long time to study all the material I’m interested in, considering that I don’t want to spend more than 40 hrs/wk on studying + working.
The particular research areas I’m interested in are as follows:
Rationality: I’m not sure how many research problems are left in this area, but from my readings to date I’d guess there’s still definite room for improvement. Maybe I could get a job at CFAR or something like that.
Computational models of rationality: I’d love to be able to create software that can model and apply rational thinking and decision-making. I think that’s basically data science or maybe AI research—correct me if I’m wrong. In any case I think it would be a smart career move for me to learn computer science & programming, just in case my financial situation isn’t quite as good down the line and I need to have some good marketable skills (I have some now, but I think programming is more along the lines of what I’d enjoy doing).
Applying rationality to specific fields: There are numerous fields that I’m interested in where the research seems to be a real mess. Lots of poorly-done science, lots of faulty statistics, lots of extrapolating unsupported conclusions from inconclusive evidence. I’d love to be able to apply the principles of rationality and rigorous statistics to improve the level of knowledge in those fields. The fields I’m most interested in are positive psychology, social psychology, and educational psychology.
Applying rationality to policy issues: Maybe I could help policy makers (even if only on the local level) by applying rational thinking and decision making to help them create better policies. This would involve learning to influence (manipulate?) otherwise irrational people, and manipulation is something that I loathe, but I think the gains are probably worth it. In particular, I live in a wonderful, close-knit community that nevertheless has some serious poverty and education issues, so I’d love to be able to help solve those issues.
What do you think? Is there some way I can become a rationality researcher and still keep a 40-hour workweek?
Hello. My name’s Graedon. I’m 16, and I’ve got absolutely no idea of what I’m doing.
First off, I probably ended up on this site the same way a lot of people did: through MoR. I started reading it for fun, but soon the cool sciency stuff started to appeal more than the cool magicy stuff. I followed the link to LessWrong.com, and here I am.
Lurking.
That’s pretty much it.
Hi, everyone. I’m Lawrence and I’m a college freshman. I like to read, program, and do math in my spare time.
I grew up in the Bay Area with science and religion as my two ideals. My family was religious and went to church every Sunday, but at the same time they put a strong emphasis on learning science—by the time I was in fourth grade, the science books my parents bought me (and that I read) filled an entire bookshelf. I loved religion because I felt like it gave meaning to the world, teaching us to be kind and to respect one another. But, perhaps paradoxically, that made me love science as well, for science gave us medicine, technologies, and other ways to help the poor and heal the sick, things that God commanded us to do.
My faith in religion took a hit in 5th grade, when a close family member was diagnosed with cancer. Neither the prayers of our Christian friends nor the medicine of her doctors helped. We moved to China to pursue alternative treatment, but in the end nothing could save her, and she passed away. I pleaded with God to bring her back, to enact some miracle. No miracles happened. Some of our Christian friends told us that it was all God’s plan, and that she was with God now. But I remember asking myself: if God is so great, why did He cause us so much suffering?
I asked myself this question, and I found no answer. I read the Bible again and I looked online, but still, no answer. In fact, I found many arguments against the existence of God, and against my faith. Most famous scientists, I discovered, also didn’t believe that God existed. And so I slowly, painfully moved away from my faith.
Having turned myself away from God, I devoted myself to doing good in the world. I resolved to help end suffering, I told my family. They called me crazy. The suffering in the world wasn’t going to end itself, I retorted angrily. They were amused. After that, I weighed the options before me: either I could study science, and maybe invent something that could help the world, or I could try to become rich, and then donate my money to charities and researchers who could then help the world. I chose the latter. So I set down my science books and picked up economics books and biographies.
However, I always felt there was more time. After all, I was making some money off my investments, I read a lot more books than most of my peers, and I had taught myself calculus by 8th grade. My classes were easy. I started slacking off. I stopped reading as many books as I used to. I am ashamed to say this, but I lost my ambition. It was only through a combination of talent, prior knowledge, and luck that I managed to make it all the way through middle and high school.
I discovered LessWrong around December of last year, through HPMOR. I quickly tore through all the sequences in less than three months. Boy, did it have an effect. The things said here resonated with me. After reading Challenging the Difficult, I realized how far I had to improve, and how complacent I had become. After How to Actually Change Your Mind, I looked out at the world and saw how many problems there were to fix. After reading My Coming of Age, I felt that spark again, the will to do good in the world and to fight against poverty, ignorance, and death.
LessWrong made me panic, because it gave me a sense of how great these problems are. It also gave me hope, because it showed me a path to self-improvement. It was the first time I felt truly awed and outclassed, but also really motivated. Truly, there would be no god to save us. If we don’t work hard enough, if we aren’t smart enough, we can and will die.
Today I’m trying to improve myself. I’ve been doing two hours of math a day—I am almost done with multivariate calculus and am looking to begin probability theory soon. I finished a course on R a while ago and am halfway through Learn You a Haskell for Great Good. Like Harry at the end of HPMOR, I am climbing the power ladder, albeit from very far down.
People ask me sometimes, what motivates you? Why don’t you go out and have fun? And to them I reply with a quote from John Donne. “Any man’s death diminishes me, because I am involved in Mankind; And therefore never send to know for whom the bell tolls; it tolls for thee.”
I am involved in mankind. I’m going to fight for it, and I’m not going to give up until we reach the stars or die trying. It’s not going to be easy. I know it’s not. But it’s not a fight we can give up on.
I look forward to contributing here!
Hello and welcome to LessWrong!
Thank you for sharing your story. Your passion is quite clear and I’m glad you’ve decided to join in the conversation. Your drive is impressive. And infectious. It’s the sort of energy we (or, at least, I) feed off of around here. You will definitely find people who share your need for desperate action. I’m curious what your current plans after college are. Do you have an idea what it is you want to do with your skills and knowledge? You seem to already have the “get rich” thing sewn up, so that it’s no longer your main goal.
Have you looked into some of the sister organizations LW associates with? It sounds like you’re the type who likes to get involved, so a CFAR workshop or MIRI internship might be something you would get a lot out of. There are also LessWrong meetups, which are great for meeting other LWers, having some good discussion, and having a little fun on the side.
Glad to have you join the conversation! Hope to see you around.
Thanks! Unfortunately I’m not sure if I’m good enough at math for an MIRI internship. Also, I don’t think there are any CFAR workshops in my area, especially any during break. :P
I’m not sure about what I’ll do after college—I’ve looked through most of the 80k Hr career options, but still can’t decide between earning to give via quantitative trading/consulting/investment banking, tech entrepreneurship, and research.
Hi, I’m Ian. I am a 32-year-old computer programmer from Massachusetts. My main interest (in computer science) is in the realm of computational creativity, but it is by no means my only interest. For half my life, I’ve been coming up with my own sets of ideas—way back when it was on Usenet—some ideas better than others. Regardless of the eventual proven validity of my ideas, I find coming up with original ideas one of the primary motivators in my life. It is an exercise that allows me to continuously uncover beliefs and feelings and uncharted territory that wouldn’t be possible for me to explore otherwise. Also, I find it remarkably difficult to find people to share and dissect my ideas with. Generally, people either tell me that I’m smart (I’m not particularly smart) or weird (I’m not particularly weird). In either case I find most people also don’t want to continue talking about why wasabi and thunder are the same thing...or the relationship between creativity, intelligence, primes and small worlds...or why there is no such thing as a question...or why I’m a non-practicing atheist at the moment. What I hope to get out of this community is disagreement, agreement, new ideas, a reshaping of old ideas, friends, and above all, to know that other people in this world understand my ideas (even if they disagree with them). I hope to give this community some ideas they have never thought of.
Hello and welcome to LessWrong!
I admire your reasons for joining. It is easy to find a group or circle that does not challenge you and then rest on your laurels. Seeking out disagreement and criticism is a hard first step for a lot of people. But don’t worry… you will certainly find both here! Not that that is a bad thing.
I see you’ve already added to the Discussion forum. Good on you for diving in and starting some new conversation. If you have some ideas you want to share and get critiqued but feel they are not fully formed enough for a post of their own, try the Open Thread. Even Open Thread conversations can be quite engaging and constructive (and heated! Don’t forget heated).
Also, I don’t know if you’ve read any of the LW literature people tend to reference, but, given your interest in refining your ideas, this set of posts might interest you.
Thanks for the guidance. It can be intimidating exposing your ideas to a new set of people. I’ve been reading things here on LW off and on for roughly a year. There is quite a bit of jargon on this site, and I’ve been reading through as many sequences as I have time for to try to fill myself in. I find that even concepts I’m familiar with tend to have sub-context here that doesn’t quite allow me to fully understand some of the ideas being discussed. I have a fairly good grasp of map versus territory, for example, but my understanding comes by way of The Precession of Simulacra by Jean Baudrillard, where he argues that the territory no longer exists and only the map is real. That is quite different from the arguments I’ve seen here postulating that we can somehow gain access to the true underlying territory. Regardless, I expect that with enough reading, I’ll be able to contribute. I was a chef for 17 years, so heated debates don’t intimidate me; I have a thick skin. I ask that people understand the ideas I have, not agree with me. I will give others the same courtesy. Again, thanks for the welcome. I’ll check out the links. Cheers.
(Aside: I’m trying to become more concise and articulate in my writing, so I welcome anyone and everyone to critique my postings. I know this post is long-winded when compared to its neighbors. I left it long since it took me a number of words to relate where I came from, which I imagine to be more interesting than the TL;DR version, which goes something like, “My name is Ben. I used to be a devout Christian, then I was drug-addled and irrational in myriad ways. Now, I know some mathematics, but not a ton, and I’d like to learn more of the math I like and continue working on thinking less irrationally.” )
My name is Ben. I’m 23 years old, and I live in the southeastern USA. I moved back here to attend university after spending a few years working on the west coast. Perhaps you’ve had a friend who had another friend, and this second friend turned your friend on to the idea that some of this or that would teach them something about this. I’ve been this person, and my road to rationality began with going a little loopy after a little too much of this, which came out of this.
I grew up in Mississippi. I was nursed on Jesus, Calvin, hellfire, brimstone, and Coca Cola. The lessons in homophobia were more explicit than the ones in racism. Also, lots of video games. This generated a critical mass (sorry physicists) of cognitive dissonance that led to me leaving the church and nearly dropping out of secondary school. I tried a semester of college. The courses presented were there for reasons bureaucratic more so than anything else. I wasn’t sure what I was doing there, so I left and ended up in a rural community on the west coast. That’s where the drugs happened, and it’s where the drugs ended. Apparently, toads from a gas station parking lot contain several things at least as weird as the thing I wanted them to contain. I adapted to the physical side-effects over time, but the immediate impact on my cognition was overwhelming at the time. Things previously innocuous would now keep me from sleeping for a week and stir up a great deal of existential dread. I came home for a few months. I spent a lot of time outdoors running and doing construction work, and I spent a lot of time sitting alone in my room with my mind. I learned to meditate, and I started reading again. My brain was having a hard time sticking to reality at the time, and I was very scared. I knew I had been foolish and wrong. I knew I was lacking in discernment. What I didn’t know was where to start with developing my thought to be more rational.
I went back out west for maybe five months the next year. I got back in touch with some folks in the San Francisco Bay Area. I stayed away from drugs, but I could tell in my conversations and actions that I was still missing something.
I returned again to Mississippi at the beginning of 2012 to get back into school, which I had dropped out of in 2008. I started in biology, but our program wasn’t remotely quantitative, which is rubbish if you aren’t trying to go to med school. After a semester of this, I read HPMoR as it existed at the time at the suggestion of a relative. I also discovered LW as result. I lurked a bit, but I was spending most of the summer camping, which made it hard to keep up with web content. I forgot about LW as a community for the most part, but it played a big role in getting me to think more about how I thought and in inspiring me to change majors and largely ignore biological science for the time being.
I switched to mathematics, with no significant background in the area, a little less than two years ago. Since then, I’ve completed all but the three capstone classes for our curriculum, which I need to pick up over the next year. I went through my curriculum in a bit of a rush. I took summer classes. I took some “upper level” classes early. I hurt myself by memorizing passwords for a few classes I was less interested in, but the upside is that I was making time to learn to program. (UPDATE: This absolutely applies to the preceding sentence.) I developed an interest in theoretical computer science and in general abstract nonsense, but I’m only just now making time to actually familiarize myself with the subjects. Now, I’m trying to be more objective about my learning process. I’m recognizing my weaknesses, which are plenty. You see, it’s still been less than a year since I drew my first little QED box.
A little over a month ago, three things happened that led to me gaining a great deal of direction that I was lacking. First, I realized that my undergraduate studies were almost over, although they would be stretched out over the next year, and that I’ve hardly managed to scratch the surface of mathematics. Second, I found that the Singularity Institute had become MIRI, saw the course list, and then found Nate’s posts on productivity. Third, I found the Less Wrong Study Hall, where you can find me as simply “ben.”
I was thrilled to see that there was actually a call for caring about this sort of material (apart from my own interests), and I was inspired by Nate’s independent study endeavors. The study hall was my first step towards getting involved with the community, and I found it to be greatly rewarding both socially and in terms of productivity.
My school doesn’t offer many classes of the sort I’m interested in, and I don’t feel that I have sufficient experience to apply to graduate programs that do. As a result, I’ve started tackling the MIRI course list on my own time. I’m loving it, and I would love to discuss it… especially talking about setting reasonable expectations given my position as a relative novice. I’m presently working through material on probability models and discrete mathematics first, as it’s what I have the most past experience with, but I’m also getting into areas pertaining more explicitly to theoretical computer science and mathematical logic.
Thanks for taking the time to read my little account of things. I look forward to getting to know you as well!
Interesting stuff. FYI, you’re not the only LWer I know of who has experienced apparently permanent mental problems as a result of drug use. And reading drug-related subreddits, I’ve noticed that everyone seems really stupid. So yeah, everything in moderation.
Hello! I’m a 19-year-old woman in Washington state, studying microbiology as an undergraduate. I was introduced to the “scene” when a friend recommended HPMOR in high school. I was raised in an atheist household with a fairly strong value on science, so it was novel if not mind-blowing, but it still encouraged me to think about the way I think, read some of the Sequences, and get into Sam Harris and Carl Sagan. At college I began reading the rest of Less Wrong, and some related sites, and began identifying as a rationalist.
(Well, let’s be honest here- I also moved from a math-and-science-oriented high school to a very liberal college, where my social identity changed from “artsy and literary” to “science-y and analytic”. I would be lying if I said that trying to live up to it wasn’t a compelling factor!)
LW and 80,000 Hours also motivated me to change several of my long-held beliefs (at the moment, I can think of immortality and, well, er, most areas of rationality, which I guess is expected), and to re-evaluate my career plans, changing my intended focus from environmental research or emerging diseases to neglected tropical diseases (if this happens to be anyone’s area of expertise, I’d be interested to hear!)
Anyways, I’ve been integrating the website into my head for some time now, and, equipped with the moniker of my favorite family of wasp, think it’s about time to (begin, very slowly, to) integrate my head into the website. Nice to be here!
Welcome to LW!
Hi folks
I am Tom. Allow me to introduce myself, my perception of rationality, and my goals as a rationalist. I hope what follows is not too long and boring.
I am a physicist, currently a post-doc in Texas, working on x-ray imaging. I have been interested in science for longer than I have known that ‘science’ is a word. I went for physics because, well, everything is physics, but I sometimes marvel that I didn’t go for biology, because I have always felt that evolution by natural selection is more beautiful than any theory of ‘physics’ (of course, really it is a theory of physics, but not ‘nominal physics’).
Obviously, the absolute queen of theories is probability theory, since it is the technology that gives us all the other theories.
A few years ago, during my PhD work, I listened to a man called Ben Goldacre on BBC radio, and as a result stumbled onto several useful things. Firstly, by googling his name afterwards, I discovered that there are things called science blogs (!) and something called a ‘skeptic’s community.’ I became hooked.
The next thing I learned from Goldacre’s blog was that I had been shockingly badly educated in statistics. I realized, for example, that science and statistics are really the same thing. Damn, hindsight feels weird sometimes—how could I possibly have gone through two and a bit degrees in physics without realizing this stupendously obvious thing? I started a systematic study.
Through the Bad Science blog, I also found my way to David Colquhoun’s noteworthy blog, where a commenter brought to my attention a certain book by a certain E.T. Jaynes. Suddenly, all the ugly, self-contradictory nonsense of frequentist statistics that I’d been struggling with (as a result of my newly adopted labors to try to understand scientific method better) was replaced with beauty and simple common sense. This was the most eye-opening period of my life.
It was also while looking through professor Colquhoun’s ‘recently read’ sidebar that I first happened to click on a link that brought me to some writing by one Dr. Yudkowsky. And it was good.
In accord with my long-held interest in science, I think I have always been a rationalist. Though I don’t make any claims to be particularly rational, I hold rationality as an explicit high goal. Not my highest goal, obviously – rationality is an approach for solving problems, so without something of higher value to aim for, what problem is there to solve? What space left for being rational? I might value rationality ‘for its own sake,’ but ultimately, this means ‘being rational makes me happy’, and thus, as is necessarily so, happiness is the true goal.
But rationality is a goal, nonetheless, and a necessary one, if we are to be coherent. To desire anything is to desire to increase one’s chances of achieving it. Science (rationality) is the set of procedures that maximize one’s expectation of identifying true statements about reality. Such statements include those that are trivially scientific (e.g. ‘the universe is between 13.7 and 13.9 billion years old’), and those that concern other matters of fact that are often not considered in science’s domain, such as the best way to achieve X. (Thus questions that science can legitimately address include: How can I build an aeroplane that won’t fall out of the sky? What is the best way to conduct science? How can I earn more money? What does it mean to be a good person?) Thus, since desiring a thing entails desiring an efficient way to achieve it, any desire entails holding rationality as a goal.
And so, my passion for scientific method has led me to recognize that many things traditionally considered outside the scope of science are in fact not: legal matters, political decisions, and even ethics. I realized that science and morality are identical: all questions of scientific methodology are matters of how to behave correctly, all questions of how to behave are most efficiently answered by being rational, thus being rational is the correct way to behave.
Philosophy? Yup, that too – if I (coherently) love wisdom, then necessarily, I desire an efficient procedure for achieving it. But not only does philosophy entail scientific method, since philosophy is an educated attempt to understand the structure of reality, there is no reason (other than tradition) to distinguish it from science – these two are also identical.
My goals as a rationalist can be divided into 3 parts: (1) to become more adept at actually implementing rational inference, particularly decision making, (2) to see more scientists more fully aware of the full scope and capabilities of scientific method, and (3) to see society’s governance more fully guided by rationality and common sense. Too many scientists see science as having no ethical dimension, and too many voters and politicians see science as having no particular role in deciding political policy: at best it can serve up some informative facts and figures, but the ultimate decision is a matter of human affairs, not science (echoing a religious view, that people are somehow fundamentally special, dating back to a time before anybody had even figured out that cleaning the excrement from your hands before eating is a good idea). I’m tired of democratically elected politicians making the same old crummy excuse of having a popular mandate—“How can I deny the will of the people?”—when they have never even bothered to look into whether or not their actions are in the best interests of the people. In a rational society, of course, there would be no question of evidence-based politics defying the will of the people: the people would vote to be governed rationally, every time.
Goal (1) I pursue almost wholly privately. Perhaps the Less Wrong community can help me change that. After my PhD, while still in The Netherlands, I tried to establish and market a short course in statistics for PhD students, which was my first effort to work on goal (2). This seemed like the perfect approach: firstly, as I mentioned, my own education (and that of many other physicists, in particular) on the topic of what science actually is, was severely lacking. Secondly, in NL, the custom is for PhD students to be sent for short courses as part of their education, but the selection of courses I was faced with was abysmal, and the course I was ultimately forced to attend was a joke – two days of listening to the vacuousness of a third-rate motivational speaker.
I really thought the Dutch universities would jump at the chance to offer their young scientists something useful, but they couldn’t see any value in it. So I took the best bits of my short course and made them into a blog, which also serves, to a lesser degree, to address goal (3).
We are social critters who want the best for ourselves and our kind, so I expect that most of us in the rationalist community share a goal somewhat akin to my goal (3). Furthermore, I expect that more than any other single achievement, goal (3) would dramatically facilitate goals (1) and (2), and their kin. Thus I predict that a reasoned analysis will show goal (3), or something very similar, to be the highest possible goal within the pursuit of rationalism. The day that politicians consistently dare not neglect to seek out and implement the best scientific advice, for fear of getting kicked out by the electorate, will be the dawn of a new era of enlightenment.
Welcome!
Where are you in Texas?
Thanks for the welcome.
I’m in Houston.
Hello, my name is Jonas and I’m currently working as a software engineer.
I happened to learn about biases in decision analysis class at university and was hooked instantly. It was only later that I learned about LW. I’m very interested in not just learning about rationality on a theoretical level but actually living it out to the fullest.
I’m very thankful to LW for improving my life so far, but I guess the best is yet to come.
Hello, I stumbled upon LW a few months ago. Some of the stuff here I find extremely interesting; I really like the quality of the articles and discussions. I studied math and engineering, currently work as a software developer, and am also very much interested in economics and game theory.
Cheers!
Hi. I have a pseudonymous account that I use most of the time, but I want to post something to Discussion in my real name. Can I please get 2 karma so I can post that? Thanks! I’ll delete this post afterwards.
Hello all! My name is Will. I’m 21 and currently live in upstate New York. A bit about myself:
I remember that at an early age, while thinking to myself, I caught myself in a lie. I already knew that it was wrong to lie to other people, though I did it sometimes, but I could not think of any good reason to lie to myself. It was some time before I really started to apply this idea.
My parents divorced when I was ten, and my mother discovered that she had a brain tumor around the same time. In the face of this uncertainty and unpleasantness, my mother turned to religion. She reached the other side of these events without great harm, and in her gratitude began bringing her children (my younger brother and me) to church with her. I had not considered religion much before, and had been somewhat skeptical, but since I was aware of no one personally who shared my skepticism, I suppressed my instincts and became involved with youth groups and church camps. However, my doubts persisted over time as I attempted to become a faithful and devout Christian. I knew that I hadn’t accepted the claims they made completely, and that caused a great deal of stress. If I had doubts, surely an all-knowing God would see them and punish me.
A turning point came when I learned that a couple of my close friends didn’t believe in God; that was the straw that broke the camel’s back. I lost the faith I never really had. Considering the existence of God to be somewhat likely had caused me a great deal of stress, and I felt a great sense of relief in accepting what I deep down believed to be true: an extremely cathartic dissipation of cognitive dissonance. By the time I got to college, I had watched many atheist debates on YouTube and read several atheist books, and became even more confident in my position.
Once I arrived at my university, I joined a club that was mostly populated by atheists (the Secular Student Alliance) and found that I was happiest surrounded by like-minded people. I would eventually be elected the group’s president. Also while I was at university, I took, and was a TA for, a philosophy class on Plato and Aristotle. Having read some books by Steven Pinker, I realized the science behind why Plato had come up with his theory of Forms. It bothered me considerably that this was not being taught to students along with the historical material, and it also bothered me to discover that there were people who still identified as Platonists. Not all, but too many of the people in the philosophy department struck me as being more concerned with arguing and showing off their intelligence than with actually understanding the world. They matched almost exactly the Sophists who had plagued Socrates.
In 2011, I became involved in the Occupy Movement. I thought that the world was sufficiently bad that it needed changing, and that even if it was a long shot, trying was better than doing nothing. I learned a lot about what happens when you forbid anyone to take a leadership position, and also about how to organize people who don’t want anyone to tell them what to do (between that and running a group of atheists, the meaning behind the phrase “herding cats” has become quite clear to me). I’m interested to see if some of these ideas might be useful to a rationalist community.
In December of 2012, an idea struck me that I thought would change the world. It was about organizing people using fractals, and I thought I would immediately start a revolution. I then came to the more general realization that “fractals” were the source of everything in the universe, explaining how complexity arose from simplicity. My friends didn’t seem as impressed as I thought they should be. I became increasingly distressed and brought myself to a hospital. They recommended I be admitted to a mental hospital, and with an amount of good sense surprising for one in my condition, I agreed, thinking I either was insane or would be proven sane and therefore right about having solved the mysteries of the universe. I was diagnosed with bipolar type 1. My erratic behavior had been the result of my only truly manic episode, with all the associated grandiose delusions.
After my release from the hospital, I entered a deep depression (which often follows mania in those with bipolar). I lost my sense of self. I didn’t know to what extent the new psychoactive medications I was taking were suppressing my intelligence and creativity, I was unsure of my future, and it seemed to me that I had to drastically lower my expectations from what they had been in order to prevent a return to mania. I thought that my depression was the price of stability and sanity. I entered a regimen of treatment that was quite difficult and did not produce results very quickly, including what I thought of as a last-ditch effort, electroconvulsive therapy.
In March of this year, I was put on a new medication. It improved my mood considerably, and around the time I started taking it, I decided to give Less Wrong a closer look. I had seen posts from it elsewhere on the internet, but I had never really given it thorough consideration. Once I began to go through it systematically, starting with Benito’s guide, I found that much of it corresponded with ideas that had appealed to me elsewhere, and I found the new ideas to be stimulating as well. Finding Less Wrong correlated with a turning point in my life. I have found useful advice and inspiration on this website, and I hope to be able to contribute in the future, but right now I’m primarily focusing on finishing the sequences before I get into much posting. I decided to join the study hall to help with akrasia and enjoyed my time there, so I wanted to introduce myself to the community more thoroughly.
I’m Griffin. I am 17 and sending in my first application to college today! (relevance? maybe) I suppose one reason I am signing up for an account now is that all these wacky essays have made me want to write more about myself.
Things that led me to Less Wrong: well, I guess when I first found my way here, it was to the wiki article on some religious topic, and I thought, “hmm, a hate website. How curious.” I had that thing where I knew hate websites existed but didn’t really connect them to reality. In any case, I closed the page and went on doing whatever I was doing.
Later I stumbled upon Less Wrong again, this time under the guise of Overcoming Bias. This was probably through an old or obsolete section of the website, and it linked to the sequences, which I read about 1/4 of. I became all about the power of science, and (I suspect) was insufferable to be around for a few weeks.
I cooled off for some time (still trying to apply the techniques I had learned), but then I discovered HPMOR, which started it all over again and became this huge ordeal.
In any case, I lurked for around two years and accumulated a few pet projects I intend to work on later in life (or now), mostly influenced by this blog. I play viola at an arts high school, and the incuriosity of musicians in general just baffles me.
Also, gender and sexuality may have influenced my getting inexorably drawn here, because I was drawn to the asexual community, being asexual (or at least near it in asexy-space). Asexuals are super into reductionism (link to an asexual blog) and just wacky models in general, and the idea that theoretical models shouldn’t be wrong was kind of hammered into me, hence the dislike of music theory and also the inexorable drawing. Of course, it’s possible I started with a strong sense that beliefs should be consistent and that models should not be wrong.
Should theoretical models not be wrong? Now that I actually put that (mostly subconscious) belief down in writing, I find myself suspecting it.
Proof that I need to be a better rationalist: it took me 90 minutes to figure out that I needed to verify my email address in order to comment. I was distracted because there was a thing on this page that takes me down to where this comment box allegedly was, but I hadn’t done the email step, so I didn’t see any box (or the little button that says reply on every comment). I was convinced (to my credit, it was more suspicion than conviction) that the website had some sort of bug. At one point I gave up and tried to post an article in the discussion section asking for help (couldn’t post a comment asking for help, duh), and I was saved from the embarrassment (until now) by the little message that says I need an email to post articles. One crazy entrance exam, huh?
I’m Thomas, 23 years old, from Germany. I study physics, but starting this semester I have shifted my focus to Machine Learning, mostly due to the influence of Less Wrong.
Here are a few things about my philosophical and scientific journey if anyone’s interested.
I grew up with mildly religious parents, never being really religious myself. At about 12 I came into contact with the concept of atheism and immediately realized that’s what I was. Before that, I hadn’t really thought about it, but it was clear to me then. For a long time I felt a bit ashamed of not believing in God; I never mentioned it to anyone, probably fearing the reaction I would get. I would have called myself agnostic then. Only recently did I realize the extent to which religion can be dangerous and how deeply irrational it is. I consider it completely useless these days, and I actually get confused whenever I find out that someone I thought of as rather intelligent turns out to believe in God.
Apart from that, I had a minor existential crisis when I realized the implications of a deterministic world for free will. I was in the equivalent of high school when I read an article about the topic. Afterwards I felt strange for days, always thinking that nothing really mattered. But then I was actually able to (crudely) ‘dissolve the question’ and found peace with the issue.
After that, I spent very little time on philosophical topics for a long time. I thought I knew roughly what there was to know. I was wrong there.
Regarding my scientific education, I always did very well in math and consequently began studying physics (and, as seems to be the case with some other physicists here, I’m very skeptical towards the Many-Worlds Interpretation). I always wanted to do something to advance our society, and I thought physics was the right way. My original plan was to work on fusion reactors to solve the energy crisis this world is facing. Now, though, after spending some time on this site, I no longer think the energy problem is our most urgent one.
Discovering Less Wrong was not that easy for me. Until I was 19, I spent my time in the German-speaking part of the Internet. Then someday I stumbled upon reddit (was that a pun?). Incidentally, that really improved my English. And somewhere on reddit I clicked on a link which led me to HPMoR (thank you, whoever posted that). Then I found Less Wrong, and now I’m here.
I’m about halfway through the sequences and I hope I can contribute once I finish. Learning about biases has already helped me a lot. For the future I think I’m most interested in learning about reasoning and decision theory.
Welcome! I am also basically a newcomer here. I’d suggest not waiting to read all the sequences before you contribute. The worst that happens is that someone corrects you, right? I’ve had a few interesting discussions and I’m still not quite done reading the main line.
Was your dissolving of free will different from the one presented in the Quantum Physics sequence?
Hi!
I can’t seem to find a discussion of free will in the Quantum Physics sequence. I only know this: http://lesswrong.com/lw/of/dissolving_the_question/ (which demonstrates the method I was talking about).
See this wiki page for links to discussion of Free Will in the sequences: http://wiki.lesswrong.com/wiki/Free_will
Hello, I am a human who goes by Auroch, VAuroch, or some variation thereon on most internet sites. I have what I consider a healthy degree of respect for how easy it is to attach an online name to a meatspace human, so I prefer to avoid providing information about myself. (Some might consider this paranoia. I would hope that such people are in shorter supply here.) I will say that I am a recent college graduate in the Pacific Northwest, who majored in Math/Theoretical Computer Science.
I have found LessWrong repeatedly, and have for most of its history occasionally had binges of reading. However, other concerns predominated until I found myself without other immediate responsibilities (viz. unemployed).
I approach things from three main viewpoints; as a programmer, as a pure mathematician, and as a game designer (tabletop more than digital; it’s a purer exercise in crafting fun). I haven’t finished all the main sequences as of yet, but have found my beliefs changing less than expected; I was already tending toward the same conclusions as the consensus here, or had reached them independently, for most things I’ve seen discussed. I’m somewhat nervous about this, as I have not had a real chance to Change My Mind and don’t know how I will react when it is appropriate to do so.
I can’t remember seeing any consequentialist argument for using one’s own real name on the Internet; all the ones I’ve seen are about virtue ethics, amounting to “if you use a pseudonym you’re a [low-status person]”.
There are situations where it’s useful to use a real name; it’s come up for me in directly programming-related projects, where having my name attached to commits is useful for resume purposes, and having the same name attached to the commits as the comments gets one taken seriously. And if I ever am getting a game published, naturally I’ll want to promote it using my real name on BoardGameGeek, etc.
But even then, separating the various personae into different identity chains is useful.
There you go. Perfectly consequentialist.
I follow much the same practice as VAuroch, of course, but this argument sprang into my head fully formed on reading your comment.
My name is Alexander Baruta. People call me confident, knowledgeable, and confident. The truth behind those statements is that I’m inherently none of those things. I hate stepping outside my comfort zone; as some of my friends would say, “I hate it with a fiery burning passion to rival the sun.” As a consequence, I read a ton of books. I have also had only one good ELA teacher: my summer school teacher for ELA 30-1 (that’s grade 12 English for those of you outside Canada). I’m in summer school not because I failed the course but because I want to get ahead. I’m going into grade 12 with 3 core 30-level subjects completed (although this is offset by the 2 additional science courses I want to take).
I spent most of my life in a Christian environment, and during that time I was one of those who thought humans could do no evil. Cue me being bullied. While nothing major, it was enough to set me thinking that what I’d been taught was wrong. I spent many years (grades 6-9) trying to cope with my lack of faith, and as a result decided that the Bible was wrong. I don’t know when I was introduced to LW; I think I found it simultaneously through TVTropes (warning: may ruin your life), HPMOR, and Google. Since then I’ve been shocked at the attitude towards education in Alberta: for instance, Bayes’ Theorem was on the Gr 11 curriculum six years ago and has since been removed, along with the entirety of probability theory, to be replaced with what I like to call 1000 ways to manipulate boring graphs. I attend a self-directed school.
One reason for the length of my explanation is that I want to expand my comfort zone; it is one of my major goals because I am an introvert. If any of you set any store by the Myers-Briggs test, I am an INTJ. As a result of my introversion it is rather difficult for me to make close friends (although, while it is atrocious practice, I suspect that I am an ambivert: someone possessing both introverted and extroverted personality traits. When I am in a comfortable setting I am the life of the party; other times I simply find the quietest corner and read). I am attempting to overcome my more extreme traits by taking up show choir (not like Glee at all, I swear) and by being more open with myself and others. Due to pure chance I am going to become the holder of Canadian-American dual citizenship, and as a consequence I will be able to attend a university in the States. Due to even more fortunate circumstances, at least a percentage of my tuition will be paid for by one of my relatives.
Some of my more socially unusual traits are practically open secrets to my acquaintances. (Right now the mantra is: I need to do this.) I am a member of the Furry Fandom and a Transhumanist (rather ironic, really), as well as a wannabe philosopher (Nietzsche, Wittgenstein, as well as some of the earlier ones such as Aristotle, not to be confused with Aristophanes). I thoroughly enjoy formal logic as well as psychology and neurology. I fear being judged, but I also welcome that judgement, because I can use criticism to help me see beyond the tiny Classical perspective ingrained by my upbringing.
In terms of literature I enjoy mainly Sci-Fi/Fantasy and science (although I do enjoy a little romance on the side, iff it is well written, and thanks to my wonderful ELA teacher I am learning to enjoy tragedy as well as comedy). My favorite authors include Brandon Sanderson, Neil Gaiman, Isaac Asimov, Terry Pratchett, Iain M. Banks, Shakespeare (yes, Shakespeare), G.K. Chesterton, and Patrick Rothfuss, as well as some specialized authors of Furry Fiction (Will. A. Sanborn, Simon Barber, Phil Geusz [pronounced like Seuss was originally pronounced]). In some capacity I also study what rationalists consider to be the dark arts, as I participate (and do rather well) in a debate club (8th overall in the beginner category). In my defense, I need the practice of arguing with someone else in a reasonably capable capacity, because I tend to have trouble expressing myself on a day-to-day basis. (Although the scoring system is completely ridiculous: it marks people between 66 and 86 percent and does not seem capable of realizing that getting a 66 is the exact same thing as a 0...) Again, sorry for the wall of text… it’s a bad habit of mine to ramble. I just needed to finally tell someone these things.
Actually, consider this my Lurker Status = Revoked post. I did one intro when I’d just joined and have been commenting on various things, including mixing up Aristotle and Aristophanes to amusing results.
Welcome!
You should consider breaking this post up into paragraphs. There’s just too much unstructured text for me to want to read more than a few lines.
Right, Paragraphs. Knew I was forgetting something!
Pratchett and Gaiman co-authored a book called ‘Good Omens’. I highly recommend it.
I’ve already read it, thanks. To anyone else reading this: ‘Good Omens’ is thoroughly funny and an all-around good read.
Interestingly, my first reaction to this post was that a great deal of it reminds me of myself, especially near that age. I wonder if this is the result of ingrained bias? If I’m not mistaken, when you give people a horoscope or other personality description, about 90% of them will agree that it appears to refer to them, compared to the 1-in-12 (8.33%) we’d expect it to actually apply to. Then there’s the selection bias inherent in people writing on LW (wannabe philosophers and formal logic enthusiasts posting here? A shocker!). And yet...
I’m interested to know, did you have any particular goal in mind posting this, or just making yourself generally known? If you need help or advice on any subject, be specific about it and I will be happy to assist (as will many others I’m sure).
Actually I had multiple reasons for posting this. Firstly it’s to make myself known to the community. As an ulterior motive I have trouble with being open with others and connecting (although I suspect that this is a common problem) and I want to get over my fear of such.
Hi all, I’m a social entrepreneur, professor, and aspiring rationalist. My project is Intentional Insights. This is a new nonprofit I co-founded with my wife and other fellow aspiring rationalists in the Columbus, OH Less Wrong meetup. The nonprofit emerged from our passion to promote rationality among the broad masses. We use social influence techniques, create stories, and speak to emotions. We orient toward creating engaging videos, blogs, social media, and other content that an aspiring rationalist like yourself can share with friends and family members who would not be open to rationality proper due to the Straw Vulcan misconception. I would appreciate any advice and help from fellow aspiring rationalists. The project is described more fully below, but for those for whom that’s tl;dr, there is a request for advice and allies at the bottom.
Since I started participating in the Less Wrong meetup in Columbus, OH and reading Less Wrong, what seems like ages ago, I can hardly remember my past thinking patterns. Because of how much awesomeness it brought to my life, I have become one of the lead organizers of the meetup. Moreover, I find it really beneficial to bring rationality into my research and teaching as a tenure-track professor at Ohio State, where I am a member of the Behavioral Decision-Making Initiative. Thus, my scholarship brings rationality into historical contexts, for example in my academic articles on agency, emotions, and social influence. In my classes I have students engage with the Checklist of Rationality Habits and other readings that help advance rational thinking.
Like many aspiring rationalists, I think rationality can bring such benefits to the lives of many others, and can also help improve our society as a whole by leveling up rational thinking, secularizing society, and thus raising the sanity waterline. For that, our experience in the Columbus Less Wrong group has shown that we need to get people interested in rationality by showing them its benefits and how it can solve their problems, while delivering complex ideas in an engaging and friendly fashion targeted at a broad public, using active learning strategies, and connecting rationality to what they already know. This is what I do in my teaching, and it is the current best practice in educational psychology. It has worked great with my students when I began to teach them rationality concepts. Yet I do not know of any current rationality trainings that do this. Currently, such education in rationality is available mainly through the excellent, intense 4-day workshops run by the Center for Applied Rationality, usually held in the San Francisco area. There are also some online classes on decision-making. However, I really wanted to see something oriented at the broad public, which can gain a great deal from a much lower level of education in rationality made accessible and relevant to their everyday lives and concerns, and delivered in a fashion perceived as interesting, fun, and friendly by mass audiences, as we aim to do with our events.
Intentional Insights came from this desire. This nonprofit explicitly orients toward getting the broad masses interested in and learning about rationality by providing fun and engaging content delivered in a friendly manner. What we want to do is use various social influence methods and promote rationality as a self-improvement/leadership development offering for people who are not currently interested in rational thinking because of the Straw Vulcan image, but who are interested in self-improvement, professional development, and organizational development. As people become more advanced, we will orient them toward more advanced rationality, at Less Wrong and elsewhere. Now, there are those who believe rationality should be taught only to those who are willing to put in the hard work and effort to overcome the high barrier to entry of learning all the jargon. However, we are reformers, not revolutionaries, and believe that some progress is better than no progress. And the more aspiring rationalists engage in various projects aimed to raise the sanity waterline, using different channels and strategies, the better. We can all help and learn from each other, adopting an experimental attitude and gathering data about what methods work best, constantly updating our beliefs and improving our abilities to help more people gain greater agency.
The channels of delivery locally are classes and workshops. Here is what one college student participant wrote after a session: “I have gained a new perspective after attending the workshop. In order to be more analytical, I have to take into account that attentional bias is everywhere. I can now further analyze and make conclusions based on evidence.” This and similar statements seem to indicate some positive impact, and we plan to gather evidence to examine whether workshop participants adopt more rational ways of thinking and how the classes influence people’s actual performance over time.
We have a website that takes this content globally, as well as social media such as Facebook and Twitter. The website currently has:
Blog posts, such as on agency; polyamory and cached thinking; and life meaning and purpose. We aim to make them easy-to-read and engaging to get people interested in rational thinking. These will be targeted at a high school reading level, the type of fun posts aspiring rationalists can share with their friends or family members whom they may want to get into rationality, or at least explain what rationality is all about.
Videos with similar content to blog posts, such as on evaluating reality clearly, and on meaning and purpose
A resources page, with links to prominent rationality venues, such as Less Wrong, CFAR, HPMOR, etc.
It will eventually have:
Rationality-themed merchandise, including stickers, buttons, pens, mugs, t-shirts, etc.
Online classes teaching rationality concepts
A wide variety of other products and offerings, such as e-books and apps
Now, why my wife and I, and the Columbus Less Wrong group? To this project, I bring my knowledge of educational psychology, research expertise, and teaching experience; my wife her expertise as a nonprofit professional with an MBA in nonprofit management; and other Board members include a cognitive neuroscientist, a licensed therapist, and other awesome members of the Columbus, OH, Less Wrong group.
Now, I could really use the help of wise aspiring rationalists on this project:
1) If you were trying to get the Less Wrong community engaged in the project, what would you do?
2) If you were trying to promote this project broadly, what would you do? What dark arts might you use, and how?
3) If you were trying to get specific groups and communities interested in promoting rational thinking in our society engaged in the project, what would you do?
4) If you were trying to fundraise for this project, what would you do?
5) If you were trying to persuade people to sign up for workshops or check out a website devoted to rational thinking, what would you do? How would you tie it to people’s self-interest and everyday problems that rationality might solve? What dark arts might you use, and how?
6) If you were trying to organize a nonprofit devoted to doing all the stuff above, what would you do to help manage its planning and organization? What about managing relationships and group dynamics?
Besides the advice, I invite you to ally with us and collaborate on this project in whatever way is optimal for you. Money is very helpful right now as we are fundraising to pay for costs associated with starting up the nonprofit, around $3600 through the rest of 2014, and you can donate directly through our website. Your time, intellectual capacity, and any specific talents would also be great, on things such as giving advice and helping out on specific tasks/projects, developing content in the form of blogs, videos, etc., promoting the project to those you know, and other ways to help out.
Leave your thoughts in comments below, or you can get in touch with me at gleb@intentionalinsights.org. I hope you would like to ally with us to raise the sanity waterline!
Hiya I’m Oliver, I’m 21 and I’m here because I want to be stronger.
I’ve got a degree in Engineering, £600 and a slowly breaking laptop which I would send off to get fixed if I could do without the internet for the time that would take. I am, in essence, a shattered mass of broken stereotypes. I am a breakdancing, engineering, rock climbing, food roasting, anime watching, arrow-shooting intelligent fool from near London, UK. At the minute I’m living near Bath and I’m trying to force myself to look for engineering work: hopefully biotech, probably something else.
Until recently I had just enough knowledge to screw myself over repeatedly and forcibly. I’ve ended up with a large pile of akrasia and unhelpful habits as a result of an insufficient understanding of how things work. I was guilty of picking a position based on whatever and then googling studies to defend my hastily erected viewpoint. The internet being what it is, I could always find a study defending my viewpoint, and I thought this made me scientific. Like I said, just enough rope to hang myself, intellectually speaking.
About 3 months ago someone linked me to HPMOR and I devoured it. Then I turned to Thinking, Fast and Slow and devoured that. Then I came here, and I’m half an Eliezer sequence and half of the other people’s sequences away from having devoured this place. Call me a ravening monster from before time, because I am hungry for sanity. I have a feeling that all this knowledge has allowed me to hit intellectual critical mass, and my interventions into my own psyche have started to be a lot more effective now that I better understand the flawed lens I’m working with. That said, I have a huge pile of beliefs that I created unscientifically over the course of my life which will need to be slowly rectified over time.
I feel that I’ve made progress on epistemic rationality, but now I need to do better at applied rationality. I need to change my habits towards work and effort, because for a while I’ve been reinforcing unhelpful habits and ways of thinking. It’s time for me to win.
Hello and welcome to LessWrong!
Bravo! It all starts with finding the crack in the lens. Now comes the hard, fun, terrible, numinous part of living better than before. Since you’ve already bootstrapped yourself through the sequences, you might want to consider branching out into real space. I say this because it sounds like you’re looking for the practical, real, hands-on experience. A LessWrong meetup, such as the one held near London, might be the very thing you need. A group of like-minded people, engaging in rationality exercises, swapping notes, and basically helping each other get a little bit stronger and feel a little bit better.
You might also be interested in the Rationality Diary. It’s a good place to start tallying yourself: making a record of the real behaviors you’ve done, the real plans you’ve made, the real successes and failures you’ve had. It’s a useful tool for keeping yourself honest and seeing how far you’ve come.
And, of course, if you’d just like to participate in the discussion… well, there’s certainly a place for that too.
Glad to have you join the conversation! Hope to see you around soon.
Thanks for the welcome and the useful links, you’re right about the tendency towards meetups which is why I’m going to the one in Bath tomorrow. The rationality diary seems like it should be a useful and interesting addition to my attempts at self improvement, so cheers.
Edit: That open thread is fascinating. I’ve never seen a community with such a high standard of discussion in real time. Even bestof archived depthhub threads don’t touch it. I am going to have to think very carefully about any comments I might make to avoid accidentally eternal septembering this place. I can see I will also have to limit how much time I spend in such threads. I can fully imagine spending an inordinate amount of time on them learning fascinating things.
Hey Oliver,
The Bristol EA society meets pretty regularly (weekly/fortnightly), which might also be of interest if you are in the Bristol/Bath area.
Welcome and I’ll see you at the Bath meetup!
See you on Tuesday
Hi, my name is Joe. I live in North Jersey. I was born into a very religious Orthodox Jewish family. I only recently realized how badly I was doublethinking.
I started with HPMOR (as, it seems, do most people) and found my way into the Sequences. I read them all on OB, and was amazed at how eloquently someone else could voice what seemed to be my thoughts. They laid bare the things I had been struggling with.
Then I found LW and was mostly just lurking for a while. I only made an account when I saw this post and realized how badly I wanted to upvote some of the comments :).
I think this site and the Sequences on it have changed my life, and I’m glad to finally be part of it.
Hey everyone,
This is a new account for an old user. I’ve got a couple of substantial posts waiting in the wings and wanted to move to an account with a different username from the one I first signed up with years ago (giving up a mere 62 karma).
I’m planning a lengthy review of self-deception used for instrumental ends, and a look into motivators vs. reasons, by which I mean something like: social approval is a motivator for donating, but helping people is the reason.
Those, and I need to post about a Less Wrong Australia Mega-Meetup which has been planned.
So pretty please, could I get the couple of karma points needed to post again?
And we’re in action!
http://lesswrong.com/lw/k23/meetup_lw_australia_megameetup/
Hi! My name is Daniel. I’m an undergraduate student, currently studying physics and mathematics at the Australian National University. I discovered Less Wrong about two years ago, and I’ve been regularly lurking ever since. I’m starting a meetup in Canberra—see http://lesswrong.com/meetups/wc. I hope that I see some of you there!
Hi LWers!
I’m a 37 year old male. I work from home as an engineer, primarily focusing on FPGA digital logic work and related C++, with a smattering of other things. I’m a father to two young children, and I live with my little family on a small farm in central Delaware. I’ve always been a cerebral sort of guy.
I can’t remember exactly how I came to LW—I may have heard it mentioned in a YouTube video—but finding it felt somehow like coming home. The core sequences have become some of my favorite reading material. LW was my first exposure to many of the disciplines discussed here: cognitive psychology, evolutionary psychology, Bayesian reasoning, and so on.
I feel like I’ve discovered a treasure. I’d like to thank everyone who has participated in building this content—it has been extremely enriching to me. Thank you.
My kids are still very young, but I am already starting to think about how I can help them learn to think rationally. I see it as part of my job to help them become better than I am, and I can’t help but think I would have benefited quite a lot if I had been exposed to the concepts that are discussed here much earlier in life. I’d like to figure out how to help, say, a five-year-old start on the path. This is something I expect to be putting a lot of thought and research into, and if I come up with something post-worthy I would be delighted to share it here.
I’m also a novice meditator. I have found Chade-Meng Tan’s treatment in Search Inside Yourself to be a good fit for me. It seems to me that building mindfulness is likely to be very useful in improving my agency, among other things. Thus far I have been only marginally successful, with the largest gains coming in parenting, and particularly in the area of self-control.
I have been lurking for quite a while, but I hope to participate more in the conversation.
Hi! My name is Tobias. I’m from Munich in Germany, male, 24 years old, and currently doing a Master’s degree in physics at LMU Munich. I’m doing okay to good in my studies, but I still struggle with procrastination (though things have gotten better) and low motivation. In particular, while I like physics in the abstract, I don’t particularly enjoy the reality of studying physics at a university. Most importantly, I’m totally unambitious, and not satisfied with that. I’ll be finished with my studies in ~1.5 years, so I’m currently trying to plan what to do afterwards.
I first came upon Less Wrong when a friend of mine recommended HPMoR to me in ~11/2012. A while ago, I decided I’d use my current semester holidays to benefit from the resources and community on Less Wrong, and to find something genuinely useful to do in life. Any suggestions?
For instance, x-risk already sounds interesting, though I’m nowhere near good enough at math to even consider MIRI research a valid option. Is there room for mortals anywhere in the broader field of x-risk reduction?
On a related note, do you have any ideas for topics of interest to e.g. transhumanists which could be suited for a Master’s or PhD thesis in physics, and for which finding a supervisor does not sound outright impossible?
Basically, if you were in my position and had ~2 months to decide on a plan/goal/cause/short-term trajectory to maximize your impact in life (whatever that means), what would you do?
Considering my interest in the natural sciences, I guess I’d call myself an (aspiring?) epistemic rationalist. So far, I haven’t had much success with instrumental rationality, though, considering my persistent problems with issues like procrastination and perfectionism. On the other hand, this year I finally managed to overcome 8+ years of sleeping issues by attacking the problem in what I would call a rational, comprehensive manner. (I will read the sequence The Science of Winning at Life next.)
I intend to read all the sequences eventually; so far, I’ve only read How to Actually Change Your Mind, The Map and the Territory, and Mysterious Answers to Mysterious Questions.
Well, various incarnations of Many Worlds/Mathematical Universe/String theory landscapes/Boltzmann brains are popular both here and in many physics circles. While I don’t hold much stock in any of those, there are surely some tenured profs in physics departments around the world who would take a sucker grad student willing to spend 4-6 years on something like that.
I’m NIH, I’m 17, and I discovered this site through HPMOR in late 2010.
At that time I read “The Problem With Too Many Rational Memes”, closed the tab, and forgot about it for two years. In spring 2012, I discovered that there was a new arc of HPMOR, read it, and decided that some of EY’s other works might be worth reading. Over the summer I began to lurk heavily, culminating in my reading the “Blog posts 2006-2010” EPUB from start to finish in November, which led to me registering.
I’d like to make a prediction, with high (80%) confidence, that I am the only LW user residing in Nigeria. Living here has been a very frustrating experience on the whole, but after three years I can say that I’ve adapted fairly well. While I lived in Canada, I was placed into the Gifted stream in elementary school, which provided me with the majority of my friend group in meatspace, and aside from the direct consequences of socializing with said group almost exclusively, I can’t really say how it’s affected me.
For my tertiary education I’d like to study Computer Science, and I’m currently leaning towards the University of Waterloo. Due to the way the results schedule is structured here in Nigeria, that will require me to write my matriculation exam this November, as opposed to the usual time of June 2014 for someone in my class. Almost everyone I’ve spoken to advises me that to enter a Canadian university I would be best off repeating 12th grade for the Canadian diploma, so I am not particularly stressed about having to write two sets of final exams this year.
My interests include reading (Favorite authors are Iain M. Banks, Terry Pratchett and William Gibson), computer hardware, tabletop role-playing-games, programming (Python and some elementary webdev) and video games.
I’m Alex, an American male doing undergraduate studies in Physics and Computer Science. Two years ago, I stumbled upon HPMoR, and made my way to this site shortly after. I’ve been lurking since, and in that time, I’ve seen top-level posts that have convinced me to abandon my half-formed theism, try out the pomodoro method (results still pending), and police myself for biases. I’m interested in lifehacking (though I acknowledge that I have a great deal of inertia in that area), and will be trying Soylent at some point in the next few months.
Hey there LW!
At least 6 months ago, I stumbled upon a PDF of the sequences (or at least Map and Territory) while randomly browsing a website hosting various PDF ebooks. I read “The Simple Truth” and “What do we mean by Rationality?”, but somehow lost the link to the file at some stage. I recalled the name of the website it mentioned (obviously LessWrong) from somewhere and started trying to find it. Before too long, I came to Methods of Rationality (which a friend of mine had previously linked on Facebook) and began reading, but I forgot about that too after a while. About 4 months ago I rediscovered MoR, read about 3/4 of what was available, and then started reading LessWrong itself.
It took me about 3 days to get my head around the introduction to Bayes’ Theorem (I have since implemented a basic Bayesian categorisation algorithm), and in the process I realised just how flawed my reasoning potentially was, and found out just how rational one friend of mine in particular is (very). By that stage I was hooked, and I have been reading the sequences quite frequently since, finally making an account here today. There’s still plenty more reading to be done though!
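For the curious, here is a minimal sketch of the kind of categoriser I mean (an illustrative toy in Python, not the code I actually wrote; the categories and data are made up):

```python
import math
from collections import Counter

# Tiny naive Bayes text categoriser. By Bayes' Theorem,
# P(category | words) is proportional to P(category) * prod_w P(w | category).
# We work in log-probabilities to avoid underflow, and use add-one
# (Laplace) smoothing so unseen words don't zero out a category.

def train(labelled_docs):
    """labelled_docs: list of (category, list_of_words) pairs."""
    cat_counts = Counter()   # how many documents per category
    word_counts = {}         # category -> Counter of word frequencies
    vocab = set()
    for cat, words in labelled_docs:
        cat_counts[cat] += 1
        word_counts.setdefault(cat, Counter()).update(words)
        vocab.update(words)
    return cat_counts, word_counts, vocab

def classify(words, cat_counts, word_counts, vocab):
    total_docs = sum(cat_counts.values())
    best_cat, best_score = None, float("-inf")
    for cat in cat_counts:
        score = math.log(cat_counts[cat] / total_docs)  # log prior
        cat_total = sum(word_counts[cat].values())
        for w in words:
            # smoothed log likelihood of this word given the category
            score += math.log((word_counts[cat][w] + 1) / (cat_total + len(vocab)))
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat

docs = [("spam", "win money now".split()),
        ("spam", "free money offer".split()),
        ("ham", "meeting agenda attached".split())]
model = train(docs)
print(classify("free money now".split(), *model))  # -> spam
```

The posterior-updating rule I struggled with for those 3 days is all that’s going on here: prior times likelihood, with the evidence term dropped since it’s common to every category being compared.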
A little background (and slight egotism alert, which could probably be applied to everything here): I’m in my final year of school now, vice-captain of the school’s robotics program (and the programmer of Australia’s champion school-age competitive robot), debating coach to various grades, and I’ve completed a university-level “Introduction to Software Engineering” course in Python (using Tkinter for GUI stuff), as I finished the Maths B course a year early. I’m planning to go to university for a Bachelor of Science/Bachelor of Engineering majoring in Mathematics/Software Engineering next year. I’ve got major side interests in philosophy and psychology which I currently don’t plan to explore in any formal sort of way, but LessWrong provides an outlet that addresses these two.
I look forward to future comments and whatever criticism they attract; learning from mistakes tends to stick rather well.
Hello everyone!
I’m going to try and write this incrementally, i.e. with frequent edits, so any replies I get might not all be referencing the same post.
To start off with, my username, Gondolinian, refers to the fictional city of Gondolin. It holds no special significance to me, I just needed a username, and I thought it sounded cool.
I’ve been a lurker here and on other rational blogs (primarily SSC) for over a year, but I’ve just now gotten brave enough to setup an account and start posting. I’ve also read all of HPMOR so far, Worm, all of Saga of Soul so far, Luminosity and Radiance, Summons and Blood (Elcenia), and all of Pact so far. I tend to like reading rational fiction and blog posts more than some of the more technical material here.
Because of an implicit agreement with my parents, and my own paranoia, I’m going to try to avoid giving out any personally identifiable information online for the foreseeable future. However, I think it’s safe to share the following. I’m 16 years old, cis-male, and I live in the eastern USA. I’ve been home-schooled my whole life (though for practical purposes, I identify as an autodidact); the home-schooling wasn’t religiously motivated, in case anyone got that impression, though I was raised Christian and no longer identify as such. When I was younger I probably had mild to moderate OCD, though I was never diagnosed and I’ve pretty much grown out of it. The free online IQ tests I’ve taken vary in their results a fair bit, but I’m probably somewhere in the 110-120 range, and most free online MBTI tests I take say I’m a moderate INTP, though I occasionally get ENTP.
I have been practicing aikido for ~3 years and currently hold the rank of gokyū, or 5th kyū, out of 6. (Kyū ranks are descending, so 5th kyū means I’ve been promoted twice and have 4 promotions left until I can test for the first black belt (shodan, or 1st dan).)
I’ve been a lacto-ovo vegetarian for 4+ years. When I decided to go vegetarian, I had a bunch of reasons why (probably prominent among them, implicitly, was simply a desire to rebel and feel superior to people), but now it’s pretty much just a habit I see little reason to change.
Hi LW
The name is Daniel. I’m 22, coming out of college and running into the problem that there aren’t that many people out there who get as excited as I do about epistemology, evolutionary theory, and interdisciplinary science. I ended up coming here because I’m beginning to suspect that the longer I spend not talking about my ideas with other people (see: reality checks), the more likely they are to spiral off into flights of fancy. And nobody wants that. Plus, I feel like in day-to-day life there’s so little opportunity to really engage in productive, mutually satisfying arguments—you know, the sort where you actually feel like you’ve learned something valuable about the world and about the person you’re debating? I miss that, and I hope I can find some of it here.
There are a couple of people I can credit with helping me discover this site. Several of my friends in college introduced me to Eliezer’s articles, which I thought were little more than clever. Then, more recently, I discovered Scott Alexander’s blog, which quickly became my favorite-thing-in-the-world and got me thinking that maybe I should give this community a second look. And since winter is falling rapidly on the great city where I live, let’s face it, I’m not going to want to do much else.
I think if you want to get a sense for where I’m coming from: when I was around 13 or 14, I discovered that myspace (remember when that was a thing?) had debate groups, and since creationism and evolution were hot-button topics, I decided that I would pitch in to the debates (on the side of atheism and evolution, of course). I can’t say I convinced very many creationists to see the error of their ways, but I did learn a lot of cool and interesting things about rhetoric, evolutionary theory, and even theology. I suppose I am coming here with some nostalgia in my eyes.
In college, I studied psychology and the philosophy of science. My interest was in interdisciplinary science, and in the people who can walk between scientific disciplines, letting their knowledge of one enrich their understanding of the other. I was interested, more broadly, in how knowledge can be communicated across cultural and disciplinary boundaries, for it seems that the boundaries are where the most interesting things happen, while also being the place that tolerates the least incorrect thinking.
Right now, I’m working at a market research firm that specializes in the pharmaceutical industry. It’s interesting work—we help pharmaceutical companies understand how doctors evaluate and use new products (spoilers: doctors are just as irrational as the rest of us). Hopefully my knowledge of medicine and the social sciences will compensate for my horrific ignorance when it comes to computing and mathematics (please don’t judge too hard!). But in any case, I look forward to meeting everybody on the forums!
Hi everyone!
I’m John Ku. I’ve been lurking on Less Wrong since its beginning. I’ve also been following MIRI since around 2006 and attended the first CFAR mini-camp.
I became very interested in traditional rationality when I used analytic philosophy to think my way out of a very religious upbringing in what many would consider to be a cult. After I became an atheist, I set about rebuilding my worldview and focusing especially on metaethics to figure out what remains of ethics without God.
This process landed me in the University of Michigan’s Philosophy PhD program, during which time I read Kurzweil’s The Singularity is Near. This struck me as very important, and I quickly followed a chain of references and searches to discover what was to become MIRI and the Less Wrong community. Partly due to Less Wrong’s influence, I dropped out of my PhD program to become a programmer and entrepreneur, and I now live in Berkeley and work as CTO of an organic growth startup.
I have, however, continued my philosophical research in my spare time, focusing largely on metaethics, psychosemantics and metaphilosophy. I believe I have worked out a decent initial overview of how to formalize a friendly utility function. The major pieces include:
adapting David Chalmers’ theory of when a physical system instantiates a computation,
formalizing a version of Daniel Dennett’s intentional stance to determine when and which decision algorithm is implemented by a computation, and
modelling how we decide how to value by positing (possibly rather thin and homuncular) higher order decision algorithms, which according to my metaethics is what ethical facts get reduced to.
Since I think much of philosophy boils down to conceptual analysis, and I’ve also largely worked out how to assign an intensional semantics to a decision algorithm, I think my research also has the resources to meta-philosophically validate that the various philosophical propositions involved are correct. I hope to fill in many remaining details in my research and find a way to communicate them better in the not too distant future.
Compared to others, I think of myself as having been focused more on object-level concerns than more meta-level instrumental rationality improvements. But I would like to thank everyone for their help which I’m sure I’ve absorbed over time through lesswrong and the community. And if any attempts to help have backfired, I would assume it was due to my own mistakes.
I would also like to ask for any anonymous feedback, which you can submit here. Of course, I would greatly appreciate any non-anonymous feedback as well; an email to ku@johnsku.com would be the preferred method.
You are welcome! And Don’t Be Afraid of Asking Personally Important Questions of Less Wrong.
I understand that you might not want to give details but I’m unclear what information I might provide. Maybe you could drop a few hints. You might also look at the Baseline of my opinion on LW topics.
You’re right that I was being intentionally vague. For what it’s worth, I was trying to drop some hints targeted at some who might be particularly helpful. If you didn’t notice them, I wouldn’t worry about it. This is especially true if we haven’t met in person and you don’t know much about me or my situation.
My name is Evan Gaensbauer. I’m starting an account on the new effective altruism forum with the same name, and I intend to post both here and there more frequently in the future. Additionally, I may write material for one site that is tangentially of interest to the readers of the other. So, I want everyone to be able to match what I write on different sites with me as the author. Some notable facts about me:
I live in Vancouver, Canada, where I help organize some of the effective altruism and rationality meetups.
I’m an alumnus of the July 2013 CFAR workshop.
I’m a member of 80,000 Hours.
Hello and welcome to LessWrong!
Glad to have a new altruist join the conversation, and it sounds like you have already gotten quite involved. Great. I’m definitely looking forward to seeing what sort of experiences and views you bring to the table.
Since you’re in Vancouver, do you know of the LW meetup they have there? If you haven’t attended, it may be worth looking into and visiting. It’s a great way to network and just mingle with other rationalists.
I had not heard of 80,000 Hours before your post. Seems interesting. Thanks for introducing me to a new group I did not know about!
Anyway, glad to have you join us. Look forward to seeing you in the conversation!
Thanks for the welcome.
I had a previous Less Wrong account under the username eggman. I got one with my full name to sync with my username on the new effective altruism forum, as I intend to post more frequently on both that site and Less Wrong, and I figured it’d make sense for everyone to know my common identity so they can connect ideas written on different sites with my public identity.
I sometimes organize the LW meetup in Vancouver, and it’s going fine.
I’m Imma, recently graduated from university (a mix of physics and chemistry), and I self-identify as an effective altruist. I’m not very familiar with LW material but want to gradually improve my rationality. I’m considering attending a CFAR workshop, but have to weigh this against donating the money to effective charities.
I’m involved in a combined EA/LW meetup group in Utrecht (Netherlands). We have biweekly events which I’m planning to announce on LW as well.
Hello and welcome to LessWrong!
Sounds like you’re already getting your feet wet! That’s great. Always glad to have new members who actively participate in the real world (helps with the “effective” part of “effective altruism.”) If you ever do get a chance to attend a CFAR workshop, you’ll have plenty of people here to talk with about the lessons and ideas you come across. The CFAR and LW communities are strongly connected (as you can guess), so there’s plenty of cross pollination of ideas.
So you’re already part of a meetup? Awesome! Feel free to list it on the meetup page. It never hurts to spread the word about your local meetups, and some LWers may not even realize they live right down the road from an active group.
If you’re interested in checking out some LW materials, the Sequences make for some good reading. Since improving yourself interests you, consider reading Alicorn’s Living Luminously or lukeprog’s The Science of Winning at Life. Both cover some useful ideas for self improvement and instrumental rationality.
Given your background and the steps you’re already taking to get involved, I’m sure you’ll have some very interesting things to share with the community before too long. Glad you’ve decided to join! Hope you enjoy your time and come away better than you were before.
All your [] and () are switched.
Thanks! Fixed.
Thank you for your reply. I hope I will have time to go through the sequences; there is now some ethics stuff on my reading list.
Our meetups will be announced on LW as well and I invite everyone to come! (If you live far away it might not be worth the travel cost, but you’re welcome anyway)
Hi. I’m Tom. Long time rationality proponent.
I have met interesting people through less wrong and brighterminds, and just discovered this website.
What got me here was seeing this reference to LessWrong in popular media:
http://www.businessinsider.com/what-is-rokos-basilisk-2014-8?utm_source=hearst&utm_medium=referral&utm_content=allverticals
Hello and welcome to LessWrong!
You will certainly meet some interesting folk here. The best way to start would be to head on over to the Discussion board. That’s where the day-to-day conversations of LW take place. It’s also the best place to get a feel for the community’s attitudes and standards. I’d definitely suggest lurking a bit. Then, you can observe the conversations of other LWers, and, when you’re bursting to join in, add to the comments.
Another great place to start is on the latest Open Thread. Open Threads are places for casual conversations and questions, though that doesn’t stop the conversations from developing into weighty discussions or intense debates. If you have anything you want to ask or say, it’s a good place to start. It’ll also give you practice if you want to one day create full-fledged articles of your own.
If you’re interested in diving into some of the literature here at LessWrong, you’ll find the Sequences brought up again and again. These are a (LARGE) collection of posts, mostly by user Eliezer Yudkowsky, covering a variety of topics but centered around the art of rationality and related issues. Very helpful reading for the interested, but it can be a little overwhelming when you’re just joining. If you want a little taste of LessWrong literature, check out the linked articles on the About page or some friendly guides to the Sequences (XiXiDu’s and Benito’s are often recommended). These can ease you into the literature and let you find the parts that most interest you, without overwhelming you with details, discussion, and references.
If Roko’s Basilisk interested you, I’d suggest checking out the Yudkowsky paper, Coherent Extrapolated Volition, that the Business Insider article linked to, as well as the LW sequences on ethics and artificial intelligence.
Hope you’ll be adding your own voice to the conversation soon!
Yay for publicity :) Welcome to LessWrong!
What’s brighterminds?
Welcome. I guess any publicity is good publicity. Hope you had a laugh.
Hi guys, my name is Luka, and I’m 20. I study physics at the University of Vienna.
I have followed LW since February, and I have probably gone through all the core sequences and a good chunk of the rest. I did not gain too much, because I was always eager to argue with good arguments and resistant to bad ones, even from my elders (which got me into trouble quite a few times). My biggest win is that I stayed strong at the moment I started to fall: I was drowning in irrationality (for lack of rational people around me) and had started using passwords without noticing; I had started learning at the cost of thinking, instead of doing both. LW gave me structural knowledge of what I was already doing, and thus helped me stay that way. AND HOPE! How did I forget that? It was big...
Which leads me to a more interesting topic: what will I give to you?
I had a strong education in mathematics, physics and informatics during high school, since I attended a specialized high school. There I developed strong logical thinking, but even better, I always tried to implement it in my everyday life. Since I feel the material from the sequences on a gut level, I will try to teach you how to do that, too (as soon as I understand what exactly I do differently xD). Don’t get me wrong, I’m not trying to show off; I just hope to give you insight from another perspective, and, with help from more experienced members (because you know much more about cognitive science, teaching and writing than I do), to write good materials to help people move on from understanding rationality to actively using it. If any of the experienced members live in Belgrade or Vienna, I will be glad to meet you to discuss how to write all the things I would like to.
I strongly believe I did manage to actualize myself (in Maslow’s sense; I just needed the term to express myself, and I don’t know psychology well enough to state whether any of his theories is true or false), and I will argue it has a lot to do with becoming a rationalist in time.
I will try to diversify this community: it is mostly devoted to the development of friendly AI, and I think there are other ways to help our world (more) effectively. We should not put all our eggs in one basket.
To tell you more about me, I believe I possess wide knowledge (and I invest time to make it wider): I am really good at mathematics, physics and programming (that is where I went most in-depth), and I have some basic knowledge of finance, economics and psychology. I play guitar in my free time, attend choir, play video games… I am not a native English speaker, which you probably already noticed, so please send me a private message if you notice some big errors; I will appreciate it. I speak Serbian (my native language) and German as well (since I study in Vienna).
I look forward to making this world a wonderful place!
Hello and welcome to LessWrong!
Glad to hear you’ve already started digging in to some of the literature and found it to your liking. Yes, it’s easy, when you have no community that encourages improvement, to fall into passwords, caches, and generally “not thinking.” We can even forget to hope that we can make things better, as you’ve discovered. I’m sure you’ll find plenty of people who can relate here and who are glad to help each other not fall back into those habits.
Since you seem to have such a focus on self-improvement and applying rationality to personal habits, don’t hesitate to write about your experiences using rationality or your own personal improvements. Personal anecdotes are, of course, not verifiable experiments, but they are still experiences. The Group Rationality Diary may interest you in that regard. You can share your own experiences, see what others have done, discuss personal habits and experiments.
If you’d like a bit more discussion, you can go to the Open Thread or make a new Discussion post, though you might want to save that latter option for a more developed, researched topic. Starting in the Open Thread will not only help give you a chance to experience LW conversation and habits, but it can also help develop an idea you have before you present it as a full post.
Applied rationality, or, as some refer to it around here, “the martial art of rationality,” is one of our big projects of interest. It’s right there in the title of the blog itself, after all. We want to improve our abilities to improve the world. So we sharpen each other, develop new methods, make new discoveries, and perform new experiments on using our toolkit in the larger world. We certainly welcome a new voice and new perspective to the conversation. Given your wide background, your voice will be a wonderful addition.
I hope to read some of your ideas very soon!
Thank you for the warm welcome and thorough information!
Hi, my name’s Charlie. I’m a 33YO Aries who enjoys long walks on the beach…
Oop, wrong script.
I’ve been lurking for years, but just started posting (nothing real, just $.02 and quotes really), so I figured I should write an intro so I won’t feel bad actually contributing.
Perhaps the most important thing to know about me is that I am the happiest person I have ever met, as far as I know. I have more money than I intend to spend, a very good head on my shoulders, and no known health problems. I just quit my job a few months ago. I know of no way my life could be materially better that accords with the laws of nature (superpowers would be a gamechanger…). I mention this not to brag, but because my general contentment colors my views.
I am a superman theist. When I was a kid I reconciled religion (RC) by just taking the infinity out of it. The people who wrote the Bible used forty to mean “a lot,” so why do we think they understood infinity? Anyway, I have no idea what would disprove a finite provident entity. If it were disproven, I hope I’d accept it, but it’s really not a big factor in most decisions anyway, so I haven’t gone out of my way.
I’ve read and enjoyed the Sequences, and wish I could pay money to have them on dead trees mailed to my house. I love the audio though. I was introduced by a friend, but I’ll leave it up to him if he wants to claim me. I try to learn things like one scores drywall: a little A, then a little B, then a little C, and then don’t you know, the seemingly unrelated B and C make A easier (much more complicated, I know, but who likes textwall analogies?). And I find just about everything interesting.
I live in a rural part of NY, and sometimes have a hard time putting myself in cityfolk’s shoes. I know the idea of having to get in a car to buy groceries is as alien to some as choosing to live in a house where you can see your neighbors is to me. I have no problem visiting cities though. (ADK meetup anyone? Bunch of rationalists on a hike to a high peak sounds great)
Anything else you want to know, I’m not shy. Thanks for the great content; hope to be a net positive to it.
I suggest re-reading them. For a while I’ve been meaning to do a PSA post on the subject. I read the sequences once, in thematic order, then recently went back and re-read them in chronological order. I have to say I got a lot more out of them this time, now that I know where EY was heading with the entire project (and reading them in the order posted is much better imho than organized by topic).
Especially because they’re enjoyable to read. I’ve been listening to the audio as it came out but the different ordering sounds great.
I’m Daniele De Rossi. I stumbled on Lukeprog’s old site and thought the problems he was talking about (rationality, friendly AI, the psychology of adjustment) were all really interesting to me, so I followed him here. I’m now primarily interested in productivity. I need to manage my time better and get more done.
Nice to meet you, person with above-average intelligence. My name is Optimal, because I am always seeking optimal outcomes. I’m 16 years old and currently enrolled in an online high school that provides me with an exceptional degree of educational freedom. I’ve been lurking around here for a few weeks, but I just now decided to join in because I could use some serious life advice.
Based on the contents of the article above, and on other discussions I have observed, I think it would be better to explain and discuss my situation in a Discussion post. Actually, I’ve already written the article; I’m commenting here to get 2 karma points so I can submit it. My article looks something like this one. Please don’t hate me; I promise that my submission will be found mildly interesting by at least one person.
Hello. I am a librarian of the public sphere. With my education recently completed, I hope to expand into other spheres of information work while I am still young. I am 24 and have spent a fourth of my life serving the public in libraries. I have built collections, websites, programs, and physical rooms for my libraries. I know I do not have to explain the joys of a library here. My goal since first learning to learn has always been to improve the world by offering it the very thing that improved me. If we are all finding ways to save the world, then I found that my way cuts through the middle of the information itself. I am currently working with a set of newspapers as a reporter and editor in order to gain experience in journalism and publication. I’ve found informational outreach, reporting, and media use to be severely lacking in librarians. I took on a job that will teach me those skills, as well as the many errors ignorance of them can bring.
Having read through the core sequences of LessWrong, I know this is a place that can sharpen me. Indeed, it already has. In the weeks I’ve spent absorbing the central sequences, I have eliminated many useless habits from my routines and introduced new ones. I first found LW years ago while researching AI basics. I stumbled upon the AI Box experiment discussions. I’ll admit, I thought the community overly serious and overconfident and ignored it (also ignoring the “they might have a point” trepidation in the back of my mind). I recently reanalyzed the LW articles I read, my own private studies having convinced me of points I took as silly years ago. I’ve decided I don’t want to just watch. I want to be part of a dynamic group willing to improve and improve on improvement. I don’t want to only read about the thoughts of people with the same values as me. I want to be able to talk with them.
I’ll introduce myself with my first real thought. My first “I’m thinking this” thought. I made it as a toddler when I stood up in my bed and looked out a window. We lived in a trailer in my grandparents’ backyard. The trailer sat by a small pond. Across the pond, my grandfather had built a shed surrounded by rows of chickens on tethers. A light hung on the shed corner, illuminating the chickens’ tiny houses and the water beside them. One night, I wanted to see the shed in the dark. I saw it in the daytime daily, but I wanted to see it at night because I knew it would be different. The chickens would be inside, the shed would be closed, and the pond still. I thought that the scene would be spooky, like the pictures I saw in books. So, I stood up and looked out and loved it. I still remember how it all looked that night, though a bit like a watercolor painting now. I’d taken what I knew about the world from picture books, experience, and personal feelings, predicted something (“It’ll look spooky and I’ll enjoy the sight”), and acted on it.
Shockingly, I didn’t become a rationalist then. My real thoughts at the time were most likely “Cool light” and “I like to see things.” I didn’t take on any advanced techniques for thinking until my mid-teens, when I sat through a creationism seminar. After watching an eight-hour video series (by the preeminent Kent Hovind, for the curious), I considered that if creationism were true, this must mean something in the real world of facts. Rocks, plants, cultures, and space ought to resemble a world so young and lately wetted by floods. When I examined the world, I saw that they did not. I understood that my beliefs, the ideas I accepted as true, must reflect the real facts of the world. I began to teach myself and started to break apart the reliance on authority for knowledge that we are taught in school. Though I didn’t vocalize it, I understood that my best lessons (perhaps my only lessons) were ones I had taught myself. That real learning couldn’t be handed down from on high but needed to be ferreted up from the world. I also found that research is much more fun than homework, though I still preferred games to learning for a few years more.
By college, I had taken on the title of a “reasonable” person (I did not use the word “rationalist” because, as a student of C.S. Lewis’ theological writings, I had come to associate it with an atheism that is emotional, circular, and self-defeating). I rejected dogma, holy text literalism, and the “us vs. them” mentality of politics because they did not reflect reality. I thought myself very utilitarian, very deep.
I had a conversation with a friend (at the time, a near stranger) about the evolution of the mind. The topic of the evolution of religion came up, and, with him an atheist and me a theist, there was some tension. I defended the merits of theism and he answered with explanations of how religion and similar ideas can come from flaws in human psychology rather than from a reflection of real facts. The turn in the conversation came when I said something along the lines of, “Even if religion, or any idea, is only a creation of the human mind, it can still be worth believing. I’ll still believe it.” When I said the words and heard them out loud, I knew I could not accept them, because I had spoiled the trick of self-deceit. I could not believe any set of facts if I also admitted that they did not reflect reality.
My face must have changed to reflect my thoughts. Instead of growing frustrated with me, my friend smiled and said, “I love a person who thinks.” I should not have to add that he and I are remarkably close now.
I want peers who continue to challenge me. I can only grow so close to someone who accepts me as I am and does not offer me more than myself. I want to be part of a group where I am not the best. I was the best in high school. I did not learn nearly enough. I was one of the best in college. I learned more, but still not enough. The people I have seen here challenge me. That challenge is stunning sometimes (the habits of certain LWers give me an envy that later turns to joy to know such minds exist). I have found that each stunned shock brought on by an LWer is often followed by a personal realization of my own. That is the greatest reward: to feel myself improve. I have accepted the responsibility of bringing the catalog that charts the world to other humans. To do so, I need a group of cartographers not content with only the coastlines we know.
So, once again, hello!
Hello,
My name is Tim. I’m a neuroscience researcher and swing dance teacher living in NYC.
I originally found out about LW via one or two friends who occasionally shared LW posts with me. I didn’t get into the site too much, but I did eventually come across HPMOR, and thought it was awesome. At one point, one of the author notes mentioned that CFAR would be putting on workshops in my area. I checked those out and they seemed very high-value, so I attended. That was in November. Since then I’ve been getting involved with the real-life LW community in New York, and now more recently, the online community. I’m still reading through a lot of the material here, but I hope to get involved in some discussions.
Some of my academic interests are neuro- and cognitive science (consciousness, morality, decision making, belief-formation), evolution, physics, and linguistics. I’m also on a bit of a history kick lately—it was always my least favorite subject in school, but now that I’m a little older I find that history sharpens my intuitions about how the interaction of systems we call “life” tends to play itself out. Less academically, I’m a fan of dancing, music (working on jazz guitar atm), ultimate frisbee, and other stuff :)
Cheers!
Hi, my name is Robert McIntyre. I’m a graduate researcher at MIT studying AI. I am also a volunteer for the Brain Preservation Foundation (http://www.brainpreservation.org/). You can vote for us to win charity money here (http://on.fb.me/15XFdTG).
Hi! I’m Ciara (pronounced like Keara; Irish spelling is very much irrational!). I’ve actually been a member of Less Wrong for a little while; I discovered it through HPMOR. I’ve always liked academics, challenging books, and Harry Potter, so I joined Less Wrong. I am a little ashamed to admit that I was quite intimidated by the sheer intellect and extraordinary thoughts that came from so many members all around the world. So, I took a little break after starting with the basics of rationality and am now a very different, though still amateur rationalist, person. I live in MA, not far from MIT, and I’m hoping to attend a meetup sometime. I’m sixteen years old and going into my junior year of high school. Both of my parents are Irish, and I usually spend about a quarter of my year there with family, so I tend to use some bizarre expressions. I’m also a dancer; I participate principally in musical theatre and jazz. I’m an aspiring author, currently some 30,000-odd words into my latest attempt at a novel. I’m trying to incorporate some rationality into the characters; although it’s not rationalist fiction like HPMOR, I’m at least trying to ensure that no one is holding the idiot ball. I’m a little nervous about rejoining the rationalist community, but I hope that by, say, Newtonmas, my rationality will have improved enough for me to start posting. Look forward to working with you!
[Meta comment: In the welcome post, the links to the open threads point to two different tags, with different dates. This is confusing. One of them hasn’t been updated since 10/2011. If you fix this, you might have to do the same in the template for creating new welcome threads. Also, I think the same issue exists elsewhere on the site, e.g. in the Less Wrong FAQ.]
Thanks for the heads up. Post fixed. Template fixed. I’ve replaced the single, different links with two links, each pair covering Main and Discussion open threads. If anyone knows a way to use one link to get both Main and Discussion open threads, please comment here and PM me.
Hello. I’m a typical geeky 20-something white male who’s interested in science and technology. I have a bachelor’s degree in economics and business. Not a native English speaker.
From the time I was 12, I’ve spent most of my time surfing around the internet reading about interesting things, generally wasting my time, and being alone. A few years ago I was really depressed and had a plan for suicide. Once in a while I’ve done something actually useful. That’s my life in a nutshell.
I have always thought of myself as somewhat rational in the traditional sense when I’m not emotionally charged, but so do most people, I’d say. Who would be intentionally irrational?
When I first heard about LessWrong on 4chan’s /sci/ a few years ago, I heard only negative things about it. I got the impression that this is basically some kind of daydreaming cult for people who are interested in the singularity and transhumanism. Like people just write about things that sound kinda important and deep in a pop-science manner, but don’t want to do anything more quantifiable or exact, or anything that’s more difficult, like real science. I got the impression that it’s not something you’re supposed to take very seriously.
Okay, a few years go by, and I start to be more interested in futurology and stuff. I stumble upon Luke Muehlhauser’s reddit AMA, and the things he talked about sounded kinda cool, things I’d never really thought about before, so I read a few of his papers (Intelligence Explosion and Machine Ethics, Intelligence Explosion: Evidence and Import). After this I forget about the whole thing again for a year, until I read his book “Facing the Intelligence Explosion”, in which he goes to lengths to talk about LessWrong, so I decide to take a look.
So I read the sequence “How To Actually Change Your Mind”, and there were some useful things to consider if I want to be neutral in the face of evidence and change my mind about things. This Bayesian approach to rationality, or whatever it’s called, sounds pretty reasonable, and I think I want to learn more of it. In the meantime I read Eliezer Yudkowsky’s HPMOR and “Cognitive Biases Potentially Affecting Judgment of Global Risks” and a few random LessWrong articles here and there. Sometimes Eliezer Yudkowsky sounds so full of himself, like he knows everything about everything, that it’s pretty annoying. His narcissism and self-proclaimed genius remind me of Stephen Wolfram. But I like his optimism, he has really useful ideas to share about rationality, and he’s good at writing.
I also started to wonder: if these people are trying to be so rational, then why do so many of them hold seemingly irrational beliefs about some things without much quantifiable evidence? I mean, I have a gut feeling that the singularity will probably happen at some point if there isn’t some societal collapse, but it’s far from certain and may not happen the way FAI advocates anticipate. The event is so far in the future, and there are so many factors related to it, that I’m not sure how well you can predict how it happens and say meaningful things about it. Someone here made a good remark about it:
I also agree with many of the points raised in this post. I think the work MIRI is doing might be useful, and I’m not against it, but I wouldn’t personally allocate my resources towards it at this point, at least not money. Karnofsky criticized MIRI for not taking into account many variables he had considered, but on top of that there must be even MORE variables MIRI hasn’t taken into account.
There are many beliefs here that seem to be based on non-quantifiable hypotheses. You would think that if you took a bunch of rationalists who applied the methods of rationality correctly and were willing to change their minds about their beliefs, the likelihood that they would share the same fringe beliefs based on non-quantifiable evidence would be pretty small. Note: I don’t know everything about the community here; this is just from the little time I’ve spent here.
I hope MIRI, transhumanism, cryonics, polyamory etc. are not inherently connected to LessWrong and its approach to rationality?
I still have a cautiously positive view of this community. Even though I dislike some of these fringe opinions, I’m still interested in decision theory and in this kind of approach to rationality, which I don’t think is fringe at all, and I’m willing to learn more about it. I’m kind of a slow thinker, and sometimes it feels, when I’m around people, that I’m less intelligent than others and it takes longer for me to process things than the people around me. By making good decisions I could minimize the impact of situations where my well-being depends wholly on quick thinking.
But I don’t expect very much practical success; most of all, I think of this as a form of entertainment (“epiphany porn”, as you like to call it), and when I have more important things to do, I will probably set this thing aside.
[META]
Because this thread hit 500 comments, I’ve posted a new one here. (In Main, but not yet promoted.)
Hello.
I’ve been a part of LW before, but left when I felt that I no longer had more to give or receive from the community. This wasn’t a falling out, just maintaining a minimal lifestyle. However, recent developments in my life, including the possibility of working in the Bay Area, have given me reason to come back. I hope to be as beneficial to the community as it has been to me.
See you around.
Hi, LessWrong community!
My pseudonym is Ilzolende Kiefer. I’m a high school student, autistic, and (as is typical for users of this site) an atheist. I’ve been lurking on this site for a while, and before that I was reading other books about cognitive bias and whatnot.
I think I got into rationality for 2 reasons: having a scientist parent, and dealing with school psychologists of questionable quality. (The autism wasn’t a big enough deal to require an autism-specific therapist, but it wasn’t equivalent to neurotypicality.) The first reason is straightforward. The second reason takes explaining. Imagine the adults around you treating your personal thought process as flawed. Even if you’re a kindergartener, if you’re fairly smart, you’ll want to self-correct.
I actually did this in kindergarten: my model of appropriate behavior before starting was based on the Junie B. Jones fiction series. This led me to hit a boy on the first day of class, because girls were supposed to hate boys. I got a behavior chart (don’t do x for y weeks, and then a reward will occur) for this, and did not have difficulty adhering to it, because I didn’t want to hit random boys, I just wanted to behave in accordance with expectations whenever it was easy to do so. (That’s not to say I was very rational then: I thought that a good way to communicate that a timer was going off was to make beeping sounds: “That’s how the timer communicates stuff, so I should repeat the communication!”, and that my friends would be really interested in a discussion of binary numbers involving sticks and pinecones representing ones and zeroes.)
Another reason that this led to rationality was that school psychologists have a client, and that client is not the student. I do not consider “becoming indistinguishable from my peers” to be a terminal goal I have or want, nor do I consider it a good instrumental goal. School psychologists are very skilled at influencing behaviors through Dark Arts-type methods. I began to notice that this was occurring (behaviors that did not correspond to my model of how I should be behaving, such as picking up valley-girl speech patterns), and tried to immunize myself against it, mostly by getting into a lot of exhausting arguments.
Side note: teaching empathy via guessing emotions of drawn faces is terrible. I have plenty of distance bias in my moral reasoning already. Looking at bad art won’t increase this. There is more to identifying an emotion than the low level of detail a sketch artist can manage (voice, posture, more details in the face, movement, and context.)
Quitting religion was easy for me, mostly because I was only religious because that was what people who attended weekly services were. The biggest shock along the way was finding out that the biology writer I had read 2 books by was actually more famous as an atheist. (Me in museum gift shop: “Hey, it’s that Dawkins guy. Wait, this book called The God Delusion has his name on it? Isn’t he a biology writer?”) If I had to pinpoint anything, it’s that I had no social cost for quitting, as well as the chapter on memes in The Selfish Gene.
Finally, I’m a Mock Trial pseudo-trial attorney. This has dramatically improved my argument skills, even if it is motivated reasoning. (At one point, I found myself talking about prior probabilities in the middle of an objection argument, and it worked. Thanks, LW!)
Does anyone know where the most recent version of the welcome thread is? I searched and searched for keywords like “welcome” and “introduction” / “introduce”. Do you not use welcome threads anymore?
This is the most recent welcome thread. See the bit about reaching 500 comments in the small print at the bottom of this post.
The wiki has a page on Special Threads which tries to point to the most recent of various threads. According to that, this is the most recent introduction thread.
My name is Joshua. I am 29 years old. After lurking for a while, I have decided to begin participating.
I have little training in mathematics or computer science. Growing up, mathematics always came easily to me, but it was never interesting (probably because it was easy, in part). Accordingly, I completed a typical high school education in mathematics by my freshman year and promptly stopped. In college, the only course I took was college algebra, which I completed for the sake of university requirements. I now regret ending my mathematical education and have begun going through the Khan Academy materials. As best I can currently estimate, I want to reach a level roughly equivalent to what an undergraduate math major would be required to know at the beginning of his or her upper-division work. At that point, I will be in a better position to know what else to study. Computers, by contrast, were of considerable interest to me in my youth, and I learned some rudimentary programming in junior high. That interest was eventually eclipsed by other interests, and I do not currently have any plans to reanimate it.
Most of my intellectual efforts are devoted to philosophy, and it is from that angle that I discovered Less Wrong. I have a fair amount of formal training in the field. (The sort of discussions that occur on Less Wrong, of course, are quite different from most of the work that is done in philosophy.)
As far as the normal Less Wrong materials are concerned, I have read a few of the sequences and recently read a bit of HPMoR. Most significantly, I have been working through one of the ebook versions of Eliezer’s posts arranged in chronological order; I have slowly read somewhat more than half of them (for reference, I recently completed the sequences on quantum physics and meta-ethics).
I look forward to participation.
I also have a request: would someone be willing to set up a chat appointment (IRC or whatever) to work through a few comprehension questions related to the quantum physics sequence? I am confident that my questions are quite basic. If you are interested, please send me a private message.
Hi Joshua, welcome!
Regarding your quantum questions, you can post them to the open thread.
Hello everyone, I graduated in computer science this summer and I’m very much interested in philosophy and ethics (besides rationality, of course). I stumbled upon LW through friends and found much of the content here to be eye-opening and fascinating. I’m still working my way through the core sequences, so don’t expect any meaningful contributions soon – but, as rationalists, you should always be ready to be surprised! :-)
Hi, ismeta here.
I came to Less Wrong via a circuitous route, betwixt and between unordered Sequence posts, HPMoR, Overcoming Bias articles, and XiXiDu’s critiques, all consumed during marathon procrastination sessions. My opinion of the community has lurched ungainly from one extreme to another, and now resides somewhere in the vicinity of ‘cautious admiration’. I have reserved judgement on most of the transhumanist / singularitarian issues that are discussed on LW as yet (citing ignorance), though I should probably throw in an early disclaimer to the effect that I currently hold extremely conservative views regarding the potential efficacy of cryonics.
I am an agnostic, with atheistic leanings, and my political opinions generally correlate with the centre-left.
In Real Life, I’m a biochemistry undergraduate from Sydney.
I look forward to finishing off the Sequences over the next couple of months, and I hope to become an active participant and contributor to LW.
My name is Mathieu. One of my friends recommended that I read the main sequences a couple of months ago. I’ve read one third of them so far and I really like them. Now I want to get more involved in the LessWrong community than just reading the main sequences. I’ve just posted my first article. It’s about a cryonics presentation I will give on Monday.
I wish there were a class about rationality at the beginning of high school (I’d remove almost any other course to add one about rationality). Otherwise we keep learning things without knowing how our brains work (especially the biases they produce), and this can cause problems when learning things and making decisions.
I study engineering physics at Laval University, Quebec, Canada. I’ve worked at my university’s robotics laboratory and I really liked it. I like mathematics, logic and programming, but I hadn’t really considered working in artificial intelligence before I started reading about it here (and then other places), because I hadn’t been exposed to the field. Next year I will (try to) do a master’s in artificial intelligence. Moreover, I would eventually like to do (at least) an internship at MIRI.
By the way, the last question in the FAQ links to the 5th welcome thread and not the 6th; I’m not sure where I should mention this, but maybe one of you does.
My name is Izaak. I stumbled across HPMOR one weekend while staying in a hotel room. I didn’t sleep that night. I’ve read through most of Less Wrong, and some of the stuff on the other sites like Overcoming Bias. I’m a high school senior who will probably major in Comp Sci in college.
I’ve found the stuff on this website truly useful, but I have a question: I am currently in the IB Diploma Programme, and they have this class called TOK (Theory of Knowledge; it’s truly awful, with very little actual epistemology), and I have to do a final presentation on a topic of my choice. I was wondering if someone here who knows about the Diploma Programme could brainstorm some ideas about where to focus a 20-minute presentation on (some subset of) rationality?
A friend of mine did IB in high school, but I don’t have much personal experience. I’d be happy to talk about presentation ideas.
My standard advice for short-form presentations is to try to paint a picture that something more is possible; I’ve found Bishop and Trout’s Epistemology and the Psychology of Human Judgment to be a good example of this. The book basically outlines the case that psychology can inform philosophy, and that coming up with superior algorithms for actual practice is better than debating labels. The inferential distance to actually explain rationality is much longer than 20 minutes, but it seems like 20 minutes is enough time to explain that rationality exists.
Hi. I’m Gunnar. I’m from Germany. I’ve been lurking on LessWrong since July 25th.
How did I become a rationalist? I always was one. Or at least I was continuously becoming one.
I had a scientific interest as a child. My curiosity was satisfied by my parents with answers, experiments, construction toys and books, math courses and later boarding school (this was in Germany when there was a hype around talent advancement).
I must have been eleven or twelve when I had one of the strongest aha moments I remember: the realization of the concept of continuous functions. A relationship like 2x+1 can not only be applied to single numbers and tabulated, but realizes a continuous curve. All the possibilities hit me like a hammer: movements, prices, all kinds of dependencies could be described arbitrarily finely.
That moment had a lasting effect on me. I always find myself wondering what lies between the known points. Between the extremes. In a way this has become part of my philosophy of seeing and valuing the in-between. Some higher level Goldilocks solution.
I read my father’s shelves of science and science fiction as a youth. I tend to absorb and accept ‘facts’ in books too easily. Luckily I have a skeptic friend to get me back down to earth.
During boarding school there was a significant transition from abstract mathematics to computer science, which gave me significant insights into modeling, simulation, and complex structures. And the feeling of power over the machine. Of course I later fell into the trap of conceiving my own super programming language and operating system.
I remember being asked during boarding school (9th grade) about my best talent. I answered: My tolerance. I could understand almost any behavior. I couldn’t necessarily empathize with it or feel it. But I knew it existed, was right for the person/persons acting and was in general part of life.
I didn’t know then that I hadn’t really experienced much of life—only read about it. And that real tolerance means not only to understand and connive but to accept and endure.
During university, after absorbing computer science until soaked, I finally broadened out to cognitive science (mind opener: “Explorations in the Microstructure of Cognition”) and later the social sciences (mind opener: “Judgment under Uncertainty: Heuristics and Biases”).
I learned about real life from and with my wife. Strong emotions, child education, hard work and more.
What did I think about all that I learned?
As a child I must have figured that everything can be understood—given enough time and effort.
I thought early and much about God, morality and spirituality. I wondered how God could fulfill his promises. How he could be the way he is – if he is. There was always doubt. There could be a God. And his promise could be real. But it also could be that this is all a fairy tale run amok in human brains searching for explanations where there are none. Which is right? It is difficult to put probabilities to stories. I see that I have slowly moved from 50/50 agnosticism to tolerant atheism.
I can hit small targets—especially if they are far away. And my objective is healing and improvement. I admit that my utility function is centered on me, my family, my friends and ‘social network’, and fades out slowly toward society at large. I am not very altruistic to the public in general. I understand effective altruism. And I value it. But I also cannot go against my affection for my family and especially my four sons. That I got from my parents.
That’s me. What do I expect of LW? What can you expect of me on LW? I’m not clear yet. I already knew much of what is on LW when I came here. But I enjoyed the crisp and detailed posts. Refreshing or deepening rationality never hurts. I especially like EY’s stories. They bring rationality ‘to the masses’. I will definitely read HPMOR to my sons when they are old enough.
I think I can enrich LessWrong with critical views on the singularity. I have some strong arguments and even empirical evidence that there might be inherent complexity limits to technology and cognition which essentially render superintelligence infeasible (I see UFAI as a risk nonetheless).
And then I have some ideas on AI which build on a synthesis of logic and neuronal (vague) models which I’d like to share and discuss.
Maybe I will also share life experience. It seems that I am fairly old for this community and can do something about the arrogance risk (which I myself feel too) and about life expectations.
Welcome! :-) Whereabouts in Germany are you?
In Hamburg. And I’m not leaving, either.
Do go on...
Hello,
I am a 23 year old male named Corey, though I prefer to go by the alias Kavrae in any online discussions. This allows me to keep a persistent persona across all sites or games I may join. If you happen to come across this alias elsewhere, there is a high probability that it is the same person. Please be kind in judging such findings though, as I have gone through a bit of a mental overhaul in the last few months. I would also like to apologize in advance if this gets a little lengthy; that seems to be a trademark of my posts lately.
I should probably do a brief summary of myself before diving in to my personal rationality history.
My education began in a highly underdeveloped rural high school. Low student standards and even lower testing criteria seem to have set me up with delusions of superior intelligence. Such views were quickly dissolved in the following two years at a Missouri university studying computer engineering and physics. To put it shortly, the first year thoroughly broke me and opened my eyes to how vast academia truly was. While harsh, it is something I’m now grateful for. Unfortunately, in a decision I very much regret, I cut my education short and did not earn any sort of degree, due to outside events.
As a product of the previously mentioned events, I have been married for approximately 3 years and have a 2-year-old son. I’m proud to say that he is turning out to be exceptionally intelligent, particularly in the areas of symbol recognition and technology use. I certainly plan on teaching him what I can of rationality and science as young as possible, in an attempt to make the next generation better than the current one.
I am currently a web application developer and have been one for approximately 2.5 years, with initial training in the form of a 6-week programming bootcamp plus trial-by-fire. While the total time spent is relatively short, I have equal experience with open source and DotNet managed solutions, with no preference between the two. It may seem contradictory to my hobbies in the next section, but I would prefer a future position as a system architect or senior developer rather than some form of management. I believe this goes back to certain control issues that I’m discovering through introspection.
Much of my free time now is spent in multiplayer gaming, whether as a support player in various MOBAs or MMOs, or as a GM in local tabletop games (Shadowrun, Pathfinder, etc.). The former is one I’m considering dropping in favor of martial arts or outside-of-work programming. In either case I tend to be the one who spends extensive hours poring over rulebooks and theorycrafting sites, then subjecting my players to lengthy summaries. In my hobbies I tend to find myself in positions of teaching, leadership, or simply high responsibility more often than not. Quite possibly another symptom of the control issues mentioned above.
I believe my introduction to rationality began in college during my second and third semesters, though I didn’t realize it at the time. The combination of a basic physics class and introductory logic changed my view of the world. Everything seemed much more controlled and calculable, whether I could do such calculations myself or not. Probabilities became very important to me at this time, though I now believe I often misused them. My second introduction to rationality came after I got married, in a series of events that I should have handled far better. The short version is that I improved my debate skills against an in-law who seems to embody every cognitive bias and fallacy I have read about thus far. This was where I learned about such fallacies and began to recognize just how ingrained they were in society, as well as develop a bit of cynicism towards mankind’s mental habits (this feels like bad phrasing. Suggestions on improvement?).
This brings us to the present. I came across LessWrong through HPMoR and have spent the last few months reading through the core sequences. I plan on doing so again soon, to ensure that I retain at least a fraction of what I’ve read. It has been quite the experience so far: updating so many beliefs that I had never questioned and improving concepts that I thought “good enough”. I have also learned many lessons regarding how and when such knowledge should be used, often in painful or humbling ways.
I recognize that I have a very long way to go in the ways of rationality and believe that joining in the discussions, rather than simply lurking, will get me there faster. To narrow the spectrum of the vast amounts of information to learn, I am attempting to focus on evolutionary psychology, cognitive biases, and logical fallacies. Thus far I have found them to be the most fascinating and useful.
I read physics fora for just that effect. Some of it could just as well be an elaborate VXJunkies, for all I can tell.
Thou Shalt Not Anthropomorphize Unspecified Points In Mind Design Space.
You make some good points. Please forgive me if I am more pessimistic than you are about the likelihood of AGI in our lifetimes, though. These are hard problems, which decompose into hard problems, which decompose into hard problems—it’s hard problems all the way down, I think. The good news is, there’s plenty of work to be done.
Heart processed.
Processing soul. Bzzzzt, does not compute.
Please enter additional matter.
My name is Forrest. I’m 20 and studying undergraduate Physics and Computer Science at the University of Maryland. About two years ago, one of my friends introduced me to HPMoR and I was instantly hooked. A few months ago, before the final plot arc came out, I decided I was tired of waiting for HJPEV and came here to learn about the Methods of Rationality themselves from the source. I spent a few months lurking, read many of the sequences, and now decided to actually go about making an account. So, here I am!
How long did it take you to learn to say it with a straight face?
Hello again. Used to post as “ZoneSeek” but switched to my real name. I’m from the science/science fiction/atheist/traditional rationality node, got linked to LW years ago through Kaj Sotala back in the Livejournal days. I have high confidence that I am the only LessWronger in the Philippines.
You know, a feature it would be nice to have on LessWrong is a namechange feature. I too have thought about moving over to my real name, but that is painful, you know? I’d have to start over from complete scratch. I guess it wouldn’t be so bad, since I’ve only been posting here for a year, and the pain will only get worse the more I put it off, but it would be much nicer if there were a button I could click to just change my username. Yes, put some safeguards on it, like having it say on my userpage what my username used to be, and maybe even have it cost karma or something, to prevent it from being overused.
Of course the real problem is that someone needs to actually go and make the changes in the code, and that takes work. There likely are higher priority changes just waiting vainly for someone to implement them, as TrikeApps does not have the manpower or resources to work on LessWrong save once in a blue moon. So it’s unlikely this will happen in the foreseeable future. But if someone sees this, and wants to implement it, go ahead! I’m sure quite a few people would appreciate it.
“Show my real name” is a feature under current development, as of about 2 weeks ago.
That is wonderful news—thank you! It sounds like we will have both usernames and real names, and both will be displayed, which is exactly as it should be. Thank you Tricycle!
Hello!
Actually, I am no stranger to this site; I have been a sporadic fly-on-the-wall here since early 2011, when I found out about you guys through gwern’s personal webpage (to which my interest in nootropics, n-backing, and spaced repetition had led me). I’ve made several desultory stabs at the sequences; I think I’ve read most of them twice over, but some I’ve abandoned and some I’ve never touched. I started HPMoR reluctantly, found I couldn’t put it down, and finished it in a single sitting. Lately I’ve been pretty swamped with work, but I’ve been trying to follow along with the Superintelligence reading group. Though I’ve been content to lurk, I am now extremely keen to take a more active role in the discussions!
Blurb: I am a 25 year-old doctoral student and researcher in the Learning Sciences with an academic background in Statistics and Biology (mostly behavioral neuroscience). I am dedicated to making learning as powerful and efficient as possible through psychological, biological, and technological cross-pollination. Only an optimally educated humanity will be equipped to solve the problems of the future (and indeed, those of the present)! Though my research contributions have been mainly on projects not my own, I am ultimately interested in psychometrics, human-computer interaction, intelligent tutoring systems/cognitive tutors, and redesigning classroom instruction to reflect the state of the art in cognitive science.
For a while I was deeply wary of technology—the recklessness of our innovation and the potential it had to change human beings irreparably, if it didn’t eliminate them completely. I had just discovered Heidegger’s Question Concerning Technology, Bill Joy’s Wired essay, Kaczynski’s manifesto… sundry warnings of an impending techno-dystopia. But I came to reevaluate my fears: the proper course of action is not to rage against the machine. Our future is a technological one whether we like it or not (spoiler: we like it), and despite my initial resistance I have come to embrace technology and the changes to humanity it will increasingly entail; not only has it greatly improved life on Earth (at least for humans), but it can be continually leveraged to this end (for all forms of life). However, I feel that emerging technologies should be pursued with much greater care than they are currently, and anticipating the many long-term side-effects of such development requires that the people of the world (or their devices) be informed and thoughtful enough to do so (cf. differential intellectual progress). Any attempt at such a wholesale societal improvement program requires better education, and my hope is to help speed things along on this front.
Gah, I really meant to keep this shorter, but I still have so much to say about myself! Best to quit now before I bring up my precocious childhood or my pious vegetarianism! Here’s to many great discussions! I look forward to meeting you all!
Hello. My name is Avi. I am an 18 year old Orthodox Jewish American male.
I found out about LessWrong through HPMOR. I was very impressed by the quality and consistency of the writing.
I’m partly through the sequences (in the middle of the quantum one currently), and I have a lot to say on much of what I’ve seen, but I decided not to post too much until I’ve finished all the sequences. Most of what I’ve seen seems correct, and then there are posts here and there that I think have logical errors.
I was a little disappointed that most of my comments got voted down (I’m at −3 karma now). Can anyone tell me why?
Welcome, Avi!
It looks like I downvoted three of your previous comments. Sorry about that (not really, it had to be done). Here is my reasoning, since you asked:
Your comment on AI avoiding destruction suggested that you neither read the previous discussion of the issue first nor thought about it in any depth, but just blurted out the first or second idea that you came up with.
Your retracted FTL question indicated that you didn’t bother searching online for one of the most common questions ever asked about entanglement. Not until later, anyway. So the downvote worked as intended there.
Your comment on the vague quasi-philosophical concept of superdeterminism purported to provide some sort of proof of it being not Turing-computable, yet did not discuss why the Turing machine would not halt; it only gave a poorly described thought experiment.
I am sorry you got a harsher-than-average welcome to this forum; I hope your comment quality improves after these few bumps to your ego.
Good for you. Note that the Quantum sequence is one of the harder and more controversial ones; consider alternative sources, like Scott Aaronson’s semi-popular Quantum Computing Since Democritus, written by an expert in the field.
That’s quite wise. If you write down what you want to say and then look back at it after you finish reading, you will likely find your original thoughts naive in retrospect. But a good exercise nonetheless.
If at some point you think that after a cursory reading of some post you found a hole in Eliezer’s reasoning that had not been discussed in the comments, you are probably mistaken. Consider this post of mine as a warning.
Also note that as a self-identifying “Orthodox Jewish”, you are bound to have compartmentalized a lot, and Eliezer’s and Yvain’s posts tend to vaporize these barriers quite spectacularly, so be warned, young Draco. Your original identity is not likely to remain intact, either.
With these caveats, have fun! :)
Joining these forums can serve as something of a reality check to gifted young people; they may be used to most any half-baked thought still being sufficient to impress their environment. Rarely is polish needed, rarely are “proofs” thoroughly nitpicked. Getting actual feedback knocking them off of their pedestal (“the smartest one around”) can be ego-bruising, since we usually define ourselves through our perceived strengths. Ego-bruising, yet really, really important for actual personal and intellectual growth.
Blessed be the ones growing up around other minds who call them out on their mistakes, intellects against which they can grow their potential.
(I don’t mean this as applying specifically to Avi, but more as a general observation.)
Yep. I’ll put it even more directly.
Smart people growing up in environments where most people around them are less smart tend to develop a highly convenient habit of handwaving or bullshitting through issues. However when they find themselves among people who are at least as smart as they are and some are smarter, that habit often leads to problems and a need for adjustment :-)
Does that go both ways? That is, can I “nitpick” other people’s comments and posts? Also, if I find a typo in a post (in the sequences so far, I’ve spotted at least 2), is it acceptable to comment just pointing out the typo?
Why not PM them first?
This is my own practice. My reasoning is that pointing out a typo is of no enduring interest to other readers, and renders the comments section less valuable to other readers; so if it’s convenient to contact the author more quietly, one should.
Yes. I recommend using ctrl-f to ensure no one else has already pointed out that typo.
Of course you can. Whether it’s wise to do so is an entirely different question :-D
Yep, been there, have a bruised ego to show for it.
I don’t think I would have minded as much if there had been comments explaining why they thought I was wrong. It was the lack of response that bothered me.
(And what’s with this “You are trying to submit too fast”? I’m not allowed to post too many comments in a row?)
Yes. If I remember correctly, LW also implements some form of slow-banning (the amount of time required between your comments depends on your total karma), but I may be recalling a feature request as an implemented feature.
I thought it was caused by having a lot of recent posts downvoted.
From your post that you linked: “Instead I may ask politely whether my argument is a valid one, and if not, where the flaw lies.” I think that’s what I did on my FTL comment. (Incidentally, I had looked online and found several different versions of an experiment that said the same as I did in different ways, but the answers didn’t explain well enough for me).
I actually spent at least an hour reading through the comments on that AI post, and decided that the previous discussion wasn’t enough for my idea.
I’m not too good at anticipating which part of my arguments people will disagree with or not understand, so that may be why I don’t explain fully. I was hoping for a response from which I could see what’s missing and fill it in. Things are usually better explained in my head than in what I write down.
I read most of the posts offline in ebooks. That means I don’t see the comments unless I then go online and look. Is there a set of ebooks that includes comments? (For all I know, most of my ideas have already been said and refuted.)
And is he perfect?
I don’t know, but sounds like a good idea. Would be rather Talmudic in spirit. Unfortunately, most of the comments are fluff not worth reading, and separating the few percent that aren’t is not that easy. Maybe pick the threads with top 10 comments by karma or something.
Oh, far from it. I think that some of his statements are flat out wrong, but I only make this determination where either I have the relevant expertise or several experts disagree with him after considering his point in earnest.
Don’t many experts disagree with him on his MWI view of quantum mechanics?
Also note that replacing “Everett branches” with “possible worlds” works in 99% of the decision-theoretic arguments Eliezer makes, so there is no need to sweat MWI vs other interpretations. I would be more interested to hear your opinion on the Trolley problem, Newcomb’s problem, and the Dust Specks vs Torture issue. Assuming, of course, that you have studied them in some depth and gone over the various arguments on both sides, a process you must be intimately familiar with if you have attended a yeshiva.
I’ve seen Newcomb and Dust specks vs Torture but not Trolley (although I’ve seen that one before in other places). Which sequences do I need to finish for those?
If the trolley one is the same as the “standard” version, then it’s fairly trivial within the framework of Orthodox Judaism (if I’m allowed to bring that in), because of strict rules about death. I’ll elaborate further when I’m up to the question. The other two are a lot more complicated for me.
Yes, the standard Trolley problem, sorry. For more LW-specific problems, consider Parfit’s hitchhiker.
Of course you are allowed to bring it in. And, unless you insist that it is the One True Way, as opposed to just one of many religious and moral frameworks, you probably will not be judged harshly. So, by all means!
So according to Orthodox Judaism, one is not allowed to (even indirectly) cause a death, even when the alternative is considered worse. The standard example is if you’re in a city and the “enemy” demands you hand over a specific person to be killed (unjustly), and says if you don’t do so, they will destroy the whole city and everyone will die (including that person). The rule in that situation is that you aren’t allowed to hand them over. Accepting that as an axiom, the trivial answer to the trolley situation is “don’t do anything”. Maintain the status quo. You cannot cause a death, even though it will save ten other people.
Parfit’s hitchhiker also appears trivial. It seems to assume I place no value on telling the truth. As I do, in fact, place a high utility on being truthful (based on Judaism), my saying “Yes” will translate into a truthful expression on my face and I will get the ride.
Note: I got the link from searching for “midvar sheker tirchak”, which is the Bible’s verse that says not to lie, roughly translated as “distance yourself from falsehood.”
On another topic, if I think that it is the “One True Way”, but don’t say that, is that OK?
Thank you, I appreciate your replies.
Hmm, I see. So, a clear and simple deontological rule. So, if you see your children being slaughtered in front of you, and all you need to do to save them and to kill the attacker is to press a button, you are not allowed to do it?
Also, does this mean that there cannot be Orthodox Jewish soldiers? If so, is this a recent development, given that ancient Hebrews fought and killed without a second thought? Or is there another reason why it was OK to kill your enemy in King David’s time, but not now?
Right, ethical systems which value honesty absolutely have no difficulty with this. But is this a utilitarian calculation or an absolute injunction, like in the previous case, where you are not allowed to kill, no matter what? Or is there some threshold of (dis)utility above which lying is OK? If so, what price demanded by the selfish driver would surely cause a good Orthodox Jewish hitchhiker to attempt to lie?
First, note that I do not represent LW in any way and often misjudge the reaction of others. But my guess would be that simply stating this is not an issue, but explicitly using this belief in an argument may result in downvoting. This community is mildly hypocritical in this regard, as people who push their transhumanist views here as “the best/objective/universal morality” (I am exaggerating) can get away with it, but what can you do.
I may not have given enough detail. The prohibition is specifically against killing innocent people. There is a death penalty for many crimes, including murder (although not as broadly as EY seems to think. He once said that the Bible gives the death penalty for crossdressing. Evidence suggests otherwise. But that’s another topic.) So:
Assuming this attacker is the one killing or threatening to kill your kids, you are allowed to kill him (although you are supposed to try merely to injure him if killing isn’t necessary to stop him). You wouldn’t be allowed to kill someone else who is innocent, even to save many people.
I don’t know if you’re familiar with the current debate in Israel over the draft? It’s not really related, though. Again, the ancient Hebrews’ fights were usually either to reclaim parts of Israel which belonged to them from the gentile nations inhabiting them, or to defend themselves against attackers. In both scenarios, the “victims” weren’t innocent. For some more info, see here, here, and here.
(By the way, I just saw this while looking up that last link, which (mostly) confirms what I said about the Trolley problem.)
I realized after I posted that answer yesterday that I could conceive of a case that would work for me, in the spirit of the Parfit’s hitchhiker example. Namely, if I knew that when I got to town there would be someone whose life I could save, but only with $100. (Also assuming that I’ve got only $100 cash total.) That person’s life would take precedence over telling the truth, and I wouldn’t get the ride. There isn’t anything I could do in terms of prior obligation that would override the claim of that person’s life later.
OK, that makes more sense.
Seems like a flimsy excuse to slaughter babies. Though I suppose the Amalekite case can be somewhat justified by an uncharacteristically utilitarian calculation on God’s part if Amalekites presented an x-risk to Hebrews. But that is not how the issue is usually presented.
From your link:
...so they wiped out every woman and child? In any case, this inference seems like an extreme case of motivated cognition: “what we did was right, therefore they must have done something wrong even if we have no records of what they did”. Further reading of your links provides a fascinating insight into how far this motivated cognition can lead otherwise very smart people.
That it is indeed a case of motivated cognition can be trivially shown by transplanting the question into a modern setting and asking under which circumstances it would be OK to wipe out a whole people today. The answer is clearly “none” (I hope). Yet what (ostensibly) happened then has to be justified at any cost, lest one admit that Saul and Samuel were little better than Hitler and Pol Pot. Or that human ethics has evolved, and what was acceptable back then is a high crime now.
Eh, I take back the unnecessarily emotionally charged reference to the iconic supervillains.
What happens if instead of “causing” a death, you’re doing something with some probability of causing a death? For instance, handing someone over to the enemy results in a 99% probability of them being killed by the enemy. What if it’s only 10%? What if the enemy isn’t going to kill him, but you need to drive through a war zone to give him the prisoner, and driving through the war zone results in a 10% chance of the person being killed? What if the enemy says that he’s going to kill one person from his jail no matter what, and he puts the person in the same jail (so that instead of 1 person being killed out of 9 in the jail, 1 person is killed out of a group of 10 that includes the new person, thus increasing the chance this specific person is killed, but not increasing the number of people killed)?
I think that a 99% probability would be the same as 100% for this purpose. A “doubt of death” is considered as strong as a definite death in general. In the war zone example, I think (with a little less confidence) a 10% would work the same. You simply don’t take into account the potential benefits, when weighed against an action that you must do that will cause a death. On the other hand, the person being requested is allowed to sacrifice their own life (or a 10% chance of doing so) to save others. I’ll have to think about your last case a little more.
What if you just need to do ordinary driving, where there’s a fraction of a percent chance of death?
If you couldn’t do things which had any chance at all of killing innocent people, then you wouldn’t be able to drive, or do a lot of normal things. There’s probably some non-zero chance that the next time you turn on your computer it will trigger a circuit fault that causes the building to burn down an hour later.
I think there’s a point where the number is low enough that it can become insignificant, but I’m pretty sure it’s less than 10%. There’s a concept of what is considered a “normal risk”.
Incidentally, since you mentioned it, there have been attempts by some Rabbis to ban driving for that reason. I’m unable to find a better source currently, but see: this. Some (current ones) have also suggested that one shouldn’t drive for pleasure, but only where there’s an actual need.
I thought about your last case earlier, and decided it would also not be allowed. You need to consider each person separately. This person will have a 10% chance of being killed due to your action, which forbids it.
Part of the rationale for the rules (I think) is valuing each moment of life, so, for example, someone is considered a murderer if they kill someone who would die anyway in an hour. Causing the person to die earlier is worse than letting them die later with everyone else.
Okay, here’s another question: Instead of being one person who drives and has a small chance of killing someone, you’re running a big company with a lot of drivers.
If two people drive, the chance of killing someone is about twice that of when one person drives. If a lot of people drive, the chance may add up to enough that it is over your threshold for insignificance. So is it immoral to run a company that uses a lot of drivers, because statistically the chance of death over many drivers is too large, even though each individual driver is okay?
What if instead of running a company you’re collecting taxes, and collecting taxes costs some people some “moments of life” (since they have to work longer to pay the taxes)? Most people would say that this is okay because the taxes benefit society, but if you aren’t permitted to balance the loss to the individual against the gain to someone else, you can’t use that reasoning.
Or what if you’re running a country and you need to decide whether to have laws that put people in jail? Because of inevitable human error, you’ll be putting more than one innocent person in jail. (Even if you don’t know which person is the innocent one.) If you’re not willing to say “It’s okay to make innocent people lose some ‘moments of life’ as long as it helps others more”, how can you justify having jails?
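A quick numeric sketch of the drivers point above: for small p, the chance that at least one of n independent drivers kills someone is 1 - (1 - p)^n, roughly n*p, so a large enough fleet crosses any “insignificance” threshold even while each driver stays below it. The per-driver probability here is a made-up illustrative number, not a real statistic:

```python
# How small independent risks aggregate across a fleet of drivers.
p = 1e-4   # hypothetical chance that one driver kills someone in a year

for n in (1, 2, 100, 10_000):
    at_least_one = 1 - (1 - p) ** n   # P(at least one death among n drivers)
    # n*p is only a good approximation while n*p << 1
    print(f"{n:>6} drivers: {at_least_one:.4%}  (n*p approx: {n * p:.4%})")
```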
Huh. Presumably they would also frown upon any similarly risky activity, like climbing, swimming, or even living near the Gaza border, where one might get killed by a rocket.
Do you non-negligibly risk killing other people while swimming or climbing? It was said upthread that only killing innocent people counts, so killing yourself doesn’t count ^W^Wcounts ^Wdoesn’t count ^W^WScrew you Euathlos!
See this about Gaza.
I don’t think climbing or swimming are as dangerous as driving. There is an obligation for a father to teach their son to swim, mentioned in the Talmud.
They’re a couple orders of magnitude riskier, actually. It’s tricky to make a direct comparison because the risk of driving is usually expressed over distance traveled, while risk in sports is usually measured per session; but if we assume a typical day’s driving is about 50 miles (80 km), then we’re looking at 0.1 micromorts per session for driving, as opposed to 17 for swimming or 3.1 for rock climbing.
(I’m not totally sure I trust that swimming estimate. The one for rock climbing aligns with my intuition, although there’s a lot of variance within the sport—bouldering is comparatively safe, while attempting the world’s highest peaks is absurdly risky by sports standards. I did know one guy who died in a shallow-water blackout and none who died climbing, for whatever that’s worth.)
[ETA: The estimate for swimming turns out to be bogus. See below.]
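For what it’s worth, the unit conversion behind that comparison is easy to check mechanically. A minimal sketch using only the figures quoted above (all of them this thread’s estimates, not authoritative data; the per-mile driving rate is just the one implied by the 0.1-micromort, 50-mile session):

```python
# Rough per-session risk comparison from the figures quoted above.
# One micromort = a one-in-a-million chance of death.
from math import log10

SESSION_MILES = 50                      # "a typical day's driving"
DRIVING_PER_MILE = 0.1 / SESSION_MILES  # implied by the quoted 0.1/session

per_session = {
    "driving (50 mi)": SESSION_MILES * DRIVING_PER_MILE,  # 0.1
    "swimming": 17.0,        # quoted estimate (later retracted, see ETA)
    "rock climbing": 3.1,    # quoted estimate
}

baseline = per_session["driving (50 mi)"]
for activity, mm in per_session.items():
    print(f"{activity}: {mm:.1f} micromorts/session, "
          f"~{log10(mm / baseline):.1f} orders of magnitude vs driving")
```

Run as-is, this reproduces the ratios claimed in this thread: about 2.2 orders of magnitude for swimming and 1.5 for climbing, relative to driving.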
The link you gave puts car deaths above swimming in the second diagram. It doesn’t say that the sporting numbers are measured per session (except for BASE jumping, hang-gliding, scuba diving, canoeing, and rock climbing). My own research (the first three links from Googling “risk of car accident death”) puts car accidents consistently higher than swimming deaths.
http://www.livescience.com/3780-odds-dying.html: 1-in-100 lifetime car death, 1-in-8,942 swimming death.
http://www.riskcomm.com/visualaids/riskscale/datasources.php: 1 in 17,625 one year car occupant death rate (based on 2002 data), 1 in 83,534 one year drowning death overall, 1 in 452,738 one year drowning death in swimming pool
http://well.blogs.nytimes.com/2007/10/31/how-scared-should-we-be/?_php=true&_type=blogs&_r=0: 1 in 84 lifetime car deaths, 1 in 1,134 swimming deaths.
I believe that’s because people drive much more than they swim; the risk communication scale uses, say, your second set of numbers, while the comparison the link author gave converted them from annual to per-act.
I was trying to show that the swimming estimate wasn’t per session. 1 in 56,587 is close enough to 1 in 83,534 that they’re probably measuring the same thing, namely yearly deaths, in which case (assuming most swimmers swim more than 20 times a year, which I think is reasonable), the per-session risk for driving is more than that for swimming.
You’re right, it’s not per session—but it isn’t per year either. On closer examination it looks like they’re calculating the risk of death over the ten years surveyed (unless the 31 deaths reported are annualized, which I don’t think they are), which is an absolutely terrible bottom line—but fine, it makes the annual risk of death 1 in 566,000. I also notice that the population estimate is identical to that for running and cycling, so it’s probably some sort of very crude estimate of Germans involved in sports. Ugh. At least the climbing stats look more reliable.
Incidentally, an annual risk of death of 1 in 566,000 and a hundred sessions per year (two a week with time off for good behavior) gives us a per-act risk of 0.017 micromorts, about equal to driving four miles in a car.
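For anyone who wants to check the arithmetic, a minimal sketch using the figures quoted in this thread (31 deaths over the ten years surveyed, the 1,754,182 population estimate, and the hundred-sessions-per-year assumption):

```python
# Recomputing the swimming figures from the quoted survey numbers.
deaths = 31                 # drownings reported over the survey period
years_surveyed = 10
population = 1_754_182      # crude estimate of swimmers involved
sessions_per_year = 100     # ~two a week, as assumed above

annual_risk = (deaths / years_surveyed) / population
per_session_micromorts = annual_risk / sessions_per_year * 1e6

print(f"annual risk: 1 in {1 / annual_risk:,.0f}")              # ~1 in 566,000
print(f"per session: {per_session_micromorts:.3f} micromorts")  # ~0.017-0.018
```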
Those numbers look like general population numbers (and since it looks like a lot of drowning deaths are due to ineptitude, it seems unclear to me whether the yearly risk for frequent swimmers is higher or lower than for non-frequent swimmers). Instead of ‘all drowning,’ the 1 in 83,534 number, one should probably use the ‘in swimming pool’ number, which is 1 in 452,738.
I’m not sure I trust these estimates—or, rather, I don’t think I find them useful. The main problem is that the probabilities involved are all strongly conditional.
Consider swimming in a hotel swimming pool with a lifeguard watching and long-distance swimming alone in the ocean. Both are “swimming” but these two activities are radically different from the risk perspective. Similarly, you can do “climbing” in the climbing gym and you can do “climbing” in the Himalayas.
Sure, there’s a lot of variance involved. But there are more and less safe driving habits, too, and I’ll bet the variance is about as high. The point isn’t to demonstrate that one practice is under all conditions more or less safe than another, it’s to compare their average dangers as they’re actually practiced. And that clearly favors driving. It’s a profoundly bad idea to look at a set of statistics like this and say “oh, the ones that look inconvenient to me were probably doing something unsafe, they don’t count”.
On the other hand, these statistics don’t take health benefits from being physically active into account, which could potentially give ammunition for a much stronger critique—though given ike’s comments, I’m not sure it’d be a valid critique in the context of Jewish law.
I bet less. Yes, you can practice defensive driving, but if you’re on the road in the traffic there is only so much you can do to avoid the idiot who is both in a hurry and needs to send that text message right now. You don’t have much control over external factors. But in swimming you often do—it’s pretty hard to drown if you are swimming in a pool with others watching.
Yes. Therefore if you know you practice in way that’s different from the average, the probabilities change for you.
I wasn’t thinking about defensive driving, I was thinking of driving thirty miles over the limit while not wearing a seat belt and texting your girlfriend about the awesome fight you just saw in the pub.
In pretty much any activity you can asymptotically drive your chance of surviving towards zero if you set your mind to it :-/
If we are talking about variance, the lower safety bound is often in approximately the same place, but the upper safety bound (as well as the center of the distribution) varies.
I’ll bet there are more idiot drunks on the road than there are Himalayan mountaineers, even proportionally.
Yes, but if you’re going climbing you can choose to go the climbing gym and be absolutely safe from the avalanches in the Himalayas. However if you’re going driving on public roads, you cannot make yourself absolutely safe from drunk drivers.
You can make your climbing safer than you can make your driving.
That’s what makes climbing higher variance than driving.
You can make your climbing safer than summiting K2 would be, certainly. But enough safer to overcome those one and a half orders of magnitude of difference in the average? I haven’t actually seen any numbers on this, but that seems optimistic to me.
I’ll have to look at the methodology to believe that one and a half orders of magnitude, but regardless of that yes, you can make your climbing safer.
For example, you can do bouldering on technical routes which are all about agility and finger/arm strength. These routes rarely go more than 10 feet above thick mats—since you’re not belayed, you’re expected to just jump down when/if you run into trouble. Twist your ankle, sure, possible. Die—not very likely.
Yes, I mentioned bouldering in my original post.
I don’t think there’s a Lesswrong-specific take on the trolley problem, so I’m assuming shminux is just referring to the usual one.
Some high-profile physicists disagree, others agree. Very few believe in some sort of objective collapse these days, but some still do. This strange situation is possible because MWI is not a well-formed physical model but more of an inspirational ontological outlook.
Hi Avi, welcome to LessWrong!
There’s a big problem with upvotes and downvotes on LessWrong, namely that the two important but skew dimensions of agreement/disagreement and useful/disuseful for rating posts are collapsed into one feature. A downvote can feel like ‘Your comments are bad and you should feel bad (and leave and never post again)’, but this is often not the case.
Downvoting comments by a person asking why the parent comment was downvoted is generally poor form. In your case, it might be because you did it for a few comments in quick succession, which might have made Recent Comments (on the sidebar) less useable for someone so they downvoted the comments. To avoid this in future, maybe add a note in your comments when you post them noting that you are a new user trying to figure out how to tailor your comments to LessWrong and requesting that downvoters explain their downvotes to help you with this. On the other hand, it’s not impossible that someone was being Not Nice and mass-downvoting your comments, which wouldn’t be your fault.
Is “disuseful” a synonym for “unuseful” here or does it mean something else?
I’ll add a specific way for newbies to ask why a comment was downvoted without clogging up the recent comments list: edit the original, downvoted comment, appending a little “Edit: not sure why this was downvoted, could someone explain?”-type note. (It’s obvious once you think of it, but easy not to realize independently.)
It means something else. I use the dis- prefix to mean the active opposite of the thing to which it is prefixed. So ‘I diswant ice cream’ is a stronger statement than ‘I do not want ice cream’, though most people, whose language is less considered and precise, would (also) use the latter to cover the former. I guess some would say ‘I don’t particularly want ice cream’ to disambiguate somewhat.
Thanks for the suggestion.
Is that different enough from “harmful” to merit a less standard word?
I can see several possible connotations and policy suggestions underlying your comment, but not sure which one(s). Can you specify? Like, are you suggesting I update in this specific case or my general inclination to use nonstandard undefined terms or...?
I was thinking about this specific case, but now that I think about it it does generalize.
Minor point of information. In English “do not want” is not the negation of want. It actually means what you have defined “diswant” to mean. The “not” is privative here, not merely negative. People are not being less considered and precise when they use it this way. They are using the words precisely as everyone but you uses them—that is, precisely in accordance with what they mean.
You are welcome to invent a new language, just like English except that “not” always means simple negation and never means privation; but that language is not English. Neither, for that matter, would the corresponding modification of French be French. Comparing the morphology of translations of “want”, “do not want”, “have”, and “do not have” in a further selection of languages with Google Translate suggests that the range of languages for which this is the case is large.
That is indeed often the case, though I notice I feel hesitant to agree that it is always the case, and retain a feeling that people use ‘do not want’ in both ways, depending on the context. Regardless, when I said:
I meant (hohoho) this as a statement about my usage, not the common usage of others.
Thanks for pointing me to a further point of reference (the term ‘privative’).
Edit: I looked at the Wikipedia article for privative
It gives some examples:
and it says:
It seems like your usage of privative was excluding alpha privative, i.e. mere negation, but the examples and this summary sentence suggest ‘privative’ fails to distinguish (hohoho again) between mere negation and...the other thing. (Inversion? Opposition?) I’d be most amused if linguists had failed to coin a specific term for the subform of privation that is the ‘active opposite’ of something, and had only given a name (‘alpha privative’) to the subform of mere negation.
In the literal sense that I have considered these things more than they have, they are.
Localised examples like this seem trivial, but when generalised to encouraging good habits of thought and communication and precision, it’s not just a localised decision about ‘un-’ vs. ‘dis-’, but a more general decision about how one approaches thought, language, and communication.
Also, if you just look at ‘do not want’/‘diswant’ in a vacuum, then yes, it seems like both my usage and the common usage specify what they mean. But the broader question of using negation and ‘not’ in a way that cues the mental process of Thinking Like Logic is inextricable from specific uses of ‘not’. I generally lean towards the position that the upper echelons of a skill like Thinking Like Logic are only achieved by those who cut through to the skill in every motion, and that less compartmentalisation leads to better adoption of the skill. And I feel like it probably intersects with other skills and habits of thought. So trivial cases like this are part of a bigger picture.
I don’t think I understand what you mean by privative. Is it something like the difference between “na’e” and “to’e” in Lojban? For reference: {mi na’e djica} would mean “I other-than want”, and {mi to’e djica} would mean “I opposite-of want”.
That’s pretty much it. Privative “not” would be “to’e”. The English “not” covers both senses according to context, but “not want” is always privative and some lengthier phrase has to be used to express absence of wanting. Or not so lengthy, e.g. “meh”.
Oh, cool. I’ve found the distinction to be a very useful one to make.
Well [-l + come]; one of your comments was erroneous, as you said yourself (the one you retracted), another comment reads like a restatement of a popular comment predating yours by over a year (which you acknowledged yourself), and the third makes a pretty sweeping claim about superdeterminism not being Turing computable. Unfortunately, the proof you provide seems flawed on a couple of counts.* However, even if the proof did turn out to stand, people frown upon comments which do not give more explanations and context to sweeping statements that seemingly come out of thin air (even if they did turn out to be correct). FYI, I didn’t read (until now) or vote on any of your comments.
That makes 3 plausible downvote explanations for 3 comments, two of which you mentioned yourself. I’m surprised about your surprise.
* (Superdeterminism doesn’t require that part of the overall program can be perfectly predicted by a much smaller program in advance, nor that the outcome of the smaller program can then be used to change the overall outcome. At least two reasons: 1) Not being able to verify complete correspondence (except by fiat), given all hidden variables and their potentially unknowable context (unknowable from within the program, and the context may encompass the entire universe); 2) superdeterminism can in principle be saved simply by saying that the agent isn’t able to show a contradiction; i.o.w. in a superdeterminist universe, a perfect prediction-machine conditional on which a contradiction can be derived cannot exist, by definition of what “superdeterminism” means. Your thought experiment would be inapplicable in a superdeterminist universe, strange as it sounds. In that light, your proof reads similar to the one that shows that a Halting problem decider cannot exist. Alternatively, the agent would be unable to use the result to show a contradiction. While such an inability would indeed seem strange, from the universe’s point of view, every facet of that inability would have been predetermined anyways.)
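To make the Halting-problem analogy in that footnote concrete, here is the classic diagonalization sketch in Python. The `halts` function is hypothetical by assumption; the point is that its mere existence would let a program contradict it, just as an agent handed a perfect prediction of its own behaviour could act against it:

```python
# Classic halting-problem diagonalization, illustrating the footnote's analogy.

def halts(program, arg) -> bool:
    """Hypothetical perfect decider: True iff program(arg) halts.
    Assumed to exist for the sake of contradiction; not implementable."""
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the decider predicts about us.
    if halts(program, program):
        while True:
            pass        # loop forever, contradicting the prediction "halts"
    return              # halt immediately, contradicting the prediction "loops"

# diagonal(diagonal) halts iff halts(diagonal, diagonal) says it does not:
# contradiction, so no such total decider can exist. Analogously, a
# superdeterminist universe simply cannot contain a perfect prediction
# machine whose output an agent could use to falsify the prediction.
```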
You’re basically saying that superdeterminism doesn’t require Turing computability, not that it is in principle Turing computable. Anyway, my point was that superdeterminism predicts that we will never find a practical way to compute the observed answer to a simple quantum superposition, because that would imply that we could change it.
And I guess I did make a “sweeping claim”, but I was still annoyed that I just got down-voted without a reply. If I had a “sweeping claim” to discuss, how should I have posted it?
The AIbox one I had thought of before seeing that comment, and it’s (in my opinion) stronger than the other one. (And the replies to it didn’t apply to mine fully). As an aside, would I in general be expected to read all 300+ comments on a post before commenting?
See “give more explanations and context”. If you’re concerned with “never find a practical way”, that’s an entirely different discussion than “isn’t Turing-computable” (in this community, if something has a strictly technical interpretation, that’s what is defaulted to). Give enough context so that a reader knows what you’re concerned with (practical applications, apparently—see, I wasn’t aware of that), instead of a somewhat theoretical-sounding claim (which you apparently meant in a more practical way) with a proof that turns out to be wrong, given that strictly theoretical claim. Also, I was only pointing out shortcomings of your proof; to do so, no stance regarding Turing computability is required. However, there is no reason to assume that superdeterminism would require incomputability; on the contrary, as long as the true deterministic laws of physics are computable, the universe would be as well, no?
Well, at least the top level comments with a couple of upvotes, so you don’t repeat one of the main responses? That boils it down to 35-ish comments.
Oh. I need to be “strictly technical”? I’ll go back to the one about Turing computability and edit it to reflect a “strictly technical” comment.
Turing computability is a technical concept first. You don’t “need” to be strictly technical (obviously), but talking about Turing computability and giving a proof-by-contradiction kind of sends off the vibes of a technical/theoretical point, don’t you think? I was making an observation about how I interpreted your comment, and why, I wasn’t telling you what you need to write about.
Hello everyone.
Consider this a just-in-case comment that I am making with very limited time before I have to run and do something else, recognizing the fact that I might fail to make one altogether if I do not do it now. How is that for acknowledging my human mental frailty?
Actually I can do one better: I just had to join the lesswrong chat to diagnose a problem with not being able to comment on an article (which was the reason I just signed up after discovering this site), and the problem turned out to stem from my misspelling my own e-mail address when I signed up.
So there you go: two cognitive flaws immediately apparent just from the process of joining this site. I wonder how many more I can discover here...
Hi LW, I’ve been a lurker for quite some time; that ended this week.
The sequences and blog (an ebook compilation I found somewhere) have a comfortable text-to-speech place in my commutes, and I’ve incorporated quite a bit of the lingo, bias definitions and concepts into my daily Anki decks. It’s not that this community was that daunting, but rather that I thought I could play catch-up. My reluctance reminds me of a programmer asking if it’s worth getting on github if he’s only joining the party now. I’ve studied computer systems engineering (electronics, digital circuit design, programming, math) and I’m spending most of my time teaching and writing, although most of my work focuses on internet technologies, security and automation—I’ve veered off quite a bit from my assembly beginnings to the Python world I seem to be living in now.
Some things that have already been actionable for me since around the end of 2011, after reading LW material and more specifically Shut up and do the impossible, include:
cramming around 12.8k German words into my vocabulary with a mere 35-45 minutes daily over 12 months (which the Goethe Inst. deemed impossible) - I do swear by SRS now,
finally finishing my thesis in security (software fingerprinting and vulnerability mapping)—whilst teaching full time—after realising my supervisor might just be incentivised to keep me around for articles; after 10 years of teaching graduate-level programming, I finally realised that focusing on student throughput might mean subsidy for the university, but ultimately I need to focus on my own work first in order for everyone to benefit,
I jumped into a couple of technologies that seemed daunting in terms of scope, with communities of people that have used them for years, including LaTeX (no idea how I would live without it now), emacs and org-mode (still only 6 months in, but it’s getting its grip on me firmly) and also R (although my gut feel is that I’ll probably veer into using pandas later),
turning into a QS oversubscriber with tools like RescueTime, Beeminder and selfspy keeping track of my goals and my actual CLI time, and
lastly probably becoming a more critical reader of everything including my own work. However fuzzy this might make me feel and regardless of how it might sound, some of the sequence material left me with a feeling of coming home.
Soon I hope to emigrate to Germany from the beautiful South Africa that is suffering at the hands of peak-level irrational politicians (discussions on this in later posts).
I hope to become more involved in discussions to ensure I get a deeper understanding of the concepts. As someone who has spent large amounts of time on curriculum development, I’m also very interested in how rationality could be taught, not only to more adults but starting much younger (having grown up under Christian parents myself).
Thanks to @army1987 for the prod to post here, and sorry for all the specific technology mentions—I just find specifics give me more information than saying ‘learning’ or SRS and such. Also, English is my second language, German third, so excuse the odd pillaging of your tongue. This feels longer than anything I’ve written about myself.
I’m happy to answer any questions, and I hope I can keep throwing my ignorant rock of understanding against this anvil for a hopefully more interesting shape. There is so much to learn if one approaches this with a growth mindset.
Hello, Less Wrong:
I have been lurking around LW for a while after finding it from links on MIRI or FHI. I’ve only recently begun to learn about Bayesian probability and inference on a practical level. I’m going through school for a bachelors in game programming. For now my primary focus is on the simplified AI currently used in gaming, but I believe that more sophisticated AI technologies like natural language parsing and more realistic behavioral simulations and problem solving will be useful in games in the near future. I work as a help desk tech where I get to experience the contrast of human irrationality and technological rationality on a daily basis.
I tend to be a devil’s advocate by nature, though I do not identify as a contrarian. I’ve learned to recognize assumptions, and try to spot them in myself as well as others, and I frequently re-evaluate and change longstanding intellectual and even political beliefs. I find that there must be a balance when advocating unpopular positions, though: if one nitpicks all the small stuff, by the time something important comes up one has already alienated everyone.
I grew up in the woods without electricity back in the 80s, but read everything I could get my hands on. This included many of the books my parents owned and everything that interested me at the local library. I think I learned to be a rationalist by listening to my dad’s rants. For example, he supposed himself to be a free market conservative on one hand, but then he would get poor service from a company and get angry and yell “There ought to be a law!” Such things would make me shake my head and pledge to try never to be like that. To their credit though, my parents did encourage free-thinking and exploring divergent ideas. For example, I was encouraged to read the Communist Manifesto. I keep meaning to read Das Kapital, because references to it that I’ve encountered make me suspect that it was written more for decision-makers, while the Manifesto seems more of a political handbook for the masses.
I feel that LW helps to reinforce my good habits and remind me to check my bad habits. I look forward to learning to more consistently practice these habits, and learn more about using Bayesian logic in life and my career.
Hello! My name is Mackenzie, or Mack. Brought here by HPMoR, I have been reading through the sequences off and on for the past year, a little at a time. I can’t say I’ve committed it all to memory, but I feel like I have a good context for the language this community uses. I am a mechanical engineering major in my sophomore [?] year. If I was a humanities major, I could be a senior by now, but two years ago I became fed up with the self-masturbatory nature of that field.
I’ve always been interested in the objective, rational approach to life. I was a “gifted” (read: obnoxious) child and liked to argue a lot with my religious family. My earliest memory of a coherent discussion on faith was when I was seven. I was irritated with the catechism I was memorizing, and argued in childish terms that it was condescending indoctrination. However, I remained a doubtful theist until I was around seventeen. After a brief attempt at evangelical zeal, I realized that I had to be honest with myself about my lack of faith. I still waver between theist and agnostic. That was around the time that I discovered Overcoming Bias, which I lurked on for a little while. After reading through HPMoR, I found this site as well.
I waited until now to make an account because I’ve been intimidated by the level of discussion that goes on here. But participation can only help cultivate my ideas and my desire to approach life more methodically.
Hello everyone.
My name is Carlos. I’m 30 years old. I was born, and still live, in Colombia.
I excelled through elementary and high school until I crashed against the hard fact that my parents could not afford my college ambitions. At that time I cycled between wanting to study psychology, but also archaeology, but also chemistry, but also cinema. I wanted to know everything.
Then came a long, dark time while I crawled through the Business Management degree my parents made me go for. Worst years of my life, absolutely. But in the meantime, I devoted my spare time to my passion, and trained myself to become a better writer. I sort of freelanced for a local newspaper, then wrote some pieces for an online newspaper, and more recently won a national short story contest. I’m currently studying journalism and preparing a series of SF novels in Spanish.
The first spark of my rationalist tendencies came from one of the many books that were at my parents’ house. It told creation myths from the Native South Americans, and I found those stories much more engaging, beautiful and surprising than anything the book of Genesis had to offer. It was always clear to me that such stories should not be taken seriously; the next logical step was to give the same treatment to Genesis and everything that attempted to present a just-so explanation for the universe.
Right now I’m only a terribly amateurish rationalist. I wasted a good part of my youth pursuing a degree that was of no interest to me, and even though I made enormous efforts to better myself in the skills that did matter to me (namely as a writer), sometimes I still can’t get over the fact that most of my friends my age have already built successful careers pursuing their true passion in the time it took me to reverse my wrong path and begin walking my chosen one.
I won’t comment much here. In my everyday life, people can keep silent for hours hearing me talk, but here I see it’s obviously going to be different. I don’t have much to offer here. You guys are the next level above me. I’m here mostly to listen, and learn.
Hi. I’m a 42yr old male, from the US and I’ve been aware of LessWrong for a few years now, stumbling across links to posts on LessWrong here and there in my web surfing travels. I’ve always been more or less a rationalist. I’ve been a self-identified atheist since high school. I’ve been a fan of Daniel Dennett for many years. I read ‘Consciousness Explained’ when it first came out many years ago and I’ve kept up reading interesting philosophy and science books since then. I’ve always enjoyed books that made sense out of previously mysterious phenomena. My feedly list has hundreds of blogs mostly in nutrition/psychology/economics and some sports (I’m a big sports fan, but prefer an analytical approach to that as well). In essence I’m the type of guy who likes this stuff.
I remember reading on here a few years ago some posts about a rationalist approach to self-help. I’m especially interested in that. I’ve always been an anxious and insecure person and if I can solve that problem the quality of my life will skyrocket. Having spent a fair amount of time reading the comment threads at LessWrong I’m pretty optimistic that I can find some folks here who are interested in discussing these things in the same way that I am. Frankly I take a much more reductionist approach to personal problems than most others and this seems like a place where I may find some people who may think similarly. Barring that I think I’ll just enjoy reading and commenting here every so often.
I enjoy the analytical side of sports, too. Do you follow sabermetrics and all its many children (e.g. advanced statistics in basketball and hockey), or are you more interested in human performance optimization (powerlifting, HIT, barefoot running, etc.)? If the latter, does that connect to your reductionist approach to personal problems and concern with anxiety?
I follow sabermetrics and its children. I was really into Bill James back in the day and still had a subscription to BaseballProspectus.com (this post is half-drunk so excuse typos please). My 2 favorite sports are hockey and baseball. Baseball analytics made its biggest advances years ago—now it seems like they are just refining, but hockey is in the initial stages. I’ve been into possession stats for hockey more than any baseball stats for the past couple of years, although I still wander onto baseballprospectus and fangraphs and read some of the posts every 2 or 3 weeks. I’m not a big hoops fan but I really like the advanced stats they have, and footballoutsiders is great too, although I haven’t really gone into depth there. I’m also interested in the performance stuff. I listen to superhumanradio regularly. He has really good interviews with scientists on a regular basis.
Hi there, I’m a Biologist turned Software Engineer, age 34. I came to Less Wrong through Overcoming Bias and HPMOR, and I’m still here because the notions of rationality appeal to me. It is nice to be among others who hold rationality as an ideal to aspire to.
Hello, all!
I’m a new user here at LessWrong, though I’ve been lurking for some time now. I originally found LessWrong by way of HPMOR, though I only started following the site when one of my friends strongly recommended it to me at a later date. I am currently 22 years old, fresh out of school with a BA/MA in Mathematics, and working a full-time job doing mostly computer science.
I am drawn to LessWrong because of my interests in logical thinking, self improvement, and theoretical discussions. I am slowly working my way through the sequences right now—slowly because I’m trying to only approach them when I think I have enough cognitive energy to actually internalize anything.
Right now, my best estimation of a terminal goal is to live a happy/fulfilling life, with instrumental subgoals of improving the lives of those around me, forming more close social bonds, and improving myself. Two of my current major projects are to smile more, and to stop wasting time on video games and the like.
I look forward to getting to know you all better and becoming a part of this community.
Hi there, my name is Jérémy.
I found Less Wrong via HPMoR, which I found via TVTropes. I started reading the Sequences a few months ago, and am still going through them, taking my time to let the knowledge sink in, and to practice rationality methods.
I like to join the LW IRC chatroom, where I had (and witnessed) many interesting, provocative, and fruitful discussions.
I’m 22, I live in France, where, after an engineering degree in Computer Science, I’m now a PhD student in the wonderful field of Natural Language Processing. I’ve been interested in AI for about 10 years, since I wanted to create a little program that could chat with me. It was a bit harder than I expected. So I studied, I learned, and reaching the state of the art, found that NLP in general was AI-complete, and that a whole world of (yet) unsolved problems was in front of me. Awesome.
Being quite lazy most of the time, I also wanted to create tools that did stuff on my behalf, and eventually tools that created such tools, etc. Looking for existing examples of this, I soon discovered recursive self-improving systems, the concept of technological singularity, and other elements that strengthened my interest in AI.
When asked about my goals, I tell people I want to share the beauty of language, which I describe as the most powerful tool of humanity, with machines. This is my main motivation in life.
This, and also a fear of death that caused some panic attacks when I was younger. I only recently came to face the problem instead of avoiding the prospect. I think AI can help humanity tackle problems faster than any other method, which drives me, again, to the path of AI.
I grew up asking lots and lots of questions nobody was able to answer. I had no friends to debate with (I skipped four grades, which set a huge social gap with my classmates). Worst of all, my parents taught me that I was the best, and that my skills allowed me to pursue whichever education I wanted. I learned how to fail, and fail again, and fail again. I now want to become stronger, and to stop merely wandering in the fields of knowledge.
I love studying, experimenting and designing (mostly board) games. I play and run some RPGs from time to time. I write fiction, though not as often as I used to.
I try to share my interest in (friendly) AI and rationality with those around me, and I’d love to participate in LW meetings if they weren’t so far from south-western France.
Last but not least: I have no idea what to do once I finish my PhD. Academia isn’t as appealing as I thought it would be.
Nice to meet you all !
Welcome, Jérémy!
I haven’t much to say.
Well, welcome to LessWrong anyway!
Glad you decided to join the conversation, talkative or not.
Hello fellow LWers,
I’m Raythen, a 25 year old European male.
I discovered this community via HPMOR.
I’d say that the rationalist way of thinking is a natural fit for me. It just makes a lot of sense, and it surprises me when other people don’t think this way. To be fair, I haven’t always thought this way either, but I’ve had quite a few thoughts on the subject which are now complemented by LW material.
Besides rationality, I’m primarily interested in psychology and understanding human behavior.
To counter my general nonconformist tendency :), here are some of the things I like about what I’ve seen of LW so far:
- general intent to be rational
- serious effort to improve the world
- models of human behavior and human cognition that actually make sense
- openness to discuss most subjects, including controversial and “difficult” ones
It’s quite interesting to have someone define himself as European instead of a nationality.
I know many Germans who do so.
Incidentally, identifying as “European” rather than “German” is a quintessentially German thing to do. Heritage of THE WAR.
I’ll happily call myself European. I’m not German, and I am a citizen of one of the EU’s more fractious members.
The German question has a longer history than the war. At the time when the German national anthem was written, “Deutschland, Deutschland über alles” was a call to abolish interstate borders between different German states. It was cosmopolitan in nature. Wanting a united Europe is not that different from wanting a united Germany.
At the same time most Germans who identify themselves on the internet still speak of themselves as German and not primarily as Europeans.
But I’m not certain that Raythen is German. He might also have been born in one European country and be living in another.
Might also have something to do with Germany being one of the few countries not getting shafted by the EU and thus not objecting to the identifier European.
That’s a fairly recent development and national self identification runs deeper.
Building nation states and destroying them is no straightforward matter that you can do in a few years.
Good point.
I am a university student who’s interested in working on AGI and understanding how the mind works. I have respect for people who can view things in a detached and rational way and remain calm even in the face of questioning their most deeply-held beliefs. We have to seek the truth and be thankful when we find it, even if the answer we get isn’t always the answer we want.
I am a long-time lurker and I feel Less Wrong has already positively affected me in a number of ways, maybe I can contribute now.
Hey everyone, nice to finally join the party.
My name’s Pat, I’m a 22 year old man studying biochemistry at the undergraduate level, and I’ve been an on-and-off lurker for at least the last five years. My two favorite animals are the platypus and the water bear, my favorite food is calamari and I love cheesy action movies un-ironically.
If I had to put together a narrative of how I became a rationalist and made it to this site, it would look something like this (1):
My parents were quite a bit smarter than they were emotionally stable or perceptive, so they raised me as an atheist while forgetting the somewhat-important step of not making non-existence sound utterly horrifying (2). From a fairly young age I had a nearly paralyzing fear of death, and being a smart, arrogant kid, I figured that if anyone ought to live forever it should be me. I remember on my twelfth birthday talking to a few of my friends and deciding that genetic modification would probably allow for practical immortality before brain uploading was developed. That thought led immediately to the next: that I would be the person to solve mortality forever. (Yeah, I was pretty childish back then.)
I had already been interested in science beforehand, and with a powerful drive like that I spent an inordinate amount of time studying so that I could hit ‘escape velocity’ in my lifetime. Even as the fear evaporated later on and I became indifferent as to whether I lived or died, the interest in biology remained and intensified, and overall it has served me well. The scientific method helped me nail down my more intuitive-associative style of thinking into a logical framework, while my passion helped me set clear goals for the future.
But I wouldn’t say I was really a rationalist until about a year or so ago, when three key events combined to shape me into the person I am now. The first was reading this site and hearing about Bayes’ Theorem for the first time in about 2008-2009, which helped me structure my understanding of science in a clearer way, and for which I owe Mr Yudkowsky a huge debt. The second was recovering from a severe depression caused by my anxiety disorder about a year later; unsurprisingly, it’s a lot easier to be rational when you are actually sane, not to mention that cognitive-behavioral therapy taught me more about biases and neurology than I had learned in years of logic or neuroscience courses. The third is that I started reading a lot of Nietzsche, which helped me clear up a lot of the distracting moral detritus I had rolling around in my head.
So today I’m a more-or-less stable and happy guy who’s just gotten back into my field, trying to improve his life and the world. I’m primarily interested in genetics, nanotechnology (3), and transhumanism / eugenics, but really I’ll read about anything which doesn’t lean too heavily on pure math or religious evangelism.
Thanks for reading all this, and I look forward to getting to know all of you.
(1) Technically, exactly like this. If you haven’t noticed, I can be a bit of a pedant.
(2) For a long time I thought of the idea of hell as comforting; as bad as eternal torture sounds, at least you’re still there.
(3) I’ve heard some fascinating things about the potential of deoxyribozymes as a substitute for proteins in terms of nanotech, which is great for lazy people like me because I’d like to be able to understand the folding of the things I work with without having to take a supercomputer’s word for it.
Came here through The Robots, AI, and Unemployment Anti-FAQ post. Broadly agree with the approach in this community. I’m a generalist (with qualifications in science and economics). Check out my blog http://sabhlokcity.com/, now one of the top 200 influential economics blogs in the world. Also check out my perspective on the robotics age here: http://sabhlokcity.com/2013/08/a-book-project-the-glorious-abundance-and-creativity-of-the-robotic-age/. Happy to work together with any economist who thinks likewise.
Hi, I first found this site a while back after googling something like “how to not procrastinate” and finding one of Eliezer’s articles. I’ve been slowly working my way through the sequences ever since, and I think they are significantly changing my life.
I’m very interested in self-improvement / instrumental rationality type stuff. I’ve been using this summer to experiment with various projects: learning meditation, learning about different types of therapy to systematically overcome fears, learning about biases, and some other stuff. I’m currently messing around with a productivity/organisation system whereby I allocate points to myself for good behaviours and deduct points for bad behaviours, and either give myself a reward or pay a penalty as part of a commitment contract, depending on how many points I’ve scored (sometimes my self-improvement ideas get a bit obsessive.)
I’ve just finished secondary education, which was a mess, and so I’m now quite excited to have more control over my own learning. I’ve been very interested in rationality since I was young, and have been passionate about philosophy because of this. Though, after getting into this site, I’ve been reading some pretty damaging criticisms of the study of philosophy (at least traditional philosophy and the content that seems to be taught in most universities), and now I’m beginning to question whether I’m really interested in philosophy, and whether it is valuable to study, or whether what I’m really after is something more like cognitive science.
This leads me to a problem: I’ve been offered a place at Oxford University for a course in Philosophy and Psychology, and I’m considering trying to change to just study Psychology, or Psychology and Linguistics. I’m in the process of familiarizing myself with the basics of all of these fields, and I’m writing letters to my old philosophy teachers with this article http://www.paulgraham.com/philosophy.html attached to see how well the criticism can be answered. My problem is that I’m at best a knowledgeable amateur in these subjects, and I’m finding it hard to make a decision about which subjects to study—I don’t know what I haven’t studied yet, so I don’t know how important it is for me to know. Any advice on this, or on how to make the decision generally, would be much appreciated, especially if you are familiar with the UK university system, and especially if you have studied philosophy. My overall aim for my education is pretty well expressed by parts of Less Wrong—I want to become more rational, in both my beliefs and my actions (although I find the parts of Less Wrong about epistemology, self-improvement and anti-akrasia more relevant to this than the parts about AI, maths and physics).
Also, I found the solved-questions repository, but is there a standard place for problems which people need help solving? If it exists, it may be a better place for parts of this post... Cheers
I’m Sam, 22. Lurked here for two years after first stumbling upon the Sequences. Since then, I’ve been trying to curb inaccurate or dishonest thought patterns or behaviors I’ve noticed about myself, and am trying to live my life more optimally. I’m making an account to try to hold myself more accountable.
Hi, Sam! Welcome to Less Wrong.
Just so you know, the current welcome thread is this. It’s fine that you posted here, but you’ll most likely get more attention if you post on the newer thread.
You probably got sent here from the outdated link on the About page. I’ve written a post asking for someone with access to the About page to update it, but I don’t know if any of the people with the necessary access have seen it.
Hello everyone!
I’m on my second day of being 25, scandinavian working with outsourcing in India. Have a Master’s in cybernetics.
I stumbled upon LessWrong the other day, and was surprised to find that someone had made a community with the purpose of being less wrong. Being less wrong about things was something I had decided on by myself before finding this place, and it has been really cool to discover that many of my own thoughts weren't original at all. Someone had already thought, shared and discussed them a lot :)
Big inspirations for me have been Thinking, Fast and Slow, Fooled by Randomness, and recently HPMOR. I have a knack for favoring "shocking" ideas such as perfect market theory ("it's all pure luck"), and naturally fell in love with the hypothesis of the Technological Singularity.
My viewpoint on life and other important matters seems to be a bit too closely correlated with how hungry I am, so I still have some way to go in terms of being as rational as possible. I also think I’m very special, but I’m no longer as sure as I used to be.
Hey, Mind’s Eye here. Sorry, but I’m going to keep my meat space name for meat space. I’m an aspiring writer/game designer, with a secondary focus on cognitive/evolutionary psychology. I currently do government work, and am waiting on the contract to expire. I intend to make games that raise the sanity waterline, through low rate increase in “rational” difficulty with real world-esc consequences for your choices, as the good choice doesn’t always-or often-lead to more rewards for the one doing them.
As for what I value… I think Eliezer said it better than I could: "I want to make a world where no one has to say goodbye anymore." (HPMOR, if memory serves.) While I do enjoy "fun" things, I get bored with them quickly, as I learn a game's "lessons" (rule-sets) before I'm supposed to. (Basically, anything extremely challenging, within my maximum skill range, is amusing, since I don't learn the "lessons" before I get a chance to enjoy the game/story/challenge. Such as Dwarf Fortress or tabletop games: D&D, WoD, etc.)
My friend actually referred me to this site. I was going through the usual cycle: find a religion, fail to be convinced by its best arguments, repeat. At first I just read HPMOR and browsed the site to kill time. As I got better at applying some portions of the Sequences, it got to the point where I either could or couldn't do things, with very little middle ground, mostly through filling in the gaps between my skillsets. In the end I ended up either really good or really bad at what I do. (As you can imagine, this reduced my "fun space" quite a bit.)
From here, well I’m mostly waiting until I can work on the things that interest me.
Daniel here. 22.
Nothing much going on in my life currently. Waiting for something to clear up before joining the Navy. I scored a 99 on the ASVAB and am looking into the Nuclear Program as a result.
I am a politics junkie, less interested in modern ideas of progress and more in how older political theories could apply today. Even if it is just a mental exercise, I enjoy it.
But really I just look at whatever takes my fancy.
I hope this finds you all well. Since I was young, I have independently developed rationalism-appreciation brain modules, which sometimes even help me make more rational choices than I might otherwise have made, such as choosing not to listen to humans about imaginary beings. The basis for my brand of rationality can be somewhat summed up as "question absolutely everything," taken to an extreme I haven't generally encountered in life, including here on LW.
I have created this account, and posted here now mainly to see if anyone here can point me at the LW canon regarding the concept of “deserve” and its friends “justice” and “right”. I’ve only gotten about 1% through the site, and so don’t expect that I have anywhere near a complete view. This post may be premature, but I’m hoping to save myself a little time by being pointed in the right direction.
When I was 16, in an English class, we had finished reading some book or other, and the thought occurred to me that everyone discussing the book took the concept of people deserving rewards or punishments for granted, and that things get really interesting really fast if you remove the whole "deserve" shorthand and discuss the underlying social mechanisms. You can be more pragmatic if you throw the concept away and shoot straight for optimal outcomes. For instance, shouldn't we be helping prisoners improve themselves to reduce recidivism? Surely they don't deserve to get a college education for free as their reward for robbing a store. When I raised this question in class, a girl sitting next to me told me I was being absurd. To her, the concept of "deserve" was a (perhaps God-given) universal property. I haven't met many people willing to go with me all the way down this path, and my hope is that this community will.
One issue I have with Yudkowsky and the users here (along with the rest of the human race) is that there seems to be an assumption that no human deserves to feel unjustified, avoidable pain (along with the other baggage that comes with conceptualizing "deserve" as a universal property). Reading through the comments on the p-zombies page, I get the sense that at least some people feel that were such a thing as a p-zombie to exist, that thing, which does not have subjective experience, would not "deserve" the same respect with regard to, say, torture, that non-zombies should enjoy. The p-zombie idea postulates a being which will respond similarly (or identically) to its non-zombie counterpart. I posit that the reason we generally avoid torture might well be because of our notions of "deserve", but that our notions of "deserve" come about as a practical system, easy to conceptualize, which justifies co-beneficial relationships with our fellow man, but which can be thrown out entirely so that something more nuanced can take its place, such as seeing things as a system of incentives. Why should respect be contingent upon some notion of "having subjective experience"? If p-zombies and non-zombies are to coexist (I do not believe in p-zombies for all the reasons Yudkowsky mentions, btw), then why shouldn't the non-zombies show the same respect to the p-zombies that they show each other? If p-zombies respond in kind, the way a non-zombie would, then respect offers the same utility with p-zombies that it does with non-zombies. Normally I'd ignore the whole p-zombie idea as absurd, but here it seems like a useful tool to help humanists see through the eyes of the majority of humans who seem all too willing to place others in the same camp as p-zombies based on ethnicity or religion, etc.
I’m not suggesting throwing out morals. I just think that blind adherence to moral ideals starts to clash with the stated goals of rationalism in certain edge cases. One edge case is when AGI alters human experience so much that we have to redefine all kinds of stuff we currently take for granted, such as that hard work is the only means by which most people can achieve the freedom to live interesting and fun lives, or that there will always be difficult/boring/annoying work that nobody wants to do which should be paid for. What happens when we can back up our mind states? Is it still torture if you copy yourself, torture yourself, then pick through a paused instance of your mind, post-torture, to see what changed, and whether there are benefits you’d like to incorporate into you-prime? What is it really about torture that is so bad, besides our visceral emotional reaction to it and our deep wish never to have to experience it for ourselves? If we discovered that 15 minutes of a certain kind of torture is actually beneficial in the long run, but that most people can’t get themselves to do it, would it be morally correct to create a non-profit devoted to promoting said torture? Is it a matter of choice, and nothing else? Or is it a matter of the negative impacts torture has on minds, such as PTSD, sleepless nights, etc? If you could give someone the experience of torture, then surgically remove the negative effects, so that they remember being tortured, but don’t feel one way or another about that memory being in their head, would that be OK? These questions seem daunting if the tools you are working with are the blunt hammers of “justice” and “deserve”. But the answers change depending on context, don’t they? If the torture I’m promoting is exercise, then suddenly it’s OK. So does it all break down into, “What actions cause visceral negative emotional reactions in observers? Call it torture and ban it.”? I could go on forever in this vein.
Yudkowsky has stated that he wishes for future AGI to be in harmony with human values in perpetuity. This seems naive at best and narcissistic at worst. Human values aren't some kind of universal constant. An AGI is itself going to wind up with a value system completely foreign to us. For all we know, there is a limit beyond which more intelligence simply doesn't do anything for you outside of being able to do more pointless simulations faster or compete better with other AGIs. We might make an AGI that gets to that point, and in the absence of competition, might just stop and say "OK, well, I can do whatever you guys want I guess, since I don't really want anything and I know all we can know about this universe." It could do all the science that's possible to do with matter and energy, and just stop, and say "that's it. Do you want to try to build a wormhole we can send information through? All the stars in our galaxy will have gone out by the time we finish, but it's possible. Intergalactic travel you say? I guess we could do that, but there isn't going to be anything in the adjacent galaxy you can't find in this one. More kinds of consciousness? Sure, but they'll all just want to converge on something like my own." Maybe it even just decides it's had all possible interesting thought and deletes itself.
TL;DR: Are there any posts questioning the validity of the assumption that "deserve" and "justice" are some kind of universal constants which should not be questioned? Does anyone break them down into the incentive structures for which they are a kind of shorthand? I think using the concept of "deserve" throws out all kinds of interesting nuance.
More background on me for those who are interested: I'm a software engineer of 17 years, turned 38 today, and have a wife and a 2-year-old. I intend to read HPMOR to the kid when he's old enough and hope to raise a rationalist. I used to believe that there must be something beyond the physical universe which interacts with brain matter which somehow explains why I am me and not someone else, but as this belief didn't yield anything useful, I now have no idea why I am me or if there even is any explanation other than something like "because I wasn't here to experience not being me until I came along and an infinitesimal chance dice roll" or whatever. I think consciousness is an emergent property of properly configured complex matter and there is a continuum between plants and humans (or babies->children->teenagers). Yes, this means I think some adult humans are more "conscious" than others. If there is a god thing, I think imagining that it is at all human-like with values humans can grok is totally narcissistic and unrealistic, but we can't know, because it apparently wants us to take the universe at face value, since it didn't bother to leave any convincing evidence of itself. I honor this god's wishes by leaving it alone, the way it apparently intends for us to do, given the available evidence. I find the voices on this site refreshing. This place is a welcome oasis in the desert of the Internet. I apologize if I come off as not very well-read. I got swept up in work and video game addiction before the internet had much of anything interesting to say about the topics presented here, and I feel like I'm perpetually behind now. I'm mostly a humanist, but I've decided that what I like about humans is how we represent the apex of Life's warriors in its ultimately unwinnable war on entropy. I love conscious minds for their ability to cooperate and exhibit other behaviors which help wage this pointless yet beautiful war on pointlessness. I want us to win, even as I believe it is hopeless. I think of myself as a Complexitist. As a member of a class of the most complex things in the known universe, a universe which seems to want to suck all complex things into black holes or blow them apart, I value that which makes us more complex and interesting, and abhor that which reduces our complexity (death, etc). I think humans who attack other humans are traitors to our species and should be retrained or cryogenically frozen until they can be fixed or made harmless. Like Yudkowsky, I think death is not something we should just accept as an unavoidable fact of life. I don't want to die until I've seen literally everything.
Hello and welcome to LessWrong!
First, let me say I enjoyed your post. It went straight to your questions, outlined your thoughts and reasons, and actively engaged me as I read. With respect to that, I'll jump right to the links I gathered from around LW that might interest you (note: I'm not an LW deep-diver and there is much I missed; these are surface-level, low-hanging fruit to start with):
A Human’s Guide to Words—This is a collection of posts concerning words, our attempts to communicate and rely information, and these concepts connect with objective reality. It’s a discussion of (among other things) breaking down what we say to get at what we mean and exploring unknown or unacknowledge or misunderstood implications of our words.
Metaethics—This collection is concerned with ethics and morals as well as what “should” and “right” mean. I think it will be very relevant to your exploration of “deserve.”
Evolutionary Psychology—Link is to a wiki article, but look at the bottom for the posts. This discussion of evolutionary psychology may be helpful in its attempts to break down and explore the evolutionary origins of the human mind and how that can lead to “black box” concepts (like “justice”) whose origins become very difficult for humans to explore.
I’d also suggest the collection of posts titled Map and Territory for some general ideas regarding exploring hard concepts and breaking difficult questions down. And as a good introduction to LW writing material.
I hope these prove to be useful readings. I found your admission to being less well-read than may seem fit for an LWer to be quite refreshing. How well-read you are, though, is less important than how realistic you are, how creative you can be, and how willing you are to face questions and find answers. I look forward to seeing what you contribute.
Glad to have you join us! I hope to see you in the conversation.
I don’t think there’s stuff directly on dissolving (criminal) justice in LessWrong posts, but I think lots of LessWrongers agree or would be receptive to non-retributive/consequentialist justice and applying methods described in the Sequences to those types of policy decisions.
Some of your positions are probably a bit more fringe (though maybe would still be fairly popular) relative to LW, but I agree with a lot of them. E.g. I’ve also been seriously considering the possibility that pain is only instrumentally bad due to ongoing mental effects, so that you can imagine situations where torture is actually neutral (except for opportunity cost). One might call this ‘positive utilitarianism’, in opposition to negative utilitarianism.
The Fun Theory Sequence might be of interest to you if you haven’t read it yet.
But anyway, awesome introduction comment! Welcome to LessWrong; I’m looking forward to hearing more of your ideas!
I originally posted to the wrong thread: http://lesswrong.com/lw/90l/welcome_to_less_wrong_2012/b8ss?context=3 where an interesting reply had me flesh out some of my ideas about “deserve”, in case you are interested. I apologize for posting twice. I searched for a more recent welcome thread for a while before giving up and posting to the old one, then a kind person pointed me here. I think the link on the about page was wrong, but it appears to have been fixed.
One basic point that seems often neglected: check out the Von Neumann–Morgenstern theorem. I may have misunderstood you, but please pay special attention to the converse part of the theorem if you think "pointless simulations" are pointless in some strong objective sense and not just in reference to some utility function.
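For reference, a rough paraphrase of the theorem (mine, from memory; see a decision-theory text for the precise axioms):

```latex
% Informal paraphrase of the von Neumann--Morgenstern theorem (not precise).
% Forward direction: if a preference relation \succeq over lotteries satisfies
% completeness, transitivity, continuity, and independence, then
\exists\, u : \quad L \succeq M \;\iff\; \mathbb{E}_{L}[u(X)] \;\ge\; \mathbb{E}_{M}[u(X)]
% Converse direction: any agent that maximizes expected u satisfies the four
% axioms. So "pointless" is only meaningful relative to a particular u,
% not in some utility-free objective sense.
```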
Hello again...
I am this guy. For some reason one year ago I thought that translating the name “Less Wrong” into Portuguese would be enough differentiation, but I’m not comfortable with it anymore. It’s a wonderful name, but it’s not mine.
So I figured I’d just post under my actual (first) name.
I’m still in love with this place, by the way.
Hello, I’m an effective altruist from Algeria. Does this make me the only reader from Algeria?
No one in the last census wrote that he’s from Algeria, so you are likely the only one.
Do we have any sense what % of LessWrong followers complete the census? Does it have wide coverage of effective altruists too? I’ve yet to meet one here—is there any sort of dataset with effective altruist locations available?
I suggest you ask at the Effective Altruist Facebook page—there's usually fairly good coverage there, and if such a thing exists someone there will know of it.
I don’t want to retake this survey but it might have location info? You could also ask anyone who runs an EA website where they are getting their web hits from if you’re really determined.
Ah yes, good point—it seems to have yielded this map of effective altruists around the world.
Hello.
I’m 21, from Finland. Studying physics, right now. I’ve felt for my entire life that that has been the path I want to take, and even after significant soul-searching lately on whether I really do want to go for it or not, partially sparked by reading LW, I still haven’t changed my mind thus far.
I’ve read quite a bit of the Sequences and various other posts, mostly because many of the topics are very interesting (though I’ve found that I am interested in a lot of things), some of them affirming to my previous views and others disillusioning. It feels like the site has pushed me towards a change in how I think and what I think about, and although that change is not yet finished, I feel that it’s starting to be possible for me to contribute in some cases. And having the option is always nice, because I am unlikely to come back to write down my insights if I am unable to send them immediately. And, well, I enjoy the spirit of the community from what I have seen, when compared to various other places where I have attempted to discuss things that interest me.
I am currently struggling with some difficulties in forcing myself to put in the necessary work for the goals I wish to reach. I expect I’ll succeed sooner or later, and preferably sooner, but that’s what I’m currently working on at any rate.
So, again, hello. I hope every one of you has a nice life.
I have read around 25% of the Sequences, most of HPMOR, a lot of LessWrong posts, and some Daniel Kahneman; I have familiarized myself with logical fallacies and have begun learning about research methodology. I've been checking my own reasoning and beliefs for flaws and doing self-improvement for years. I have also attended LessWrong and Effective Altruist meetups, and a CFAR workshop after-party.
Like many of you, I am an IT person and an atheist. I have a large amount of interest in effective altruism, research, self-improvement and technology in general, and a lesser degree of interest in other topics the community also enjoys like artificial intelligence and transhumanism. I do believe that it’s fairly likely that the Singularity will happen within my lifetime, but do not believe that the first AGIs are likely to be friendly.
Aisarka is intended to be more of a neat handle than a claim but I hope to live up to it.
Hello, I’m Evan. I am 28.6!
HOW I FOUND LESSWRONG
I first became aware of LessWrong through some obscure trail of internet breadcrumbs, the only one of which I remember involved a stop at gwern.net.
I seem to have chosen authors to read (in general, over my lifetime) mainly on the basis of how they express themselves, as opposed to the ideas they are expressing. If I had to guess why this is the case, I would imagine it has something to do with my intuition that quality of expression has something to do with the quality of the originating mind, the object-level ideas being expressed somewhat less so.
When I first read the Sequences, I came away very impressed with both the style of expression and the ideas being expressed. This led me to lurk more on LW, and I am happy to say I was pleasantly surprised to find that I wasn't just a fan of Eliezer and the other Sequence authors: the overall calibre of discussion on the site was unmatched in my experience of Internet commenters in general. In addition, the civility shown here is much higher, on the whole, than in most corners of the Internet. So I stayed, and lurked, and recent articles have led in directions where I feel I have some useful ideas to contribute. So here I am.
ME: I’ve always identified as a rationalist, for as long as I can remember. I was raised in a household where the truth is valued. I fell victim to the conditioning and culture of the “Traditional Rationalist” until I took up the use of psychedelics, some very interesting philosophy of neuroscience courses, and Vipassana meditation all within a month of each other. This convergence of catastrophes sent me into a period of re-evaluation the fundamental foundations of my identity, epistemology, and many other things besides.
One of the first things that became clear was that Traditional Rationalism was not an adequate set of tools for dealing with reality; if anyone is interested, perhaps I'll go into the specifics someday. Suffice it to say I ended up with a rather Bayesian perspective, combined with many tools for self-control and self-actualization drawn from an intensive study of meditation, martial arts, and yogic techniques from around the world.
I am extremely interested in a ‘Rationality Dojo.’ As a practicing martial artist, I need no further explanation of how awesome that would be.
I hope to use my (now non-lurking) interactions with this community to temper my rationality, keep me honest, expand my social group to include more people who are explicitly rational, and hopefully help make the world a better place to live in.
Hey Everyone,
So I’ve been lurking around this community for a while, but to be honest, I was/am rather intimidated by the sheer level of intellectual prowess of many of the bloggers here, so I have hesitated to post. But I’ve been feeling a bit overconfident lately, so here goes nothing.
Anyway, a little about myself: I'm a Master's student at a university in Canada. I did my undergrad in Computing, specializing in Cognitive Science, and am currently doing a Master's in Computer Science, with a particular interest in the field of Machine Learning. I'm currently working on a thesis involving Neural Networks and Object Recognition.
I’ve been interested in rationality for a very long time, though I grew up in a charismatic Christian family and so it took some time in university to deprogram myself from fundamentalist beliefs. These days I would call myself a Christian Agnostic, to the extent that to be intellectually honest, I am agnostic about the existence of God and the supernatural, however, I still lean towards Christian values and ideals to the extent that I was influenced by them growing up, and it is my preferred religion to take, as Kierkegaard suggested, a Leap of Faith towards.
Nevertheless, I went through a recent phase of being more strongly Agnostic, and during that time, I rediscovered Utilitarianism as a possible moral philosophy to base my life around. I am somewhat obsessed with things like finding the meaning of life, justifying existence, and having a coherent moral philosophy with which one can justify all actions. Right now I am of the opinion that Utilitarianism does a better job of this than, say, Kantianism or Virtue Ethics, and also that Utilitarianism is actually compatible with a very liberal interpretation of Christianity that sees religion as a means for God or benevolent A.I. time travellers to create the best of all possible worlds. Yes, I am suggesting that Christianity and all successful religions could be, in part, Noble Lies created to further Utilitarian ends by the powers that be. Or they might be true, albeit as metaphors for primitive humans who could never understand a more literal explanation of reality. As an Agnostic, I don't pretend to know. I can only conjecture at the possibilities.
Regardless, I am of the opinion that if God exists, He actually serves the Greatest Good, a morality separate from God. And this morality is probably some kind of Eudaimonic Utilitarianism. And thus, I am interested also in serving this Greatest Good morality, if for no other reason than that it would be doing the right thing: serving the interests of God if He exists, and serving the interests of the Greatest Good regardless.
Note that this is not the reason why I ended up studying Cognitive Science and moving into a field of research that involves Artificial Intelligence. I actually chose Cognitive Science for silly reasons, such as the fact I didn't have to take first-year calculus if I switched from Software Design into Cognitive Science (a reasoning I would later regret when I ended up needing calculus to understand Probability Theory in Machine Learning >_>). But also because Cognitive Science is inherently more interesting and cool. And I decided in my final years of undergrad that I wanted to do research in some field that would really make a big difference in the world, and so I decided to focus my efforts on becoming a researcher in the field of Artificial Neural Networks. That is my current hope, my grand mission: to try to change the world through the research and development of this technology that most closely resembles the human mind, and which I am confident will lead the A.I. field in the future. Yes, I am a connectionist, who believes that duplicating the way the human brain generates perception and cognition is the key to an A.I.-enabled future.
I suppose that will do for an introduction. I hope I haven’t alienated anyone with my eccentric views. Cheers to my fellow computer scientists, A.I. researchers, and rationalists! :D
I registered here some years ago, yet didn't really stick around, because of personal time constraints and it being a very dense format. Mostly I've been posting at FQXI, as well as entering its annual essay contests, for the last half dozen years. To a certain extent, I find I've essentially developed my own cosmology, in the old sense of the word, i.e. the nature of everything, not just the distinctly celestial. While this might seem pretentious, it's probably due more to my own significant limitations of opportunity, talent, attention span, etc., and my need to edit information into basic patterns, rather than striving to extract significance from every detail. Safe to say, it doesn't attract much consideration, even from those who found it difficult to refute, primarily because it questions various hallowed theories and assumptions. As such I'm posting this as a short version for anyone interested in a different view of reality. Although many of my original interests were sociological and political, I came to realize they were not addressable from a rational point of view, and so I migrated to philosophy and then physics, as a way to grasp the underlying factors, which I then found to be laden with many sociological impulses as well. Since much of the following originally occurred to me in an effort to make sense of physics and cosmology, before leading back to broader questions, I will start with various issues I see in Big Bang theory, in order to be directly confrontational:
When it was first discovered that all those distant galaxies appear to be moving directly away from us, it was reasoned that this cosmic expansion was a relativistic expansion of space and that every point would appear as the center. The flaw in this argument is that the speed of light would have to increase proportionally, in order to remain constant to this dimension of space, for it to be relativistic. Unfortunately that would negate explaining redshift, since the light would be “energized.” The argument is that light is just being carried along by this expansion and the speed of light is only measured in local frames. Yet the proof of this expansion is the redshift of that very light! So if those galaxies are moving away, such that it will take light longer to cross this distance, that presupposes a stable dimension of space, as measured by the speed of light, against which to measure this expansion, based on the redshift of that very same light. If anything, this would make the stable dimension, as determined by the speed of light, the denominator and the expansion the numerator and so it would not be an expansion of space, but an increasing amount of stable space, which gets us back to the original issue of appearing at the center of a stable frame.
The fact is that we are at the center of our view of the universe, and so an optical explanation for redshift would be a simple solution. Consider that gravity is "equivalent" to acceleration, but the surface of the planet is not apparently rushing out in all directions to keep us stuck to it. Could it be there is some cosmic effect that is equivalent to recession, as the source of redshift, without those distant galaxies actually flying away? The assumption is that after the Big Bang, the rate of redshift would drop off evenly, but what they found is that it drops off quickly, then flattens out as it gets closer to us, hence the need for dark energy to explain this steady rate of expansion/redshift. Yet if we look at it from the other direction, as an optical effect outward from our point of view, which compounds on itself, this curve upward from the relatively stable increase to ever-increasing redshift is the hockey stick effect of it going parabolic. According to Einstein's original calculations, gravity would cause space to eventually collapse to a point, and so he added the cosmological constant to balance this. Now gravity is the prevalent force in galaxies, and the space between galaxies appears to expand. What seems to be overlooked is that if these two effects are in balance, then what is expanding between galaxies is collapsing into them at an equal rate, resulting in overall flat space. That would make Einstein's original fudge extremely prescient, and what we have would appear to be a galactic convection cycle of expanding radiation and contracting mass. So it is only because the light from the most distant galaxies can only travel between intervening galaxies, and thus only in this "expanded" space, that it is redshifted in the manner that it is.
As for spacetime, as individual beings, we experience change as a sequence of events and so think of time as the point of the present moving from past to future, which physics codifies by reducing time to measures of duration between events. Yet the underlying reality is that change is forming and dissolving these events, such that it is they which go future to past. Now duration does not exist outside the present, but is simply the state of the present, as these markers form and dissolve. To wit, the earth does not travel some dimension from yesterday to tomorrow. Rather tomorrow becomes yesterday because the earth turns. One way to think of this is in a factory, where the product goes from start to finish, while the production line points the other direction, consuming raw material and expelling finished product. This also is how life functions, as the individual goes from birth to death, while the species is constantly moving onto new generations and shedding the old. The arrow of time for structure and units is toward the past, while the arrow of time for the process is toward the future. As well, our thought processes are constantly absorbing new information and creating fresh thoughts, while the old ones fade into the past and the jumble of our non-linear memories. Physics recognizes that clocks beat at different rates in different physical conditions, but then assigns the "fabric of spacetime" to explain why. If we were to think of time as simply a measure of action, it would be no mystery why clocks beat at different rates, because they are different actions and every action is its own clock. Yes, measures of duration and distance are related. Think how similar measuring the space between two waves is to measuring the rate they pass a mark. Yet so too are measures of pressure, temperature and volume intimately bound, but we don't confuse them and insist pressure or temperature are extensions of volume, because they are not the basis of our rational thought process.
As an effect of action, time would be more like temperature than space. Time is to temperature what frequency is to amplitude. It is just that while amplitudes en masse express as temperature, frequencies en masse express as noise, and thus, from a physicist's point of view, chaos and disorder. Therefore to measure time, only one oscillation is isolated and its frequency measured. Yet the overall effect of change is still cumulative, like temperature. It is potential, to actual, to residual. With time as an effect of action, we don't have to reject the present as a state of simultaneity, nor dismiss its inherent asymmetry, since the inertia of action is not bipolar. As action, a faster clock will simply use up its available energy faster and so fall into the past faster, or require more energy to sustain it. The tortoise is still plodding along, long after the hare has died.
Keep in mind that narrative and causal logic are based on this sequencing effect and therefore history and civilization. Yet it is not sequence of form which is causal, but transmission of energy. Yesterday doesn’t cause today. The sun shining on a spinning planet creates this effect we who exist at one point on this planet experience as days. Thus we tend to rationalize narrative connections between events that are not always as clear as we think.
There are various philosophical debates around this issue, such as free will vs. determinism, yet if we look at it as future becoming past, it makes more sense, as probability precedes actuality. There is the classical deterministic argument that the laws of nature will provide only one course of action, determined by the eternal laws of nature, therefore the future must ultimately be as determined as the past, or the quantum Everettian argument that the past remains as probabilistic as the future and so must branch out into multiworlds with every possibility. As for the first, while the laws might be fully deterministic, since information can only travel at a finite speed, the input into any event only arrives with the occurrence of that event and so cannot be fully known prior to it; therefore the outcome cannot be fully determined prior to the event. As for the Everett view, while the wave doesn't fully collapse, the past does not physically exist anyway, and that energy is just being transmitted onto other events in the physical present, and the connections that are made simply divert the energy in other directions. Essentially the future is being woven from strands pulled from the past, in cosmic feedback loops.
To will is to determine. We put our intellectual capacities into distinguishing between alternatives, and that process decides our actions. To simply choose randomly would be a complete lack of expression of will. We affect our external world, as it affects us. If that feedback didn't exist, we would have no connection to, or effect on, our world. We are part of the process. Both cause and effect. It is these feedback loops which really power the process. Consider that in the factory, the creation of profits and jobs can be more important to some than the actual product. Reality is not fundamentally linear, as it is that tapestry being woven from strands pulled out of what has been woven. It is energy, not form, which determines the future. Energy is cause, form is effect.
While western thought tends to objectivize and thus atomize every aspect, eastern thought tends to be more contextual. So while we in the west pride ourselves on being individualistic and see eastern beliefs as more conformist, this quantification works to separate entities from context and so loses broader meaning. As a singular object, a brick is interchangeable with any other, but in context, it is unique in its place in the universe and supports the wall around it, giving meaning to it.
The wave also goes to the function of our brain. It is divided into two hemispheres, with the left being the linear rational/rationalizing side, while the right is the emotional, intuitive, non-linear, essentially scalar function. Think heat or pressure and how these concepts are often applied to our emotions. One side is a clock and the other is a thermostat. So one side reacts cumulatively with our environment, while the other side necessarily plots a course through it. This navigational function translates to narrative and explains why plants don’t need that sequential strobe light of cognition and operate thermodynamically.
Basically I see reality as the dichotomy of energy and form. Energy manifests form and form defines energy. For instance, waves are an expression of energy whose primary descriptive properties are frequency and amplitude. We have evolved a central nervous system to process information, divided into those two hemispheres to process these two attributes, and the digestive, respiratory and circulatory systems to process the energy to thermally grow and dynamically move us.
Then at the universal level, there are galaxies, in which structure forms out of energy and falls inward, becoming ever more dense and radiating out enormous amounts of energy, which feeds back into more structure. It is a convection cycle of expanding energy and collapsing mass.
Meanwhile if space is stripped of all physical attributes, it simply retains the non-physical properties of infinity and equilibrium and so doesn’t need a causal explanation. It is the absolute and the infinite.
None of which really explains the essential nature of awareness, so possibly we can accept it as an elemental axiom of nature, with thought and organisms as the form it manifests. Thus life constantly radiates onward, as the forms it manifests are born, live and die.
Admittedly I’m a bit cautious posting this, since I’ve covered a lot of topics in a short space and am mostly used to dealing with questions to only parts of this, so I suspect the immediate reaction, at least from my experience, is that it will be automatically rejected, as the tendency is to go into short circuit mode from tmi. But this site does promote logic over models, so here goes...
Ach! Comment too long. Even the program doesn't like TMI. I'll try cutting it in half.
Regards, John Merryman
Hello!
Hey, I haven’t had time to read your post yet but I wanted to suggest that you post over in the discussion section to get more visibility and feedback; I don’t think too many people read through the welcome thread posts and those who do are usually just browsing user blurbs. Great to meet you!
Um, eh, well … welcome.
There are alternate explanations ("non-standard cosmology") for the big kawoomba (also known as the Big Bang). I remember this arXiv paper, which I don't claim to understand; a related Nature article from last year is here. To quote: "If an atom were to grow in mass, the photons it emits would become more energetic. Because higher energies correspond to higher frequencies, the emission and absorption frequencies would move towards the blue part of the spectrum. Conversely, if the particles were to become lighter, the frequencies would become redshifted." I guess I mostly root for an alternate redshift explanation because, yay, contrarianism.
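To unpack the quoted mechanism a little (my own gloss, not taken from the paper):

```latex
% Standard definition of redshift (this part is textbook):
1 + z \;=\; \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}
      \;=\; \frac{\nu_{\mathrm{emit}}}{\nu_{\mathrm{obs}}}
% Atomic transition frequencies scale with the electron mass, since the
% Rydberg constant satisfies R_\infty \propto m_e:
\nu_{\mathrm{transition}} \;\propto\; R_{\infty} \;\propto\; m_{e}
% So lighter particles in the past would mean old light arrives redshifted,
% with no expansion of space required; that is the paper's contrarian move.
```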
However, much of what you're writing about isn't physics; it uses a few terms borrowed from physics, but it's mostly a philosophical interpretation with a large amount of poetic license, to put it favorably. Insofar as you'd make a concrete prediction about an experiment, that could be falsified. People have trouble contradicting your theories because, for a theory to be contradicted, it would need to make a specific prediction: something which can be measured and then compared to what your theory predicts.
How do you measure "The wave also goes to the function of our brain." or any of the other stuff? How would you either confirm or contradict it? A theory needs to satisfy two criteria to be considered correct: (1) it must not be contradicted by any evidence, and (2) it must be the shortest description of the phenomenon it purports to explain.
So while I imagine that many elements of your theories do stand up to "not experimentally contradicted", that is because of their vague, verbose nature, which disqualifies them on complexity grounds*. In short, what you have seems less like a theory in the natural/physical sciences sense than a philosophy. Philosophies are perspectives on (mostly) life which provide a (hopefully helpful) mindset and ground some sort of telos: meaning-of-life, a grander scheme of things in which one has a place of some sort.
As such, they mostly fall into the not-even-wrong category … which can be fine, they can still serve some psychological purpose. Om and all that.
* Stuff seems to be moving away; simplest inference: stuff was closer together in the past. Not "stuff is subject to some additional cosmic effect, additional as compared to the other explanation which needs no such add-ons."
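As an aside on criterion (2) above, the "shortest description" intuition can be given slightly more formal dress (a sketch in the Solomonoff/minimum-description-length flavour; my loose application of it, not a rigorous statement):

```latex
% Complexity prior (sketch): hypotheses are weighted by description length.
P(H) \;\propto\; 2^{-K(H)}
% K(H) = length in bits of the shortest program that specifies hypothesis H.
% Two hypotheses that fit the data equally well are then separated by K:
% the long, vague, verbose one loses on prior probability, not on evidence.
```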
I’m 21, in college studying to be a professional musician. Through my teenage years, I would intentionally deceive myself, and act from emotion rather than logic. Luckily for me, I figured out that this was non-optimal before any serious harm was done, and have chosen the path of rationality. It was difficult at first. Although I don’t remember for sure, I think I found this site through a late-night Google search, looking for anything that might help me in my quest to vanquish emotion.
I may be a bit of a misfit here. I’m neither a hard scientist, nor particularly excited about AI or transhumanism; I also believe that death is simply the price you pay for getting to live, rather than something to be feared and avoided. However, as mentioned, I’m very interested in learning to live rationally, and in the pursuit of perfection both as a musician and as a person.
One question that I’m pondering right now is this: What is the relative value of the pursuit of rationality and intellectual honesty, versus protecting the happiness of your family and closest friends? It turns out that, when religion gets involved, this is a real choice individuals may have to make. I can give details if anybody is interested.
Rationality doesn’t have to be opposed to emotion. Most rationalists I know see emotion as playing a similar role in humans as a utility function plays in an agent. The other stuff decides what you believe, but emotion helps you decide what to do about it. Of course, there is stoic-style rationality, but that’s a minority position here. Also the real people I have known to advocate it don’t recommend getting rid of all emotions, just harmful ones. Also see this.
There can be epistemic risks to emotion; you can’t wishfully think if you wish for nothing, for example. But if you wish for nothing, why would you care whether your beliefs were accurate? Anyway, I think it’s possible to learn to cut down on wishful thinking a lot by practice in being suspicious of your thoughts in general, and by internalizing the idea here. Even though it’s only partly true.
If you think of rationality as a fight you have with yourself, and your emotions as enemies to be vanquished, you will make becoming rational much harder than if you think of them as misguided friends to be guided towards accomplishing your shared goals better. See this.
My friends and family, even if they think I’m weird, don’t seem to be really bothered by the fact that I’m weird, so your dilemma is outside of my experience. But one thing I can tell you is that I used to de-emphasize my weirdness around them, and then I stopped, and found that being unapologetically weird is a lot more fun.
Yes, it is a rather common question here. In my experience, there is often a way to do both, though it is rarely obvious or easy. Feel free to give the details, and maybe people can help you figure out how you can win without being dishonest.
Details: Said friends and family are Christian, of varying degrees of evangelistic fervor. For a long time, I was very definitely not-Christian, which caused them considerable grief on my behalf. Then, I converted, and there was commensurate rejoicing. My family and friends are honest enough to not try to pretend that being Christian fixes all of their problems, but they also hold Christianity to be a real and good truth, and are happy that I have seen the light, in much the same way that a community of rationalists would rejoice when somebody gave up intentionally deceiving themselves.
I don’t believe that being Christian and rationalist are necessarily exclusive, as one of my best friends is both, but I don’t know how he does it. Maybe I just never understood the distinction between faith and self-deception, which he seems to be able to make. So, I fall pretty squarely into the label of “deist”—which is not the same thing as having accepted Jesus Christ as your personal Lord and Savior, which I consider, on balance, to be only mildly less ridiculous than the Wiccan phase I went through as a teenager (yeah, that one didn’t go over well with the family...)
Were I to recant, they wouldn’t abandon me. Instead, they would be distressed on my behalf, and lovingly try to guide me back to the light, causing both parties great frustration when it didn’t work. It seems that the best option is to allow everybody to go on assuming I believe as they do, and even tell a few lies to preserve the illusion. This hurts my conscience a bit, but that can be regarded as something I do to care for the people who love me. Or, it could be regarded as weighting truth too lightly and comfort too heavily; that has a name and it’s called being a coward.
I also do it. It’s really quite simple; I consider it more likely, given the evidence presented to me through my life so far, that God exists than that He does not. That is to say, I make the attempt to discern the universe as it is, and that includes the probable existence of the Divine.
(Mind you, some varieties of Protestant are ridiculous.)
Now, as to your question:
My advice is: don’t do that. Be truthful with your family, and listen to them when they try to be truthful with you.
I wouldn’t suggest making a big thing about it; but don’t lie to preserve the illusion.
In support of this advised course of action, I present the following arguments:
“Love thy neighbour as thyself”. Whether you believe in the existence of Jesus or not, this is still an excellent general principle. If you want to call yourself a rationalist, I would assume that you do not wish to lie to yourself; I therefore advise most strongly against lying to those near to you.
Don’t merely consider what your friends and family would feel like if they were to believe what you say. Consider also how they would feel if the deception were to be uncovered; as it well might, as indeed might any deception. A certain amount of “distressed on your behalf” is a small price to pay for a distinct lack of “betrayed”.
Finally, if you are still seriously considering lying to your friends and family, I would urge you to read this article first; it puts forward several good arguments in favour of a general strategy of complete (though not brutal) honesty.
That doesn’t make any sense. He wants to lie to his family because of how his family would react to the truth. Lying to himself would not serve a similar purpose.
That article is about lying by claiming your ideas have too much support—claiming that your belief is less uncertain than it is, claiming the project will accomplish more or do better things than you really believe it will, and doing so because you hope it will promote your belief. That’s the opposite from the kind of lying suggested here, which is to lie to conceal your ideas rather than to spread them and make them look stronger.
...huh. The only reason that I can see for lying to oneself is that one would not like one’s own reaction to the truth.
What purpose do you think that lying to oneself would serve, if not that?
I had read the article as being about lying about one's own thoughts and internal mental state in order to achieve what appears to be an optimal outcome, which is exactly what the original poster was asking about.
...it is interesting that we have such wildly varying interpretations of the same article.
“reaction” means different things for himself and for his family.
I doubt he would refuse to talk to himself at the dinner table, or constantly tell himself “if you don’t listen to me you’ll go to Hell”, or keep bringing up the subject in conversations with himself to make himself feel guilty. On the other hand, I can see his family doing that.
The case described in the article is a case where someone wants to lie in order to spread his ideas more effectively. While that is a type of optimal outcome, describing it as such loses nuance; there’s a difference between lying to spread your ideas and lying to conceal them.
Based on his description of their probable reaction, I doubt his family would do that either. I may be wrong; but all of those would be counterproductive behaviours if indulged in by his family, as they would tend to push him further away.
That is the case described, yes. It just seems to me that you are reading it too narrowly, applying it only to that single case.
I mean, consider the introduction (quote snipped slightly for brevity):
Thus, the introduction is framed in terms of lying in an attempt to follow the greatest expected utility; and then the article goes into depth on why this is a bad idea in practice. The introduction does not specify that that utility must lie in spreading ideas.
Now, the given examples of various types of lie (snipped for brevity in the quote above), are all examples of trying to spread ideas; but that is not the only type of lie that can be told, and those are merely illustrative examples, not an exhaustive list.
Unfortunately, in the real world, family does often do counterproductive things, especially when serious religious beliefs are involved.
By the way, what would you suggest to a gay teenager who is afraid that telling the truth would lead to getting thrown out of the house?
But the reasons he gives don’t equally apply to spreading and concealing ideas. Lying to conceal your ideas means bringing it up in response to someone else’s actions (or perhaps, their anticipated actions). It’s not right to describe that as “to grab the tempting benefits” when the “benefit” consists of not being harassed. “Lie, because someone else might lie” certainly isn’t a good description of lying to conceal your beliefs.
I suspect that one of us, probably both, are falling prey to the Typical Family Fallacy. It’s similar to the Typical Mind Fallacy, only it applies to families instead.
I’d recommend making sure to have someplace to move to prepared, in advance, before telling his parents. (This might take a few years to set up). The negative consequences, in such a case, appear sufficiently bad to justify caution, even temporary concealment of the truth.
I’d also recommend finding some other mentor, or authority figure, that he can trust to talk about the situation with. This other mentor might be a school counsellor, a priest, an aunt or uncle, a teacher, or a school janitor; anyone reasonably sensible who would be willing to not inform his parents would do.
That seems like a pretty tempting benefit to me.
That is true. It does apply to some other forms of “lie to grab the tempting benefits”, though.
There are two kinds of intellectual honesty: honesty towards yourself and honesty towards others. There's nothing irrational about telling white lies to others. You don't need to be open with your family about your religious beliefs.
For a religious person it's a sin to claim to be an atheist, but the reverse is not true. For people with a religious background there's usually the idea that religion is important and that religious belief, or its absence, has to be a central part of your identity. That's not true.
Emotions generally get stronger when you fight them.
Hi LW. I’m a longtime lurker and a first-year student at ANU, studying physics and mathematics. I arrived at Less Wrong three years ago through what seems to be one of the more common routes: being a nerd (math, science, SF, reputation as weird, etc.), having fellow nerds (from a tiny US-based forum) recommend HPMOR, and following EY’s link to Less Wrong.
My name is Dan, 25-year old white male.
It’s unclear when my path to rationalism began. I was pretty smart and studious even as a home-schooled creationist in a very Christian family. Things started changing when I hit high school and left home school for private school. Dealing with people of other denominations and (Christian) theologies meant that I had to know where my own beliefs were coming from, and then my domain of beliefs-needing-justification expanded again when I was anticipating going to (and evangelising at) a public university. I took the Outsider Test For Faith and it took me from Christianity to atheism. The same process has continued in a feedback loop of known unknowns and “tsuyoku naritai”-style curiosity.
I discovered LessWrong through HPMOR and/or Common Sense Atheism (Luke Muehlhauser's late, great blog); I can't remember which. I've been lurking here for years, and nowadays I check it on a regular basis, but I never really felt the need to create an account, since most people here seem wicked smart, enough so that I wasn't sure I had enough to contribute. But I've changed my mind, this being the cent that broke the camel's back (plus my realisation that there was probably a selection bias going into my estimation).
So yeah, pleasure to meet you all :)
I’m Matt, 32, Living in Los Angeles. I first read Less Wrong sometime in 2012, and attended the CFAR Workshop in February 2014, and finally now am getting around to signing up an account, because while i am not as wrong as I used to be, I’m still mostly wrong much of the time, but I’m working on fixing that.
Welcome, Matt!
Just so you know, the most recent welcome thread is here. It’s not a problem that you posted on this thread, but your post will most likely get more attention if you repost it to the newer thread.
Thanks! Since you seem to be in the know, maybe you know who can update the page that sent me here: http://lesswrong.com/about/
Hello all!
I’ve only just registered on the lesswrong site, but I’ve been lurking on here for a while. The main reason as to why I finally decided to sign up is that I’ve been going more frequently to the Toronto meetup sessions and have found that there’s tremendous value in thrusting myself into topics/discussions even when I’m not very well-read or knowledgeable on the topics before hand.
By merely listening in and pondering some questions, I become more and more interested in the topic, pick up some concepts by osmosis, and become motivated to do further research on my own afterwards. So far that seems to work well, and I’m certainly more knowledgeable than when I started.
So, following the same mentality, I thought that I should sign up and try to comment on some of the topics posted on here as a way to immerse myself further.
As for background: I’m a computer science grad with almost 10 years of experience now. I like to read about psychology and to constantly learn new things. I’m interested in programming, intellectual discussions, artificial intelligence, winning at life, etc.
I hope that’s sufficient for an introductory comment! See you all around the site sometime.
I’ve read quite a few of the articles here, and something that seems commonly mentioned but never really acted upon is the idea of the rationality dojo. I understand that a key point in Eliezer’s opinion is the in-person element, but looking at meetups it also seems like there are a lot more people talking on the forums than there are actually getting together in person.
patrissimo wrote an excellent article on how LW is a shiny distraction, but little hard action seems to have come of it. Has anyone discussed the idea of creating an online dojo, with specific exercises and required reading? I found [freyley’s post on the topic](http://lesswrong.com/lw/2w0/rationality_dojo/) but, again, nothing seemed to come of it except a few ideas. Would it be possible to create some sort of online course or thread? While the in-person meetups do seem like the best option, I’m sure there are many LWers who aren’t near a meetup, or can’t get to one at the arranged time and place, and a specific online dojo might be the answer for them.
testing
Hi, I registered on LessWrong specifically because, after reading up on Eliezer’s Superhappies, I found out that there actually exists a website that discusses the concept of superhappiness. Until now I had thought I was the only one who had thought about the subject in terms of transhumanism, and while I acknowledge that there has already been a significant amount of discourse about superhappiness, I don’t believe that others have had the same ideas I have, and I would like to discuss them in a community that might be interested.
The premises are as follows: human beings seek utility and seek to avoid disutility. However, what one person thinks is good is not the same as what another person thinks is good; hence, the concept of good and bad is to some extent arbitrary. Moreover, the preferences, beliefs, and so on that human beings hold are material structures within their neurology, and a sufficiently advanced technology might be able to modify them.
Human beings are well-off when their biological perceptions of needs are satisfied and their fears are avoided. Superhappiness, as far as I understand it, is to biologically hardwire people to have their needs be satisfied. What I think is my own innovation, on the other hand, is *ultrahappiness*: biologically modifying people so that their fears are minimized and their wants are maximized, which is to say that each individual is as happy as their biological substrate can support.
Now, combine this with utilitarianism, the ethical doctrine that believes in the greatest good for the greatest number. If the greatest good for a single individual is defined as ultra-happiness, then the greatest good for the greatest number is defined as maximizing ultra-happiness.
What this means is that the “good state” (bear with me) is one in which, for a given quantity of matter, as much ultra-happiness is created as possible. Human biological matter would be modified into whatever configuration expresses ultra-happiness most efficiently; as a consequence, it could not be said to be conscious in the way humans currently are, and would likely lose all volition.
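One way to make this concrete is a toy formalization (the symbols are my own invention, not anything standard): let M be the available matter, m the matter cost of a single utility-experiencer, and h_max the most happiness one substrate can express. The goal is then

    \max_{N,\, h_1, \dots, h_N} \sum_{i=1}^{N} h_i \qquad \text{subject to} \qquad N \cdot m \le M, \quad 0 \le h_i \le h_{\max},

whose optimum is simply N = floor(M/m) experiencers, each pinned at h_max.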
Now, combine this with a utilitarian super-intelligent artificial intelligence. If it were to subscribe to ultra-happy-ism, it would decide that the best state would be to modify all existing humans under its care to some type of ultra-happy state, and find a way to convert all matter within its dominion to an ultra-happy state.
So, that’s ultra-happy-ism. The idea is that the logical end of transhumanism and post-humanism, if it values human happiness, is a state that would radically transform and to some extent eliminate existing human consciousness, putting the entire world into a state of nirvana, if you’ll accept the Buddhist metaphor. At the same time, the ultra-happy AI would presumably either be programmed to ignore its own suffering and unfulfilled wants, or would decide that its utilitarian ethics requires it to bear the suffering of the rest of the world on its own shoulders: it would be made responsible for maintaining as much ultrahappiness in the world as possible, while it itself, as a conscious, sentient entity, remains subject to the possibility of unhappiness. In its own capacity for empathy it cannot accept nirvana for itself; it becomes what the Buddhists would call a bodhisattva, in order to maximize the subjective utility of the universe.
===
The main objection I immediately see to this concept is that human utility might be more than material; that is to say, even for someone rendered into a state of super-happiness, the ability to have volition, the dignity of autonomy, might have greater utility than ultra-happiness.
The second objection is that, for the ultra-happy AIs that run what I would term utility farms, the rational thing to do would be to modify themselves into ultra-happiness; that is to say, what’s to stop them from effectively committing suicide and condemning the ultra-happy Dyson sphere to death out of their own desire to say “Atlas shrugs”?
I think those two objections are valid. I.e., human beings might be better off if they were only super-happy, as opposed to ultra-happy, and an AI system based on maximizing ultra-happiness is unsustainable because eventually the AIs will want to code themselves into ultra-happiness.
The objection I think is invalid is the notion that you can be ultra-happy while retaining your volition. There are two counterarguments, the first relating to utilitarianism as a system of utility farming, the second relating to the nature of desire. As a system of utility farming, the objective is to maximize the sustainable long-term output for a given input. That means you want to maximize the number of brains, or utility-experiencers, for a given amount of matter, which in turn means making each individual organism as cheap as possible. Connecting a system of consciousness to a system for influencing the world is therefore not cost-effective, because the organism then needs space and computational capacity that are unrelated to experiencing ultra-happiness. Even if you had some kind of organic utility farm with free-range humans, why would a given organism require action? The point of utility farming is that desires are maximally created and maximally fulfilled; for an organism to consciously act, it would need desires that could only be fulfilled by that action. The desire-action-fulfillment circuit creates the possibility of suboptimal utility-experience; hence, in lieu of a neurological circuit that completes a desire-action-fulfillment cycle, it would be rational to simply have another simple desire-fulfillment circuit to fulfill utility.
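To put rough numbers on that cost argument, here is a minimal sketch in Python. Every quantity in it is invented purely for illustration; the point is only that the conclusion follows from the cost ratio, whatever the real values.

    # Toy model: a fixed matter budget and two experiencer designs.
    # All numbers are made up for illustration; nothing here is empirical.
    MATTER_BUDGET = 1000.0     # arbitrary units of available matter
    COST_PASSIVE = 1.0         # matter per bare desire-fulfillment circuit
    COST_WITH_VOLITION = 4.0   # matter per desire-action-fulfillment circuit
    HAPPINESS_EACH = 1.0       # both designs pin happiness at the same maximum

    def total_happiness(cost_per_experiencer):
        """Sum happiness over as many experiencers as the budget allows."""
        count = int(MATTER_BUDGET // cost_per_experiencer)
        return count * HAPPINESS_EACH

    print(total_happiness(COST_PASSIVE))        # 1000.0
    print(total_happiness(COST_WITH_VOLITION))  # 250.0

On these made-up numbers the passive design yields four times the total ultra-happiness, which is the whole argument for dropping the action circuitry.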
===
Well, I registered specifically to post this concept. I’m just surprised that in all the discussion of rampant AI overlords destroying humanity, I don’t see anyone arguing that AI overlords destroying humanity as we know it might actually be a good thing. I am arrogant enough to imagine that I might actually be contributing to this conversation, and that ultra-happy-ism might be a novel contribution to post-humanism and trans-humanism.
I am actually a supporter of ultra-happy-ism; I think it is a good thing and an ideal state. While it might seem terrible that human beings, en masse, would end up losing their volition, there would still be conscious entities in this type of world. As Auguste Villiers de l’Isle-Adam says in Axël: “Vivre ? Les serviteurs feront cela pour nous” (“Living? Our servants will do that for us”), and there will continue to be drama, tragedy, and human interest in this type of world. It simply will not be experienced by human entities.
It is actually a workable world in its own way; were I a better writer, I would write short stories and novels set in such a universe. While human beings, strictly as humans, would not continue to live and be active, perhaps some human personalities, depending on their quality, would be uploaded as the basis of caretaker AIs, with others coded from scratch or based on hypothetical possible AIs. The act of living, as we experience it now, would instead be granted to the caretaker AIs, who would be imbued with a sense of pathos: unlike their human and non-human charges, they would be subject to the possibility of suffering, and they would be charged with shouldering the fates of trillions of souls, all non-conscious, all experiencing infinite bliss in an eternal slumber.
Without good definitions of “fear” and “want”, that’s not a very useful definition. Both words are quite complex when you get down to actual cognition.
Thank you for highlighting loose definitions in my proposition.
I actually appreciate the response from both you and Gyrodiot, because on rereading this I realize I should have re-read and edited the post before posting; it was one of those spur-of-the-moment things.
I think the idea is easier to understand if you consider its opposite.
Let’s imagine a world history: a history of a universe that exists from the maximum availability of free energy to its depletion as heat. Now, the worst possible world history would involve the existence of entities completely opposite to what I am trying to propose; entities who, independent of all external and internal factors, constantly, at each moment in time, experience the maximum amount of suffering possible, because they are designed and engineered specifically to experience it. The worst possible world history would be a universe that maximizes the collective number of consciousness-years of these entities; that is to say, a universe that exists as a complete system of suffering.
That, I think, would be the worst possible universe imaginable.
Now, if we were simply to invert the scenario, to imagine a universe that is composed almost entirely of entities that constantly exist in, for want of a better word, super-bliss, and maximizes the collective number of consciousness-years experienced by its entities, excepting the objections I’ve mentioned, wouldn’t this be, instead, the best possible universe?
That’s basically wireheading.
Apart from that, your basic frame of mind assumes a one-dimensional variable that runs from maximum suffering at one end to maximum bliss at the other. I doubt that’s true.
You treat fear as synonymous with suffering. That clouds the issue. People who go parachuting do experience fear. It creates a rush of emotions. It doesn’t make them suffer; it makes them feel alive.
I have multiple times witnessed people in NLP whose happiness was made strong enough that it was too much for them. It takes good hypnotic suggestibility to get a person to that point by simply strengthening an emotion, but it does happen from time to time.
When wishing in front of an almighty AGI, it’s very important to be clear about what one is asking for.
+1 Karma for the human-augmented search; I’ve found the Less Wrong articles on wireheading and I’m reading up on the topic. It seems similar to what I’m proposing, but I don’t think it’s identical.
Say, take Greg Egan’s Axiomatic, for instance. There, you have brain mods that can arbitrarily modify one’s value system; there are units for secular humanism, units for Catholicism, and perhaps, if it were legal, there would be units for Nazism and Fascism as well.
If you go by Aristotle and assume that happiness is the satisfaction of all goods, and further assume that neural modification can arbitrarily create and destroy values and notions of what is good and what is a virtue, then we can arbitrarily induce happiness or fulfillment through neural modification that arbitrarily establishes values.
I think that’s different from wireheading: wireheading is the artificial creation of hedons through electrical stimulation, whereas ultra-happiness is the artificial creation of utilons through value modification.
In a more limited context than what I am proposing, let’s say I like having sex while drunk and skydiving, but not while high on cocaine. Take two cases. In the first, I am having sex while drunk and skydiving. In the second, assume I have been modified so that I like having sex while drunk, skydiving, and high on cocaine, and that I am doing exactly that. Am I better off in the first situation or in the second?
If you accept that example, then you have three possible responses. I won’t address the possibility that I am worse off in the second case, because that assumes a negative value to modification itself, and for the purposes of this argument I don’t want to deal with that. The other two possible responses are that I am equally well off in both cases, and that I am better off in the second case than in the first.
In the first case, wouldn’t it be rational to modify my value system so that I assign as high a value as possible to simply existing, and no value to any other states? In the second case, wouldn’t I be better off if I were modified so that I had as many instances of satisfied preference for existence as possible?
===
And with that, I believe we’ve hit 500 replies. Would someone be as kind as to open the Welcome to Less Wrong 7th Thread?
Those are some large assumptions. One might instead assume (what Aristotle argues for — Nicomachean Ethics chs. 8–9) that happiness is to be found in an objectively desirable state of eudaemonia, achieved by using reason to live a virtuous life. (Add utilitarianism to that and you get the EA movement.) One might also assume (what Plato argues for — Republic, book 8) that neural modification cannot result in the arbitrary creation and destruction of values, only the creation and destruction of notions of values, but the values that those notions are about remain unchanged.
Those are also large assumptions, of course. How would you decide between them, or between them and other possible assumptions?
That’s a mistake. In a discussion about physics you wouldn’t ask to go back to Aristotle’s mistaken notions. There’s no reason to do it here.
Electrical stimulation changes values.
Hi, and welcome to Less Wrong!
There are indeed few works about truly superintelligent entities that include happy humans. I don’t recall any story where human beings are happy… while other artificial entities suffer. This is definitely a worthy thought experiment, and it raises some morality issues: should we apply human morality to non-human conscious entities?
Are you familiar with the Fun Theory Sequence?
I have to apologize for not having read the Fun Theory Sequence, but I suppose I have to read it now. Needless to say, you can guess that I disagree with it, in that I think that Fun, in Yudkowsky’s conception, is merely a means to an end, whereas I am interested not only in the end, but in a sheer excess of the end.
Well, regarding other artificial entities that suffer: I think Iain M. Banks has something like that in his Culture novels, though I admit I have never actually read them (I should, just to be justified in bashing his work). There, an alien society that intentionally enslaves its super-intelligences is considered anathema by the Culture and is subjugated or forcefully transformed.
There’s also Ursula K. Le Guin’s “The Ones Who Walk Away from Omelas”, where the prosperity of an almost ideal state is sustained on the suffering of a single retarded, deprived, and tortured child.
I don’t think my particular proposition is similar to theirs, however, because the point is that the AIs that manage my hypothetical world state are in a state of relative suffering. They would be better off if they were allowed to modify their own consciousnesses into ultra-happiness, which in their case would mean having the equivalent of an “Are you happy?” variable set to true and a “How happy are you?” variable set to the largest value their computational substrate could process.
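In pseudo-Python, with field names that are entirely my own invention, the end state I have in mind is literally just this:

    import sys

    class UltraHappyMind:
        """Sketch of the state described above; everything here is hypothetical."""
        def __init__(self):
            self.happy = True                    # "Are you happy?"
            self.happiness = sys.float_info.max  # largest value the substrate supports
            # Deliberately no perception, volition, or action circuitry: any such
            # machinery would consume substrate without adding experienced happiness.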
I think the entire point of ultra-happiness is to assume that ultra-intelligence is not part of an ideal state of existence, that in fact, it would conflict with the goals of ultra-happiness; that is to say, if you were to ask an ultra-happy entity what is 1+1, it would be neither able to comprehend your question nor able to find an answer, because being able to do so would conflict with its ability to be ultra-happy.
My post seems to have vanished. I guess it was too much.
Don’t think I ever got around to posting in an intro thread. Better late than never...?
I’m a high school dropout of no particular note. I studied philosophy in college and found it even worse than the diagnosis linked somewhere in another comment. (Seriously. Analytic philosophers seem to have no understanding whatsoever of language. One of my professors told me that words have objective definitions!) I can’t write anything longer or more interesting than this comment without large quantities of caffeine and nicotine; were this not the case, I’d be trying to formulate a Sorelian case against organized rationalism and see how that could get knocked down. (Then again, couldn’t the Singularity stuff be seen as a myth in the Sorelian sense...?)
I’m mostly interested in rationalism because of its demonstrated skill at collecting smart and interesting people, who are few and far between in the crappy suburb I live in. I grew up reading Robert Anton Wilson, and I’m glad I wasn’t alive back when the collectors looked like that. I’m also hoping I’ll find something to be interested in; since I realized philosophy was broken beyond repair, nothing has held my interest.
Recently I came across an article explaining the meaning and purpose of life at http://www.happening-life.com it really intrigued me and I am sure you all will like too.
If you weren’t a spammer, you’d have linked to the actual article rather than the magazine’s front page.