Yeah, I’d second that. Someone could make a Google Forms survey, or a comment-thread poll, asking which of the users commenting here would be open to having their success stories published in some capacity, whether here on the blog or in a more widely shared piece of literature.
eggman
Open Thread, May 12 – 18, 2014
Mr. Hurford, I know you’re a prominent writer within the effective altruist community, among other things (e.g., producing software and open-source web products through running .impact). As someone who encountered effective altruism first, and Less Wrong after, do you have a perspective on how, or how much, Less Wrong has amplified the success of effective altruism as a social movement over the last couple of years?
I’m curious whether there are any other variables that might account for you not achieving what you hoped you might by connecting through Less Wrong. For example, many regular attendees of the Vancouver meetup have wanted to get great jobs, move into a house with their rationalist friends, or move to the Bay Area to be part of the central party. However, they haven’t done much of this yet, despite having wanted to, alongside other local rationalists, for a couple of years. The fact that most of us are university students, or have only recently launched our careers, throws a wrench into ambitious plans to utterly change our own lives: the effort my friends might have directed towards that is already taken up by adapting to the regular responsibilities of fully-fledged adulthood. On our part, I figure the planning fallacy and overconfidence caused us to significantly overestimate what we would really achieve as members of a burgeoning social subculture, or whatever.
I figured upvotes in the monthly bragging thread would be solely a function of how much utility a commenter could demonstrate having achieved. However, this is my second-most upvoted comment of all time, and my most upvoted comment is similar: a terse comment with just enough data to seem substantial, but one that is mostly warm fuzzies. So, writing ‘Yay! I’m winning!’ about a mundane goal, like doing minimal exercise, might be at least as powerful as providing a long, modest explanation of doing something that signals much more greatness in real life. Below mine, other users have commented that they’ve:
* cemented an academic career with a lifestyle they love.
* given a technical presentation to hundreds of people.
* become competent enough in Python to start a fully-fledged web project.
* made substantial progress in launching a career as a statistician.
* made a regular habit of building skills that are more crucial to success than ‘walking around’.
To me, all of the above seems more impressive than my ‘walking around a bunch’. My hypothesis is that I signaled my success in a simpler package, so it was easier to process, which made pressing the ‘upvote’ button an easier, lazier investment. If you upvoted me, why? What’s going on?
I bought a pedometer to track my steps, so I could achieve my goal of taking 10,000 steps every day and have some motivation to go outside and do light exercise. Before I bought the pedometer, I was doing no regular exercise. I’ve met my goal of 10,000 steps every day in the week since I bought it, so I’ve increased my goal to 12,000 steps every day.
Location: Vancouver, Canada
I was introduced to Less Wrong by a long-time friend who had been reading the website for about a year before I first visited it. Over time, I’ve become more integrated with the community. Now, a handful of my closest friends are people I’ve met through the local meetup. Along with related communities, the meetup also does a lot to facilitate presentations, skill-sharing, and shared knowledge among members.
I know that several of my fellow meetup attendees have also made great friends through the meetup. There has been at least one instance of two of them becoming roommates, and now a few of my friends are trying to put together a ‘rationalist house’ this summer.
For those not in the know, a ‘rationalist house’ is a group home, one of the intentional meatspace communities that have arisen around this website, meant to create a better living environment where new domestic norms can be tried. There are several in the Bay Area, at least one in Melbourne, probably one in New York(?), etc...
The founder of our meetup, who doesn’t visit this website much anymore but is generally in contact with the meetup otherwise, made connections with a successful financial manager who has more or less become a mentor to him. Based upon the mentor’s advice, this friend of mine is now trying to launch his own software company.
Several of us from the meetup, myself included, have attended a CFAR workshop, and the friend who introduced me to Less Wrong has been doing ongoing volunteer work for CFAR for the last year. As a result, we’ve become friends and acquaintances of much of the rationalist community in San Francisco. Additionally, a few of my friends have been spurred into involvement with other organizations based in the Bay Area (e.g., Y Combinator, MIRI, Landmark). That same friend also started an ongoing swing dancing community in Berkeley while he lived there, because memes.
Less Wrong introduced my friends and me to the effective altruism community, which infected a few of us with new memes for doing good, spurring at least one of us to donate several thousand dollars so far to organizations and projects like the global prioritization research currently being jointly carried out by the Future of Humanity Institute and the Centre for Effective Altruism.
For the sake of their privacy, I’m not posting the names of these individuals, or their contact information, directly here on the public Internet, but if you’d like to get in touch with them to ask further questions, send me a private message, and I can put you in touch with them.
I agree with you, so I’ve edited my comment a bit to account for your nitpick. See above. Thanks for making the point.
Yes, it’s a joke.
Note: edited for grammar.
Disclosure: the following point is tangential to GiveWell, and is more about start-ups.
It strikes me as paradoxical that users of Less Wrong, and the rationalist community, endorse founding a start-up as great ‘rationality training’, and view very successful entrepreneurs as paragons of rationality in the practical world, yet Paul Graham notes in his essays that it may often be only in hindsight that entrepreneurs can judge the strategies they implemented as good, such that they ‘got lucky’ with their success. ‘Getting lucky’, that is, may[1] imply that the entrepreneurs in question might not be such paragons of practical rationality after all.
Mr. Graham’s partial solution to this problem is stating that if you’re the right sort of person, you’ll have the right sort of hunches. I believe what Mr. Graham is referring to here is what Luke Muehlhauser has identified as, and labeled, “tacit rationality”.
If you’re an entrepreneurial type looking to start a business, or even an effective altruist looking to start an especially effective non-profit organization or research foundation, you probably want to know if you’re the “right sort of person who has the right sort of hunches”. Simply believing so and betting on it is, I believe, prone to the sorts of biases that are common knowledge around here, so we shouldn’t expect the outcome in such a case to be very favorable. So, the options come down to one of the following:
* Figuring out if you already are tacitly rational, like Mark Zuckerberg, or Oprah Winfrey, apparently.
* Transforming yourself from a geek who knows about biases, but does nothing about them, into someone who achieves practical success at an increasing and predictable rate through their own efforts.
From the conclusion of his post on explicit and tacit rationality, here are Mr. Muehlhauser’s tips for performing the above tasks:
If someone is consistently winning, and not just because they have tons of wealth or fame, then maybe you should conclude they have pretty good tacit rationality even if their explicit rationality is terrible. The positive effects of tight feedback loops might trump the effects of explicit rationality training. Still, I suspect explicit rationality plus tight feedback loops could lead to the best results of all. If you’re reading this post, you’re probably spending too much time reading Less Wrong, and too little time hacking your motivation system, learning social skills, and learning how to inject tight feedback loops into everything you can.
[1] Due diligence: the comment below rightly points out that my original wording in this sentence made a universal claim, which isn’t justified. So, I’ve retroactively edited the sentence to make my claim only an existential one.
Note: edited for formatting, nuance, and grammar.
It could very well be phony information. My point is that I’m an absurd nerd, because Less Wrong, so I want to ground my beliefs as well as possible, but I’m very ambivalent about the issue of vegetarianism because there is so much noise about diets, and economics, and ethics, and aaahhh...
I gave a full explanation of my reasons for part-time vegetarianism above, but Lumifer’s statement pretty much fully accounts for what I choose to eat.
+1 to his comment.
I identify as a flexitarian, meaning I’m a part-time vegetarian. When it’s convenient, I avoid eating meat. This is usually at restaurants, almost all of which in my city have a vegetarian, if not vegan, option on their menu, or when I’m cooking at home and there is something available in the fridge other than animal flesh or byproducts. In this regard, my biggest ‘vice’ is that I don’t make much effort to restrict my consumption of dairy products, since I’m under the impression that dairy doesn’t cause much harm to cattle relative to the suffering inflicted on other animals used to produce food for humans.
One major reason I try to reduce my meat consumption when it’s convenient is that I don’t exercise much, so in the meantime I counteract the negative effects this might have on my health by consuming fewer calories. Otherwise, my part-time vegetarianism is motivated by my ethics, although I feel very ambivalent about my diet. For example:
I commonly encounter reports from mainstream media about how, to prevent environmental degradation via climate change, humans must reduce the amount of meat we eat per capita; e.g., the UN has published reports to this effect in the last couple of years. On the other hand, I’ve encountered counter-arguments that if we all became full-time vegetarians in North America, we would have to import resource-intensive soy products from the other side of the world, causing plenty of pollution in the process anyway.
I’m well aware that, if they have the capacity to suffer, animals on factory farms suffer very much indeed. That pulls at my heartstrings, or makes me sad and empathetic to their suffering, or what have you, so I would like to be part of easing that suffering. However, my life is full of moral uncertainty, because the wisest people I turn to in life are split between eating meat and not. Also, I’m not very confident about how sentient non-primates are, or what their capacity to suffer is. Furthermore, if societies moved to abstaining from animal (by)products, we might end up cultivating more land, which results in the deaths of small animals, e.g., rodents and insects, who might also suffer and die horribly. So, I find the argument for erring on the side of caution by not eating animals, even in the face of uncertainty about their moral relevance, appealing, but I’m not so confident in it that I forgo eating meat entirely.
I fear being in a maligned out-group like vegetarians, and whatever moral fortitude I tell myself I have falls prey to the same convenient biases everyone experiences when their brain gets what it wants now, ideals be damned, so I eat more meat than I would otherwise believe is morally acceptable of myself. In this regard, I might be a hypocrite.
Thanks for the information. In that case, I hope there is another opportunity in the future to weigh in on which blogs are featured on the side panel. I don’t know what anyone else is looking for, but as far as I’m concerned, I check these other rationality blogs as often as I check things posted directly to Less Wrong. I find Slate Star Codex and Overcoming Bias particularly interesting. Anyway, if other people get similar value from these other blogs, perhaps more blogs could be added in the future. I understand that if each of us freely suggested whatever blogs we individually considered ‘rational’, there would be lots of noise and redundancy, swamping the forum with poor suggestions. So, I may start a poll in the future asking which blogs the community as a whole would like to see added.
What’s the process for selecting which ‘rationality blogs’ are featured in the sidebar? Are they selected by the administrators of the site?
I’m surprised that the blogs of some other users with lots of promoted posts here aren’t featured as rationality blogs.
I want to apply to be a conversation notes writer at GiveWell, as they have an open position for it. The application seems quite straightforward, but I’m wondering if there is anything I should consider, because I would love to be hired for this job.
Do you have any suggestions for how I could improve an application?
For the application, I must submit a practice transcription of a GiveWell conversation. I’m wondering, specifically, if there are any textbooks, style guides, or ways of writing I should consult in preparation. Obviously, I must write the transcription myself, and not plagiarize, or whatever.
Disclosure: my votes in the above poll are not anonymous. I want people to be aware of how I voted because of the following: my votes in this poll reflect only my perception of Less Wrong over the last few months, as of the date of this comment, which is the period during which I have been checking Less Wrong on a semi-daily basis.
I know, I know... I tend to write in a wordy, long-winded manner. The original draft was even longer than the above comment, by about 20%, so I edited out the material that I didn’t believe would actually clarify the questions I was asking, or that I believed wouldn’t be at all valuable to adamzerner. I was at a loss for words other than ‘edited for brevity’. In terms of writing, I believe I’m decent at getting my thoughts out of my head. However, writing more compactly is a skill I need to improve upon, and I intend to do so.
Also, I aim to be quite precise with my language, so I tend to provide more detail in my examples than I believe might be necessary, in an attempt to prevent as much confusion for the reader as I can.
Thanks.
It’s a weird phenomenon, because even those lurkers with accounts who barely contribute might not state how they haven’t socially benefited from Less Wrong. However, I suspect the majority of people who mostly read Less Wrong, and are too passive to insert themselves deeper into the community, are the sorts of people who are also less likely to find social benefit from it. From my own experience, and that of my friends and the others commenting here, they took the initiative to at least, e.g., attend a meatspace Less Wrong meetup. This is more likely to lead to social benefit than Less Wrong spontaneously improving the lives of more passive users who don’t make their presence known. If one is unknown, one won’t make the social connections that bear fruit.