Apparently, in the days leading up to the Effective Altruism Summit, there was a conference on Artificial Intelligence keeping the research associates out of town. My source is a friend who is interning at MIRI right now. So, anyway, they might have been even busier than you thought. I hope this clears things up.
eggman
The whole subculture that is the new ‘rationality movement’ has some nodes, i.e., memes and subcultures, which are not included in this map of the Bay Area memespace. I’m sitting here at home with my friend Kytael, and we’re brainstorming the following:
What nodes are part of the rationalist movement that aren’t typical of the Bay Area memespace.
What nodes aren’t part of the rationalist movement that are still part of the Bay Area memespace.
What nodes we as a community might want to add to the rationalist memespace.
What nodes might enter the rationalist memespace that some parts of the community might consider undesirable.
Nodes Unique to the Rationalist Community
Neoreaction
Men’s Rights Activists/Pick-Up Artists
Secular Solstices, Spiritual Naturalism
Self-Reflection
Hansonian Contrarianism
Generalization of Science and Economics to Everyday Life
Nerd/Geek Culture
Nodes From the Bay Area Separate From the Rationalist Community
Whole Earth community
New Age Culture
Back-to-the-land movement
Kink Culture
Controversial Nodes Within the Rationalist Community
Neoreaction
Men’s Rights Activism, Pick-Up Artists
Social Justice
Emerging Subcultures and Memes in the Rationalist Community
Post-rationality/Post-rationalism
Partnered Dancing
(Whatever Is Trending On) Slate Star Codex
Applied Rationality=???
Psychotropic/Nootropic Use
Bitcoin/Cryptocurrency Enthusiasm
New Memes and Groups The Rationalist Community May Want to Explore More
Open Borders
...
This list isn’t exhaustive, and it could be controversial, so please question or criticize it below. I will update this list by editing this comment in response to replies. This was more of a brainstorming exercise than anything, but one I thought other Less Wrong users might find interesting. If a great discussion results, I, or someone else, could turn this into a fuller post in its own right.
Is there an update on this issue? Representatives from nearly all the relevant organizations have stepped in, but what’s been reported has done little to resolve my confusion, and I remain as divided on it as Mr. Hallquist originally was. Dr. MacAskill, Mr. O’Haigeartaigh, and Ms. Salamon have all explained why they believe the organizations they’re attached to are the most deserving of funding. The problem is that this has done little to assuage my concern about which organization is in the most need of funds, and which will have the greatest impact given a donation in the present, relative to each of the others.
Thinking about it as I write this comment, it strikes me as an unfortunate state of affairs when organizations that genuinely want to cooperate towards the same ends are put in the awkward position of making competing(?) appeals to the same base of philanthropists. This might have been mentioned elsewhere in the comments, but: donations to which organization do any of you believe would lead to the biggest return on investment in terms of attracting more donors, and talent, towards existential risk reduction as a whole? Which organization will most increase the base of effective altruists, and like-minded individuals, who would support this cause?
If anything, I could use more information from the CEA, the FHI, and the GPP. Within effective altruism, there’s a bit of a standard of expecting some transparency from the purportedly effective organizations that are supported. In terms of financial support, this would mean the open publishing of budgets. Based upon Mr. O’Haigeartaigh’s report above, though, the FHI itself might be too strapped for time, among all its other core activities, to provide this sort of insight.
I recently started my career as an effective altruist earning to give, making my first big splash with a $1,000 USD unrestricted donation to GiveWell last month.
Uh, I’ve trawled through Wikipedia for the causes and symptoms of mental illnesses, and, according to my doctors (general practitioner and psychiatrist), I’ve been good at identifying what I’m experiencing before I go to see them about it. The default case is that patients just go to the doctor, report their symptoms, answer questions about their recent lifestyle, and the doctors take care of diagnosis and/or assigning treatment. I choose to believe that I have such clarity about my own mental processes because my doctors tell me how impressed they are when I come to them already seeming to know what I’m experiencing. I don’t know why this is, but my lazy hypothesis chalks it up to me being smart (people I know tell me this more than I would expect), and to my having become more self-reflective after attending a CFAR workshop.
Of course, both my doctors and I could be prone to confirmation bias, which would be a scary result. Anyway, I’ve had a similar experience of observing my own behavior, realizing it’s abnormal, and being proactive about seeking medical attention. Still, for everyone, diagnosing yourself by trawling Wikipedia or WebMD seems a classic exercise prone to confirmation bias (e.g., experiencing something like medical student’s disease). This post is a signal that I’ve qualified my concerns through past experience: I encourage you to seek out a psychiatrist, as I don’t expect that to result in a false negative diagnosis, but also to still be careful as you think about this stuff.
Scientists, as a community of humans, should expect their research to return false positives sometimes, because that is what is going to happen, and they should publish those results. Scientists should also expect experiments to demonstrate that some of their hypotheses are just plain wrong. It seems to me replication is not very useful only if the replications of an experiment are likely prone to all the same problems that currently make original experiments in social psychology not all that reliable. I don’t have experience, or practical knowledge of the field, though, so I wouldn’t know.
Insofar as it’s appropriate to post about a well-defined problem without having its complete solution, I consider this post to be of sufficient quality to deserve being posted in Main.
You’re welcome.
I figured I would do my due diligence for the sake of the community, or whatever, so I downvoted this post. Note that I’m a newer user of Less Wrong who isn’t very familiar with Mr. Newsome’s history of shenanigans on this website. So, I didn’t have an automatic cringe reaction, or anything, when I encountered this piece. I downvoted this post based upon its own, singular lack of merit.
Mr. Newsome, here is some criticism I hope you appreciate.
Nothing about this first chapter is enticing me to care about ‘post-rationality’, whatever that is. Eliezer Yudkowsky took a premise everyone was familiar with, and turned it on its head during the first chapter. He used a familiar narrative format, and actually wrote well. While the first chapter of Harry Potter and the Methods of Rationality didn’t immediately begin with an introduction of what the “methods of rationality” as applied to magic would be, per se, there was enough of that in the first chapter to keep others reading.
In hindsight, Mr. Yudkowsky couldn’t have expected his fan fiction to become so popular, or so widely read. The fact that it has might be biasing me into thinking that his first crack at writing the fan fiction was better than it really is.
Anyway, it seems you’re trying to do too much with this piece. Harry Potter and the Methods of Rationality is the premise everyone here is familiar with, but you’ve done more than just turn it on its head. You’ve turned the very idea of having a deep familiarity with the tropes of Less Wrong on its head. The first paragraph is just a blast of memes; I’m familiar with all of them, but I don’t understand what they’re all doing here. The first part is incoherent, and signals only that you have the knowledge to mock (in jest) the Less Wrong community. That in itself isn’t clever, and the rest of the piece isn’t clever enough as a parody to keep us, the readers, engaged.
I perceive the second part of this chapter to be a bit funny, but it doesn’t build upon anything to get me to care. I don’t believe it will be sustainable to have Potter-Yudkowsky be aware that he is in a meta-fan-fiction. If the protagonist confronts you, the author, as the controller of the world he is simulated within, he can at best only engage with a caricature of yourself as you’ve written it. It’s difficult for me to think of how you would handle that without it becoming boring, unless you’re very talented and creative. If Potter-Yudkowsky realizes he can use his awareness to gain superpowers, that quickly destroys the suspension of disbelief in the fantasy world the reader immerses themselves in, which would also be boring. Finally, based upon how this chapter has played out, it would be difficult to maintain continuity into the next chapter, which I would personally find frustrating and challenging as a reader.
This reads as the first part of some absurdist fiction. Still, it shows little foresight. The fact that you were drunk when this chapter was written and posted leads me to suspect you posted something that would be entertaining to yourself, but wasn’t crafted with much thought to how it would be received by whatever readership you were hoping for.
In short, this doesn’t strike me as a direct parody of Harry Potter and the Methods of Rationality, but rather a parody of the rationalist community itself(?). That’s such an odd thing to do that I find it off-putting, and I consider it this piece’s undoing.
If you think I’m being unfair, note that HPMOR isn’t posted here, just referenced here. If you actually want to work on writing, as you’ve claimed, rather than trolling, maybe FanFiction.net is the better place.
It seems to me you’re aware of your own writing, compared to the body of fiction you’re already familiar with, such that you know how to write in the typical style, or cadence, of long-form narratives. That is, you can, or could, write good fiction. I don’t even know that you need to work on your style. Maybe what you need to hone is the broader strokes of planning a piece with a consistent theme, or structure, that would appeal to the readership you’re hoping for. Obviously, the readership you’re aiming for is the rationalist community. Presumably, you have the knowledge to produce funny content that would be better appreciated. FanFiction.net might be the place to start.
Upvoted. My thoughts:
For full disclosure, I don’t consider myself very successful in real life either, and my ambitions are also much higher than where I am now. This is a phenomenon that my friends from the Vancouver rationalist meetup have remarked upon. My hypothesis is that Less Wrong selects for a portion of people who are looking to jump-start their productivity to a new level, but mostly selects for intelligent but complacent nerds who want to learn to think about arguments better, and who like reading blogs. Such behavioral tendencies don’t lend themselves to getting out of an armchair more often.
Mr. Bur, I don’t know if you’re addressing me specifically, or the users reading this thread generally, but, like Mr. Kennaway, I agree wholeheartedly. I personally don’t feel especially qualified to rewrite the core of Less Wrong canon, or whatever. I want to write about the stuff I know, and it will probably be a couple of months before I start attempting to generate high-quality posts, as in the interim I will need to study more thoroughly the topics I care about, ones which I perceive not to have been covered by a better post on Less Wrong before. I believe the best posts in Discussion in recent months have been based on specific topics, like Brienne Strohl’s exploration of memory techniques, or the posts discussing the complicated issues of human health and nutrition. By fortuitous coincidence, Robin Hanson has recently captured well what I believe you’re getting at.
My prior comment got a fair number of upvotes for its hypothesis about why there was an exodus of the first generation of Less Wrong’s most prominent contributors. However, going forward, my impression of how remaining users frame the purpose of using Less Wrong is a combination of Mr. Bur’s comment above, and this one.
Note: edited for content, and grammar.
[WARNING: GOOEY PERSONAL DETAILS BELOW]
I became part of much of the meatspace rationalist community before I started frequently using Less Wrong, so I integrate my personal experience into how I comment here. That’s not to say that I use my personal anecdotes as evidence when giving advice to other users of this site; I know that would be stupid. However, if you check my user history on Less Wrong, you’ll notice that I primarily use Less Wrong as a source of advice for myself (and my friends, too, who don’t bother to post here, but I believe should). Anyway, Less Wrong has been surprisingly helpful and insightful.

This has all been since 2012-13, mostly, well after when most of you seem to consider Less Wrong to have started declining. So, I’m more optimistic about Less Wrong’s future, but my subjective frame of reference is having had good experiences with it after it hit its historical peak of awesomeness. So, maybe the rest of you who are concerned (rightfully so, in my opinion) about the decline of discussion on Less Wrong have hopped on a hedonic treadmill that I haven’t hopped on yet.

I believe the good news is that I feel excited and invigorated to boost Less Wrong Discussion in my spare time. I like these meta-posts focused on solving the Less Wrong decline/identity-crisis/whatever-this-problem-is, and I want to help. In the next week, I’ll curate another meta-post summarizing, and linking to, all the best posts in Discussion in the last year. Please reply if this idea seems bad, or unnecessary, to stop me from wasting my time writing it up.
My friend kytael (not his real name, but his Less Wrong handle) has been on Less Wrong since 2010, has been a volunteer for the CFAR, and lived in the Bay Area for several months as part of the meatspace rationalist community there. For a couple of years, I was only a lurker on Less Wrong who occasionally read some posts. I didn’t bother to read the Sequences, but I had already studied cognitive science, and I attended lots of meetups where the Sequences were discussed, so I understand much of the canon material of Less Wrong rationality, even if I wouldn’t use the same words to describe the concepts. It’s only in the last year and a bit that I got more involved in my local meetup, which motivated me to get involved in the site. I find myself agreeing with lots of the older Sequence posts, and with the highest-quality posters (lukeprog, Yvain, gwern, etc.) from a few years ago, but I too am deeply concerned about the decline of vitality on Less Wrong, as I have only just started to get excited about its online aspects.
Anyway, I too asked kytael:
What should the purpose of this site be? Is it supposed to be building a movement or filtering down the best knowledge?
(I asked him more or less the same question.)
He replied: “I think the best way to view Less Wrong is as an archive.”
Since he was tapped into the Bay Area rationalist community, but was also a user of Less Wrong from outside of it, he was in an especially good position, based on his observations, to offer hypotheses as to why use of this website has declined.
First of all, the most prominent figures of Less Wrong have spread their discussions across more websites than this one; much of the discussion from those popular users who used to spend more time on Less Wrong now happens elsewhere. Scott’s/Yvain’s Slate Star Codex is probably the best example of this, another being the Rationalist Masterlist. Following a plethora of blogs is much more difficult than following just this one site, so for newer users of Less Wrong, or those of us who haven’t had the opportunity to know users of this site more personally, keeping up with all this discussion is difficult.
Second of all, the most popular and most active users of Less Wrong have integrated more publicly, and now use social media. Ever since the inception of the CFAR workshops, users of Less Wrong have flocked to the Bay Area in throngs. They all became fast friends, because the atmosphere of CFAR workshops tends to do that (re: anecdata from my attendance there, and that of my friends). So, everyone connects via the private CFAR mailing lists, or Facebook, or Twitter, or they start businesses together, or form group homes in the Bay Area. Suddenly, once these people can integrate their favorite online community and subculture with the rest of their personal lives, there isn’t a need to communicate with others only via lesswrong.com, the awkward blog/forum site.
Finally, Eliezer Yudkowsky and the others who started Less Wrong had, from its inception, already reached the conclusion that the best, ‘most rational’ thing for them to do was to reduce existential risk. Eliezer Yudkowsky wrote the Sequences as an exercise for himself, to re-invent clear thinking to the point where he would be strong enough to start tackling existential risk reduction, because he wasn’t yet prepared for it in 2009. Secondarily, he hoped the Sequences would serve as a way for others to catch up to his speed, and approach his level of epistemology, or whatever. The instrumental goal of this was obviously to get more people to become awesome enough to tackle existential risk alongside him. That was five years ago. As a community goal, Less Wrong was founded as dedicated to ‘refining the art [and (cognitive) science] of human rationality’. However, the personal goal of its founders from what was the SIAI, and is now the MIRI, was to provide a platform, a springboard, for getting people to care about existential risk reduction. Now, as MIRI enters its phase of greatest growth, the vision of a practical ‘rationality dojo’ finally exists in the CFAR, and, with increased mutual collaboration with the Future of Humanity Institute, the effective altruism community, and global catastrophic risk think tanks, those who were the heroes of Less Wrong use the website less as they’ve gotten busier and their priorities have shifted.
They wanted to start a community around rationality, to improve their own lives and those of others. Now they have it. So, those of us remaining can join these other communities, or try something new. The tools for those who want this website to flourish again remain here in the old posts: Eliezer, Luke, and Scott, among others, laid the groundwork for us to level up as they have. So, aside from everything else, there is room for a second generation, a revival of Less Wrong, where new topics, ones that aren’t mind-killing either, can be explored. If those of us who care do the hard work to become the new paragon users of Less Wrong, we can reverse its Eternal September.
After this primary exodus from Less Wrong, others occurred as well. I personally know one user who had some of the most upvoted, and some featured, posts on Less Wrong before he stopped using this website and deleted his account. Now, he interacts with other rationalists via Twitter, and is more involved with the online Neoreaction community. It seems like a lot of Less Wrong users have joined that community. My friend mentioned that he’s read the Sequences, and feels like what he is thinking about is beyond the level of thinking occurring on Less Wrong, so he no longer finds the site useful. Another example of a different community is MetaMed: Michael Vassar is probably quite busy with that, and brought a lot of users of Less Wrong with him into that business. They probably prioritize their long hours there, and their personal lives, over taking time to write blog posts here.
Personally, my friends from the local Less Wrong meetup, and I, are starting our own outside projects, which also involve students from the local university, and the local transhumanist, and skeptic, communities as well. Send me a private message if you’re interested in what’s up with us.
In addition to my upvote, this comment is confirmation I, for one, would be interested in this.
I’d suggest just being slightly more suspicious of insulting arguments that make claims about your character sucking (immutably) than ones about the way you’ve laid out the plan.
It seems katydee may have made a mistake in choice of language here by conflating “yourself” with “your plans”. To nitpick, it might be better to consistently refer to the thing to be strawmanned as “your plan(s)”, and not use “you” at all. If one wants to generate an argument to point out flaws in one’s own plans, strawmanning oneself is like launching an ad hominem attack upon oneself. When somebody is looking to improve only one plan targeted at a (very) specific goal, strawmanning the plan rather than one’s own character would seem to illuminate the relevant flaws better.
Of course, if somebody wants to prevent mistakes across a big chunk of their life, or in their general template for plans, then that might be the time when strawmanning one’s own character is more worthwhile.
It doesn’t seem like the webmasters, or administrators, of Less Wrong receive these requests as signals. Maybe try sending them a private message directly, unless the culture of Less Wrong already considers that inappropriate, or rude.
Does anyone understand how the mutant-cyborg monster image RationalWiki uses represents Less Wrong? I’ve never understood that.
It’s a weird phenomenon, because even those lurkers with accounts who barely contribute might not state how they’ve failed to benefit socially from Less Wrong. However, I suspect the majority of people who mostly read Less Wrong, and are too passive to insert themselves deeper into the community, are the sorts of people who are also less likely to find social benefit from it. I mean, in my own experience, that of my friends, and that of the others commenting here, they took the initiative to at least, e.g., attend a meatspace Less Wrong meetup. This is more likely to lead to social benefit than Less Wrong spontaneously improving the lives of more passive users who don’t make their presence known. If one remains unknown, one won’t make the social connections that would bear fruit.
I’m only 22, and I don’t have lots of life experience. So, I don’t know how pleasing the rewards of such hardships would be, nor do I have a model of how much pain would go into this. However, reading through the scenarios seemed awful, so I rated my willingness to go through with them very low relative to the median response.
I’d be more interested in the same poll restricted to people over the age of at least forty, asking along the lines of whether the rewards of their hardships were so great that they’d be willing to go through the pain again.