I have noticed that many people here want LW resurrection for the sake of LW resurrection.
But why do you want it in the first place?
Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.
After all, if you think that Eliezer’s writings constitute most of LW’s value, and Eliezer doesn’t write here anymore, maybe the wise decision is to let it decay.
Beware the lost purposes.
Emotionally—for the feeling that something new and great is happening here, and I can see it growing.
Reflecting on this: I should not optimize for my emotions (wireheading), but the emotions are important and should reflect reality. If great things are not happening, I want to know that, and I want to fix that. But if great things are happening, then I would like a mechanism that aligns my emotions with this fact.
Okay, what exactly are the “great things” I am thinking about here? What was the referent of this emotion when Eliezer was writing the Sequences?
When Eliezer was writing the Sequences, merely the fact that “there will exist a blog about rationality; without Straw Vulcanism, without Deep Wisdom” seemed like a huge improvement to the world, because it seemed that once such a blog existed, rational people would be able to meet there and conspire to optimize the universe. Did this happen? Well, we have MIRI and CFAR, and meetups in various countries (I really appreciate not having to travel across the planet just to meet people with similar values). Do they have an impact beyond providing people a nice place to chat? I hope so.
Maybe the lowest-hanging fruit has already been picked. If someone tried to write Sequences 2.0, what would it be about? Cognitive biases that Eliezer skipped? Or the same ones, perhaps more nicely written, with better examples? Both would be nice things to have, but their awesomeness would probably be smaller than going from zero to Sequences 1.0. (Although, if Sequences 2.0 were written so well that they became a bestseller, and thousands of students outside the existing rationalist communities read them, then I would rate that as more awesome. So the possibility is there. It just requires very specialized skills.) Or maybe explaining some mathematical or programming concepts in a more accessible way. I mean those concepts that you can use in thinking about probability or about how the human brain works.
Internet vs real life—things happening in the real world are usually more awesome than things happening merely online. For example, a rationalist meetup is usually better than reading an open thread on LW. The problem is visibility. The basic rule of bureaucracy—if it isn’t documented, it didn’t happen—is important here, too. When given a choice between writing another article and doing something in the real world, please choose the latter (unless the article is really exceptionally good). But then, please also write an article about it, so that your fellow rationalists who were not able to participate personally can share the experience. It may inspire them to do something similar.
By the way, if you are unhappy about the “decline” of LW because it will make a worse impression on new people you would like to introduce to LW culture—point them towards the book instead.
Adding: if you would like to see a rationalist community growing, research and write about creating and organizing communities. (That is advice for myself, for when I have more free time.)
Something I feel Yudkowsky doesn’t really talk about enough in the Sequences is how to be rational in a group, as part of a group, and as a group. There is some material in there, and HPMOR also offers some stuff, but there’s very little that is as formalized as the ideas around “Politics is the Mindkiller/Spiders/Hard Mode” or “the Typical Mind Fallacy.”
Something Yudkowsky also mentions is that what he writes about rationality is his path. Some things generalize (most people have the same cognitive biases, but in different amounts). From reading the final parts of the Sequences and the final moments of HPMOR, I get the vibe that Yudkowsky really wants people to develop their own path. Alicorn did this, and Yvain also did/does it to some extent (and I’m reading the early non-Sequence posts and I think that MBlume also did this a bit), but more could be written about it. Now, I agree that this is hard: the lowest fruit has probably already been picked, and it’s not something everyone can do. But I find it hard to believe that there are just 3 or 4 people who can actually do this. The bonobo rationalists on tumblr are, in their own weird way, trying to find a good way to exist in the world in relation to other people. Some of this is formalized, but most of it exists in conversations on tumblr (which is an incredibly annoying medium, both to read and to share). Other people/places from the Map probably do stuff like that as well. I take this as evidence that there is still fruit low enough to pick without needing a ladder.
I’ve been working on a series of posts centered around this—social rationality, if you will. So far, the best source for such materials remains Yvain’s writings on the topic on his blog; he really nails the art of having sane discussions. He popularised some ways of framing debate tactics such as motte-and-bailey, steelmanning, bravery debates and so on, which entered the SSC jargon.
I’m interested in expanding on that theme with topics such as emphasis fights (“yes, but”-ing) or arguing in bad faith, as examples of failure modes in collective truth-seeking, but in the end it all hinges on an ideally shared perception of morality, or of standards to hold oneself to. My approach relies heavily on motives and on my personal conception of morality, which is why it’s difficult to teach it without looking like I preach it. (Eliezer, at least, didn’t seem too concerned about this, but not everyone has the fortune to be him.) Besides, it’s a very complex and murky field, one best learned through experience and examples.
Why do you prefer offline conversations to online?
Off the top of my head, I can name three advantages of online communication which are quite important to LessWrong:
1. You don’t have to go anywhere. Since the LW community is distributed all over the world, this is really important: at meetups you can only communicate with people who happen to be in the same place as you, whereas online you can communicate with everyone.
2. You have more time to think before replying, if you need it. For example, you can support your arguments with relevant research papers or data.
3. As you have noticed, online articles and discussions remain available on the site. You have proposed writing articles after offline events, but a) not everything will be covered by them, and b) it requires additional effort.
Well, enjoy offline events if you like them; but the claim that people should always prefer offline activities over online activities is highly questionable, IMO.
They satisfy me emotionally on a level online conversations don’t. Something in my brain generates a feeling of “a tribe” more intensely.
An offline conversation has the potential to instigate other offline activities. (As an example of what actually happened: going to a gym together and having a lecture on “rational” exercising.)
But I agree with what you wrote; online activities also have their advantages. It just seems to me we have too much online, too little offline (at least those who don’t live in the Bay Area).
Offline conversations are higher bandwidth. And not just because they are lower latency.