A number of years ago, when LessWrong was being revived from its old form to its new form, I did not expect the revival to work. I said as much at the time. For a year or two in the middle there, the results looked pretty ambiguous to me. But by now it’s clear that I was just completely wrong—I did not expect the revival to work as well as it has to date.
Oliver Habryka in particular wins Bayes points off of me. Hooray for being right while I was wrong, and for building something cool!
Aww, thank you! ^-^
LW is more of an AI Alignment forum now, but without the mad-science agent-foundations spirit of the earlier post-Sequences era. It is probably able to stay alive because the field grew. So it’s not revived, it’s something new. Development of rationality mostly remains in the past, with little significant discussion in recent years.
Yeah, I think some of this is true, but while there is a lot of AI content, I actually think that a lot of the same people would probably write non-AI content, and engage with non-AI content, if AI were less urgent or the site had less existing AI content.
That counterfactual is hard to evaluate, but a lot of people who used to be core contributors to LW 1.0 are now also posting to LW 2.0, though they now post primarily about AI. I think that’s evidence of a broader shift among LW users toward seeing AI as really urgent and important, rather than of a very different new user base having been discovered.
I kind of agree that development of rationality feels somewhat stagnant right now. I think there are still good posts being written, but a lot of cognitive energy is definitely going into AI stuff, more so than rationality stuff.
I would love to be able to stop worrying about AI and go back to improving rationality. Yet another thing to look forward to once we leap this hurdle.
Totally agree. Oliver & co. won tons of Bayes points off me.
Same! LW is an outstanding counterexample to my belief that resurrections are impossible. But I haven’t incorporated it into my gears-level model yet, and I’m unsure how to. What did LW do differently, or which gear in my head caused me to fail to predict this?
The original LW was a clone of Reddit. The Reddit source code was quite complex. I am a software developer; I looked at that code myself, tried to figure out some things, and then gave up.
I do not remember whether I made any predictions at that time. But ignoring what I know now, I probably would have said the following:
Creating a website with all the functionality of LessWrong 1.0 is a lot of work. Only a few LessWrong readers are capable of building such a complex project, and it would take them a lot of time. Most of the people with the required skills could probably get a job at Google, so the opportunity cost of building LessWrong 2.0 is very high.
Is anyone going to pay them, or are they supposed to do it in their free time? If it’s the former, it is going to be very expensive. Is it really a good way to spend so much money? If it’s the latter, it is very unlikely that the project will ever get finished, because it will progress very slowly.
If I understand it correctly, what happened is that some people got paid to work on this full-time. And they turned out to be very good at their job. They rewrote everything from scratch, which was probably the easier way, but it required a lot of time, and a lot of trust because it was “either complete success, or nothing” (as opposed to gradually adding new features to Reddit code).
If I understand it correctly, what happened is that some people got paid to work on this full-time.
This is about what I was going to say in response, before reading your comment.
I think the key factor that makes it different from other examples is that it was a competent person’s full time job.
There are some other things that need to go right in addition to that, but I suspect that there are lots of things that people are correctly outside-view gloomy about which can just be done, if someone makes it their first priority.
Things that need to go right:
it must be a competent person (as opposed to merely overconfident)
who really cares about the project (more than about other possible projects)
can convince other people of their competence (the ones who have money)
gets sufficient funding (doesn’t waste their time and energy working a second job)
has autonomy (no manager who would override or second-guess their decisions)
no unexpected disasters (e.g. getting hit by a bus, patent trolls suing the project, ...)
Other than the unexpected disasters, it seems like something that a competent civilization should easily do. Once you have competent people, allow them to demonstrate their competence, look for an intersection between what they want to do and what you need (or, if you are sufficiently rich, just an intersection between what they want to do and what you believe is a good thing), give them money, and let them work.
In real life, having the right skills and sending the right signals are not the same thing; people who do things are not the same as people who decide things; time is wasted on meetings and paperwork.
That anyone with any agency and competence was working on it as their primary goal, as opposed to nobody doing so.
Just to be clear: there were people working on it who had both agency and competence, but they were working on it as a side project. I think having something be someone’s only priority and full-time job makes a large difference in how much agency someone can bring to bear on a project.