Why do you think the table is the most important thing in the article?
A different thing Tsvi could have done was say “here’s my best guess of which of these are most important, and my reasoning why”, but that would have been essentially the same as the table + surrounding essay, just with somewhat less fidelity about what his guesses were for the ranking.
Meanwhile I think the most important thing was laying out all the different potential areas of investigation, which I can now reason about on my own.
First, reiterating, the most important bit here is the schema, and drawing attention to this as an important area of further work.
Second, I think calling it “baseless speculation” is just wrong. Given that you’re jumping to a kinda pejorative framing, it looks like your mind is kinda made up and I don’t feel like arguing with you more. I don’t think you actually read the Scott article in a way that was really listening to it and considering the implications.
But, since I think the underlying question of “what is LessWrong curated for” is nuanced and not clearly spelled out, I’ll go spell that out for the benefit of everyone just tuning in.
Model 1: LessWrong as “full intellectual pipeline, from ‘research office watercooler’ to ‘published’”
The purpose of LW curated is not to be a peer-reviewed journal, and the purpose of LW is not to have quite the same standards as published academic work. Instead, I think of LW as tackling “the problem that academia is solving” through a somewhat different lens, one that includes many of the same pieces but organizes them differently.
What you see in a finished, published journal article is the very end of a process, and it’s not where most of the generativity happens. Most progress is happening in conversations around watercoolers at work, slack channels, conference chit-chat, etc.
LW curation is not “published peer review.” The LessWrong Review aspires more to be that. (I also think The Review fails at achieving all my goals with “the good parts of peer review,” although it achieves other goals, and I have thoughts on how to improve it on that axis.)
But the bar for curated is something like “we’ve been talking about this around the watercooler for weeks, the people involved in the overall conversation have found this a useful concept, and they are probably going to continue further research that builds on this, and eventually you will see some more concrete output.”
In this case, the conversation has already been ongoing for a while, with posts like Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible (another curated post, which I think is more “rigorous” in the classical sense).
I don’t know if there’s a good existing reference post for “here in detail is the motivation for why we want to do human intelligence enhancement and make it a major priority.” Tsvi sort of briefly discusses that here but mostly focuses on “where might we want to focus, given this goal.”
Model 2: “Review” is an ongoing process.
One way you can do science is to do all of the important work in private, and then publish at the end. That is basically just not how LW is arranged. The whole idea here is to move the watercooler to the public area, and handle the part where “ideas we talk about at the watercooler are imprecise and maybe wrong” with continuous comment-driven review and improving our conversational epistemics.
I do think the bar for curated is “it’s been at least a few days, the arguments in the post make sense to me (the curator), and nobody has raised major red flags about the overall thrust of the post.” (I think this post meets that bar.)
I want people to argue about both the fiddly details of the post and the overall frame of the post. The way you do that is by making specific claims about why the post’s details are wrong or incomplete, or why the post’s framing is pointed in the wrong direction.
The fact that this post’s framing seems important is more reason to curate it: we haven’t found major flaws on a first pass, and I want more opportunity for people to surface and discuss any major flaws.
Saying “this post is vague and its made-up numbers aren’t very precise” isn’t adding anything to the conversation (except providing some scaffold for a meta-discussion of LW site philosophy, which is maybe useful to do periodically since it’s not obvious at a glance).
Revisiting the guesswork / “baseless speculation” bit
If a group of researchers has a vein they have been discussing at the watercooler, and it has survived a few rounds of discussion and internal criticism, and it’ll be a while before a major, legible, rigorous output is published:
I absolutely want those researchers’ intuitions and best guesses about which bits are important. Those researchers have some expertise and worldmodels. They could spend another 10–100 hours articulating those intuitions with more precision and backing them up with more evidence. Sometimes it’s correct to do that. But if I want other researchers to be able to pick up the work and run with it, I don’t want them bottlenecked on the first researchers privately iterating for another 10–100 hours before sharing it.
I don’t want us to over-anchor on those initial intuitions and best guesses. And if you don’t trust those researchers’ intuitions, I want you to have an easy time throwing them out and thinking things through from scratch.