Either I’m badly misunderstanding you, or your post is at odds with a great many facts about LessWrong and other internet communities. A few examples:
we very seldom seem to adopt useful vocabulary or arguments or information from outside of LessWrong
What??? LW is constantly citing and discussing science, philosophy, and other material that didn’t originate on LW. Indeed, most of The Sequences consists of material that didn’t originate on LW, as do almost all of my posts and much other LW content.
in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general
LW has made progress on many topics that Eliezer talked about on LW: decision theory, the science of human values, and more. Finding examples outside the topics Eliezer raised may be difficult because (1) Eliezer covered so many topics, and (2) Eliezer’s Sequences define the major subject matter of the blog. (E.g., we haven’t made progress on French politics because that’s not a topic of the blog.)
I was checking out a blogroll and saw LessWrong listed as Eliezer’s blog about rationality. I realized that essentially it is. And worse, this makes it a very crappy blog, since the author doesn’t make new updates any more.
Yvain, myself, Anna Salamon, and many others have written hundreds of useful and well-liked posts since The Sequences. In what sense is it “Eliezer’s blog”? It’s also untrue that Eliezer no longer writes updates.
This site has a wonderful ethos for discussion and thought. Why do we seem to be wasting it?
Sure, LW could be better, but what are you comparing it to? Every time I try to have a conversation outside LW/OB I am slapped in the face by how much worse other communities tend to be. LessWrong is, by internet community standards, extremely high in intellectual productivity and non-insularity.
So… what am I missing? Have I misunderstood what you’re saying?
Sure, LW could be better, but what are you comparing it to? Every time I try to have a conversation outside LW/OB I am slapped in the face by how much worse other communities tend to be.
Yes, Less Wrong is better than all other places. But I hope you will agree that this is not an optimistic observation. I do not think we are doing particularly well if you just look at us and how we are doing, rather than comparing this place to other places.
I’d like to remind you of some of the words from my favorite essay, which is also one of your favorite essays:
But it is useless to be superior: Life is not graded on a curve. The best physicist in ancient Greece could not calculate the path of a falling apple. There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse.
I do not think we are doing the best we possibly can, and I think that is very bad.
Yvain, myself, Anna Salamon, and many others have written hundreds of useful and well-liked posts since The Sequences. In what sense is it “Eliezer’s blog”?
I agree, but these are salient exceptions, not the rule. It is “Eliezer’s blog” in the sense that The Sequences are the most important thing here, but people are barely reading them (or so I hear).
It’s also untrue that Eliezer no longer writes updates.
They are so very, very rare, though. And the others you listed, indeed many of the others who made good contributions at all, have all but stopped.
Every time I try to have a conversation outside LW/OB I am slapped in the face by how much worse other communities tend to be. LessWrong is, by internet community standards, extremely high in intellectual productivity and non-insularity.
I don’t think your point is strong evidence for your conclusion, unless you are directly observing insularity and low intellectual productivity when you visit other websites. In which case, it seems more prudent to just say that directly. It’s entirely possible that conversations on LW/OB are better, but are better only because (some) people have read the sequences.
Intellectual productivity from the last two weeks:
Conspiracy theories as agency fictions
Armstrong’s thoughts on measuring optimization power
Open problems related to Solomonoff Induction
Loebian cooperation
Optimizing affection
Problematic problems for TDT
Avoiding motivated cognition
Several posts from How to Purchase AI Risk Reduction
Non-insularity from the last two weeks:
Gut bacteria and thought
Combining causality with algorithmic information theory
Debate on ethical careers
Zietsman on voting
Thick and thin
Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent
Central planning is intractable
Computer science and programming links
Naturally many individuals will update. But as memories fade, I think the influence of articles like the ones cited will over time remain mostly in thick, hard-to-communicate ways, such as how they calibrate some rationalist’s heuristics. My complaint isn’t that we fail to note or bring up interesting ideas; my complaint is that we fail to propagate them through the community in the same way we propagated original articles. We as a subculture don’t update. I would also note that we don’t propagate the original articles as well as we should. Ideas originating off-site get, on average, less debate and are seldom built on further. As several readers have pointed out, this might be ameliorated by better indexing; I suspect a big reason is that high-quality posts not in a sequence tend to be orphaned and more seldom read.
Concerning the cited productivity: reading the sequences and then reading everything since the sequences is a disappointing exercise. I do especially enjoy your work, and say Yvain’s. And yes, Eliezer’s core material is the result of several years, perhaps even a decade, of low-intensity independent research and thought, enhanced by several early high-quality community members filling in the gaps and extending it. But still: I find it surprising that a much larger LessWrong has been unable to leverage enough crowd-sourcing, or even mine enough talent from readers who already spend large amounts of time on it, to make as much progress as EY did. To give a specific example of a failure to leverage brains: the LessWrong wiki is very useful, but it does not match EY’s original hopes by a long shot.
Did EY eat all the low-hanging fruit? That seems unlikely, but maybe he did. Regardless, we don’t seem to be in the process of standing on his shoulders.
How many of these will be referenced by anyone in two years’ time?
This is a good question. As of now, probably none.
We should be careful of what conclusion we draw from that. I have two ideas: (1) they all suck (for some value of ‘suck’); (2) LW is structured the wrong way for cumulative productivity.
Indexing is key, I think.
I think the problem is that these posts aren’t well-indexed, so they tend to get forgotten once they fall off the recent posts page.
And the recent posts page is moving too fast.
(Which would not be such a problem if we had separate lists for articles like these and for the other articles.)
Wow. That answers that question. (I had previously been somewhat more convinced by the insular/unproductive discussion. It would seem I was too vulnerable to persuasion towards discontent. Oops.)
Well, as the great Iezer-el son of AIXI once wrote in the Scroll “Why Our Kind Can’t Concentrate”, LWers tend to be biased towards contrarianism, criticism, and anti-authoritarianism. So you’re hardly alone.
I may be missing the joke, but I think you are referring to “Why Our Kind Can’t Cooperate”.
It was a typo, but then I realized it was an equally valid way of describing the consequences of our biases: we can’t concentrate on any particular theory or approach...
Well, as the great Iezer-el son of AIXI once wrote in the Scroll “Why Our Kind Can’t Concentrate”, LWers tend to be biased towards contrarianism, criticism, and anti-authoritarianism. So you’re hardly alone.
I know; I’ve even found myself lamenting at times that here I too often find myself in the role of defending the orthodoxy. It’s highly unnatural!
This list noticeably lacks any historical analysis. My sense is that history studies on the level of Bureaucracy or The Politics of the Prussian Army would be met with indifference or disfavor. Analysis like that in Aramis, or the Love of Technology would be met with disfavor or outright hostility.
When the topic is human social engineering (like raising the sanity line), this is not evidence that members of this community are likely to be able to do the impossible.
I disagree with ‘noticeably’; it also lacks any civil engineering analysis.
I should probably phrase this point more nicely.
I think a good knowledge of history is essential to successfully performing massive changes to society (like raising the sanity line). Even though good historical analysis is very difficult, and prone to significant bias, its importance to the task makes its absence worthy of remark.
Do you think civil engineering analysis is necessary for this task in the same way? Honestly, I think analogizing raising the sanity line to civil engineering is moving backwards.
A study of history is no doubt useful for ensuring massive change attempts do not fail in obvious ways, but that’s not to say it’s essential, nor that it’s important enough to make the list.
In Chapter 7 of MoR, Harry thinks the following:
I wonder how difficult it would be to just make a list of all the top blood purists and kill them.
They’d tried exactly that during the French Revolution, more or less—make a list of all the enemies of Progress and remove everything above the neck—and it hadn’t worked out too well from what Harry recalled. Maybe he needed to dust off some of those history books his father had bought him, and see if what had gone wrong with the French Revolution was something easy to fix.
The answer to that last question is NO. It would not be easy to fix the trends that led to the Reign of Terror. Believing it would be easy is an error on par with believing that there is strong empirical evidence of the existence of God. Believing that it might be easy after a little investigation is on par with believing that Friendliness is an easy problem. If Harry had spent a quarter of the effort learning European history that he spent learning high-end physics, he’d know that already.
I assert that raising the sanity line is a harder problem than preventing the Reign of Terror once the French deposed Louis XVI. Not knowing history makes it essentially impossible to avoid otherwise obvious pitfalls. Reasonable folks could disagree about how much history to study, but total absence of investigation of history is not a rational amount given the stated goals.
I don’t exactly disagree, but I’m concerned you might be downplaying the bias you mention in an ancestor. My study of the field’s been fairly casual (and focused more on archaeological than historical methodology), but I’ve seen enough to know that academically respectable analyses vary wildly, and generally tend to line up with identity-group membership on the part of their exponents; most of the predictive power of history as a field also seems to lie in interpretation rather than in content. To make matters worse, we don’t have time to verify historical interpretations empirically; few respectable ones make significant predictions that’re valid on timescales less than a few decades.
If we’re interested in making predictions about the future based on the historical record, therefore, we’re left with the problem of choosing an interpretation based on its own internal characteristics. We do have some heuristics to work with, like simplicity and lack of post-facto revisions around major changes in the past, but solving this problem in a reliable way looks to me like it might be Friendliness-complete. And the consequences of failure are scarcely less dire than failing at Friendliness itself, if we’re using it to inform our approach to the latter problem.
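To make the ‘simplicity’ heuristic concrete, here is a toy sketch in Python of what trading fit against complexity across competing interpretations might look like. The interpretations, parameter counts, and likelihoods below are invented purely for illustration; a real version would need actual predictive models.

    import math

    # Toy BIC-style comparison: treat each "interpretation" of the
    # historical record as a predictive model, and trade off fit
    # (log-likelihood) against complexity (free parameters).
    # All numbers are invented for illustration.

    N = 200  # size of the historical record being explained

    interpretations = [
        # (name, free parameters, log-likelihood on the record)
        ("great-man theory",        5, -310.0),
        ("economic determinism",    8, -295.0),
        ("kitchen-sink narrative", 40, -260.0),  # best raw fit, but via many knobs
    ]

    def bic(k, log_likelihood, n):
        """Bayesian Information Criterion: lower is better."""
        return k * math.log(n) - 2 * log_likelihood

    for name, k, ll in interpretations:
        print(f"{name:25s} BIC = {bic(k, ll, N):7.1f}")

On this toy scoring, the ‘kitchen-sink narrative’ fits the record best but loses once its complexity is penalized. Of course, the hard part is the step the sketch assumes away: turning a historical interpretation into a model with a countable number of knobs and a measurable fit to the record.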
I agree with you about how difficult the problem of finding unbiased history is; the problem is probably harder than gwern suggested. At best, this problem is Friendliness-complete, in that if Omega gave us a solution to Friendliness, it would include a solution to this problem. And I’m not optimistic that the best case is true.
I think solving the problem is a prerequisite to solving Friendliness. It’s probably a prerequisite for a rigorous understanding of how CEV or its equivalent will work. The fact that the community (and, to a lesser extent, SIAI) thinks this type of analysis is irrelevant is terribly disturbing to me.
At best, this problem is Friendliness-complete, in that if Omega gave us a solution to Friendliness, it would include a solution to this problem.
I think solving the problem is a prerequisite to solving Friendliness. It’s probably a prerequisite for a rigorous understanding of how CEV or its equivalent will work.
Why do you believe this?
The FAI project is about finding the moral theory that is correct,(1) then building potential AGIs so that they implement that process of making decisions. I’m not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.
Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions. Again, history is the only data on how human societies react.
(1) I acknowledge the need to taboo “correct” in this context in order to make progress on this front.
I’m not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.
It’s possible that you’re using “correct” to mean something completely different than I would use it to mean, but I don’t see how history is supposed to be evidence that a moral theory is correct. Are you saying that historically widespread moral theories are likely to be correct?
Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions.
This is something that the AI is supposed to figure out for itself, not something that would be hardcoded in (at least not in currently favored designs).
I find the idea that ‘studying history is valuable for trying to do big things’ counterintuitive. I think it would be valuable for you to try to share your intuition as a post. I would find a set of several examples (perhaps of the form “1) big idea 2) historical evidence of why this idea won’t work well”) very useful for getting a sense of what you’re talking about. I’d also like to see some discussion of why mere discussion of object level lessons (say for example, “coordinating large groups of people is hard”) isn’t as good as discussing history.
Until someone does this, I doubt we’ll see much historical discussion.
I’d also like to see some discussion of why mere discussion of object level lessons (say, for example, “coordinating large groups of people is hard”) isn’t as good as discussing history.
Because society, unlike, say, physics, is a thick problem, the only way to have any chance of making reasonable decisions is to calibrate yourself by knowing a lot of history.
I’m sorry, I wasn’t clear. I meant “unless you are directly observing insularity and low intellectual productivity when you visit other websites”.