This list noticeably lacks any historical analysis. My sense is that history studies on the level of Bureaucracy or The Politics of the Prussian Army would be met with indifference or disfavor. Analysis like that in Aramis, or the Love of Technology would be met with disfavor or outright hostility.
When the topic is human social engineering (like raising the sanity line), this is not evidence that members of this community are likely to be able to do the impossible.
I disagree with ‘noticeably’; it also lacks any civil engineering analysis.
I should probably phrase this point nicer.
I think a good knowledge of history is essential to successfully performing massive changes to society (like raising the sanity line). Even though good historical analysis is very difficult, and prone to significant bias, its importance to the task makes its absence worthy of remark.
Do you think civil engineering analysis is necessary for this task in the same way? Honestly, I think analogizing raising the sanity line to civil engineering is moving backwards.
A study of history is no doubt useful for ensuring massive change attempts do not fail in obvious ways, but that’s not to say it’s essential, nor that it’s important enough to make the list.
In Chapter 7 of MoR, Harry thinks the following:

“I wonder how difficult it would be to just make a list of all the top blood purists and kill them.

They’d tried exactly that during the French Revolution, more or less—make a list of all the enemies of Progress and remove everything above the neck—and it hadn’t worked out too well from what Harry recalled. Maybe he needed to dust off some of those history books his father had bought him, and see if what had gone wrong with the French Revolution was something easy to fix.”
The answer to that last question is NO. It would not be easy to fix the trends that led to the Reign of Terror. Believing it would be easy is an error on par with believing that there is strong empirical evidence for the existence of God. Believing that it might be easy after a little investigation is on par with believing that Friendliness is an easy problem. If Harry had spent a quarter of the effort on learning European history that he spent on learning high-end physics, he’d know that already.
I assert that raising the sanity line is a harder problem than preventing the Reign of Terror once the French deposed Louis XVI. Not knowing history makes it essentially impossible to avoid otherwise obvious pitfalls. Reasonable folks could disagree about how much history to study, but total absence of investigation of history is not a rational amount given the stated goals.
I don’t exactly disagree, but I’m concerned you might be downplaying the bias you mention in an ancestor. My study of the field’s been fairly casual (and focused more on archaeological than historical methodology), but I’ve seen enough to know that academically respectable analyses vary wildly, and generally tend to line up with identity-group membership on the part of their exponents; most of the predictive power of history as a field also seems to lie in interpretation rather than in content. To make matters worse, we don’t have time to verify historical interpretations empirically; few respectable ones make significant predictions that’re valid on timescales less than a few decades.
If we’re interested in making predictions about the future based on the historical record, therefore, we’re left with the problem of choosing an interpretation based on its own internal characteristics. We do have some heuristics to work with, like simplicity and lack of post-facto revisions around major changes in the past, but solving this problem in a reliable way looks to me like it might be Friendliness-complete. And the consequences of failure are scarcely less dire than failing at Friendliness itself, if we’re using it to inform our approach to the latter problem.
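To make the selection problem concrete, here is a deliberately toy sketch of those two heuristics—simplicity and absence of post-facto revisions—as a single scoring rule. Every interpretation name and number in it is hypothetical, invented purely for illustration; the hard part, as noted above, is precisely that no trustworthy inputs of this kind exist.

```python
import math

# Toy model: rank competing historical interpretations by a crude
# Bayesian-style score. All names and numbers are invented for
# illustration; nothing like this data actually exists.
interpretations = [
    # (name, description_length_bits, post_facto_revisions,
    #  retrodiction_hits, retrodiction_total)
    ("great-man",     40, 3, 6, 10),
    ("economic-base", 55, 1, 8, 10),
    ("contingency",   30, 5, 5, 10),
]

def score(desc_bits, revisions, hits, total):
    # Simplicity prior: shorter descriptions get more prior mass (MDL-style).
    log_prior = -desc_bits * math.log(2)
    # Treat each post-facto revision as weak evidence of overfitting.
    log_prior -= revisions * math.log(4)
    # Likelihood: assume a sound interpretation retrodicts each settled
    # event with probability 0.75, an unsound one only by luck (0.25).
    log_lik = hits * math.log(0.75) + (total - hits) * math.log(0.25)
    return log_prior + log_lik

ranked = sorted(interpretations, key=lambda t: score(*t[1:]), reverse=True)
for name, *rest in ranked:
    print(f"{name}: {score(*rest):.1f}")
```

Note how the sketch immediately exposes the circularity: the weights on simplicity and on revisions are themselves free parameters, so the scoring rule just relocates the interpretive dispute rather than settling it.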
I agree with you about how difficult the problem of finding unbiased history is; it’s probably harder than gwern suggested. At best, this problem is Friendliness-complete, in that if Omega gave us a solution to Friendliness, it would include a solution to this problem. And I’m not optimistic that the best case is true.
I think solving the problem is a prerequisite to solving Friendliness. It’s probably a prerequisite for a rigorous understanding of how CEV or its equivalent will work. The fact that the community (and SIAI to a lesser extent) thinks this type of analysis is irrelevant is terribly disturbing to me.
Why do you believe this?
The FAI project is about finding the moral theory that is correct,(1) then building potential AGIs so that they implement that theory’s process of making decisions. I’m not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.
Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions. Again, history is the only data on how human societies react.
(1) I acknowledge the need to taboo “correct” in this context in order to make progress on this front.
It’s possible that you’re using “correct” to mean something completely different from what I would use it to mean, but I don’t see how history is supposed to be evidence that a moral theory is correct. Are you saying that historically widespread moral theories are likely to be correct?
This is something that the AI is supposed to figure out for itself, not something that would be hardcoded in (at least not in currently favored designs).
I find the idea that ‘studying history is valuable for trying to do big things’ counterintuitive. I think it would be valuable for you to try to share your intuition as a post. I would find a set of several examples (perhaps of the form “1) big idea 2) historical evidence of why this idea won’t work well”) very useful for getting a sense of what you’re talking about. I’d also like to see some discussion of why mere discussion of object level lessons (say for example, “coordinating large groups of people is hard”) isn’t as good as discussing history.
Until someone does this, I doubt we’ll see much historical discussion.
Because society, unlike, say, physics, is a thick problem; the only way to have any chance of making reasonable decisions is to calibrate yourself by knowing a lot of history.
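One toy way to cash out “calibrate yourself” is to score probability judgments on already-settled historical questions with a Brier score. The probabilities and outcomes below are placeholders, not real judgments about real events:

```python
# Toy calibration check: score your probabilistic guesses about
# already-settled historical questions against the known outcomes.
guesses = [
    # (your stated probability that the claim is true, actual outcome)
    (0.9, True),
    (0.7, False),
    (0.6, True),
    (0.2, False),
    (0.8, True),
]

def brier_score(pairs):
    # Mean squared error between stated probability and outcome:
    # 0.0 is perfect; always answering 50% earns 0.25.
    return sum((p - (1.0 if o else 0.0)) ** 2 for p, o in pairs) / len(pairs)

print(round(brier_score(guesses), 3))  # → 0.148
```

A practitioner could grade themselves this way on a batch of historical claims before trusting their own forecasts about society-scale interventions.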