Though I enthusiastically endorse the concept of rationality, I often find myself coming to conclusions about Big Picture issues that are quite foreign to the standard LW conclusions. For example, I am not signed up for cryonics even though I accept the theoretical arguments in favor of it, and I am not worried about unfriendly AI even though I accept most of EY’s arguments.
I think the main reason is that I am 10x more pessimistic about the health of human civilization than most other rationalists. I’m not a cryonicist because I don’t think companies like Alcor can survive the long period of stagnation that humanity is headed towards. I don’t worry about UFAI because I don’t think our civilization has the capability to achieve AI. It’s not that I think AI is spectacularly hard, I just don’t think we can do Hard Things anymore.
Now, I don’t know whether my pessimism is more rational than others’ optimism. LessWrong, and rationalists in general, probably have a blind spot relative to questions of civilizational inadequacy because those questions relate to political issues, and we don’t talk about politics. Is there a way we can discuss civilizational issues without becoming mind-killed? Or do we simply have to accept that civilizational issues are going to create a large error bar of uncertainty around our predictions?
It’s not that I think AI is spectacularly hard, I just don’t think we can do Hard Things anymore.
I’m sympathetic to the idea that we can’t do Hard Things, at least in the US and much of the rest of the West. Unfortunately, progress in AI seems like the kind of Hard Thing that is still possible. Stagnation has hit atoms, not bits. There does seem to be a consensus that AI is not a stagnant field at all, but rather one that is consistently progressing.
Part of my worldview is that progress, innovation and competence in all areas of science, technology, and other aspects of civilization are correlated. Societies that are dynamic and competent in one area, such as physics research, will also be dynamic and competent in other areas, such as infrastructure and good governance.
What would the world look like if that hypothesis were false? Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield. As a random example, imagine it turned out that Egypt actually had the world’s best research and technology in the field of microbiology. Or we might observe that Indonesia had the best set of laws, courts, and legal knowledge. Such observations would falsify my hypothesis.
If the theory is true, then the fact that the US still seems innovative in CS-related fields is probably a transient anomaly. One obvious thing that could derail American innovation is catastrophic social turmoil.
Optimists could accept the civilizational competence correlation idea, but believe that US competence in areas like infotech is going to “pull up” our performance in other areas, at which we are presently failing abjectly.
Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield.
Soviet Russia did very well with space and nukes. On the other hand, one of the reasons it imploded was that it could not keep up doing very well with space and nukes.
I think the correlation you’re talking about exists, but it’s not that strong (or, to be more precise, its effects could be overridden by some factors).
There is also the issue of relative position. Brain drain is important, and at the moment the US is the preferred destination of energetic smart people from all over the world. If that changes, the US will lose much of its edge.
I used to think that the Soviet Union was worse at economics, but at least better at things like math. Then I read some books about math in the Soviet Union and realized that pretty much all mathematical progress there came from people who were not supported by the regime, because the regime preferred to support the ones good at playing political games, even if they were otherwise completely incompetent. (Imagine equivalents of Lysenko; e.g. people arguing that schools shouldn’t teach vectors, because vectors are a “bourgeois pseudoscience”. No, I am not making this one up.) There were many people who couldn’t get a job in academia and had to work in factories, and who did a large part of the math research in their free time.
There were a few lucky exceptions. For example, Kolmogorov once invented something that was useful for WW2 warfare, so as a reward he became one of the few competent people in the Academy of Sciences. He quickly used his newly gained political power to create a few awesome projects, such as the international mathematical olympiad, the mathematical journal Kvant, and high schools specializing in mathematics. After a few years he lost his influence again, because he wasn’t very good at playing political games, but his projects remained.
Seems like the lesson is that when insanity becomes the official ideology, it ruins everything, unless something like war provides feedback from reality, and even then the islands of sanity are limited.
What were these books? I don’t speak Russian, so I’ll probably follow up with: who were a few important mathematicians who worked in factories?
I’ve heard a few stories of people being demoted from desk jobs to manual labor after applying for exit visas, but that’s not quite the same as never getting a desk job in the first place. I’ve heard a lot of stories of badly-connected pure mathematicians being sent to applied think tanks, but that’s pretty cushy and there wasn’t much obligation to do the nominal work, so they just kept doing pure math. I can’t remember them, but I think I’ve heard stories of mathematicians getting non-research desk jobs, but doing math at work.
Thanks! Since that’s in English, I will take at least a look at it.
Gessen does not strike me as a reliable source, so for now I am completely discounting everything you said about it, in favor of what I have heard directly from Russian mathematicians, which is a lot less extreme.
Part of my worldview is that progress, innovation and competence in all areas of science, technology, and other aspects of civilization are correlated.
I’m sure they’re correlated but not all that tightly.
What would the world look like if that hypothesis were false? Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield. As a random example, imagine it turned out that Egypt actually had the world’s best research and technology in the field of microbiology.
I think there are some pretty good examples. The Soviets made great achievements in spaceflight and nuclear energy research in spite of having terrible economic and social policies. The Mayans had sophisticated astronomical calendars, but they also practiced human sacrifice and never invented the wheel.
If the theory is true, then the fact that the US still seems innovative in CS-related fields is probably a transient anomaly.
I doubt it, but even if true it doesn’t save us, since plenty of other countries could develop AGI.
Is there a way we can discuss civilizational issues without becoming mind-killed?
Sure there is. Start with the usual rationalist mantra: what do you believe? Why do you believe it? How would you describe this Great Stagnation? Why do you believe we are headed towards this? And let us pick up from there.
LessWrong, and rationalists in general, probably have a blind spot relative to questions of civilizational inadequacy because those questions relate to political issues, and we don’t talk about politics.
I don’t think “we don’t talk about politics” is true to the extent that people are going to have blind spots about it.
Politics isn’t completely banned from LW. There are many venues, from Facebook discussions with LW folks to Yvain’s blog, various EA fora, and Omnilibrium, that also cover politics.
I think we even had the question of whether people believe we are in a great stagnation in a past census.
I think the main reason is that I am 10x more pessimistic about the health of human civilization than most other rationalists.
How do you know? Did you actually look at the relevant census numbers to come to that conclusion? If so, quoting the numbers would make your post more data-driven and more substantial. If your goal is to have an important discussion about civilizational issues, being more data-driven can be quite useful.
What were these books?

Masha Gessen: Perfect Rigour: A Genius and the Mathematical Breakthrough of the Century.
This is a story about one person, but there is a lot of background information on doing math in the Soviet Union.
Soviet Russia did very well with space and nukes.

Many of the same people worked on both projects. In particular, Keldysh’s Calculation Bureau.
Is there a way we can discuss civilizational issues without becoming mind-killed?

A LWer created Omnilibrium for that.

Any results? (I am personally unimpressed by the few random links I have seen.)
I don’t think companies like Alcor can survive the long period of stagnation that humanity is headed towards.

Humanity as a whole, or just the West?

Is there a way we can discuss civilizational issues without becoming mind-killed?

I don’t see why not.

Or do we simply have to accept that civilizational issues are going to create a large error bar of uncertainty around our predictions?

That, too. That large error bar of uncertainty isn’t going to go away even if we talk about the issues :-)