Although I don’t fully understand the reference, I think I sort of see where it’s going.
Either way though, epistemological practice is what one does in coming up with a way of modeling economic activity or anything else, and epistemological commentary is one’s attempt to explain the fundamentals of what exactly is going on when one does the former.
In this case, you know it’s the result of epistemological practice when it’s an actual economic model or whatever (e.g., the Austrian Business Cycle Theory), and you know it’s epistemological commentary when they start talking about a priori statements, or logical positivism, or something like that.
In other words, they’re batshit crazy, but somehow manage to say some sensible things anyway? I’d be uneasy about assuming that getting the right answers implies they must be doing something rationally right underneath, and would only believe that they believe that stuff about economics being an a priori science.
Re the Halo Jones reference: At one point, Halo Jones has joined the army fighting an interstellar war, and in a rare moment of leisure is talking with a hard-bitten old soldier. The army is desperate to get new recruits into the field as fast as possible, and the distinction between training exercises and actual combat is rather blurred. Halo asks her (it’s an all-female army), “How do you know if it was combat, or just combat experience?” She replies, “If you’re still alive afterwards, it was just combat experience.”
Far from being batshit crazy, Mises was an eminently reasonable thinker. It’s just that he didn’t do a very good job communicating his epistemological insights (which is understandable, given how insanely difficult it is to explain what he was trying to get at), but he did communicate enough of the economic theory well, and thus ended up with a couple of generations of followers who extended his economics rather well in plenty of ways, but systematically butchered their interpretation of his epistemological insights.
People compartmentalize, they operate under obstructive identity issues, their beliefs in one area don’t propagate to all others, much of what they say or write is signaling that’s incompatible with epistemic rationality, etc. Many of these are tangled together. Yeah, it’s more than possible for people to say batshit insane things and then turn around and make a bunch of useful insights. The epistemological commentary could almost be seen as signaling team affiliation before actually getting to the useful stuff.
Just consider the kind of people who are bound to become Austrian economists: anti-authority, etc. They have no qualms about breaking from the mainstream in any way whatsoever. They already think most people are completely batshit insane, and that the world is a joke and is going down the tubes. There’s nothing really to constrain them from sounding insane on epistemology; it’s not a red flag to them if everyone else seems to disagree.
Forget the epistemology. They’re just parroting confused secondary accounts of the work of a thinker who himself utterly failed in his endeavor to explain where he was coming from on this topic, and they’re parroting it to signal team affiliation, a break from the mainstream, etc. Beliefs don’t always propagate throughout the whole web, especially when they’re less usefully analyzed as “beliefs” and more as mere words spilled for the purpose of signaling something.
If you read and listen to enough of the modern Austrian school (a tragically hard prospect, given how allergic most LW-style rationalists would be to the presentation and style of argumentation), what’s going on in the world, or rather what’s going so wrong in society, becomes incredibly clear, and half of everything falls into place. It’s one of the two major pieces of the puzzle; the other may be found on Less Wrong.
Your proposed synthesis of Mises and Yudkowsky(?) is moderately interesting, although your claims for the power and importance of such a synthesis suggest naivete. You say that “what’s going so wrong in society” can be understood given two ingredients, one of which can be obtained by distilling the essence of the Austrian school, the other of which can be found here on LW, but you don’t say what it is.

As usual, the idea that the meaning of life, or the solution to the world-problem, or even just the explanation of the contemporary world, can be found in a simple juxtaposition of ideas will sound naive and unbelievable to anyone with some breadth of life experience (or just a little historical awareness). I give Friendly AI an exemption from such a judgement because by definition it’s about superhuman AI and the decoding of the human utility function: apocalyptic developments that would be not just a line drawn in history but an evolutionary transition, and an evolutionary transition is a change big enough to genuinely transform or replace the “human condition”. But just running together a few cool ideas is not a big enough development to do that. The human condition would continue to contain phenomena which are unbearable and yet inevitable, and that in turn guarantees that, whatever intellectual and cultural permutations occur, there will always be enough dissatisfaction to cause social dysfunction.

Nonetheless, I do urge you to go into more detail about what you’re talking about and what the two magic insights are.
Oh sorry. I didn’t mean that “what’s going so wrong in society” is a single piece that can be understood given those two ingredients but is otherwise destined to remain confusing. I meant that what one finds on Less Wrong explains part of what’s going so wrong, and Austrian economics (if properly distilled) elucidates the other.
I should clarify, though, that Less Wrong certainly provides the bigger-picture understanding of the situation, with the whole outdated-hardware analysis and so on, so it would be less like two symmetrical pieces being fit together and more like a certain distilled form of Austrian economics being slotted into a missing section of the Less Wrong worldview.
I also didn’t mean to suggest that adding some insight from Less Wrong to some insight from the Austrian school would suddenly reveal the solution to civilization’s problems. Rather, what I’m suggesting would just be another step, perhaps even a very large step, in the process of understanding the issues we face, and thus would simply put us in a better position to figure out what to do to make it significantly more likely that the future will go well.
Not two magic insights, but two very large bodies of knowledge and information that would be very useful to synthesize. Less Wrong has a lot of insights about outdated hardware, cognitive biases, how our minds work and where they’re likely to go systematically wrong, certain existential risks, AI, etc., and Austrian economics elucidates something much more controversial: the joke that is the current economic, political, and perhaps even social organization of every single nation on Earth.
As people from Less Wrong, what else should we expect but complete disaster? The current societal structure is the result of tribal political instincts gone awry in this new, evolutionarily discordant situation of massive tribes of millions of people. Our hardware and factory presets were optimized for hunter-gatherer groups of at most a couple hundred people (?), but our groups now number in the millions. It would be an absolute miracle if societal organization at this point in history were not completely insane. Austrian economics details the insanity at length.
I have also found claims that one or a few simple ideas can solve huge swaths of the world’s problems to be a sign of naivete, but another exception is when there is mass delusion or confusion due to systematic errors. Provided such pervasive and damaging errors do exist, merely clearing up those errors would be a major service to humanity. In this sense, Less Wrong and Misesian epistemology share a goal: to eliminate flawed reasoning. I am not sure why Mises chose to put forth this LW-style message as a positive theory (praxeology), given that the content seems to me entirely negative: it formalizes and systematizes many of the corrections economists (even mainstream ones) must have been tired of making. Perhaps he found that people were more receptive to hearing a “competing theory” than to having their own theories covered in red ink.
Considering we already had a post on the epistemic problems of the school, would you be willing to write a post or sequence on what you consider particularly interesting or worthwhile in Austrian economics?
Yes. May be a while though.
A possible analogy for how Crux views Austrian economics might be how most of us view the Copenhagen-school quantum mechanics of Bohr, Heisenberg, et al.: excellent science done by top-notch scientists, unfortunately intertwined with a confused epistemology which they thought was essential to the science but actually wasn’t. (I don’t know enough about Austrian economics to say whether the analogy is correct at any level, but it seems a sensible interpretation of what Crux says.)