There’s an argument to be made that even if you’re not an altruist, that “societal default” only works if the next fifty years play out more or less the same way the last fifty did; if things change radically (e.g., if most jobs are automated away), then following the default path might leave you badly screwed. Of course, people are likely to have differing opinions on how probable that is.
I don’t feel like my work on AI has given me any particular advantage in figuring out how to deal with automation, especially since the kind of AI we’re thinking about is mostly AGI and job-threatening automation is mostly narrow AI. I don’t think I have a major advantage in figuring out which jobs seem likely to persist and which ones won’t—at least not one that would be a further advantage on top of just reading the existing expert reports on the topic.
I think that the main difference between me and the average expert-report-reading, reasonably smart person is that I’m less confident in the expert opinion telling us anything useful / anybody being able to meaningfully predict any of this, but that just means that I have even less of an idea of what I should do in response to these trends.
I think of LW-style rationality as giving you the set of tools to realize in the first place when the default path available to you is likely to be insufficient, and the impetus to actually do something differently.
I _think_ it should still be a useful skill for evaluating and acting on various career stuff. I’m not 100% sure it’s better than having domain expertise in what-things-are-likely-to-be-automated-last, but I think it at least helps with being calibrated about uncertainty and with making some generally useful, broad strategic decisions.
I agree that it’s useful in realizing that the default path is likely to be insufficient. I’m not sure that it’s particularly useful in helping figure out what to do instead, though. I feel like there have been times when LW rationality has even been a handicap to me, in that it has left me with an understanding of how every available option is somehow inadequate, but failed to suggest anything that would be adequate. The result has been paralysis, when “screw it, I’ll just do something” would probably have produced a better result.
It seems to me that this is mostly orthogonal to “LW rationality” (at least, the “classic” form of it), and is a matter of mindset. I’ve long (always?) been of the “screw it, I’ll just do something” mindset; I can report that it works quite well and has produced good results; I have never experienced any disconnect between it and, say, anything in the Sequences.
> LW rationality has … left me with an understanding of how every available option is somehow inadequate, but failed to suggest anything that would be adequate
Well, there’s something odd about that formulation, isn’t there? You’re treating “adequacy” as a binary property, it seems; but that’s not inherent in anything I recognize as “LW rationality”! Surely the “pure” form of the instrumental imperative to “maximize expected utility” (or something similar in spirit if not in implementation details) doesn’t have any trouble whatsoever with there being multiple options, all of which are somehow less than ideal. Pick whatever’s least bad, and go to it…
> Well, there’s something odd about that formulation, isn’t there? You’re treating “adequacy” as a binary property, it seems; but that’s not inherent in anything I recognize as “LW rationality”.
Well, let’s use the automation thing as an example.
I know that existing track records for how much career security etc. various jobs offer aren’t going to be of much use. I also know that existing expert predictions about which jobs will stay secure aren’t necessarily very reliable either.
So now I know that I shouldn’t rely on the previous wisdom on the topic. The average smart person reading the news has probably figured this out too, with all the talk about technological unemployment. I think that LW rationality has given me a slightly better understanding of the limitations of experts, so compared to the average smart person, I know that I probably shouldn’t rely too much on the new thinking on the topic, either.
Great. But what should I do instead? LW rationality doesn’t really tell me, so in practice—if I go with the “screw it, I’ll just do something” mentality, I just fall back into going with the best expert predictions anyway. Going with the “screw it” mentality means that LW rationality doesn’t hurt me in this case, but it doesn’t particularly benefit me, either. It just makes my predictions less certain, without changing my actions.
> Surely the “pure” form of the instrumental imperative to “maximize expected utility” (or something similar in spirit if not in implementation details) doesn’t have any trouble whatsoever with there being multiple options, all of which are somehow less than ideal. Pick whatever’s least bad, and go to it…
Logically, yes. That’s what I do these days.
That said, many people need some reasonable-seeming level of confidence before embarking on a project. “I don’t think that any of this is going to work, but I’ll just do something anyway” tends to be psychologically hard. (Scott has speculated that “very low confidence in anything” is what depression is.)
My anecdotal observation is that there are some people—including myself in the past—who encounter LW, have it hammered in how uncertain they should be about everything, and then this contributes to driving their confidence levels down to the point where they’re frequently paralyzed when making decisions. All options feel too uncertain to be worth acting upon, and none of them meets whatever minimum threshold the brain requires to consider something worth even trying, so nothing gets done.
I say that LW sometimes contributes to this, not that it causes it; it doesn’t have that effect on everyone. You probably need previous psychological issues, such as a pre-existing level of depression or generally low self-confidence, for this to happen.
> I say that LW sometimes contributes to this, not that it causes it; it doesn’t have that effect on everyone. You probably need previous psychological issues, such as a pre-existing level of depression or generally low self-confidence, for this to happen.
Yes, I think I agree with your view on this. (I’d add a caveat that I suspect it’s not quite depression that does it, but something else, which I’m not sure I can name accurately enough to be useful… I will say this: I was severely depressed around the time I came across LessWrong—and let me tell you, LW rationality definitely did not have this effect you describe on me… Anecdotal observation of others, since then, has confirmed my impression.)
> The average smart person reading the news has probably figured this out too … LW rationality doesn’t hurt me in this case, but it doesn’t particularly benefit me
I think you might be—from your LW-rationality-influenced vantage point—underestimating how prevalent various cognitive distortions (or, let’s just say it in plain language: stupidity and wrongheadedness) are in even “average smart people”.
Much of the best of what LW has to offer has always been (as one old post here put it) “rationality as non-self-destruction”. The point isn’t necessarily that you’re rational, and therefore, you win; the point is that by default, you lose, in various stupid and avoidable ways; LW-style rationality helps you not do that.
Now, that might not get you all the way to “winning”. You do still need heuristics like “if there aren’t any good options, just take the least bad one and go for it, or at any rate do something instead of just sitting around”, which are, to a large degree, common-sense rules (which, while they would certainly be included in any total, idealized version of “rationality principles”, are by no means unique to LessWrong). But without LessWrong, it’s entirely possible that you’d just fail in some dumb way.
My personal experience is that I know a lot of smart people, and what I observe is that intelligence is no barrier to irrationality and nonsensical beliefs/actions. My impression is that there is a correlation, among the smart people I know, between how consistently they can avoid this sort of thing, and how much exposure they’ve had to LessWrong-style rationality (even if secondhand), or similar ideas.
> I’d add a caveat that I suspect it’s not quite depression that does it, but something else, which I’m not sure I can name accurately enough to be useful…
This sounds right to me. Something in the rough space of depression, but not quite the same thing.
> I think you might be—from your LW-rationality-influenced vantage point—underestimating how prevalent various cognitive distortions (or, let’s just say it in plain language: stupidity and wrongheadedness) are in even “average smart people”.
That’s certainly possible, and I definitely agree that there are many kinds of wrongheadedness that are common in smart people but seem to be much less common among LW readers.
That said, my impression of “average smart people” mostly comes from the people I’ve met at university, hobbies, and the like. I don’t live in the Bay or near any of the rationalist hubs. So most of the folks I interact with, and am thinking about, aren’t active LW readers (though they might have run across the occasional LW article). It’s certainly possible that I’m falling victim to some kind of selection bias in my impression of the average smart person, but I doubt that being too influenced by LW rationality is the filter in question.
> Much of the best of what LW has to offer has always been (as one old post here put it) “rationality as non-self-destruction”. The point isn’t necessarily that you’re rational, and therefore, you win; the point is that by default, you lose, in various stupid and avoidable ways; LW-style rationality helps you not do that.
Hmm. “Rationalists might not win, but at least they don’t lose just because they’re shooting themselves in the foot.” I like that, and think that I agree.
> I think that LW rationality has given me a slightly better understanding of the limitations of experts, so compared to the average smart person, I know that I probably shouldn’t rely too much on the new thinking on the topic, either.
> Great. But what should I do instead? LW rationality doesn’t really tell me, so in practice—if I go with the “screw it, I’ll just do something” mentality, I just fall back into going with the best expert predictions anyway.
It seems like LW rationality would straightforwardly tell you that this means you ought to keep your eggs in multiple baskets rather than investing everything in the single top expert opinion. (Assuming you’re risk-averse, which it sounds like you are.)
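The risk-aversion point can be made concrete with a toy expected-utility calculation. This is purely an illustrative sketch with made-up numbers: two hypothetical “bets”, where bet A has the higher expected payoff, but a concave (risk-averse) utility function such as log still favors splitting effort between A and the steadier B over going all-in on A:

```python
import math

# Two made-up states of the world, each with probability 0.5.
# Bet A pays 10 in the first state but only 1 in the second;
# the hedge B pays a steady 4 in both states.
scenarios = [
    # (probability, payoff_A, payoff_B)
    (0.5, 10.0, 4.0),
    (0.5, 1.0, 4.0),
]

def expected_utility(weight_a):
    """Expected log-utility of putting `weight_a` of your effort into A, the rest into B."""
    return sum(
        prob * math.log(weight_a * payoff_a + (1 - weight_a) * payoff_b)
        for prob, payoff_a, payoff_b in scenarios
    )

# A alone has the higher expected payoff (5.5 vs. 4.0), so a pure
# expected-value maximizer goes all-in on A. A risk-averse agent doesn't:
best = max((w / 10 for w in range(11)), key=expected_utility)
# With these numbers the optimum is a mixed allocation: the exact
# maximizer of 0.5*ln(4 + 6w) + 0.5*ln(4 - 3w) is w = 1/3.
```

None of this says which baskets to pick, of course; it only shows how “diversify” falls straight out of maximizing expected utility once the utility function is concave.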
Does LW-style rationality give you any major advantage in figuring out what to do as a consequence of major automation, though?
I think the meme-set of “expect AI to be a really big deal and put a lot of your effort into steering how AI goes” does do that in expectation.
Well… but that only applies to a very small subset of people—even relative to all people who are likely ever to be interested in “rationality”!
Edit: To be clear, I actually think the answer to Kaj’s question is “yes, it does”—just not for this reason!
True, and I agree with the edit that it’s also useful for other reasons; but the expected inferential distance on that was larger, so I figured it was better to make the easy-to-make point than none at all.