I’m not much moved by these types of arguments, essentially because (in my view) the level of meta at which they occur is too far removed from the object level. If you look at the actual points your opponents lay out, and decide (for whatever reason) that you find those points uncompelling… that’s it. Your job here is done, and the remaining fact that they disagree with you is, if not explained away, then at least screened off. (And to be clear, sometimes it is explained away, although that happens mostly with bad arguments.)
Ditto for outside view arguments—if you’ve looked at past examples of tech, concluded that they’re dissimilar from AGI in a number of ways (not a hard conclusion to reach), and moreover concluded that some of those dissimilarities are strategically significant (a slightly harder conclusion, and one that some people stumble before reaching—but not, ultimately, that hard), then the base rates of the category being outside-viewed no longer contain any independently relevant information, which means that—again—your job here is done.
(I’ve made comments to similar effect in the past, and plan to continue trumpeting this horn for as long as the meme to which it is counter continues to exist.)
This does, of course, rely on your own reasoning being correct, in the sense that if you’re wrong, well… you’re wrong. But this really isn’t a particularly special kind of situation: it’s one that recurs all across life, in all kinds of fields and domains. And in particular, it’s not the kind of situation you should cower away from in fear—not if your goal is actually grasping the reality of the situation.
***
And finally (and obviously), all of this only applies to the person making the updates in the first place (which is why, you may notice, everything above the asterisks seems to inhabit the perspective of someone who believes they understand what’s happening, and takes for granted that it’s possible for them to be right as well as wrong). If you’re not in the position of such an individual, but instead conceive of yourself as primarily a third party, an outsider looking in...
...well, mostly I’d ask what the heck you’re doing, and why you aren’t either (1) trying to form your own models, to become one of the People Who Can Get Things Right As Well As Wrong, or—alternatively—(2) deciding that it’s not worth your time and effort, either because of a lack of comparative advantage, or just because you think the whole thing is Likely To Be Bunk.
It kind of sounds like you’re on the second path—which, to be clear, is totally fine! One of the predictable consequences of Daring to Disagree with Others is that Other Others might look upon you, notice that they can’t really tell who’s right from the outside, and downgrade their confidence accordingly. That’s fine, and even good in some sense: you definitely don’t want people thinking they ought to believe something even in [what looks to them like] the absence of any good arguments for it; that’s a recipe for irrationality.
But that’s the whole point, isn’t it—that the perspectives of the Insider, the Researcher Trying to Get At the Truth, and the Outsider, the Bystander Peering Through the Windows, will not look identical, and for obvious reasons: they’re different people standing in different (epistemic) places! Neither of them should agonize over the fact that the former has a tighter probability distribution than the latter; that’s what happens when you proceed further down the path—ideally the right path, but any path has the same property: your probability distribution narrows as you go further down, and your models become more specific and more detailed.
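The narrowing claim can be made concrete with a toy Bayesian model (my own sketch, not from the original comment; the Beta-Bernoulli setup and all names here are assumptions chosen for illustration): an observer who starts from a uniform prior over a coin's bias and updates on a growing stream of flips ends up with a much tighter posterior than one who has seen little.

```python
# Toy illustration: posterior uncertainty shrinks as evidence accumulates.
# Beta-Bernoulli model: a uniform Beta(1, 1) prior over a coin's bias,
# updated on a stream of 0/1 observations.

def beta_variance(a, b):
    """Variance of a Beta(a, b) distribution."""
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

def posterior_after(observations, a=1.0, b=1.0):
    """Conjugate update of a Beta(a, b) prior on a list of 0/1 observations."""
    for x in observations:
        if x:
            a += 1
        else:
            b += 1
    return a, b

data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1] * 5  # 50 hypothetical observations
variances = []
seen = []
for x in data:
    seen.append(x)
    a, b = posterior_after(seen)
    variances.append(beta_variance(a, b))

# The "insider" with 50 observations holds a far tighter distribution
# than the "outsider" who has seen only one.
print(variances[0], variances[-1])
```

Note the caveat: a single surprising observation can briefly widen the posterior, but the overall trend as evidence accumulates is toward a narrower one, which is the property the analogy relies on.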
So go ahead and downgrade your assessment of “LW epistemics” accordingly, if that’s what you’ve decided is the right thing to do in your position as the outsider looking in. (Although I’d argue that what you’d really want is to downgrade your assessment of MIRI, instead of LW as a whole; they’re the most extreme ones in the room, after all. For the record, I think this is Pretty Awesome, but your mileage may vary.) But don’t demand that the Insider be forced to update their probability distribution to match yours—to widen their distribution, to walk back the path they’ve followed in the course of forming their detailed models—simply because you can’t see what [they think] they’re seeing, from their vantage point!
Those people are down in the trenches for a reason: they’re investigating what they see as the most likely possibilities, and letting them do their work is good, even if you think they haven’t justified their (seeming) confidence level to your satisfaction. They’re not trying to.
(Oh hey, I think that has something to do with the title of the post we’re commenting on.)
Thank you for answering; I now understand why the inside view comes with narrower probability distributions.
I hope that, whatever the truth turns out to be, LWers carry on, while always taking care that their epistemics stay sound.