I think that the above is also a good explanation for why many ML engineers working on AI or AGI don’t see any particular reason to engage with or address arguments about high p(doom).
When from a distance one views a field that:
1. Has longstanding disagreements about basic matters.
2. Has theories, but many of those theories have produced hardly any concrete predictions that differ from standard expectations, despite efforts to do so.
3. Will continue to exist regardless of how well you criticize any one part of it.
There’s basically little reason to engage with it. These features are also evidence that something is epistemically off in the field.
Maybe this evidence is misleading! But I do think that it is evidence, and not weak evidence either, and it’s very reasonable for an ML engineer not to engage deeply with the arguments because of it.
I don’t think that’s it, because I don’t think the situation in AI Alignment is all that unusual. “Science progresses one funeral at a time” is a universally applied adage. There’s also an even more general “common wisdom” that people in a debate ~never succeed in changing each others’ minds, and the only point of having a debate is to sway the on-the-fence onlookers — that debates are a performance art.
There’s something fundamentally off in the human psyche that invalidates Aumann’s agreement theorem for us.
And put like this, I think the capabilities researchers will forever disagree with the alignment researchers for this exact same reason.
It might be the case that it’s because of a more universal thing. Like sometimes time is just necessary for science to progress. And definitely the right view of debate is of changing the POV of onlookers, not the interlocutors.
But—I still suspect, without being able to quantify, that alignment is worse than the other sciences in that the standards by-which-people-agree-what-good-work-is are just uncertain.
People in alignment sometimes say that alignment is pre-paradigmatic. I think that’s a good frame—I take it to mean that the standards of what qualifies as good work are themselves not yet ascertained, among many other things. I think that if paradigmaticity is a line with math on the left and, like… pre-atomic chemistry all the way on the right, alignment is pretty far to the right. Modern RL is further to the left, and modern supervised learning with transformers much further to the left, followed by things for which we actually have textbooks that don’t go out of date every 12 months.
I don’t think this would be disputed? But it really means that it’s almost certain that >80% of alignment-related intellectual output will eventually be tossed, because that’s what pre-paradigmaticity means. (Like, 80% is arguably a best-case scenario for pre-paradigmatic fields!) Which means in turn that engaging with it is really a deeply unattractive prospect.
I guess what I’m saying is that I agree the situation for alignment is not at all bad for a pre-paradigmatic field, but if you call your field pre-paradigmatic, that seems like a pretty bad place to be in, in terms of what kind of credibility well-calibrated observers should accord you.
Edit: And like, to the degree that arguments that p(doom) is high are entirely separate from the field of alignment, this is actually a reason for ML engineers to care deeply about alignment as a way of preventing doom, even if it is pre-paradigmatic! But I’m quite uncertain that this is true.
Noting that I don’t dispute this.
An important reason this is true is that existential-risk prevention can’t be an experimental field. Some existential risks—such as asteroid impacts—can be understood with strong theory (like, settled physics). AI risk isn’t one of those (and any path by which it could become one depends on an inferential leap which is itself uncertain, namely extrapolating results from near-term AI experiments to much more powerful AI systems).
Yeah, I agree with that.
I do want to argue against the theory that science progresses one funeral at a time: the adage seems to be both generally untrue and a harmful meme (it’s a common argument used against life-extension research).
Fair enough, I suppose I don’t actually have a vast body of rigorous evidence in favour of that phrase.
Yeah, I’m starting to get a little queasy at the epistemics of AI Alignment, and I’m also getting concerned that our epistemic house isn’t in order.
Depending on what you mean by “any one part of it”, I think 3 is false. E.g., a sufficiently good critique of “AGI won’t just have human-friendly values by default” would cause MIRI to throw a party and close up shop.
Huh, roll to disbelieve on ‘sufficient to close up shop’? I don’t think this is my only crux for AI being really dangerous.
Even if sufficiently advanced AGI reliably converges to human-friendly values in a very strong sense (i.e. two rival humans trying to build AGIs for war, or many humans with many AGIs embarking on complex economic goals, will somehow always figure out the best things for humans even if it means disobeying orders by stupid humans)...
...there’s still a separate case to be made that multipolar, narrow, non-fully-superhuman AIs won’t kill us before the AGI sovereign fixes everything.
I think a more likely thing we’d want to stick around to do in that world is ‘try to accelerate humanity to AGI ASAP’. “Sufficiently advanced AGI converges to human-friendly values” is weaker than “AGI will just have human-friendly values by default”.
Well, that’s just not true.
My worry is that selection effects distort the picture: even if the true evidence for doom is weaker than LW thinks, our constant exposure to pro-doom evidence can obscure the fact that there may not be that much evidence for doom overall.
For example, if there are 50 evidence units for doom and 500 against, but LW has surfaced 25 units for doom and only 5 against, that’s a selection effect at work.
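The hypothetical numbers above can be turned into a quick simulation (a toy sketch with made-up figures—the 50/500 pool comes from the comment, the 10x surfacing bias is my own illustrative assumption):

```python
import random

random.seed(0)

# Toy numbers from the comment: 50 "evidence units" for doom, 500 against.
pool = ["doom"] * 50 + ["safe"] * 500
true_share = pool.count("doom") / len(pool)  # about 0.09

# A community that is 10x more likely to surface pro-doom evidence
# (the 10x factor is an assumption for illustration only):
weights = [10 if e == "doom" else 1 for e in pool]
surfaced = random.choices(pool, weights=weights, k=30)
surfaced_share = surfaced.count("doom") / len(surfaced)

print(f"true share of pro-doom evidence:     {true_share:.2f}")
print(f"surfaced share of pro-doom evidence: {surfaced_share:.2f}")
```

Even though the underlying pool leans heavily against doom, the surfaced sample looks roughly balanced: the selection mechanism, not the evidence, is doing the work.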
My worry is something similar may be happening for AI risk.
And the value of learning that we’re wrong is far greater than the value of merely being confirmed when we’re right.
Why do you think this?
Basically, the fact that LW has far more arguments for “alignment will be hard” than for alignment being easy is the selection effect I’m talking about.
I was also worried because ML people don’t really think that AGI poses an existential risk, and that’s evidence, in an Aumann sense.
Now I do think this is explainable, but other issues remain:
That could either be ‘we’re selecting for good arguments, and the good arguments point toward alignment being hard’, or it could be a non-epistemic selection effect.
Why do you think it’s a non-epistemic selection effect? It’s easier to find arguments for ‘the Earth is round’ than ‘the Earth is flat’, but that doesn’t demonstrate a non-epistemic bias.
… By ‘an Aumann sense’ do you just mean ‘if you know nothing about a brain, then knowing it believes P is some Bayesian evidence for the truth of P’? That seems like a very weird way to use “Aumann”, but if that’s what you mean then sure. It’s trivial evidence to anyone who’s spent much time poking at the details, but it’s evidence.
Basically, it means that the fact that other smart people working in ML/AI don’t agree with LW is itself evidence that LW is wrong, since rational reasoners updating from the same priors should see disagreements lessen until there is no disagreement left, at least where there is only one objective truth.
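The Aumann point can be made concrete with a toy Bayes calculation (my own sketch, with made-up numbers): two reasoners who share a prior and fully pool their evidence must land on the same posterior, so persistent disagreement points at unshared evidence, different priors, or non-Bayesian updating.

```python
from fractions import Fraction

def posterior(prior, likelihood_ratios):
    """Bayes in odds form: multiply prior odds by each likelihood ratio."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Common prior and two pieces of private evidence (illustrative numbers):
prior = Fraction(1, 10)
lr_a = Fraction(3, 1)  # A's private evidence favors doom 3:1
lr_b = Fraction(1, 4)  # B's private evidence favors safety 4:1

p_a = posterior(prior, [lr_a])             # A alone: 1/4
p_b = posterior(prior, [lr_b])             # B alone: 1/37
p_shared = posterior(prior, [lr_a, lr_b])  # after pooling: 1/13

print(p_a, p_b, p_shared)
```

Before pooling, A and B disagree (1/4 vs. 1/37); once every likelihood ratio is on the table, both compute the same 1/13. Lingering disagreement between real people therefore has to come from somewhere outside the ideal model.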
Now, I do think this is explainable: capabilities researchers have vast incentives to adopt the position that AGI isn’t an existential risk, given that its potential power and massive impact benefit them.
I definitely agree that this alone doesn’t show that LW isn’t doing its epistemics well, or that there is a problematic selection effect.
My worries re LW epistemics are the following:
There’s a lot more theory than empirical evidence. While this is changing for the better, theory being so predominant in LW culture is a serious problem, as theory can easily come untethered from reality and model it poorly.
Comparing the claim that AI doom will come to the claim that the world is round is sort of insane: the roundness of the world has both theoretical backing and massive empirical evidence, and LW’s case on AI is nowhere close to that. It also isn’t necessary.
More specifically, the outside view suggests that AI takeoff is probably slower and, most importantly, that catastrophe from technology has a low prior: vast numbers of claimed impending dooms and catastrophes never occur, and the catastrophic potential of nukes is almost certainly overrated. That tells us we should have much lower priors, and suggests that humanity dying out is actually hard.
I’m not asking LWers to totally change their views, but to have more uncertainty in their estimates of AI risk.
I’m not much moved by these types of arguments, essentially because (in my view) the level of meta at which they occur is too far removed from the object level. If you look at the actual points your opponents lay out, and decide (for whatever reason) that you find those points uncompelling… that’s it. Your job here is done, and the remaining fact that they disagree with you is, if not explained away, then at least screened off. (And to be clear, sometimes it is explained away, although that happens mostly with bad arguments.)
Ditto for outside view arguments—if you’ve looked at past examples of tech, concluded that they’re dissimilar from AGI in a number of ways (not a hard conclusion to reach), and moreover concluded that some of those dissimilarities are strategically significant (a slightly harder conclusion, and one that some people stumble before reaching—but not, ultimately, that hard), then the base rates of the category being outside-viewed no longer contain any independently relevant information, which means that—again—your job here is done.
(I’ve made comments to similar effect in the past, and plan to continue trumpeting this horn for as long as the meme to which it is counter continues to exist.)
This does, of course, rely on your own reasoning being correct, in the sense that if you’re wrong, well… you’re wrong. But of course, this really isn’t a particularly special kind of situation: it’s one that recurs all across life, in all kinds of fields and domains. And in particular, it’s not the kind of situation you should cower away from in fear—not if your goal is actually grasping the reality of the situation.
***
And finally (and obviously), all of this only applies to the person making the updates in the first place (which is why, you may notice, everything above the asterisks seems to inhabit the perspective of someone who believes they understand what’s happening, and takes for granted that it’s possible for them to be right as well as wrong). If you’re not in the position of such an individual, but instead conceive of yourself as primarily a third party, an outsider looking in...
...well, mostly I’d ask what the heck you’re doing, and why you aren’t either (1) trying to form your own models, to become one of the People Who Can Get Things Right As Well As Wrong, or—alternatively—(2) deciding that it’s not worth your time and effort, either because of a lack of comparative advantage, or just because you think the whole thing is Likely To Be Bunk.
It kind of sounds like you’re on the second path—which, to be clear, is totally fine! One of the predictable consequences of Daring to Disagree with Others is that Other Others might look upon you, notice that they can’t really tell who’s right from the outside, and downgrade their confidence accordingly. That’s fine, and even good in some sense: you definitely don’t want people thinking they ought to believe something even in [what looks to them like] the absence of any good arguments for it; that’s a recipe for irrationality.
But that’s the whole point, isn’t it—that the perspectives of the Insider, the Researcher Trying to Get At the Truth, and the Outsider, the Bystander Peering Through the Windows—will not look identical, and for obvious reason: they’re different people standing in different (epistemic) places! Neither one of them should agonize about the fact that the former has a tighter probability distribution than the latter; that’s what happens when you proceed further down the path—ideally the right path, but any path has the same property: that your probability distribution narrows as you go further down, and your models become more specific and more detailed.
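The narrowing-distribution point has a standard Bayesian illustration (a toy sketch of my own, not anything from the thread): an investigator’s posterior over some unknown rate tightens as observations accumulate, while an outsider with no data keeps the wide prior. The 8/2 and 80/20 observation splits below are made-up data.

```python
import math

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return math.sqrt(var)

# Outsider: uniform prior over the unknown rate, i.e. Beta(1, 1).
outsider = beta_sd(1, 1)

# Insider: same prior updated on 10, then 100 observations.
after_10 = beta_sd(1 + 8, 1 + 2)
after_100 = beta_sd(1 + 80, 1 + 20)

print(f"{outsider:.3f} > {after_10:.3f} > {after_100:.3f}")
```

The insider’s distribution narrows simply because they’ve gone further down the path; the outsider shouldn’t expect their own spread to match, and neither should agonize over the mismatch.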
So go ahead and downgrade your assessment of “LW epistemics” accordingly, if that’s what you’ve decided is the right thing to do in your position as the outsider looking in. (Although I’d argue that what you’d really want is to downgrade your assessment of MIRI, instead of LW as a whole; they’re the most extreme ones in the room, after all. For the record, I think this is Pretty Awesome, but your mileage may vary.) But don’t demand that the Insider be forced to update their probability distribution to match yours—to widen their distribution, to walk back the path they’ve followed in the course of forming their detailed models—simply because you can’t see what [they think] they’re seeing, from their vantage point!
Those people are down in the trenches for a reason: they’re investigating what they see as the most likely possibilities, and letting them do their work is good, even if you think they haven’t justified their (seeming) confidence level to your satisfaction. They’re not trying to.
(Oh hey, I think that has something to do with the title of the post we’re commenting on.)
Thank you for answering; I now get why the inside view comes with narrower probability distributions.
I hope that, whatever the truth turns out to be, LWers carry on, while always taking care to keep their epistemics in order.