1. For reasons discussed in comments on previous posts here, I’m wary of using words like “lie” or “scam” to mean “honest reporting of unconsciously biased reasoning”. If I criticized this post by calling you a liar trying to scam us, and then backed down to “I’m sure you believe this, but you probably have some bias, just like all of us”, I expect you would be offended. But I feel like you’re making this equivocation throughout this post.
2. I agree business is probably overly optimistic about timelines, for about the reasons you mention. But reversed stupidity is not intelligence. Most of the people I know pushing short timelines work in nonprofits, and many of the people you’re criticizing in this post are AI professors. Unless you got your timelines from industry, which I don’t think many people here did, them being stupid isn’t especially relevant to whether we should believe the argument in general. I could find you some field (like religion) where people are biased to believe AI will never happen, but unless we took them seriously before this, the fact that they’re wrong doesn’t change anything.
3. I’ve frequently heard people who believe AI might be near say that their side can’t publicly voice their opinions, because they’ll get branded as loonies and alarmists, and therefore we should adjust in favor of near-termism because long-timelinists get to unfairly dominate the debate. I think it’s natural for people on all sides of an issue to feel like their side is uniquely silenced by a conspiracy of people biased towards the other side. See Against Bravery Debates for evidence of this.
4. I’m not familiar with the politics in AI research. But in medicine, I’ve noticed that doctors who go straight to the public with their controversial medical theory are usually pretty bad, for one of a couple of reasons. Number one, they’re usually wrong, people in the field know they’re wrong, and they’re trying to bamboozle a reading public who aren’t smart enough to figure out that they’re wrong (but who are hungry for a “Galileo stands up to hidebound medical establishment” narrative). Number two, there’s a thing they can do where they say some well-known fact in a breathless tone, and then get credit for having blown the cover of the establishment’s lie. You can always get a New Yorker story by writing “Did you know that, contrary to what the psychiatric establishment wants you to believe, SOME DRUGS MAY HAVE SIDE EFFECTS OR WITHDRAWAL SYNDROMES?” Then the public gets up in arms, and the psychiatric establishment has to go on damage control for the next few months and strike an awkward balance between correcting the inevitable massive misrepresentations in the article while also saying the basic premise is !@#$ing obvious and was never in doubt. When I hear people say something like “You’re not presenting an alternative solution” in these cases, they mean something like “You don’t have some alternate way of treating diseases that has no side effects, so stop pretending you’re Galileo for pointing out a problem everyone was already aware of.” See Beware Stephen Jay Gould for Eliezer giving an example of this, and Chemical Imbalance and the followup post for me giving an example of this. I don’t know for sure that this is what’s going on in AI, but it would make sense.
I’m not against modeling sociopolitical dynamics. But I think you’re doing it badly, by taking some things that people on both sides feel, applying it to only one side, and concluding that means the other is involved in lies and scams and conspiracies of silence (while later disclaiming these terms in a disclaimer, after they’ve had their intended shocking effect).
I think this is one of the cases where we should use our basic rationality tools, like probability estimates. Just from reading this post, I have no idea what probability Gary Marcus, Yann LeCun, or Steven Hansen puts on AGI in ten years (or fifty years, or one hundred years). For all I know, all of them (and you, and me) have exactly the same probability, and their argument is entirely political, about which side is dominant vs. oppressed and who should gain or lose status (remember the issue where everyone assumes LWers are overly certain cryonics will work, whereas in fact they’re less sure of this than the general population and just describe their beliefs differently). As long as we keep engaging on that relatively superficial monkey-politics level of “The other side are liars who are silencing my side!”, we’re just going to be drawn into tribalism around the near-timeline and far-timeline tribes, and our ability to make accurate predictions is going to suffer. I think this is worse than any improvement we could get by making sociopolitical adjustments at this level of resolution.
Re: 2: nonprofits and academics have even more incentives than business to claim that a new technology is extremely dangerous. Think tanks and universities are in the knowledge business; they are more valuable when people seek their advice. “This new thing has great opportunities and great risks; you need guidance to navigate and govern it” is a great advertisement for universities and think tanks. Which doesn’t mean AI, narrow or strong, doesn’t actually have great opportunities and risks! But nonprofits and academics aren’t immune from the incentives to exaggerate.
Re: 4: I have a different perspective. The loonies who go to the press with “did you know psychiatric drugs have SIDE EFFECTS?!” are not really a threat to public information to the extent that they are telling the truth. They are a threat to the perceived legitimacy of psychiatrists. This has downsides (some people who could benefit from psychiatric treatment will fear it too much) but fundamentally the loonies are right that a psychiatrist is just a dude who went to school for a long time, not a holy man. To the extent that there is truth in psychiatry, it can withstand the public’s loss of reverence, in the long run. Blind reverence for professionals is a freebie, which locally may be beneficial to the public if the professionals really are wise, but is essentially fragile. IMO it’s not worth trying to cultivate or preserve. In the long run, good stuff will win out, and smart psychiatrists can just as easily frame themselves as agreeing with the anti-psych cranks in spirit, as being on Team Avoid Side Effects And Withdrawal Symptoms, Unlike All Those Dumbasses Who Don’t Care (all two of them).
I don’t actually know the extent to which Bernie Madoff actually was conscious that he was lying to people. What I do know is that he ran a pyramid scheme. The dynamics happen regardless of how conscious they are. (In fact, they often work through keeping things unconscious)
I’m not making the argument “business is stupid about AI timelines therefore the opposite is right”.
Yes, this is a reason to expect distortion in favor of mainstream opinions (including medium-long timelines). It can be modeled along with the other distortions.
Regardless of whether Gary Marcus is “bad” (what would that even mean?), the concrete criticisms aren’t ones that imply AI timelines are short, deep learning can get to AGI, etc. They’re ones that sometimes imply the opposite, and anyway, ones that systematically distort narratives towards short timelines (as I spelled out). If it’s already widely known that deep learning can’t do reasoning, then… isn’t that reason not to expect short AI timelines, and to expect that many of the non-experts who think so (including tech execs and rationalists) have been duped?
If you think I did the modeling wrong, and have concrete criticisms (such as the criticism that there’s a distortionary effect towards long timelines due to short timelines seeming loony), then that’s useful. But it seems like you’re giving a general counterargument against modeling these sorts of sociopolitical dynamics. If the modeling comes out that there are more distortionary effects in one direction than another, or that there are different distortionary effects in different circumstances, isn’t that important to take into consideration rather than dismissing it as “monkey politics”?
On 3, I notice this part of your post jumps out to me:
Of course, I’d have written a substantially different post, or none at all, if I believed the technical arguments that AGI is likely to come soon had merit to them
One possibility behind the “none at all” is that ‘disagreement leads to writing posts, agreement leads to silence’, but another possibility is ‘if I think X, I am encouraged to say it, and if I think Y, I am encouraged to be silent.’
My sense is it’s more the latter, which makes this seem weirdly ‘bad faith’ to me. That is, suppose I know Alice doesn’t want to talk about biological x-risk in public because of the risk that terrorist groups will switch to using biological weapons, but I think Alice’s concerns are overblown and so write a post about how actually it’s very hard to use biological weapons and we shouldn’t waste money on countermeasures. Alice won’t respond with “look, it’s not hard, you just do A, B, C and then you kill thousands of people,” because this is worse for Alice than public beliefs shifting in a way that seems wrong to her.
It is not obvious what the right path is here. Obviously, we can’t let anyone hijack the group epistemology by having concerns about what can and can’t be made public knowledge, but also it seems like we shouldn’t pretend that everything can be openly discussed in a costless way, or that the costs are always worth it.
Alice has the option of finding a generally trusted arbiter, Carol, who she tells the plan to; then, Carol can tell the public how realistic the plan is.
Do we have those generally trusted arbiters? I note that it seems like many people who I think of as ‘generally trusted’ are trusted because of some ‘private information’, even if it’s just something like “I’ve talked to Carol and get the sense that she’s sensible.”
I don’t think there are fully general trusted arbiters, but it’s possible to bridge the gap with person X by finding person Y trusted by both you and X.
I think that sufficiently universally trusted arbiters may be very hard to find, but Alice can also refrain from that option to keep the issue from gaining more public attention, if she believes more attention, or attention from particular groups, would be harmful. I can imagine cases where credible people (Carols) saying they are convinced that, e.g., “it is really easily doable” would disproportionately incentivize misuse rather than defense (depending on the groups the information reaches, the reliability signals those groups accept, etc.).
1. It sounds like we have a pretty deep disagreement here, so I’ll write an SSC post explaining my opinion in depth sometime.
2. Sorry, it seems I misunderstood you. What did you mean by mentioning business’s very short timelines and all of the biases that might make them have those?
3. I feel like this is dismissing the magnitude of the problem. Suppose I said that the Democratic Party was a lying scam that was duping Americans into believing it, because many Americans were biased to support the Democratic Party for various demographic reasons, or because their families were Democrats, or because they’d seen campaign ads, etc. These biases could certainly exist. But if I didn’t even mention that there might be similar biases making people support the Republican Party, let alone try to estimate which was worse, I’m not sure this would qualify as sociopolitical analysis.
4. I was trying to explain why people in a field might prefer that members of the field address disagreements through internal channels rather than the media, for reasons other than that they have a conspiracy of silence. I’m not sure what you mean by “concrete criticisms”. You cherry-picked some reasons for believing long timelines; I agree these exist. There are other arguments for believing shorter timelines and that people believing in longer timelines are “duped”. What it sounded like you were claiming is that the overall bias is in favor of making people believe in shorter ones, which I think hasn’t been proven.
I’m not entirely against modeling sociopolitical dynamics, which is why I ended the sentence with “at this level of resolution”. I think a structured attempt to figure out whether there were more biases in favor of long timelines or short timelines (for example, surveying AI researchers on what they would feel uncomfortable saying) would be pretty helpful. I interpreted this post as more like the Democrat example in 3 - cherry-picking a few examples of bias towards short timelines, then declaring short timelines to be a scam. I don’t know if this is true or not, but I feel like you haven’t supported it.
Bayes’ theorem says that we shouldn’t update on information that we could get whether or not a hypothesis were true. I feel like you could have written an equally compelling essay “proving” bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn’t, I feel like the post didn’t explain why you felt that way. So I don’t think we should update on the information in this post, and I think the intensity of your language (“scam”, “lie”, “dupe”) is incongruous with the lack of updateable information.
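To spell out the Bayesian point: if a compelling-sounding essay could be produced whether or not the bias exists, the likelihood ratio is 1 and the posterior equals the prior. A minimal sketch (the numbers are purely illustrative, not from the post):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# An essay equally likely to exist whether or not the bias is real:
# P(essay | bias) == P(essay | no bias), so the update is nil.
prior = 0.5
posterior = bayes_update(prior, p_e_given_h=0.9, p_e_given_not_h=0.9)
print(posterior)  # -> 0.5, equal to the prior

# Evidence twice as likely under the hypothesis does move us:
print(bayes_update(0.5, 0.9, 0.45))  # 2:1 likelihood ratio -> 2/3
```

This is the sense in which "you could argue for anything" evidence carries no weight: it only matters if the essay would have been harder to write in a world without the bias.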
Okay, that might be useful. (For a mainstream perspective on this I have agreement with, see The Scams Are Winning).
The argument for most of the post is that there are active distortionary pressures towards short timelines. I mentioned the tech survey in the conclusion to indicate that the distortionary pressures aren’t some niche interest, they’re having big effects on the world.
Will discuss later in this comment.
By “concrete criticisms” I mean the Twitter replies. I’m studying the implicit assumptions behind these criticisms to see what it says about attitudes in the AI field.
I feel like you could have written an equally compelling essay “proving” bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn’t, I feel like the post didn’t explain why you felt that way.
I think this is the main thrust of your criticism, and also the main thrust of point 3. I do think lots of things are scams, and I could have written about other things instead, but I wrote about short timelines, because I can’t talk about everything in one essay, and this one seems important.
I couldn’t have written an equally compelling essay on biases in favor of long timelines without lying, I think, or even by lying while trying to maintain plausibility. (Note also, it seems useful for there to be essays on the Democrat party’s marketing strategy that don’t also talk about the Republican party’s marketing strategy.)
Courts don’t work by the judge saying “well, you know, you could argue for anything, so what’s the point in having people present cases for one side or the other?” The point is that some cases end up stronger than other cases. I can’t prove that there isn’t an equally strong case that there’s bias in favor of long timelines, because that would be proving a negative. (Even if I did, that would be a case of “sometimes there’s bias in favor of X, sometimes against X, it depends on the situation/person/etc”; the newly discovered distortionary pressures don’t negate the fact that the previously discovered ones exist)
I agree that it’s difficult (practically impossible) to engage with a criticism of the form “I don’t find your examples compelling”, because such a criticism is in some sense opaque: there’s very little you can do with the information provided, except possibly add more examples (which is time-consuming, and also might not even work if the additional examples you choose happen to be “uncompelling” in the same way as your original examples).
However, there is a deeper point to be made here: presumably you yourself only arrived at your position after some amount of consideration. The fact that others appear to find your arguments (including any examples you used) uncompelling, then, usually indicates one of two things:
You have not successfully expressed the full chain of reasoning that led you to originally adopt your conclusion (owing perhaps to constraints on time or effort, issues with legibility, or strategic concerns). In this case, you should be unsurprised that other people don’t appear to be convinced by your post, since your post does not present the same arguments and evidence that convinced you to believe your position in the first place.
You do, in fact, find the raw examples in your post persuasive. This would then indicate that any disagreement between you and your readers is due to differing priors, i.e. evidence that you would consider sufficient to convince yourself of something does not likewise convince others. Ideally, this fact should cause you to update in favor of the possibility that you are mistaken, at least if you believe that your interlocutors are being rational and intellectually honest.
I don’t know which of these two possibilities it actually is, but it may be worth keeping this in mind if you make a post that a bunch of people seem to disagree with.
Note also, it seems useful for there to be essays on the Democrat party’s marketing strategy that don’t also talk about the Republican party’s marketing strategy
Minor, unconfident, point: I’m not sure that this is true. It seems like it would result in people mostly fallacy-fallacy-ing the other side, each with their own “look how manipulative the other guys are” essays. If the target is thoughtful people trying to figure things out, they’ll want to hear about both sides, no?
Courts don’t work by the judge saying “well, you know, you could argue for anything, so what’s the point in having people present cases for one side or the other?” The point is that some cases end up stronger than other cases.
I think courts spend a fair bit of effort not just on evaluating the strength of the case, but on the standing and impact of the case: not “what else could you argue?”, but “why does this complaint matter, and to whom?”
IMO, you’re absolutely right that there are lots of pressures to make unrealistically short predictions for advances, and that this drives a lot of punditry, academia, and industry to … what? It’s annoying, but who is harmed, and who has the ability to improve things?
Personally, I think timeline for AGI is a poorly-defined prediction—the big question is what capabilities satisfy the “AGI” definition. I think we WILL see more and more impressive performance in aspects of problem-solving and prediction that would have been classified as “intelligence” 50 years ago, but that we probably won’t credit with consciousness or generality.
I don’t actually know the extent to which Bernie Madoff actually was conscious that he was lying to people. What I do know is that he ran a pyramid scheme.
The eponymous Charles Ponzi had a plausible arbitrage idea backing his famous scheme; it’s not unlikely that he was already in over his head (and therefore desperately trying to make himself believe he’d find some other way to make his investors whole) by the time he found out that transaction costs made the whole thing impractical.
I don’t actually know the extent to which Bernie Madoff actually was conscious that he was lying to people. What I do know is that he ran a pyramid scheme. The dynamics happen regardless of how conscious they are. (In fact, they often work through keeping things unconscious)
Bernie Madoff pleaded guilty to running a pyramid scheme. As part of his guilty plea, he admitted that he had stopped trading in the 1990s and had been paying returns out of capital since then.
I think this is an important point to make, since the implicit lesson I’m reading here is that there’s no difference between giving false information intentionally (“lying”) and giving false information unintentionally (“being wrong”). I would caution that that is a dangerous road to go down, as it just leads to people being silent. I would much rather receive optimistic estimates from AI advocates than receive no estimates at all. I can correct for systematic biases in data. I cannot correct for the absence of data.
Of course there’s an important difference between lying and being wrong. It’s a question of knowledge states. Unconscious lying is a case when someone says something they unconsciously know to be false/unlikely.
If the estimates are biased, you can end up with worse beliefs than you would by just using an uninformative prior. Perhaps some are savvy enough to know about the biases involved (in part because of people like me writing posts like the one I wrote), but others aren’t, and get tricked into having worse beliefs than if they had used an uninformative prior.
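A toy sketch of that claim, with entirely made-up numbers: suppose the true probability of some event is 0.2, an uninformative prior would say 0.5, and the estimates you receive have been inflated to 0.9 by the distortions described. A naive listener who adopts the biased estimate ends up further from the truth than one who ignored it entirely, while a savvy listener who knows the bias can partially correct:

```python
# Hypothetical numbers, purely for illustration of the claim above.
true_prob = 0.2        # the actual probability (unknown to the listener)
uninformative = 0.5    # the prior you'd use with no information at all
biased_report = 0.9    # an estimate inflated by distortionary pressures

def error(belief):
    """Absolute distance of a belief from the true probability."""
    return abs(belief - true_prob)

print(error(uninformative))  # about 0.3
print(error(biased_report))  # about 0.7: worse than knowing nothing

# A savvy listener who knows the (assumed) bias magnitude can correct:
corrected = biased_report - 0.6
print(error(corrected))      # about 0.1
```

The point is that biased data only beats no data for listeners who know the size and direction of the bias; for everyone else, the uninformative prior would have served better.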
I am not trying to punish people, I am trying to make agent-based models.
(Regarding Madoff, what you present is suggestive, but it doesn’t prove that he was conscious that he had no plans to trade and was deceiving his investors. We don’t really know what he was conscious of and what he wasn’t.)
I’m wary of using words like “lie” or “scam” to mean “honest reporting of unconsciously biased reasoning”
When someone is systematically trying to convince you of a thing, do not be like, “nice honest report”, but be like, “let me think for myself whether that is correct”.
but be like, “let me think for myself whether that is correct”.
From my perspective, describing something as “honest reporting of unconsciously biased reasoning” seems much more like an invitation for me to think for myself whether it’s correct than calling it a “lie” or a “scam”.
Calling your opponent’s message a lie and a scam actually gets my defenses up that you’re the one trying to bamboozle me, since you’re using such emotionally charged language.
Maybe others react to these words differently though.
This comment is such a good example of managing to be non-triggering in making the point. It stands out to me amongst all the comments above it, which are at least somewhat heated.
Sure if you just call it “honest reporting”. But that was not the full phrase used. The full phrase used was “honest reporting of unconsciously biased reasoning”.
I would not call trimming that down to “honest reporting” a case of honest reporting! ;-)
If I claim, “Joe says X, and I think he honestly believes that, though his reasoning is likely unconsciously biased here”, then that does not at all seem to me like an endorsement of X, and certainly not a clear endorsement.
1. For reasons discussed on comments to previous posts here, I’m wary of using words like “lie” or “scam” to mean “honest reporting of unconsciously biased reasoning”. If I criticized this post by calling you a liar trying to scam us, and then backed down to “I’m sure you believe this, but you probably have some bias, just like all of us”, I expect you would be offended. But I feel like you’re making this equivocation throughout this post.
2. I agree business is probably overly optimistic about timelines, for about the reasons you mention. But reversed stupidity is not intelligence. Most of the people I know pushing short timelines work in nonprofits, and many of the people you’re criticizing in this post are AI professors. Unless you got your timelines from industry, which I don’t think many people here did, them being stupid isn’t especially relevant to whether we should believe the argument in general. I could find you some field (like religion) where people are biased to believe AI will never happen, but unless we took them seriously before this, the fact that they’re wrong doesn’t change anything.
3. I’ve frequently heard people who believe AI might be near say that their side can’t publicly voice their opinions, because they’ll get branded as loonies and alarmists, and therefore we should adjust in favor of near-termism because long-timelinists get to unfairly dominate the debate. I think it’s natural for people on all sides of an issue to feel like their side is uniquely silenced by a conspiracy of people biased towards the other side. See Against Bravery Debates for evidence of this.
4. I’m not familiar with the politics in AI research. But in medicine, I’ve noticed that doctors who go straight to the public with their controversial medical theory are usually pretty bad, for one of a couple of reasons. Number one, they’re usually wrong, people in the field know they’re wrong, and they’re trying to bamboozle a reading public who aren’t smart enough to figure out that they’re wrong (but who are hungry for a “Galileo stands up to hidebound medical establishment” narrative). Number two, there’s a thing they can do where they say some well-known fact in a breathless tone, and then get credit for having blown the cover of the establishment’s lie. You can always get a New Yorker story by writing “Did you know that, contrary to what the psychiatric establishment wants you to believe, SOME DRUGS MAY HAVE SIDE EFFECTS OR WITHDRAWAL SYNDROMES?” Then the public gets up in arms, and the psychiatric establishment has to go on damage control for the next few months and strike an awkward balance between correcting the inevitable massive misrepresentations in the article while also saying the basic premise is !@#$ing obvious and was never in doubt. When I hear people say something like “You’re not presenting an alternative solution” in these cases, they mean something like “You don’t have some alternate way of treating diseases that has no side effects, so stop pretending you’re Galileo for pointing out a problem everyone was already aware of.” See Beware Stephen Jay Gould for Eliezer giving an example of this, and Chemical Imbalance and the followup post for me giving an example of this. I don’t know for sure that this is what’s going on in AI, but it would make sense.
I’m not against modeling sociopolitical dynamics. But I think you’re doing it badly, by taking some things that people on both sides feel, applying it to only one side, and concluding that means the other is involved in lies and scams and conspiracies of silence (while later disclaiming these terms in a disclaimer, after they’ve had their intended shocking effect).
I think this is one of the cases where we should use our basic rationality tools like probability estimates. Just from reading this post, I have no idea what probability Gary Marcus, Yann LeCun, or Steven Hansen has on AGI in ten years (or fifty years, or one hundred years). For all I know all of them (and you, and me) have exactly the same probability and their argument is completely political about which side is dominant vs. oppressed and who should gain or lose status (remember the issue where everyone assumes LWers are overly certain cryonics will work, whereas in fact they’re less sure of this than the general population and just describe their beliefs differently ). As long as we keep engaging on that relatively superficial monkey-politics “The other side are liars who are silencing my side!” level, we’re just going to be drawn into tribalism around the near-timeline and far-timeline tribes, and our ability to make accurate predictions is going to suffer. I think this is worse than any improvement we could get by making sociopolitical adjustments at this level of resolution.
Re: 2: nonprofits and academics have even more incentives than business to claim that a new technology is extremely dangerous. Think tanks and universities are in the knowledge business; they are more valuable when people seek their advice. “This new thing has great opportunities and great risks; you need guidance to navigate and govern it” is a great advertisement for universities and think tanks. Which doesn’t mean AI, narrow or strong, doesn’t actually have great opportunities and risks! But nonprofits and academics aren’t immune from the incentives to exaggerate.
Re: 4: I have a different perspective. The loonies who go to the press with “did you know psychiatric drugs have SIDE EFFECTS?!” are not really a threat to public information to the extent that they are telling the truth. They are a threat to the perceived legitimacy of psychiatrists. This has downsides (some people who could benefit from psychiatric treatment will fear it too much) but fundamentally the loonies are right that a psychiatrist is just a dude who went to school for a long time, not a holy man. To the extent that there is truth in psychiatry, it can withstand the public’s loss of reverence, in the long run. Blind reverence for professionals is a freebie, which locally may be beneficial to the public if the professionals really are wise, but is essentially fragile. IMO it’s not worth trying to cultivate or preserve. In the long run, good stuff will win out, and smart psychiatrists can just as easily frame themselves as agreeing with the anti-psych cranks in spirit, as being on Team Avoid Side Effects And Withdrawal Symptoms, Unlike All Those Dumbasses Who Don’t Care (all two of them).
I don’t actually know the extent to which Bernie Madoff actually was conscious that he was lying to people. What I do know is that he ran a pyramid scheme. The dynamics happen regardless of how conscious they are. (In fact, they often work through keeping things unconscious)
I’m not making the argument “business is stupid about AI timelines therefore the opposite is right”.
Yes, this is a reason to expect distortion in favor of mainstream opinions (including medium-long timelines). It can be modeled along with the other distortions.
Regardless of whether Gary Marcus is “bad” (what would that even mean?), the concrete criticisms aren’t ones that imply AI timelines are short, deep learning can get to AGI, etc. They’re ones that sometimes imply the opposite, and anyway, ones that systematically distort narratives towards short timelines (as I spelled out). If it’s already widely known that deep learning can’t do reasoning, then… isn’t that reason not to expect short AI timelines, and to expect that many of the non-experts who think so (including tech execs and rationalists) have been duped?
If you think I did the modeling wrong, and have concrete criticisms (such as the criticism that there’s a distortionary effect towards long timelines due to short timelines seeming loony), then that’s useful. But it seems like you’re giving a general counterargument against modeling these sorts of sociopolitical dynamics. If the modeling comes out that there are more distortionary effects in one direction than another, or that there are different distortionary effects in different circumstances, isn’t that important to take into consideration rather than dismissing it as “monkey politics”?
On 3, I notice this part of your post jumps out to me:
One possibility behind the “none at all” is that ‘disagreement leads to writing posts, agreement leads to silence’, but another possibility is ‘if I think X, I am encouraged to say it, and if I think Y, I am encouraged to be silent.’
My sense is it’s more the latter, which makes this seem weirdly ‘bad faith’ to me. That is, suppose I know Alice doesn’t want to talk about biological x-risk in public because of the risk that terrorist groups will switch to using biological weapons, but I think Alice’s concerns are overblown and so write a post about how actually it’s very hard to use biological weapons and we shouldn’t waste money on countermeasures. Alice won’t respond with “look, it’s not hard, you just do A, B, C and then you kill thousands of people,” because this is worse for Alice than public beliefs shifting in a way that seems wrong to her.
It is not obvious what the right path is here. Obviously, we can’t let anyone hijack the group epistemology by having concerns about what can and can’t be made public knowledge, but also it seems like we shouldn’t pretend that everything can be openly discussed in a costless way, or that the costs are always worth it.
Alice has the option of finding a generally trusted arbiter, Carol, who she tells the plan to; then, Carol can tell the public how realistic the plan is.
Do we have those generally trusted arbiters? I note that it seems like many people who I think of as ‘generally trusted’ are trusted because of some ‘private information’, even if it’s just something like “I’ve talked to Carol and get the sense that she’s sensible.”
I don’t think there are fully general trusted arbiters, but it’s possible to bridge the gap with person X by finding person Y trusted by both you and X.
I think that sufficiently universally trusted arbiters may be very hard to find, and Alice can also decline that option in order to keep the issue from gaining more public attention, if she believes that more attention, or attention from certain groups, would be harmful. I can imagine cases where more credible people (Carols) saying they are convinced that e.g. "it is really easily doable" would disproportionately create incentives for misuse rather than defense (depending on the groups the information reaches, the reliability signals those groups accept, etc.).
1. It sounds like we have a pretty deep disagreement here, so I’ll write an SSC post explaining my opinion in depth sometime.
2. Sorry, it seems I misunderstood you. What did you mean, then, by mentioning business's very short timelines and all of the biases that might lead them to hold those?
3. I feel like this is dismissing the magnitude of the problem. Suppose I said that the Democratic Party was a lying scam that was duping Americans into supporting it, because many Americans were biased to support the Democratic Party for various demographic reasons, or because their families were Democrats, or because they’d seen campaign ads, etc. These biases could certainly exist. But if I didn’t even mention that there might be similar biases making people support the Republican Party, let alone try to estimate which was worse, I’m not sure this would qualify as sociopolitical analysis.
4. I was trying to explain why people in a field might prefer that members of the field address disagreements through internal channels rather than the media, for reasons other than that they have a conspiracy of silence. I’m not sure what you mean by “concrete criticisms”. You cherry-picked some reasons for believing long timelines; I agree these exist. There are other arguments for believing shorter timelines and that people believing in longer timelines are “duped”. What it sounded like you were claiming is that the overall bias is in favor of making people believe in shorter ones, which I think hasn’t been proven.
I’m not entirely against modeling sociopolitical dynamics, which is why I ended the sentence with “at this level of resolution”. I think a structured attempt to figure out whether there were more biases in favor of long timelines or short timelines (for example, surveying AI researchers on what they would feel uncomfortable saying) would be pretty helpful. I interpreted this post as more like the Democrat example in 3 - cherry-picking a few examples of bias towards short timelines, then declaring short timelines to be a scam. I don’t know if this is true or not, but I feel like you haven’t supported it.
Bayes’ theorem says that we shouldn’t update on evidence that we would expect to see whether or not a hypothesis were true. I feel like you could have written an equally compelling essay “proving” bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn’t, I feel like the post didn’t explain why you felt that way. So I don’t think we should update on the information in this post, and I think the intensity of your language (“scam”, “lie”, “dupe”) is incongruous with the lack of updateable information.
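A minimal sketch of the Bayesian point, in the odds form of Bayes’ theorem (standard notation, not from either post):

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
```

If the essay $E$ would have been roughly as easy to produce whether or not the hypothesis $H$ ("short timelines are a scam") were true, then the likelihood ratio $P(E \mid H)/P(E \mid \neg H)$ is close to 1, the posterior odds equal the prior odds, and the essay licenses essentially no update.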
Okay, that might be useful. (For a mainstream perspective on this that I agree with, see The Scams Are Winning).
The argument for most of the post is that there are active distortionary pressures towards short timelines. I mentioned the tech survey in the conclusion to indicate that the distortionary pressures aren’t some niche interest, they’re having big effects on the world.
Will discuss later in this comment.
By “concrete criticisms” I mean the Twitter replies. I’m studying the implicit assumptions behind these criticisms to see what it says about attitudes in the AI field.
I think this is the main thrust of your criticism, and also the main thrust of point 3. I do think lots of things are scams, and I could have written about other things instead, but I wrote about short timelines, because I can’t talk about everything in one essay, and this one seems important.
I couldn’t have written an equally compelling essay on biases in favor of long timelines without lying, I think; I doubt I could have done it even by lying, given the need to maintain plausibility. (Note also, it seems useful for there to be essays on the Democratic Party’s marketing strategy that don’t also talk about the Republican Party’s marketing strategy.)
Courts don’t work by the judge saying “well, you know, you could argue for anything, so what’s the point in having people present cases for one side or the other?” The point is that some cases end up stronger than other cases. I can’t prove that there isn’t an equally strong case that there’s bias in favor of long timelines, because that would be proving a negative. (Even if there were, that would be a case of “sometimes there’s bias in favor of X, sometimes against X, it depends on the situation/person/etc”; the newly discovered distortionary pressures don’t negate the fact that the previously discovered ones exist.)
I agree that it’s difficult (practically impossible) to engage with a criticism of the form “I don’t find your examples compelling”, because such a criticism is in some sense opaque: there’s very little you can do with the information provided, except possibly add more examples (which is time-consuming, and also might not even work if the additional examples you choose happen to be “uncompelling” in the same way as your original examples).
However, there is a deeper point to be made here: presumably you yourself only arrived at your position after some amount of consideration. The fact that others appear to find your arguments (including any examples you used) uncompelling, then, usually indicates one of two things:
You have not successfully expressed the full chain of reasoning that led you to originally adopt your conclusion (owing perhaps to constraints on time, effort, issues with legibility, or strategic concerns). In this case, you should be unsurprised at the fact that other people don’t appear to be convinced by your post, since your post does not present the same arguments/evidence that convinced you yourself to believe your position.
You do, in fact, find the raw examples in your post persuasive. This would then indicate that any disagreement between you and your readers is due to differing priors, i.e. evidence that you would consider sufficient to convince yourself of something does not likewise convince others. Ideally, this fact should cause you to update in favor of the possibility that you are mistaken, at least if you believe that your interlocutors are being rational and intellectually honest.
I don’t know which of these two possibilities it actually is, but it may be worth keeping this in mind if you make a post that a bunch of people seem to disagree with.
Scott’s post explaining his opinion is here, and is called ‘Against Lie Inflation’.
Minor, unconfident, point: I’m not sure that this is true. It seems like it would result in people mostly fallacy-fallacy-ing the other side, each with their own “look how manipulative the other guys are” essays. If the target is thoughtful people trying to figure things out, they’ll want to hear about both sides, no?
I think courts spend a fair bit of effort not just on evaluating the strength of a case, but on its standing and impact: not “what else could you argue?”, but “why does this complaint matter, and to whom?”
IMO, you’re absolutely right that there are lots of pressures to make unrealistically short predictions for advances, and this causes a lot of punditry, academia, and industry to… what? It’s annoying, but who is harmed, and who has the ability to improve things?
Personally, I think a timeline for AGI is a poorly defined prediction: the big question is what capabilities satisfy the “AGI” definition. I think we WILL see more and more impressive performance on aspects of problem-solving and prediction that would have been classified as “intelligence” 50 years ago, but that we probably won’t credit with consciousness or generality.
Then perhaps you should start here.
The eponymous Charles Ponzi had a plausible arbitrage idea backing his famous scheme; it’s not unlikely that he was already in over his head (and therefore desperately trying to make himself believe he’d find some other way to make his investors whole) by the time he found out that transaction costs made the whole thing impractical.
Bernie Madoff pleaded guilty to running a pyramid scheme. As part of his guilty plea he admitted that he had stopped trading in the 1990s and had been paying returns out of capital since then.
I think this is an important point to make, since the implicit lesson I’m reading here is that there’s no difference between giving false information intentionally (“lying”) and giving false information unintentionally (“being wrong”). I would caution that that is a dangerous road to go down, as it just leads to people being silent. I would much rather receive optimistic estimates from AI advocates than receive no estimates at all. I can correct for systematic biases in data. I cannot correct for the absence of data.
Of course there’s an important difference between lying and being wrong. It’s a question of knowledge states. Unconscious lying is a case in which someone says something they unconsciously know to be false or unlikely.
If the estimates are biased, you can end up with worse beliefs than you would by just using an uninformative prior. Perhaps some are savvy enough to know about the biases involved (in part because of people like me writing posts like the one I wrote), but others aren’t, and get tricked into having worse beliefs than if they had used an uninformative prior.
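The claim above can be illustrated with a toy calculation (all numbers invented for illustration): an observer who takes a systematically biased "timelines are short" signal at face value can end up with beliefs farther from the truth than an observer who ignores the signal and keeps an uninformative prior.

```python
prior = 0.5  # uninformative prior that AGI arrives "soon"
truth = 0.1  # stipulated true probability, used only for scoring

# A naive observer treats the claim "AGI is near" as strong evidence,
# assuming P(claim | near) = 0.9 and P(claim | not near) = 0.2.
# In the biased world we're positing, the claim gets made regardless
# of the truth, so this likelihood model is mistaken.
p_claim_given_near = 0.9
p_claim_given_far = 0.2

# Standard Bayesian update under the (mistaken) likelihood model.
posterior = (p_claim_given_near * prior) / (
    p_claim_given_near * prior + p_claim_given_far * (1 - prior)
)

# Score each belief by squared distance from the stipulated truth
# (lower is better). The naive updater does worse than the prior.
naive_error = (posterior - truth) ** 2
prior_error = (prior - truth) ** 2
```

Under these made-up numbers the naive posterior is about 0.82, which is farther from the stipulated truth of 0.1 than the untouched prior of 0.5, matching the point that updating on a biased signal can be worse than not updating at all.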
I am not trying to punish people, I am trying to make agent-based models.
(Regarding Madoff, what you present is suggestive, but it doesn’t prove that he was conscious that he had no plans to trade and was deceiving his investors. We don’t really know what he was conscious of and what he wasn’t.)
When someone is systematically trying to convince you of a thing, do not be like, “nice honest report”, but be like, “let me think for myself whether that is correct”.
From my perspective, describing something as “honest reporting of unconsciously biased reasoning” seems much more like an invitation for me to think for myself whether it’s correct than calling it a “lie” or a “scam”.
Calling your opponent’s message a lie and a scam actually gets my defenses up that you’re the one trying to bamboozle me, since you’re using such emotionally charged language.
Maybe others react to these words differently though.
This comment is such a good example of managing to be non-triggering in making the point. It stands out to me amongst all the comments above it, which are at least somewhat heated.
Thanks!
It’s a pretty clear way of endorsing something to call it “honest reporting”.
Sure if you just call it “honest reporting”. But that was not the full phrase used. The full phrase used was “honest reporting of unconsciously biased reasoning”.
I would not call trimming that down to “honest reporting” a case of honest reporting! ;-)
If I claim, “Joe says X, and I think he honestly believes that, though his reasoning is likely unconsciously biased here”, then that does not at all seem to me like an endorsement of X, and certainly not a clear endorsement.