I don’t actually know the extent to which Bernie Madoff was conscious that he was lying to people. What I do know is that he ran a Ponzi scheme. The dynamics happen regardless of how conscious they are. (In fact, they often work through keeping things unconscious.)
I’m not making the argument “business is stupid about AI timelines therefore the opposite is right”.
Yes, this is a reason to expect distortion in favor of mainstream opinions (including medium-long timelines). It can be modeled along with the other distortions.
Regardless of whether Gary Marcus is “bad” (what would that even mean?), the concrete criticisms aren’t ones that imply AI timelines are short, deep learning can get to AGI, etc. They’re ones that sometimes imply the opposite, and anyway, ones that systematically distort narratives towards short timelines (as I spelled out). If it’s already widely known that deep learning can’t do reasoning, then… isn’t that reason not to expect short AI timelines, and to expect that many of the non-experts who think so (including tech execs and rationalists) have been duped?
If you think I did the modeling wrong, and have concrete criticisms (such as the criticism that there’s a distortionary effect towards long timelines due to short timelines seeming loony), then that’s useful. But it seems like you’re giving a general counterargument against modeling these sorts of sociopolitical dynamics. If the modeling comes out that there are more distortionary effects in one direction than another, or that there are different distortionary effects in different circumstances, isn’t that important to take into consideration rather than dismissing it as “monkey politics”?
On 3, I notice that this part of your post jumps out at me:
Of course, I’d have written a substantially different post, or none at all, if I believed the technical arguments that AGI is likely to come soon had merit to them
One possibility behind the “none at all” is that ‘disagreement leads to writing posts, agreement leads to silence’, but another possibility is ‘if I think X, I am encouraged to say it, and if I think Y, I am encouraged to be silent.’
My sense is it’s more the latter, which makes this seem weirdly ‘bad faith’ to me. That is, suppose I know Alice doesn’t want to talk about biological x-risk in public because of the risk that terrorist groups will switch to using biological weapons, but I think Alice’s concerns are overblown and so write a post about how actually it’s very hard to use biological weapons and we shouldn’t waste money on countermeasures. Alice won’t respond with “look, it’s not hard, you just do A, B, C and then you kill thousands of people,” because this is worse for Alice than public beliefs shifting in a way that seems wrong to her.
It is not obvious what the right path is here. Obviously, we can’t let anyone hijack the group epistemology by having concerns about what can and can’t be made public knowledge, but also it seems like we shouldn’t pretend that everything can be openly discussed in a costless way, or that the costs are always worth it.
Alice has the option of finding a generally trusted arbiter, Carol, to whom she tells the plan; Carol can then tell the public how realistic the plan is.
Do we have those generally trusted arbiters? I note that it seems like many people who I think of as ‘generally trusted’ are trusted because of some ‘private information’, even if it’s just something like “I’ve talked to Carol and get the sense that she’s sensible.”
I don’t think there are fully general trusted arbiters, but it’s possible to bridge the gap with person X by finding person Y trusted by both you and X.
I think sufficiently universally trusted arbiters may be very hard to find, but Alice can also decline that option in order to keep the issue from gaining more public attention, if she believes that more attention, or the attention of certain groups, would be harmful. I can imagine cases where more credible people (Carols) saying they are convinced that, e.g., “it is really easily doable” would disproportionately create more incentives for misuse than for defense (depending on which groups the information reaches, which reliability signals those groups accept, etc.).
1. It sounds like we have a pretty deep disagreement here, so I’ll write an SSC post explaining my opinion in depth sometime.
2. Sorry, it seems I misunderstood you. What did you mean by mentioning business’s very short timelines and all of the biases that might make them have those?
3. I feel like this is dismissing the magnitude of the problem. Suppose I said that the Democratic Party was a lying scam that was duping Americans into believing it, because many Americans were biased to support the Democratic Party for various demographic reasons, or because their families were Democrats, or because they’d seen campaign ads, etc. These biases could certainly exist. But if I didn’t even mention that there might be similar biases making people support the Republican Party, let alone try to estimate which was worse, I’m not sure this would qualify as sociopolitical analysis.
4. I was trying to explain why people in a field might prefer that members of the field address disagreements through internal channels rather than the media, for reasons other than that they have a conspiracy of silence. I’m not sure what you mean by “concrete criticisms”. You cherry-picked some reasons for believing long timelines; I agree these exist. There are other arguments for believing shorter timelines and that people believing in longer timelines are “duped”. What it sounded like you were claiming is that the overall bias is in favor of making people believe in shorter ones, which I think hasn’t been proven.
I’m not entirely against modeling sociopolitical dynamics, which is why I ended the sentence with “at this level of resolution”. I think a structured attempt to figure out whether there were more biases in favor of long timelines or short timelines (for example, surveying AI researchers on what they would feel uncomfortable saying) would be pretty helpful. I interpreted this post as more like the Democrat example in 3 - cherry-picking a few examples of bias towards short timelines, then declaring short timelines to be a scam. I don’t know if this is true or not, but I feel like you haven’t supported it.
Bayes’ theorem says that we shouldn’t update on information we could get whether or not a hypothesis were true. I feel like you could have written an equally compelling essay “proving” bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn’t, I feel like the post didn’t explain why you felt that way. So I don’t think we should update on the information in this post, and I think the intensity of your language (“scam”, “lie”, “dupe”) is incongruous with the lack of update-able information.
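To spell out that Bayesian point (using H for the hypothesis and E for the evidence, both just generic placeholders), the standard identity is:
$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},$$
so if $P(E \mid H) = P(E \mid \neg H)$, i.e. the essay could have been written equally easily whether or not the hypothesis is true, this reduces to $P(H \mid E) = P(H)$ and no update is warranted.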
Okay, that might be useful. (For a mainstream perspective on this that I agree with, see The Scams Are Winning.)
The argument for most of the post is that there are active distortionary pressures towards short timelines. I mentioned the tech survey in the conclusion to indicate that the distortionary pressures aren’t some niche interest; they’re having big effects on the world.
Will discuss later in this comment.
By “concrete criticisms” I mean the Twitter replies. I’m studying the implicit assumptions behind these criticisms to see what they say about attitudes in the AI field.
I feel like you could have written an equally compelling essay “proving” bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn’t, I feel like the post didn’t explain why you felt that way.
I think this is the main thrust of your criticism, and also the main thrust of point 3. I do think lots of things are scams, and I could have written about other things instead, but I wrote about short timelines, because I can’t talk about everything in one essay, and this one seems important.
I couldn’t have written an equally compelling essay on biases in favor of long timelines without lying, I think, or even with lying while trying to maintain plausibility. (Note also, it seems useful for there to be essays on the Democratic Party’s marketing strategy that don’t also talk about the Republican Party’s marketing strategy.)
Courts don’t work by the judge saying “well, you know, you could argue for anything, so what’s the point in having people present cases for one side or the other?” The point is that some cases end up stronger than other cases. I can’t prove that there isn’t an equally strong case that there’s bias in favor of long timelines, because that would be proving a negative. (Even if such a case were made, the conclusion would just be that sometimes there’s bias in favor of X and sometimes against X, depending on the situation/person/etc.; the newly discovered distortionary pressures wouldn’t negate the fact that the previously discovered ones exist.)
I agree that it’s difficult (practically impossible) to engage with a criticism of the form “I don’t find your examples compelling”, because such a criticism is in some sense opaque: there’s very little you can do with the information provided, except possibly add more examples (which is time-consuming, and also might not even work if the additional examples you choose happen to be “uncompelling” in the same way as your original examples).
However, there is a deeper point to be made here: presumably you yourself only arrived at your position after some amount of consideration. The fact that others appear to find your arguments (including any examples you used) uncompelling, then, usually indicates one of two things:
1. You have not successfully expressed the full chain of reasoning that led you to adopt your conclusion in the first place (owing perhaps to constraints on time or effort, issues with legibility, or strategic concerns). In this case, you should be unsurprised that other people don’t appear to be convinced by your post, since your post does not present the same arguments and evidence that convinced you to believe your position.
2. You do, in fact, find the raw examples in your post persuasive. This would then indicate that any disagreement between you and your readers is due to differing priors, i.e. evidence that you would consider sufficient to convince yourself of something does not likewise convince others. Ideally, this fact should cause you to update in favor of the possibility that you are mistaken, at least if you believe that your interlocutors are being rational and intellectually honest.
I don’t know which of these two possibilities it actually is, but it may be worth keeping this in mind if you make a post that a bunch of people seem to disagree with.
Scott’s post explaining his opinion is here, and is called ‘Against Lie Inflation’.
Note also, it seems useful for there to be essays on the Democratic Party’s marketing strategy that don’t also talk about the Republican Party’s marketing strategy
Minor, unconfident, point: I’m not sure that this is true. It seems like it would result in people mostly fallacy-fallacy-ing the other side, each with their own “look how manipulative the other guys are” essays. If the target is thoughtful people trying to figure things out, they’ll want to hear about both sides, no?
Courts don’t work by the judge saying “well, you know, you could argue for anything, so what’s the point in having people present cases for one side or the other?” The point is that some cases end up stronger than other cases.
I think courts spend a fair bit of effort not just on evaluating the strength of a case, but also on its standing and impact: not “what else could you argue?”, but “why does this complaint matter, and to whom?”
IMO, you’re absolutely right that there are lots of pressures to make unrealistically short predictions for advances, and that this causes a lot of punditry, and academia and industry, to … what? It’s annoying, but who is harmed, and who has the ability to improve things?
Personally, I think a timeline for AGI is a poorly defined prediction; the big question is what capabilities satisfy the “AGI” definition. I think we WILL see more and more impressive performance in aspects of problem-solving and prediction that would have been classified as “intelligence” 50 years ago, but that we probably won’t credit with consciousness or generality.
Then perhaps you should start here.
I don’t actually know the extent to which Bernie Madoff was conscious that he was lying to people. What I do know is that he ran a Ponzi scheme.
The eponymous Charles Ponzi had a plausible arbitrage idea backing his famous scheme; it’s not unlikely that he was already in over his head (and therefore desperately trying to make himself believe he’d find some other way to make his investors whole) by the time he found out that transaction costs made the whole thing impractical.
I don’t actually know the extent to which Bernie Madoff was conscious that he was lying to people. What I do know is that he ran a Ponzi scheme. The dynamics happen regardless of how conscious they are. (In fact, they often work through keeping things unconscious.)
Bernie Madoff pleaded guilty to running a Ponzi scheme. As part of his guilty plea, he admitted that he had stopped trading in the 1990s and had been paying returns out of capital since then.
I think this is an important point to make, since the implicit lesson I’m reading here is that there’s no difference between giving false information intentionally (“lying”) and giving false information unintentionally (“being wrong”). I would caution that that is a dangerous road to go down, as it just leads to people being silent. I would much rather receive optimistic estimates from AI advocates than receive no estimates at all. I can correct for systematic biases in data. I cannot correct for the absence of data.
Of course there’s an important difference between lying and being wrong. It’s a question of knowledge states. Unconscious lying is when someone says something they unconsciously know to be false or unlikely.
If the estimates are biased, you can end up with worse beliefs than you would by just using an uninformative prior. Perhaps some are savvy enough to know about the biases involved (in part because of people like me writing posts like the one I wrote), but others aren’t, and get tricked into having worse beliefs than if they had used an uninformative prior.
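As a toy illustration of that claim (a minimal sketch; the true value of 20, the bias of -12, and the default guess of 25 are all hypothetical numbers): a listener who takes systematically optimistic estimates at face value can end up further from the truth than one who ignores them entirely, while a listener who knows the size of the bias can correct for it.
```python
import random

random.seed(0)

true_value = 20.0   # the quantity being estimated (hypothetical units)
bias = -12.0        # assumed systematic optimism in the reported estimates
noise = 2.0         # idiosyncratic noise in each individual report

# 100 reports that are systematically biased low.
reports = [true_value + bias + random.gauss(0, noise) for _ in range(100)]

# Three listeners:
uninformative_guess = 25.0                 # ignores the reports, keeps a vague default guess
naive_guess = sum(reports) / len(reports)  # takes the reports at face value
savvy_guess = naive_guess - bias           # knows the bias and subtracts it out

for name, guess in [("uninformative", uninformative_guess),
                    ("naive", naive_guess),
                    ("savvy", savvy_guess)]:
    print(f"{name:13s} guess = {guess:5.1f}   error = {abs(guess - true_value):4.1f}")
```
With these particular numbers the naive listener lands around 8, which is further from the truth (20) than the default guess of 25, while the bias-aware listener recovers roughly the true value; whether the naive listener does better or worse than the uninformative one depends on how large the bias is relative to how far the default guess happens to be from the truth, which is exactly the point in dispute.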
I am not trying to punish people; I am trying to make agent-based models.
(Regarding Madoff, what you present is suggestive, but it doesn’t prove that he was conscious that he had no plans to trade and was deceiving his investors. We don’t really know what he was conscious of and what he wasn’t.)