So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea, and who is the current owner of the Overcoming Bias blog and Eli’s former collaborator on that blog) fails to buy the Scary Idea “due to cognitive biases that are hard to overcome”? I find that a bit ironic.
Like Robin and Eli, and perhaps yourself, I’ve also read the heuristics and biases literature. I’m not so naive as to make judgments about huge issues that I think about for years of my life based strongly on well-known cognitive biases.
It seems more plausible to me to assert that many folks who believe the Scary Idea are having their judgment warped by plain old EMOTIONAL bias—i.e. stuff like “fear of the unknown”, and “the satisfying feeling of being part of a self-congratulatory in-crowd that thinks it understands the world better than everyone else”, and the well-known “addictive chemical high of righteous indignation”, etc.
Regarding your final paragraph: Is your take on the debate between Robin and Eli about “Foom” that all Robin was saying boils down to “la la la I can’t hear you”? If so, I would suggest that maybe YOU are the one with the (metaphorical) hearing problem ;p ….
I think there’s a strong argument that the truth value of “Once an AGI is at the level of a smart human computer scientist, hard takeoff is likely” is significantly above zero. No assertion stronger than that seems to me to be convincingly supported by any of the arguments made on Less Wrong or Overcoming Bias or any of Eli’s prior writings.
Personally, I actually do strongly suspect that once an AGI reaches that level, a hard takeoff is extremely likely unless the AGI has been specifically inculcated with goal content working against this. But I don’t claim to have a really compelling argument for this. I think we need a way better theory of AGI before we can frame such arguments compellingly. And I think that theory is going to emerge after we’ve experimented with some AGI systems that are fairly advanced, yet well below the “smart computer scientist” level.
So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea, and who is the current owner of the Overcoming Bias blog and Eli’s former collaborator on that blog) fails to buy the Scary Idea “due to cognitive biases that are hard to overcome”? I find that a bit ironic.
Welcome to humanity. ;-) I enjoy Hanson’s writing, but AFAICT, he’s not a Bayesian reasoner.
Actually: I used to enjoy his writing more, before I grokked Bayesian reasoning myself. Afterward, too much of what he posts strikes me as really badly reasoned, even when I basically agree with his opinion!
I similarly found Seth Roberts’ blog much less compelling than I did before (again, despite often sharing similar opinions), so it’s not just him that I find to be reasoning less well, post-Bayes.
(When I first joined LW, I saw posts that were disparaging of Seth Roberts, and I didn’t get what they were talking about, until after I understood what “privileging the hypothesis” really means, among other LW-isms.)
I’m not so naive as to make judgments about huge issues that I think about for years of my life based strongly on well-known cognitive biases.
See, that’s a perfect example of a “la la la I can’t hear you” argument. You’re essentially claiming that you’re not a human being—an extraordinary claim, requiring extraordinary proof.
Simply knowing about biases does very nearly zero for your ability to overcome them, or to spot them in yourself (vs. spotting them in others, where it’s easy to do all day long.)
It seems more plausible to me to assert that many folks who believe the Scary Idea are having their judgment warped by plain old EMOTIONAL bias—i.e. stuff like “fear of the unknown”, and “the satisfying feeling of being part of a self-congratulatory in-crowd that thinks it understands the world better than everyone else”, and the well-known “addictive chemical high of righteous indignation”, etc.
Since you said “many”, I’ll say that I agree with you that that is possible. In principle, it could be possible for me as well, but...
To be clear on my own position: I am an FAI skeptic, in the sense that I have a great many doubts about its feasibility—too many to present or argue here. All I’m saying in this discussion is that to believe AI is dangerous, one need only believe that humans are terminally stupid, and there is more than ample evidence for that proposition. ;-)
Also, more relevant to the issue of emotional bias: I don’t primarily identify as an LW-ite; in fact I think that a substantial portion of the LW community has its head up its ass in overvaluing epistemic (vs. instrumental) rationality, and that many people here are emulating a level of reasoning they don’t personally comprehend… and before I understood the reasoning myself, I thought the entire thing was a cult of personality, and wondered why everybody was making such a religious-sounding fuss over a minor bit of mathematics used for spam filtering. ;-)
Is your take on the debate between Robin and Eli about “Foom” that all Robin was saying boils down to “la la la I can’t hear you”?
My take is that before the debate, I was wary of AI dangers, but skeptical of fooming. Afterward, I was convinced fooming was near inevitable, given the ability to create a decent AI using a reasonably small amount of computing resources.
And a big part of that convincing was that Robin never seemed to engage with any of Eliezer’s arguments, and instead either attacked Eliezer or said, “but look, other things happen this other way”.
It seems to me that it’d be hard to do a worse job of convincing people of the anti-foom position, without being an idiot or a troll.
That is, AFAICT, Robin argued the way a lawyer argues when they know the client is guilty: pounding on the facts when the law is against them, pounding on the law when the facts are against them, and pounding on the table when the facts and the law are both against them.
I think there’s a strong argument that the truth value of “Once an AGI is at the level of a smart human computer scientist, hard takeoff is likely” is significantly above zero.
Yep.
No assertion stronger than that seems to me to be convincingly supported by any of the arguments made on Less Wrong or Overcoming Bias or any of Eli’s prior writings.
I’m curious what stronger assertion you think is necessary. I would personally add, “Humans are bad at programming, no nontrivial program is bug-free, and an AI is a nontrivial program”, but I don’t think there’s a lack of evidence for any of these propositions. ;-)
[Edited to add the “given” qualification on “nearly inevitable”, as that’s been a background assumption I may not have made clear in my position on this thread.]
I enjoy Hanson’s writing, but AFAICT, he’s not a Bayesian reasoner.
I don’t believe it’s a meaningful property (as used in this context), and you would do well to taboo it (possibly, to convince me it’s actually meaningful).
I don’t believe it’s a meaningful property (as used in this context), and you would do well to taboo it
True enough; it would be more precise to say that he argues positions based on evidence which can also support other positions, and therefore isn’t convincing evidence to a Bayesian.
it would be more precise to say that he argues positions based on evidence which can also support other positions, and therefore isn’t convincing evidence to a Bayesian.
What do you mean? Evidence can’t support both sides of an argument, so how can one inappropriately use such impossible evidence?
What do you mean? Evidence can’t support both sides of an argument, so how can one inappropriately use such impossible evidence?
It would be a mistake to assume that PJ was limiting his evaluation to positions selected from one of those ‘both sides’ of a clear dichotomy, particularly since PJ has just been emphasizing the relevance of ‘privileging the hypothesis’ to Bayesian reasoning and also said ‘other positions’, plural. This being the case, no ‘impossible evidence’ is involved.
I see. But in that case, there is no problem with use of such evidence.
That’s true. I believe that PJ was commenting on how such evidence is used. In this context, that means PJ would require the evidence to be weighed against all the positions it supports, rather than marshalled only for a chosen one: the difference between a ‘Traditional Rationalist’ debater and a (non-existent, idealized) unbiased Bayesian.
PJ, I’d love to drag you off topic slightly and ask you about this:
before I understood the reasoning myself, I thought the entire thing was a cult of personality, and wondered why everybody was making such a religious-sounding fuss over a minor bit of mathematics used for spam filtering. ;-)
What is it that you now understand, that you didn’t before?
What is it that you now understand, that you didn’t before?
That is annoyingly difficult to describe. Of central importance, I think, is the notion of privileging the hypothesis, and what that really means: why what we naively consider “evidence” for a position really isn’t.
ISTM that this is the core of grasping Bayesianism: not understanding what reasoning is, so much as understanding why what we all naively think is reasoning and evidence usually isn’t.
That hasn’t really helped… would you try again? (What does privileging the hypothesis really mean? And why is reasoning and evidence usually… not?)
Have you come across the post by that name? Without reading that, it may be hard to reverse-engineer the meaning from the jargon.
The intro gives a solid intuitive description:
Suppose that the police of Largeville, a town with a million inhabitants, are investigating a murder in which there are few or no clues—the victim was stabbed to death in an alley, and there are no fingerprints and no witnesses.
Then, one of the detectives says, “Well… we have no idea who did it… no particular evidence singling out any of the million people in this city… but let’s consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln. It could have been him, after all.”
That is privileging the hypothesis: starting to look for evidence and taking an idea seriously when you have no good reason to consider it over the countless others that are just as likely.
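To put rough numbers on the quoted story (a sketch of my own, not from the post; the only given is Largeville’s million residents): with a uniform prior, singling out Snodgrass before any evidence exists amounts to demanding about twenty bits of evidence just to bring him to even odds.

```python
import math

population = 1_000_000    # residents of Largeville, from the quoted story
prior = 1 / population    # uniform prior: nothing yet singles out any one resident

# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# To lift one named suspect from 1-in-a-million to even odds, the evidence must
# carry a likelihood ratio of roughly a million, i.e. about 20 bits.
prior_odds = prior / (1 - prior)
required_likelihood_ratio = 1.0 / prior_odds   # to reach posterior odds of 1:1
print(math.log2(required_likelihood_ratio))    # ~19.9 bits of evidence needed
```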
I have come across that post, and the story of the murder investigation, and I have an understanding of what the term means.
The obvious answer to the murder quote is that you look harder for evidence around the crime scene, and go where the evidence leads, and there only. The more realistic answer is that you look for recent similar murders, for people who had a grudge against the dead person, for criminals known to commit murder in that city… and use those to progress the investigation, because those are useful places to start.
I’m wondering what pjeby has realised, which turns this naive yet straightforward understanding into wrongthought worth commenting on.
If evidence is not facts which reveal some result-options to be more likely true and others less likely true, then what is it?
I’m wondering what pjeby has realised, which turns this naive yet straightforward understanding into wrongthought worth commenting on.
Consider a hypothesis, H1. If a piece of evidence E1 is consistent with H1, the naive interpretation is that E1 is an argument in favor of H1.
In truth, this isn’t an argument in favor of H1; it’s merely the absence of an argument against H1.
That, in a nutshell, is the difference between Bayesian reasoning and naive argumentation—also known as “confirmation bias”.
To really prove H1, you need to show that E1 wouldn’t happen under H2, H3, etc., and you need to look for disconfirmations D1, D2, etc. that would invalidate H1, to make sure they’re not there.
Before I really grokked Bayesianism, the above all made logical sense to me, but it didn’t seem as important as Eliezer claimed. It seemed like just another degree of rigor, rather than reasoning of a different quality.
Now that I “get it”, the other sort of evidence seems more-obviously inadequate—not just lower-quality evidence, but non-evidence.
ISTM that this is a good way to test at least one level of how well you grasp Bayes: does simple supporting evidence still feel like evidence to you? If so, you probably haven’t “gotten” it yet.
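A minimal sketch of that H1/E1 point in the odds form of Bayes’ theorem (my own illustration; the probabilities below are made up): evidence that is merely consistent with H1, but just as likely under a rival H2, has a likelihood ratio of 1 and moves the odds not at all.

```python
def posterior_odds(prior_odds, p_e_given_h1, p_e_given_h2):
    """Odds form of Bayes' theorem for two competing hypotheses:
    posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_e_given_h1 / p_e_given_h2)

# E1 is "consistent with" H1, but the rival hypothesis H2 predicts it just as well:
print(posterior_odds(1.0, 0.9, 0.9))   # 1.0 -> no update: E1 is non-evidence
# E1 only becomes evidence for H1 once it is *less* likely under the alternative:
print(posterior_odds(1.0, 0.9, 0.3))   # 3.0 -> now it genuinely favors H1
```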
The obvious answer to the murder quote is that you look harder for evidence around the crime scene, and go where the evidence leads, and there only. The more realistic answer is that you look for recent similar murders, for people who had a grudge against the dead person, for criminals known to commit murder in that city… and use those to progress the investigation, because those are useful places to start.
I’m wondering what pjeby has realised, which turns this naive yet straightforward understanding into wrongthought worth commenting on.
That isn’t a wrongthought. Factors like the ones you mention here are all good reasons to assign credence to a hypothesis.
If evidence is not facts which reveal some result-options to be more likely true and others less likely true, then what is it?
Yes, no, maybe… that is exactly what it is! An example of an error would be having some preferred opinion and then finding all the evidence that supports that particular opinion. Or, say, encountering a piece of evidence and noticing that it supports your favourite position, but neglecting that it supports positions X, Y and Z just as well.
“Simply knowing about biases does very nearly zero for your ability to overcome them, or to spot them in yourself (vs. spotting them in others, where it’s easy to do all day long.)”
I looked briefly at the evidence for that. Most of it seemed to be from the so-called “self-serving bias”—which looks like an adaptive signalling system to me—and so is not really much of a “bias” at all.
People are unlikely to change existing adaptive behaviour just because someone points it out and says it is a form of “bias”. The more obvious thing to do is to conclude that they don’t know what they are talking about—or that they are trying to manipulate you.
Regarding your final paragraph: Is your take on the debate between Robin and Eli about “Foom” that all Robin was saying boils down to “la la la I can’t hear you”?
Good summary. Although I would have gone with “la la la la If you’re right then most of my expertise is irrelevant. Must protect assumptions of free competition. Respect my authority!”
What I found most persuasive about that debate was Robin’s arguments—and their complete lack of merit. The absence of evidence is evidence of absence when there is a motivated competent debater with an incentive to provide good arguments.
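A rough sketch of how that “absence of evidence” claim cashes out in likelihoods (my own numbers, purely illustrative): if a motivated, competent debater would probably have presented a good anti-foom argument were one available, then the failure to present one is itself a substantial update against such an argument existing.

```python
# P(a good anti-foom argument gets presented | one exists), for a motivated, competent debater
p_present_if_exists = 0.9    # illustrative assumption
p_present_if_absent = 0.0    # a good argument that doesn't exist can't be presented

# Likelihood ratio of observing *no* good argument, for "exists" vs. "doesn't exist":
likelihood_ratio = (1 - p_present_if_exists) / (1 - p_present_if_absent)
print(likelihood_ratio)      # 0.1 -> the silence shifts the odds tenfold against "exists"
```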
Regarding your final paragraph: Is your take on the debate between Robin and Eli about “Foom” that all Robin was saying boils down to “la la la I can’t hear you”?
I recall getting a distinct impression from Robin which I could caricature as “lalala you’re biased with hero-epic story.”
I also recall Eliezer asking for a probability breakdown, and I don’t think Robin provided it.
I recall getting a distinct impression from Robin which I could caricature as “lalala you’re biased with hero-epic story.”
… and closely related: “I’m an Impressive Economist. If you don’t just take my word for it you are arrogant.”
In what I took to be an insightful comment in the aftermath of the debate, Eliezer noted that he and Robin seemed to have a fundamental disagreement about what should be taken as good evidence. This led into posts about ‘outside view’, ‘superficial similarities’ and ‘reference class tennis’. (And it conceivably had something to do with priming the thoughts behind ‘status and stupidity’, although I would never presume that was primarily or significantly directed at Robin.)