Strictly speaking, I’m not actually sure the AI-box experiment falls under the AI domain. For that particular thing, it’s mostly that they’ve thought about it more than I have.
But in general I think you’re being a bit unfair to Eliezer Y. and probably MIRI as well. By objective standards, I’m not a domain expert in anything at all either. Despite this, I still fancy myself a domain expert within various narrow sub-fields of neuroscience and psychology, and I think people who know those sides of me would agree. If they don’t, well, I will be acquiring the objective signals of domain expertise in a few short years, and I’m quite certain that the process of earning those signals is not what produces the expertise itself.
Having read Eliezer’s writing, I’m quite convinced that he has sufficient self-awareness to know what he does and does not have expertise in. If he expresses high confidence in something, that carries a lot of weight for me, and if that something is in a field he knows much more about than I do, his opinion holds more weight than mine. I can trust him to be reasonable about assigning certainties.
I don’t think I’m blindly overvaluing his opinion either. As a token to show this isn’t faith, I’ll offer up an example where I’m leaning towards disagreement with E.Y. and most of Lesswrong even after taking their opinions into account: I currently still favor Causal Decision Theory (with a small modification I’ve made that makes it consistently win) over Timeless Decision Theory, despite this area being squarely in EY’s domain and out of mine.
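To make the kind of case we disagree about concrete, here is a minimal sketch of the expected payoffs in Newcomb’s problem, the standard example where the two decision theories pull apart. The predictor accuracy and dollar amounts are my own toy assumptions, not anything from EY’s writing:

```python
# Toy Newcomb's problem payoffs; the accuracy and box values are arbitrary
# numbers chosen only to make the arithmetic concrete.

ACCURACY = 0.99          # probability the predictor correctly guesses the choice
BIG_BOX = 1_000_000      # opaque box: filled only if one-boxing was predicted
SMALL_BOX = 1_000        # transparent box: always contains this amount

def expected_payoff(one_box: bool) -> float:
    """Expected winnings, conditioning on the predictor's accuracy."""
    p_big_box_filled = ACCURACY if one_box else 1 - ACCURACY
    payoff = p_big_box_filled * BIG_BOX
    return payoff if one_box else payoff + SMALL_BOX

print("one-box :", expected_payoff(True))    # ~990,000
print("two-box :", expected_payoff(False))   # ~11,000
```

Plain CDT two-boxes here on dominance grounds and predictably ends up with less; the modification I mentioned is my attempt to keep causal reasoning while still making the winning choice in cases like this.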
If they don’t, well, I will be acquiring the objective signals of domain expertise in a few short years, and I’m quite certain that the process of earning those signals is not what produces the expertise itself.
But an external observer has no way of assessing your expertise other than looking at objective signals. Objective signals don’t necessarily have to be degrees or PhDs. Relevant work experience or a record of peer-reviewed publications would also qualify.
Having read Eliezer’s writing, I’m quite convinced that he has sufficient self-awareness to know what he does and does not have expertise in. If he expresses high confidence in something, that carries a lot of weight for me
Have you read his quantum mechanics sequence? Or his writings on cryonics? Or even on morality and decision theory? His general approach is “This is the one obviously correct solution to the problem, and everybody who thinks otherwise is an idiot”, while in fact he often ignores or strawmans known opposing positions and counter-arguments.
and if that something is in a field he knows much more about than I do, his opinion holds more weight than mine.
Beware of possible circular reasoning: How do you know that EY knows much more than you in a given field? Because he is a domain expert. How do you know that EY is a domain expert? Because he knows much more than you in that field.
Timeless Decision Theory, despite this area being squarely in EY’s domain
It’s not. Timeless Decision Theory is not considered a significant development by anyone outside MIRI who studies decision theory professionally (mathematicians, economists, AI researchers, philosophers).
I did start reading the QM sequence, but then realized I wasn’t getting anywhere and stopped. I don’t think knowing QM is useful for philosophy or rationality, except as an example of how science works, so I’m not sure why there is a sequence on it. I figured that if I actually wanted to understand it I’d be better off working through physics textbooks. My impression is that the physics community thinks it is well written but somewhat misleading. I’m not sure which cryo-writings you are referring to: all the ones I have come across are opinion pieces about why one ought to do cryonics. I haven’t come across any pieces arguing from facts... but biology contains my own domain and I trust my own opinion more anyway. You are correct that none of those reasons are good reasons to respect Eliezer Y.
This discussion essentially seems to be “boo vs yay” for Eliezer Y. Let me explain why I really respect Eliezer Y:
What I did read is his work on logic and epistemology. It was the first time I’d read an author who happened to agree with me on almost all major points about logic, epistemology, and ontology. (Re: almost: we may or may not diverge ontologically on subjective experience / the hard problem of consciousness / what makes reality real; I’m not sure. I’m confident that I am not confused. He’s written some things that sound like he knows, and other things that sound like he’s making the classic mistakes, so I’m uncertain as to his actual views. Also, decision theory. But that’s it.) Granted, it’s not uncommon for other people on Lesswrong to be equally philosophically correct, but Eliezer Y. was the gathering point that brought all these correct people together. Some of them might even have become less philosophically wrong as a result of being here. That counts for something, in my book.
He expressed insights identical to ones I had in younger years, and often in more articulate terms than I could have managed. He compressed complex insights into snappy phrases that allow me much higher information density and much shorter inferential distance when communicating with other Lesswrongers. Creating a community where everyone understands phrases like “notice confusion” and “2-place word” saves entire paragraphs of communication. Having these concepts condensed into smaller verbal labels also helps with thinking. It doesn’t matter that many others have thought of these ideas before; the presentation is the impressive part.
When someone independently converges with me on many seemingly unrelated topics on which most people do not converge, I begin to trust their judgement. I begin to take their opinion as evidence that I would have the same opinion, were I presented with the same evidence that they have. When that same person introduces me to cool concepts I hadn’t considered and plays a key role in founding a community of people who have all independently converged on my philosophical insights, putting even greater weight on their opinions is the natural and correct reaction.
This really isn’t hero worship or an affective death spiral; I could write this paragraph about a lot of other people. It’s a measured reaction to seeing someone do impressive things firsthand. I could say many of the exact same things (philosophical convergence combined with showing me lots of cool new things) about many other people I know on the internet, many other people on the Lesswrong forums, and at least one person I know in real life. I’m just giving respect where it is due. If we broaden beyond philosophical convergence to other domains of life, there are multiple people IRL whom I respect in a similar way.
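To put a rough number on “taking their opinion as evidence”: here is a toy Bayesian sketch, where the track-record figures are invented purely for illustration.

```python
# Toy illustration with invented numbers: how a track record of convergence on
# checkable questions translates into trust in a claim I can't check myself.

def posterior(prior: float, p_assert_if_true: float, p_assert_if_false: float) -> float:
    """Bayes' rule: P(claim is true | they assert it)."""
    numerator = p_assert_if_true * prior
    return numerator / (numerator + p_assert_if_false * (1 - prior))

# Suppose that on questions I could verify, they asserted true things 90% of the
# time and false things only 20% of the time. A claim I start out 50/50 on then
# moves to roughly 82% once I hear them assert it.
print(posterior(prior=0.5, p_assert_if_true=0.9, p_assert_if_false=0.2))  # ≈0.818
```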
But an external observer has no way of assessing your expertise other than looking at objective signals. Objective signals don’t necessarily have to be degrees or PhDs. Relevant work experience or a record of peer-reviewed publications would also qualify.
In addition to those things, I also consider a history of conversations or a reading of someone’s writing as evidence. Granted, this might get me people who just sound verbally impressive...but I think I’ve got enough filters that a person has to sound impressive and be impressive in order to pass this way.
And by that metric, from the outside view, E.Y. (and quite a few other people on Lesswrong) has put out more such signals than I have.
This is the one obviously correct solution to the problem, and everybody who thinks otherwise is an idiot”, while in fact he often ignores or strawmans known opposing positions and counter-arguments
Yeah, the words can be rougher than optimal; I can see where you are coming from. I think that because smart people are accustomed to other people making mistakes more frequently than they do, a lot of smart people have the bad habit of being dismissive towards others. Occasionally, you might dismiss someone for being wrong when, this time, they are actually right and it is you who is wrong. It’s a bad habit not just because it is socially costly, but also because it can prevent you from changing your opinion when you are mistaken.
It’s an example of a situation where he did display overconfidence. His introductory presentation of QM is more or less correct, up to some technical details, but things start to fall apart when he moves to interpretations of QM.
Quantum mechanics is counterintuitive, and various epistemological interpretations of its fundamental concepts have been developed over the decades. The consensus among most physicists and philosophers of science is that none of them has proved to be clearly superior, and in fact it’s not even clear whether finding the correct interpretation of QM is a proper scientific question at all. Yudkowsky, on the other hand, claimed that by using Bayesian inference he settled the question, pretty much proving that the many-worlds interpretation is the only correct one. It should be noted that the many-worlds interpretation is indeed a plausible one and is quite popular among physicists, but most physicists wouldn’t consider it on par with a scientifically justified belief, while EY claimed that MWI is obviously true and everybody who disagrees doesn’t understand probability theory. Furthermore, he ignored or misrepresented the other interpretations, for instance conflating the Copenhagen interpretation with objective-collapse interpretations.
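To spell out, in Bayesian terms, why that claim is contested (this is my own gloss with toy numbers, not a quote from anyone): interpretations that make the same experimental predictions assign the same likelihood to every observation, so the data never move the odds between them, and whatever confidence remains has to come from the priors, which is exactly where the disagreement among physicists lives.

```python
# Toy sketch of Bayesian updating in odds form: the data act only through
# likelihood ratios.

def update_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Posterior odds after a sequence of observations."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr  # odds form of Bayes' theorem
    return odds

# Empirically equivalent interpretations predict every experiment equally well,
# so each likelihood ratio is 1 and no amount of data changes the odds...
print(update_odds(prior_odds=1.0, likelihood_ratios=[1.0, 1.0, 1.0]))  # 1.0

# ...and the posterior just echoes whatever prior (e.g. a simplicity argument)
# you started with.
print(update_odds(prior_odds=3.0, likelihood_ratios=[1.0, 1.0, 1.0]))  # 3.0
```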
There are other examples of EY’s overconfidence, though that is perhaps the most blatant one. Mind you, I’m not saying that the guy is an idiot and you should disregard everything that he wrote. But you should not automatically assume that his estimates of his own competence are well calibrated. By the way, this is a general phenomenon, known as the Dunning–Kruger effect: people with little competence in a given field tend to overestimate their own competence.
When someone independently converges with me on many seemingly unrelated topics on which most people do not converge, I begin to trust their judgement. I begin to take their opinion as evidence that I would have the same opinion, were I presented with the same evidence that they have. When that same person introduces me to cool concepts I hadn’t considered and plays a key role in founding a community of people who have all independently converged on my philosophical insights, putting even greater weight on their opinions is the natural and correct reaction.
It is a natural reaction, but in general it can be quite misleading. People naturally tend to exhibit in-group bias and deference to authority. When somebody you respect a lot says something and you are inclined to trust them even though you can’t properly evaluate the claim, you should know where this instinctive reaction comes from, and you should be wary.
In addition to those things, I also consider a history of conversations or a reading of someone’s writing as evidence.
The problem is that if you are not a domain expert in a field, it’s difficult to evaluate whether somebody else is a domain expert just by talking to them or reading their writing. You can recognize whether somebody is less competent than you are, but recognizing higher competence is much more difficult without independent objective signals.
Moreover, general intelligence, or even actual expertise in a given field, doesn’t automatically translate into expertise in another field. For instance, Isaac Newton was a genius and a domain expert in physics. This doesn’t mean that his theological arguments have any merit.