Since this is now kinda on-topic… I don’t think Eliezer Yudkowsky is considerably more intelligent than I am. I’m aware of the Dunning-Kruger effect, but the interesting part is that I simply can’t find any way to overcome it. I’m fairly intelligent, but since people around here regard my barely-Mensa (probably not even that) level of IQ as the minimum requirement to even read this blog, the situation I’m in is fairly interesting. I see repeated claims of super-intelligence, but all I can see is someone who has had a few more years to hone his skills and who has wasted fewer years on pointless things.
So, I’m kinda curious: what is it about Eliezer Yudkowsky that makes everyone look up to him? I see some fairly interesting all-around posts, but do you see something more? One possibility is that the reason you admire Yudkowsky and I don’t is partially that he Seems Deep (in the sense that he makes sense immediately while seeming novel), whereas I spent my earlier life discovering much the same stuff alone. It lacks novelty (to me), but adds details and strengths which I attribute to experience, and thus I judge Eliezer less “deep” and, consequently, less superhuman. This possibility gains some credibility from the fact that in my own little circles I have pretty much the same sort of reputation as Eliezer has here (and I think this is surprisingly much about personality, so I assume it’s surprisingly rare for people here to have that reputation), and before I knew of Eliezer, I had plans of becoming something much like what he is now.
I presented two possibilities: should I accept that I’m simply incapable of distinguishing a level so far above my own, or should I defy public opinion and regard Eliezer as “not that smart”, because it’s more about personality and “seeming deep” than about a real difference in mental machinery? However, a third option exists: I haven’t read the stuff that makes everyone admire Eliezer so.
I’m really interested in manifestations of intelligence, so this issue is of great importance to me. Especially if it is about Dunning-Kruger, I want to understand how to overcome that. Maybe it’s just that I pass over those “technical-seeming” parts that actually demonstrate amazing intellectual stunts, and I should make better mental notes every time I’m forced to skip some phrase without completely grasping its meaning.
On one hand, Eliezer writes extremely good explanations. I’m learning from his style a lot.
On the other hand, many people have pointed out that he doesn’t publish novel rigorous results, which kinda detracts from the aura.
On the third hand, he often finds and corrects non-obvious mathematical mistakes made by other people, including me, and he’s turned out right every time that I know of.
On the fourth hand, I’ve seen multiple cases where he made math mistakes of his own, and have discovered a couple of those myself. But that could be attributed to the fact that he publishes so much, and his error frequency is certainly many times lower than mine.
On the fifth hand, he has published novel non-rigorous arguments on real world topics that I don’t completely agree with but find pretty important. Biggest of them is the idea of Friendly AI.
The weighting coefficients you give to those considerations are, of course, up to you.
ETA: on an unrelated topic, would you like to write a post on Go? CronoDAS has just turned our attention to something interesting.
That’s a lot of hands.
“On one hand, Eliezer writes extremely good explanations. I’m learning from his style a lot.”
Yeah, but they are rather verbose; he tends to use five words when two would do.
“On the other hand, many people have pointed out that he doesn’t publish novel rigorous results, which kinda detracts from the aura.”
If you want to be in science this is a big issue, unless you’re trying to pull a Wolfram, and we all know how that turned out.
“On the third hand, he often finds and corrects non-obvious mathematical mistakes made by other people, including me, and he’s turned out right every time that I know of.”
But the math on this site, what little there is, tends to be toy problems and very simple. Let’s see him find and correct a mistake in some higher-order fluid mechanics equations. I would personally like to see him solve a non-trivial second-order non-linear partial differential equation.
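For instance, the incompressible Navier–Stokes momentum equation from fluid mechanics is a standard example of exactly this kind of object, where \( \mathbf{u} \) is the velocity field, \( p \) the pressure, \( \rho \) the density and \( \nu \) the kinematic viscosity:

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla \cdot \mathbf{u} = 0 .
\]

The convective term \( (\mathbf{u} \cdot \nabla)\,\mathbf{u} \) is what makes it non-linear, the viscous term \( \nu\,\nabla^{2}\mathbf{u} \) supplies the second-order derivatives, and general closed-form solutions are not known; outside a few special cases you have to attack it numerically.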
“On the fourth hand, I’ve seen multiple cases where he made math mistakes of his own, and have discovered a couple of those myself. But that could be attributed to the fact that he publishes so much, and his error frequency is certainly many times lower than mine.”
That’s horrifying. If you’re going to do science you have to control your error rate, and that is where peer review comes in. (I recently submitted a paper where I was sloppy in rounding some of my results, and I got slammed for it; science is all about precision and doing it right.) If you don’t do peer review, you may think your idea is good when, if you actually had someone else look at it, you’d see it was total trash.
“On the fifth hand, he has published novel non-rigorous arguments on real world topics that I don’t completely agree with but find pretty important. Biggest of them is the idea of Friendly AI.”
But for science and AI this is essentially meaningless, since if your goal is to make an FAI then math and rigor are necessary. The ability to write non-technical papers arguing for some idea that is technical is trivial. The challenge is getting the technical details right. This is where I would like to see Eliezer submit some of his work on decision theory and show that he is actually building a theory that is properly rigorous.
I think the worst thing would be if people here just wait for Eliezer and he shows up at the end of 10 years with an extremely long non-technical paper that gets us no closer to a real FAI.
But those are just my thoughts.
While awesome math ability is a great thing to have, it would only complement whatever skills Eliezer needs to succeed in his AI goals. If Eliezer finds that he lacks the math skills at a certain point to develop some new piece of mathematics, he can find a math collaborator that will be thrilled about having a novel problem to work on.
I’m also not concerned about error rate. You write that the challenge is “getting the technical details right”—this is simply not true. It’s the main, big, mostly correct ideas we need to progress in science, not meticulousness.
"(I recently submitted a paper where I was sloppy in rounding some of my results, and I got slammed for it; science is all about precision and doing it right.)"
Publication is all about precision and doing it right, and it should be. But don’t you feel like the science was done before the more careful rounding?
If that had been a novel rigorous result it would not have been wrong. It was just a bit of eyeballing mathematics, which I’ve done in any number of places.
Edited to amend.