“It’s quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI.”
I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren’t crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it’s the fate of the entire planet instead of a few million dollars for personal use.
There is a strong fundamental streak in the subproblem of getting a clear conceptual understanding of FAI (what the whole real world looks like to an algorithm, which is important both for the decision-making algorithm and for communication of values), which I find closely related to a lot of fundamental stuff that both physicists and mathematicians have been trying to crack for a long time but haven’t yet. This suggests that the problem is not low-hanging fruit. My current hope is merely to articulate a connection between FAI and this stuff.
“I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren’t crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it’s the fate of the entire planet instead of a few million dollars for personal use.”
I don’t think you really understand this. As a small startup that was recently edged out by a large corporation in a narrow field of innovation, and having been in business for many years, I can tell you that the sort of thing you’re describing happens often.
As for your last statement, I am sorry, but you have not met that many intelligent people if you believe this. If you ever get out into the world you will find plenty of people who will make you feel like you’re dumb and who make EY’s intellect look infantile.
I might be more inclined to agree if EY would post some worked out TDT problems with the associated math. hint...hint...
Of course startups sometimes lose; they certainly aren’t invincible. But startups out-competing companies that are dozens or hundreds of times larger does happen with some regularity. Eg. Google in 1998.
“If you ever get out into the world you will find plenty of people who will make you feel like you’re dumb and who make EY’s intellect look infantile.”
(citation needed)
Ok, here are some people:
Nick Bostrom (http://www.nickbostrom.com/cv.pdf)
Stephen Wolfram (published his first particle physics paper at 16, I think, and invented one of the most successful math programs ever, if not the most successful, and in my opinion the best ever)
A couple of people, whose names I won’t mention since I doubt you’d know them, from Johns Hopkins Applied Physics Lab, where I did some work.
etc.
I say this because these people have numerous significant contributions to their fields of study. I mean real technical contributions that move the field forward, not just terminology and vague to-be-solved problems.
My analysis of EY is based on having worked in AI and knowing people in AI, none of whom talk about their importance in the field as much as EY does while having as few papers and breakthroughs as he has. If you want to claim you’re smart, you have to have accomplishments that back it up, right? Where are EY’s publications? Where is the math for his TDT? The world’s hardest math problem is unlikely to be solved by someone who needs to hire someone with more depth in the field of math. (Both statements can be referenced to EY.)
Sorry, this is harsh, but there it is.
I think you have confused “smart” with “accomplished”, or perhaps “possessed of a suitably impressive résumé”.
No, because I don’t believe in using IQ as a measure of intelligence (having taken an IQ test), and I think accomplishments are a better measure (quality over quantity, obviously). If you have a better measure, then fine.
What do you think “intelligence” is?
Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof), but that intelligence can sometimes exist in their absence; or do you claim something stronger?
Previously, Eliezer has said that intelligence is efficient optimization.
I have trouble meshing this definition with the concept of intelligent insanity.
The intelligently insane efficiently optimize things in ways they don’t want them optimized.
Eliezer invoked the notion of intelligent insanity in response to Aumann’s approach to the absent-minded driver problem. In this case, what was Aumann efficiently optimizing in spite of his own wishes?
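For context, here is a minimal sketch of the absent-minded driver problem under the standard payoffs (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing past both; these specific numbers are the usual textbook ones and are my assumption here, not something stated in the thread). It computes only the planning-stage optimum, the baseline against which Aumann’s action-stage analysis is usually discussed:

```python
import numpy as np

# Assumed (standard) payoffs for the absent-minded driver problem:
#   exit at the first intersection  -> 0
#   exit at the second intersection -> 4
#   continue past both              -> 1
EXIT_FIRST, EXIT_SECOND, CONTINUE_PAST = 0.0, 4.0, 1.0

def expected_payoff(p):
    """Expected payoff when the driver continues with probability p at every
    intersection (he cannot tell the two intersections apart)."""
    return ((1 - p) * EXIT_FIRST
            + p * (1 - p) * EXIT_SECOND
            + p * p * CONTINUE_PAST)

# Planning-optimal strategy: maximize E[payoff] = 4p - 3p^2 over p in [0, 1].
grid = np.linspace(0.0, 1.0, 100001)
p_star = grid[np.argmax(expected_payoff(grid))]
print(p_star, expected_payoff(p_star))  # ~0.6667 (p = 2/3), payoff ~1.333 (4/3)
```

The point of contention in Aumann, Hart, and Perry’s treatment is what the driver should conclude once he is actually sitting at an intersection; the sketch above deliberately stops at the planning stage.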
“Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof)”
Couldn’t have said it better myself. The only addition would be that IQ is an insufficient measure, although it can be useful when combined with accomplishment.
“I think accomplishments are a better measure (quality over quantity obviously)”
I once came third in a marathon. How smart am I? If I increase my mileage to the level that would be required for me to come first, would that make me smarter? Does the same apply when I’m trying to walk in 40 years?
ETA: I thought I cancelled this one. Never mind, I stand by my point. Achievement is the best predictor of future achievement, but it isn’t a particularly good measure of intelligence. Achievement says far more about what kind of things someone is inclined to achieve (and signal), and about how well they are able to motivate themselves, than it does about intelligence (see, for example, every second page here). Accomplishments are a better measure than IQ, but they are not a measure of intelligence at all.
I agree that both Bostrom and Wolfram are very smart, but this does not a convincing case make. Even someone at the 99.9999th percentile of intelligence will have about 6,800 people who are as smart as or smarter than they are.
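A quick sanity check on that 6,800 figure, assuming it is derived from a world population of roughly 6.8 billion (the population figure is my assumption, not stated in the comment):

```python
# Back-of-the-envelope check of the "6,800 people" figure.
world_population = 6.8e9              # assumed world population (circa 2010)
fraction_at_or_above = 1 - 0.999999   # share at or above the 99.9999th percentile
print(world_population * fraction_at_or_above)  # ~6800.0
```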