Outside of politically charged issues (e.g. global warming), most people tend not to disagree with accomplished scientists on topics within the scientist's area of expertise and accomplishment, and treat the more accomplished person as a source of wisdom rather than as an opponent in a debate. It is furthermore my honest opinion that Wang is more intelligent than Luke; it is also the opinion that most reasonable people would share, and Luke must understand this.
It is not an accusation or an insult. It is the case, though, that the people in question (Luke, Eliezer) need to allow for the possibility that the people they are talking to are more intelligent than they are, something that is clearly more probable than not given the available evidence, and they seem not to.
It is clear that you just don't want to hear the opinion "more intelligent" unless it comes with qualifiers that allow you to disregard it immediately, and you are being obtuse.
It also does not present a valid inference. Ideally you would be right, but in practice people do not make the inferences they do not like.
I try not to assume narcissistic personality disorder. Most people have an IQ around 100 and are perfectly comfortable with the notion that an accomplished PhD is smarter than they are. Most smart people, likewise, are perfectly comfortable with the notion that someone significantly more accomplished is probably smarter than they are. Some people have NPD and operate on the assumption "I am the smartest person in the world", but they are a minority across the entire spectrum of intelligence. There are also cultural differences.
Are you even serious?
I fail to see how the suggestion that Wang is much smarter than Luke is an insult—unless Luke believes that there can’t be a person much smarter than him.
My point is that this bell curve shouldn't be a new argument; it should be the first step in your reasoning, and if it was not, you must have been reasoning in the other direction. You now seem to be doing the same with the original social-status argument.
I think I have sufficiently answered your question: I find Wang's writings and accomplishments to require significantly higher intelligence (at minimum) than Luke's, and I started with the normal distribution as the prior (as everyone should). In any game of wits with no massive disparity in training in Luke's favour, I would bet on Wang.
Ah, that's what you meant by the other remark. In that case, this isn't backing up the claimed prior proxies; it is a new argument.
New to you. Not new to me. Should not have been new to you either. Study and train to reduce communication overhead.
Anyone who has read what Luke has to say, or has interacted with Luke, can tell pretty strongly that Luke is on the right side of the bell curve.
Exercise for you: find the formula for the distribution of the IQ of someone whom you know to have IQ > x (i.e., find the variance and other properties).
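For concreteness, a minimal sketch of that exercise, assuming the standard IQ ~ N(100, 15^2) convention (the variable names here are mine):

```python
# IQ conditioned on IQ > x is a truncated normal. scipy parametrises the
# truncation bounds in standard-deviation units relative to loc.
from scipy.stats import truncnorm

mu, sigma = 100.0, 15.0
for x in (100.0, 130.0):                  # known lower bound: IQ > x
    a = (x - mu) / sigma                  # standardised lower bound
    dist = truncnorm(a, float("inf"), loc=mu, scale=sigma)
    print(x, round(dist.mean(), 1), round(dist.std(), 1))
# IQ > 100 -> mean ~112.0, sd ~9.0
# IQ > 130 -> mean ~135.6, sd ~5.1
```

Note that conditioning on a lower bound alone leaves the expected value only a few points above that bound, with a variance much smaller than the unconditional one.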
There are a lot of Chinese academics who come to the United States. So what do you mean by very difficult?
Those born higher up the social ladder don't understand that the climbing is hard below them too.
I apologise for my unawareness that you call China "second world". It is still the case that it is very difficult to move from China to the US.
Also, if we go back in time 20 years, so that Pei Wang would be about the same age Luke is now, do you think you’d have an accomplishment list for Pei Wang that was substantially longer than Luke’s current one? If so, how does that impact your claim?
If we go back 20 years, it is 1992, and by then Pei Wang had already been a lecturer in China and then moved to Indiana University. The length of an accomplishment list is a poor proxy; the difficulty of the accomplishments is what matters. As I explained in the edit, you shouldn't forget about the bell curve. Absence of evidence of intelligence is good evidence of absence, on the IQ > 100 side of the normal distribution.
There is very little data on Luke, and that is itself a proxy for Luke being less intelligent, dramatically so, since it is instrumental to Luke's goals to provide such data. As for "second world" versus "third world", that is irrelevant semantics.
edit: and as rather strong evidence that Luke is approximately as intelligent as the least intelligent version of Luke that could look the same to us, it suffices to cite the normal distribution of intelligence.
Accomplishments of all kinds, his position, the likelihood that Wang actually managed to move from an effectively lower class (third world) to an upper class (though I haven't yet looked up where he's from), etc.
What proxies do you think would indicate Luke is more intelligent? I can’t seem to think of any.
I'm unsure how you are getting "more intelligent".
I'm unsure how you are not. Every single proxy for intelligence indicates a fairly dramatic gap in favour of Wang. Of course, for politeness' sake we assume that they would be at least equally intelligent, and for phyg's sake that Luke would be more intelligent, but that is simply very, very, very unlikely.
When there is any use of domain-specific knowledge and expertise, without a zillion citations for elementary facts, you see "simple errors of reasoning" whereas everyone else sees "you are a clueless dilettante". Wang is a far more intelligent person than Luke; sorry, the world is unjust, and there is nothing Luke or Eliezer can do about their relatively low intelligence compared to people in the field. Lack of education on top of the lower intelligence doesn't help at all.
edit: I stand by it. I don't find either Eliezer or Luke to be particularly smart; smarter than the average blogger, for sure, but not geniuses. I, by the way, score very high on IQ tests. I can judge not just by accomplishments but simply because I can actually evaluate the difficulty of the work, and, well, they have never done anything that is too difficult for an IQ of 120, maybe 125. If there is one thing that makes LessWrong a cult, it is the high-confidence belief that the gurus are the smartest, or among the smartest, people on Earth.
If you clear away all the noise arising from the fact that this interaction constitutes a clash of tribal factions...
Pei seems to conflate the possibility...
I’m finding these dialogues worthwhile for (so far) lowering my respect for “mainstream” AI researchers...
and so on.
I think it'd be great if SIAI would not latch onto the most favourable and least informative interpretation of any disagreement, in precisely the way that e.g. any community around free-energy devices does. It'd also be great if Luke allowed for the possibility that Wang (and most of the other people who are more intelligent, better educated, and more experienced than Luke) is actually correct, and Luke is completely wrong (or not even wrong).
I think you must first consider the simpler possibility that SIAI actually has a very bad argument and isn't making any positive contribution to saving mankind from anything. Only when you have very good reasons to think this isn't so (high IQ test scores don't suffice), reasons well verified against all the biases, can you consider the possibility that it is a miscommunication.
Well, my prior for someone on the internet who is asking for money being a scam is no less than 99% (and I avoid Pascal's mugging by not taking strings from such sources as proper hypotheses), and I think that is a very common prior. So there had better be good evidence that it isn't a scam: a panel of accomplished scientists and engineers working to save the world, etc.; think something on the scale of the IPCC, rather than some weak evidence that it is a scam, and something even less convincing than e.g. Steorn's perpetual motion device.
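To make the 99% figure concrete, a quick back-of-the-envelope in odds form (the numbers are illustrative only):

```python
# Posterior odds = prior odds * likelihood ratio. With a 99% scam prior,
# the prior odds are 99:1 in favour of scam, so merely reaching 50/50
# already requires evidence 99 times more likely under "legitimate" than
# under "scam". Numbers are illustrative only.
prior_scam = 0.99
prior_odds = prior_scam / (1 - prior_scam)      # 99.0, scam : legit

likelihood_ratio = 99.0                         # P(E | legit) / P(E | scam)
posterior_odds = prior_odds / likelihood_ratio  # odds of scam after seeing E
posterior_scam = posterior_odds / (1 + posterior_odds)
print(posterior_scam)                           # 0.5
```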
Scamming works best through self-deceit, though, so even though you are almost certainly just a bunch of fraudsters, you still feel genuinely wronged and insulted by the suggestion that you are, because the first people you would have defrauded would have been yourselves. You'd also feel wronged that there is nothing you could have done to look better. There isn't; if your cause were genuine, it would have been started decades ago by more qualified people.
You can eliminate the evidence that you consider double-counted, for example grandiose self-worth and grandiose plans; note, though, that both need to be present, because grandiose self-worth without grandiose plans would just indicate some sort of miscommunication (and the self-worth metric is more subjective), and each alone is a much poorer indicator than the two combined.
In any case, accurate estimation of anything of this kind is very difficult. In general, one just adopts a strategy such that sociopaths would not have sufficient selfish payoff for cheating it; altruism is a far cheaper signal for non-selfish agents. In very simple terms: if you give someone $3 for donating $4 to a very well verified charity, those who value $4 in charity above $1 in pocket will accept the deal. You just ensure that there is no selfish gain in the transactions, and you're fine. If you don't adopt an anti-cheat strategy, you will be found and exploited with very high confidence: unlike in the iterated prisoner's dilemma, cheaters get to choose whom to play with, and get to make signals that induce easily cheatable agents to play with them. A bad strategy is far more likely to be exploited than any conservative estimate would suggest.
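A toy model of that $3-for-$4 screening deal (the utility weighting is my own simplification):

```python
# The agent donates $4 of their own money and receives $3 back, so the
# pocket cost is $1; anyone who weights a dollar given to charity at more
# than a quarter of a pocket dollar comes out ahead and accepts.
def accepts_deal(charity_weight: float, payment: float = 3.0,
                 donation: float = 4.0) -> bool:
    utility = (payment - donation) + charity_weight * donation
    return utility > 0

print(accepts_deal(1.0))   # True: altruist nets -$1 in pocket, +$4 to charity
print(accepts_deal(0.0))   # False: pure egoist just loses $1
```

The point of the design is that the selfish payoff is strictly negative, so a cheater gains nothing by pretending to be an altruist.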
I thought about it some more, and the relevant questions are: how do we guess what his abilities are? What is his aptitude at those abilities? Are there statistical methods we can use (e.g. SPRs)? What would the outcome be? How can we deduce his utility function?
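For what an SPR would look like here, a minimal sketch (the cue names and weights are entirely hypothetical):

```python
# A statistical prediction rule (SPR) is a fixed, pre-committed linear
# combination of observable cues, scored mechanically rather than by
# case-by-case clinical judgement. Cues and weights below are hypothetical.
WEIGHTS = {"publications": 0.4, "positions_held": 0.3, "work_difficulty": 0.3}

def spr_score(cues: dict) -> float:
    return sum(w * cues.get(name, 0.0) for name, w in WEIGHTS.items())

print(spr_score({"publications": 0.8, "positions_held": 0.7,
                 "work_difficulty": 0.6}))
```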
Normally, when one has e.g. high mathematical aptitude, or programming aptitude, or the like as a teenager, one still has to work on it and train (the brain undergoes significant synaptic pruning at around 20 years of age, limiting the opportunity to improve afterwards), and regardless of the final goal, intelligent people tend to have a lot of things to show from back when they were practising. I think most people see the absence of such things as a very strong indicator of lack of ability, especially as treating it that way provides an incentive to demonstrate the ability.
The signal being what exactly?