I think this cannot happen, due to physical limits that are very tight for human brains.
I was saying that e.g. the gap between Magnus Carlsen and the median human in chess ability is 10x − 1000x the gap between the median human and a dumb human.
I think this is just straightforwardly true and applies to many other fields.
Basically, differences of multiple orders of magnitude can’t happen; at best a difference of 1 OOM can.
And in practice, I think human intelligence has significantly narrower bands than this, to the point where I think 2x differences are crazily high, and anything beyond that is beyond the human distribution.
This is because intelligence is well approximated by a normal distribution, and with the population we have, a normal distribution puts its maximum at about 6.4 standard deviations above the mean, which is essentially a little over 2x.
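For what it’s worth, the “6.4 standard deviations” figure can be roughly sanity-checked in a few lines of Python (a sketch, not from the original comments: it approximates the top of the population by the z-value whose upper-tail probability is 1/N, which comes out around 6.3):

```python
from statistics import NormalDist

# Rough sketch: with N people, the top of a normal distribution sits at
# roughly the z-score whose upper-tail probability is 1/N.
N = 8_000_000_000  # approximate current human population
z_top = NormalDist().inv_cdf(1 - 1 / N)
print(f"top-of-population z-score ~ {z_top:.2f}")  # ~6.3
```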
Thus I think Eliezer got this point much more right than most of his other points.
I think this just does not make sense as an inference from the post you cited/stand as a chain of reasoning in itself and it isn’t well supported empirically.
Magnus Carlsen is not 2x as good at chess as a median human.
The normal distribution of IQ is an induced one. The raw scores do not necessarily conform to the particular normal distribution we’re familiar with. 200 IQ does not necessarily correspond to 2x g factor.
How are you measuring the gap between chess players?
Here is one way: say that A is one unit better than B if A beats B about 2⁄3 of the time. (“One unit” is also called “100 Elo rating points”.) Now we can measure the difference between any two players by considering chains of one-unit-better players joining them, or by doing statistical magic to get a unified rating scale for everyone.
In the rating system used by FIDE (the international chess federation), Magnus Carlsen’s rating is about 2850. I think a player making literally random moves would have a rating somewhere around zero. (This isn’t because the zero point is deliberately set there or anything, it’s just coincidence, and I might well be out by a couple of hundred points.) Beginners who have played only a few games and play hilariously badly might be rated somewhere around 200. The median human … doesn’t actually play chess. The median actually-chess-playing human (who probably has more aptitude for the game than the median human, since people prefer to play games they might be good at) is maybe somewhere around 1000 or so.
So, very crudely: dumb human = 0; median human = 1000; best human = 3000. The median-Magnus gap is larger than the dumb-median gap, but it’s not anywhere near 10x larger.
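These crude numbers translate into head-to-head expectations under the standard logistic Elo model; here is a minimal sketch (the function and the 0/1000/2850 figures are just the rough ones above, not anything from the comments):

```python
# Standard logistic Elo expected-score formula, plugged with the rough
# ratings above: dumb human ~ 0, median chess player ~ 1000, Carlsen ~ 2850.
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of A vs B, where win = 1, draw = 0.5, loss = 0."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

print(f"median vs dumb:    {expected_score(1000, 0):.4f}")     # ~0.9968
print(f"Carlsen vs median: {expected_score(2850, 1000):.6f}")  # ~0.999976
```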
On the other hand, you might choose some very different thing to measure. For instance: number of hours at current skill level to think up a chess opening move that’s in a position already reached at least 10 times in strong players’ games, that has never been played before, and that is at least as good as all the other moves that have been played. I’m sure Magnus is at least 100x better than the median chess player by this metric. In general, “ability to do things only the best can do” will always look much, much better at the top than anywhere else. If you measure a theoretical physicist by the rate at which they can make Nobel-worthy discoveries then the very best ones are probably 100x better than the median professional theoretical physicist, who in turn is well over 1000x better than the median human. On the other hand, it’s probably also true that the median human is well over 1000x better than the stupidest human by this measure; it’s not at all obvious what the (best:median) / (median:worst) ratio is like.
Any question like “by what factor are the best better than the average?” is liable to be very sensitive to exactly what you’re measuring and how you quantify it.
Uh, I wasn’t using 0 as the set point for dumb human. And I’m not sure “median chess player” is necessarily a good set point for the ability of the median human. I think the median chess player is significantly better at chess than the median human.
It feels like a distortion/cheating to set dumb human at random play and median human at “median chess player”. I feel like you should set dumb human at dumb chess player or median human at an actual median human for consistency.
But on second thought, this doesn’t actually matter for my argument.
As for how to measure gaps, I think the negative of the logarithm of the probability of winning is good?
(The intuition is that improvements in accuracy from 90%→99% and from 99%→99.9% represent the same linear increase in ability.)
A difference of 10 bits would thus correspond to a 1024x larger gap (smaller numbers are better, but here we’re only looking at the gaps, not the raw differences in ability).
That was the intuition driving the Magnus Carlsen statement; that linear differences in ELO represent exponential gaps in ability.
Using your numbers of 0, 1000, and 3000:
There are 10 units between dumb human and median human and 20 units between median human and best human. But that 10 unit difference is not a linear gap but an exponential one, and represents several orders of magnitude.
More concretely, every 400 points of ELO correspond to a 10x increase in expected score.
So there is (very) roughly:
A 10^2.5x difference in expected score between median human and dumb human
A 10^5x difference in expected score between median human and peak human.
10^5 / 10^2.5 = 10^(5−2.5) = 10^2.5x larger gap, so a 2.5 orders of magnitude larger gap.
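The arithmetic here can be sketched in a few lines, treating a rating gap d as an expected-score ratio of 10^(d/400) (the helper name is mine; the 0/1000/3000 figures are the rough ones from earlier in the thread):

```python
# Treat a rating gap d as an expected-score ratio of 10**(d/400),
# then compare the two gaps.
def score_ratio(elo_gap: float) -> float:
    return 10 ** (elo_gap / 400)

median_vs_dumb = score_ratio(1000 - 0)       # 10**2.5  ~ 316x
magnus_vs_median = score_ratio(3000 - 1000)  # 10**5    = 100000x
print(magnus_vs_median / median_vs_dumb)     # ~316, i.e. ~2.5 OOM larger
```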
I wasn’t claiming that you were “using 0 as the set point for dumb human”; I’m not sure where you get that from. All I said is that with typical chess Elo ratings, a random player (which is what I’m using to stand in for “really stupid human”) happens to end up not too far from a rating of 0.
One difficulty here is that, as I mentioned, the median human being doesn’t actually play chess, so the interpretation of terms like “median human” and “dumb human” isn’t clear.
If we look at the whole human population and actual chess-playing ability then the worst and the median are both at the “don’t actually know the rules” level. I don’t even know how you assess games involving such players. It might in that case be true that (best:median)/(median:worst) is infinite but I think it’s clear that this isn’t actually telling us anything meaningful about chess ability.
If we look at the whole human population and something like “chess-playing ability if you tell them the rules but otherwise don’t intervene at all” then again the differences between people who previously didn’t know the rules will be very small, and again I don’t think it will tell us anything interesting.
If we look at the whole human population and something like “chess-playing ability if the ones who previously didn’t know the rules or hardly ever played spent a month playing casually in their spare time before being assessed” then my guess is that we’d have worst still close to a rating of 0 on typical Elo scales, median maybe somewhere around 800?, and best still Magnus at 2850.
If we look at just the population of people who ever actually play chess (and go back to no interventions) then I guess we get worst still not much over 0 (maybe 100 or so?), median somewhere around 1000 or so, and best still Magnus at 2850.
Etc.
The Elo rating system is basically log odds rather than log probabilities, but it’s close enough to what you have in mind. (And I claim log odds is clearly better than log probability, and I think odds rather than probability is actually what you meant since e.g. the logarithms of 99% and 99.9% are not much different.)
Your meaning isn’t 100% clear to me, since first of all you say negative log [odds] “is good” as a way to measure gaps—which is equivalent to measuring Elo rating differences, which is what I did—but then afterwards you say: no, but actually we should look at the thing that’s the logarithm of: “linear differences in ELO represent exponential gaps in ability”. So you don’t think log odds is a good measure of gap after all?
Anyway, if you measure gap by winning-odds-ratio then I’ll agree that the Carlsen-median gap is many times bigger than the median-worst gap. But:
1. If you measure that way then gaps are multiplicative rather than additive, which means that e.g. diagrams like the intelligence scales you sketch in the OP are misleading. And, I claim, ratios of gap sizes are likewise misleading.
2. The original point of your remarks about Carlsen being so dramatically better than average chess players was in support of the thesis that there isn’t much more “room at the top” for super-skilled computers, and if you measure that way then there’s quite a lot of room.
Expanding on 1: suppose A < B < C and “the B-C gap is 100x bigger than the A-B gap”. Does this mean that the A-C gap is barely bigger than the B-C gap? Yes for e.g. ELO rating differences: if the ratings are 1000, 1010, 2000 then C will beat A and B at about the same rate. But no for odds ratio. Suppose the B:A odds ratio is 2 and the C:B odds ratio is 200. In the Elo model these are multiplicative, so the C:A odds ratio is 400. And 400 is not barely bigger than 200.
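The multiplicative behaviour described here is easy to check numerically (a sketch assuming the usual 400-points-per-10x Elo convention; the helper names are mine): converting each odds ratio to an Elo-style gap, adding the gaps, and converting back gives C:A odds of 400, not “barely more than 200”.

```python
import math

# Elo-style gaps add, while the corresponding odds ratios multiply.
# With B:A odds of 2 and C:B odds of 200, the model gives C:A odds of 400.
def gap_to_odds(elo_gap: float) -> float:
    return 10 ** (elo_gap / 400)

def odds_to_gap(odds: float) -> float:
    return 400 * math.log10(odds)

ab_gap = odds_to_gap(2)    # ~120 Elo points
bc_gap = odds_to_gap(200)  # ~920 Elo points
ac_odds = gap_to_odds(ab_gap + bc_gap)
print(round(ac_odds))      # 400
```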
Expanding on 2: so far as I can see no one’s made a really serious effort at measuring current top chess engine strength on a scale compatible with, say, FIDE’s, but according to someone at https://chess.stackexchange.com/questions/40485/what-is-stockfish-15s-fide-calibrated-elo-rating TCEC ratings shouldn’t be crazily wrong and they put Stockfish 15 at about 3620, versus Carlsen at about 2850. That’s a 770ish-point rating difference, which converted at the “400 points = 10x score increase” rate says that today’s superhuman robots are roughly 80x better than the best human.
I don’t see any perspective from which this looks much like a scale with “village idiot” at one end, “Einstein” at the other, and no substantial room for better-than-Einstein scores.
The point about using 0 Elo as a stand-in for dumb human doesn’t seem to be germane to our actual disagreements, so I won’t address it.
The Elo rating system is basically log odds rather than log probabilities, but it’s close enough to what you have in mind. (And I claim log odds is clearly better than log probability, and I think odds rather than probability is actually what you meant since e.g. the logarithms of 99% and 99.9% are not much different.)
Your meaning isn’t 100% clear to me, since first of all you say negative log [odds] “is good” as a way to measure gaps—which is equivalent to measuring Elo rating differences, which is what I did—but then afterwards you say: no, but actually we should look at the thing that’s the logarithm of: “linear differences in ELO represent exponential gaps in ability”. So you don’t think log odds is a good measure of gap after all?
I think it is a good measure of the gap, but the scale is not a linear scale but a logarithmic scale. Hence a linear difference on that scale represents an exponential difference in the underlying quantity.
1. If you measure that way then gaps are multiplicative rather than additive, which means that e.g. diagrams like the intelligence scales you sketch in the OP are misleading. And, I claim, ratios of gap sizes are likewise misleading.
I’m not sure I understand this point well.
Expanding on 2: so far as I can see no one’s made a really serious effort at measuring current top chess engine strength on a scale compatible with, say, FIDE’s, but according to someone at https://chess.stackexchange.com/questions/40485/what-is-stockfish-15s-fide-calibrated-elo-rating TCEC ratings shouldn’t be crazily wrong and they put Stockfish 15 at about 3620, versus Carlsen at about 2850. That’s a 770ish-point rating difference, which converted at the “400 points = 10x score increase” rate says that today’s superhuman robots are roughly 80x better than the best human.
But that ~770-point difference is much smaller than the 1800+ point difference between Carlsen and the median chess player (using 1000 as the baseline for median). That was the argument I made: that Carlsen was closer to optimal play[1] than to median, not that optimal play was not much better than Carlsen.
I don’t see any perspective from which this looks much like a scale with “village idiot” at one end, “Einstein” at the other, and no substantial room for better-than-Einstein scores.
This wasn’t a point I was trying to make in the post, nor is it a point I actually believe. I do believe there is a lot of room above humans, I just think the human spectrum is itself very large.
I should caveat this that by “optimal play”, I meant something more like “optimal play given bounded resources”, not necessarily optimal play with unbounded computational resources.
The point about using 0 Elo as a stand-in for dumb human doesn’t seem to be germane to our actual disagreements, so I won’t address it.
Fine with me, but I just want to note again that I am not using 0 Elo as a “stand-in” nor saying you were doing that, I’m estimating roughly where a dumb human happens to end up on the scale, and for the case of FIDE chess ratings I think it turns out to be near zero. To be clear, I’m not trying to reopen any discussion about whether that’s right (since I agree it isn’t particularly important for our actual disagreement), it’s just that it seems like you’re being quite insistent about describing something I did in a way that doesn’t match what I think I was actually doing, and I would prefer the last thing about it in the discussion not to misrepresent my intentions.
(I notice that that sounds more annoyed than I actually am. Not annoyed, just wanting to avoid future misunderstandings.)
I’m not sure there exactly is an “underlying quantity” here. Differences in rating, hence odds ratios in results, are fairly well defined (though, note, it’s not like there’s any sort of necessary principle along the lines of “if when A plays lots of games against B their odds are a:b, and if when B plays lots of games against C their odds are b:c, then when A plays lots of games against C their odds are a:c”, which the Elo scale and the usual ways of updating Elo ratings in the light of results are effectively assuming IIUC). But I don’t think there’s an absolute thing that the differences are sensibly regarded as differences in or ratios of.
I guess you could try to pick some “canonical” single player—a truly random player, or a literally perfect player—and look at odds ratios there. But I think the assumption I mentioned in the previous paragraph really does break down in that case.
I’m not sure I understand this point well.
I expanded on it a couple of paragraphs below. If that still didn’t clarify, can you say a bit about what doesn’t make sense?
That ~770 point difference is much smaller than the 1800+ point difference between Carlsen and the median chess player [...] the argument that I made [is] that Carlsen was closer to optimal play than to median, not that optimal play was not much better than Carlsen.
Hmm, maybe I misunderstood something? You wrote
It seems that for many cognitive tasks, the median practitioner is often much closer to beginner/completely unskilled/random noise than they are to the best in the world [… footnote:] My intuitive sense is that [...] the gap between Magnus Carlsen and the median human in chess ability is 10x − 1000x the gap between the median human and a dumb human.
It’s the “10x-1000x” I’m disputing, not any version of “much closer” that’s compatible with the “right” numbers being on the order of 0 / 1000 / 3000.
As for any relationship to optimal play, I was getting that from assuming that what you said about chess was intended as support for drawing a scale with “idiot” on the left and “Einstein” on the right rather than one with “mouse” on the left and “vast superhuman intelligence” on the right. (My own feeling is that what we actually want is more likely a scale with all four of those points on it, and no pair super-close together. Idiots really are much smarter than mice; Einstein really is much smarter than an idiot; God really is much smarter than Einstein; and for many purposes none of those gaps is so large or so small as to make the others negligible. For difficult enough tasks, idiots and mice may be indistinguishable, or even idiots and mice and average people, but those are also the tasks for which I expect hypothetical superintelligences to have big advantages over the smartest humans.)
So I took you to be suggesting that the median-Carlsen gap is large enough that at least one of these pictures is not misleading: ||---------| (vertical bars are idiot, median, Carlsen) or |------------|| (vertical bars are median, Carlsen, superhuman machine). And I don’t agree with either; I think the idiot-median-Carlsen-machine picture is more like |-----|----------|-----| except that I don’t know how much room there is for that last gap to grow as the machines continue to improve.
And my reason for thinking this is that (1) if you use something like log odds (roughly equivalent to Elo ratings) to measure the gap sizes, then that’s what happens, and (2) if you use the odds themselves[1] then indeed the larger gaps become larger by huge factors, but in that case the |-----|--------| pictures where you put gaps next to one another and compare their sizes are completely misleading, because the correct way to combine two gaps is not to put them side by side (which amounts to adding up their sizes), and (3) these large-factor gaps don’t seem to me to be reason to prefer your “just idiot...Einstein” scale to the sort that Yudkowsky likes to draw, because the point Yudkowsky is trying to make by drawing them is about what happens to the right of Einstein, and there is good reason to think that there’s plenty—in particular, there’s a large-odds-ratio gap—to the right of Carlsen.
It’s the “10x-1000x” I’m disputing, not any version of “much closer” that’s compatible with the “right” numbers being on the order of 0 / 1000 / 3000.
I think that’s basically correct. Magnus Carlsen’s expected score vs a median human is 100s of times greater than a median human’s expected score vs a dumb human (as inferred from their Elo ratings; I sketched a rough calculation at the end of this post).
As for the remainder of your reply, the point of Yudkowsky’s I was contending with is the claim that Einstein is very close to an idiot in absolute terms (especially compared to the difference between an idiot and a chimpanzee).
I wasn’t touching on how superintelligences compare to Einstein.
Magnus Carlsen’s expected score vs a median human is 100s of times greater than a median human’s expected score vs a dumb human
since to a good approximation Carlsen gets 100 wins, 0 draws, 0 losses against a median human for a total score of 100, and a median human gets at least an expected score of 50 against a dumb human.
I do not dispute that there are ways of doing the accounting that make the Carlsen-median gap 100x (or 1000x or whatever) bigger than the median-dumbest gap. My claim is that for most purposes those ways of doing the accounting are worse.
I can’t tell whether you think my reasons for thinking that are too stupid to deserve a response, or think they miss the point in some fundamental way, or don’t understand them, or just aren’t very interested in discussing them. But that’s where the actual disagreement lies.
As for mouse/chimp/idiot/Einstein, my general model of these things is that for most mental tasks there’s a minimum level of brainpower needed to do them at all, which for things we think of as interesting mental tasks generally lies somewhere between “idiot” and “Einstein” (because if even idiots could do them easily we wouldn’t think of them as interesting mental tasks, and if even Einsteins couldn’t do them we mostly wouldn’t think of them at all), and sometimes but maybe not always a maximum level of brainpower needed to do them about as well as possible, which might or might not also be somewhere between “idiot” and “Einstein”, and then the biggest delta is the one between not being able to Do The Thing and being able to do it, and after that any given increment matters a lot until you get to the maximum, and after that nothing matters much.[1] So when we pay attention to some specific task we think of as a difficult thing humans can do, we should expect to find “mouse”, “chimp”, “idiot”, and some further portion of the human population all clustered together and “Einstein” some way away. But there are also tasks that pretty much all humans can do, some of which distinguish (e.g.) mice from chimps, and I think it’s fair to say that there is a real sense in which humans are closer to chimps than chimps are to mice even though for human-ish mental tasks there’s no difference to speak of between mice and chimps; and for some tasks there is probably huge scope for doing better than the best humans. (Some of those tasks may be ones it has never occurred to us to try because they are beyond our conception.) I think the question of how much room there is above “Einstein” on the scale is highly relevant if you are asking how close “idiot” and “Einstein” are.
[1] Of course this is a simplification; minima and maxima for this sort of thing are usually “soft” rather than “hard”, and most interesting tasks actually involve a variety of skills whose minima and maxima won’t all be in the exact same place, and brainpower isn’t really one-dimensional, etc., etc., etc. I assume you appreciate all these things as well as I do :-).
Magnus Carlsen’s expected score vs a median human is 100s of times greater than a median human’s expected score vs a dumb human
since to a good approximation Carlsen gets 100 wins, 0 draws, 0 losses against a median human for a total score of 100, and a median human gets at least an expected score of 50 against a dumb human.
“It then follows that for each 400 rating points of advantage over the opponent, the expected score is magnified ten times in comparison to the opponent’s expected score.”
Median-dumb Elo difference is 1,000 points: 10^2.5x difference in expected score
Magnus-median Elo difference is 1,850 points: 10^4.625x difference in expected score
Magnus-median gap is >100x the median-dumb gap:
10^4.625 / 10^2.5 = 10^(4.625−2.5) = 10^2.125x ≈ 133x
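The same arithmetic as a short Python sketch (just reproducing the exponents above; the variable names are mine):

```python
# Expected-score ratios from Elo gaps, per "400 points = 10x".
median_dumb = 10 ** (1000 / 400)    # 10**2.5   ~ 316x
magnus_median = 10 ** (1850 / 400)  # 10**4.625 ~ 42170x
print(magnus_median / median_dumb)  # 10**2.125 ~ 133x
```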
I do not dispute that there are ways of doing the accounting that make the Carlsen-median gap 100x (or 1000x or whatever) bigger than the median-dumbest gap. My claim is that for most purposes those ways of doing the accounting are worse.
I can’t tell whether you think my reasons for thinking that are too stupid to deserve a response, or think they miss the point in some fundamental way, or don’t understand them, or just aren’t very interested in discussing them. But that’s where the actual disagreement lies.
You’ve not addressed what you think is wrong with the above calculation, so I’m just confused. I think it’s basically a canonical quantification of chess ability?
I think the question of how much room there is above “Einstein” on the scale is highly relevant if you are asking how close “idiot” and “Einstein” are.
We could also just ask whether idiot was closer to chimpanzee than to Einstein. I’m mostly interested in how long it takes AI to cross the human cognitive frontier, not whether strongly superhuman AI is possible (I think it is).
My issue with the calculation isn’t with the calculation. It is indeed correct (with the usual assumptions, which are probably somewhat wrong but it doesn’t much matter) that if Magnus plays many games against a median chessplayer then he will probably get something like 10^4.6 times as many points as they do, and that if a median chessplayer plays a maximally-dumb one then they will probably get something like 10^2.5 times as many points as the maximally-dumb one, and that the ratio between those two ratios is on the order of 100x. I don’t object to any of that, and never have.
I feel, rather, that that isn’t a very meaningful calculation to be doing, if what you want to do is to ask “how much better is Carlsen than median, than median is better than dumbest?”.
More specifically, my objections are as follows. (They overlap somewhat.)
0. The odds-ratio figure you are using is by no means the canonical way to quantify chess ability. Consider: the very first thing you said on the topic was “As for how to measure gaps, I think the negative of the logarithm of the [odds] is good”. I agree with that: I think log-odds is better for most purposes.
1. For multiplicative things like these odds ratios, I think it is generally misleading to say “gap 1 is 10x as big as gap 2” when you mean that the odds ratio for gap 1 is 10x bigger. I think that e.g. “gap 1 is twice as big as gap 2” should mean that gap 1 is like one instance of gap 2 and then another instance of gap 2, which for odds ratios means that the odds ratio for gap 1 is the square of the odds ratio for gap 2. By that way of thinking, the median-Magnus gap is less than twice the size of the dumbest-median gap. Your terminology requires you to say that the median-Magnus gap simultaneously (a) is hundreds of times bigger than the dumbest-median gap and (b) is not large enough to fit two copies of the dumbest-median gap into (i.e., find someone X as much better than median as median is better than the dumbest; and then find someone Y as much better than X as median is better than the dumbest; if you do that, Y will be better than Magnus).
2. If you are going to draw diagrams like the Yudkowsky scale, which implicitly compare “gap sizes” by placing gaps next to one another, then you had better be using a measure of difference that behaves additively rather than multiplicatively. Because that’s the only way for the relative distances of A,B,C along the scale to convey accurately the relationship between the A-B, B-C, and A-C gaps. (You could of course make a scale where position is proportional to, say, “odds ratio against median player”. That will make the dumbest-median gap very small and the median-Magnus gap very large. But it will also make an “odds ratio 10:1” gap vary hugely in size depending on where on the scale it is, which I don’t think is what you want to do.)
#0. Regarding the log of the odds ratios, I want to clarify that I never meant it as a linear scale. I was working with the intuition that linear gaps in logarithmic scales are exponential.
#1. I get what you’re saying, but I think this objection would apply to any logarithmic scale; do you endorse that conclusion/generalisation of your objection?
If the gap between two points on a logarithmic scale is d, and that represents a change of D in the underlying quantity, a gap of 2d would represent a change of D^2 in the underlying quantity.
Talking about change may help elide the issues from different intuitions about what gaps should mean.
My claim above was that the underlying quantity was (a linear measure of) “chess ability”, and the ELO scale had that kind of logarithmic relationship to it.
#2. I was implicitly making the transformation above, where I converted a logarithmic scale into a linear/additive scale.
I agree that it doesn’t make sense to use nonlinear scales when talking about gaps. I also agree that Elo score is one such nonlinear scale.
My claim about the size of the gap was after converting the nonlinear ELO rating to the ~linear “expected score”. Hence I spoke about gaps in expected score.
I think the crux is this:
What do you think is the best/most sensible linear measure of chess ability?
(By linear measure, I mean that a difference of kx is k times as big as a difference of x.)
I am not sure exactly what you’re asking me whether I endorse, but I do indeed think that for “multiplicative” things that you might choose to measure on a log scale, “twice as big a gap” should generally mean 2x on the log scale or squaring on the ratio scale.
If you think it doesn’t make sense to use nonlinear scales when talking about gaps, and think Elo rating is nonlinear while exp(Elo rating) is linear, then you are not agreeing but radically disagreeing with me. I think Elo rating differences are a pretty good way of measuring gaps in chess ability, and I think exp(Elo rating) is much worse.
I think Elo rating is nearer to being a linear measure of chess ability than odds ratio, to whatever extent that statement makes sense. I think that if you spend a while doing puzzles every day and your rating goes up by 50 points (~1.33x improvement in odds ratio), and then you spend a while learning openings and your rating goes up by another 50 points, then it’s more accurate to say that doing both those things brought twice the improvement that doing just one did (i.e., 100 points versus 50 points) than to say it brought 1.33x the improvement that doing just one did (i.e., 1.78x odds versus 1.33x odds). I think that if you’re improving faster and it’s 200 points each time (~3x odds) then it doesn’t suddenly become appropriate to say that doing both things brought 3x the improvement of doing one of them. I think that if you’re enough better than me that you get 10x more points than I do when we play, and if Joe Blow is enough better than you that he gets 10x more points than you do when we play, then the gap between Joe and me is twice as big as the gap between you and me or the gap between Joe and you, because the big gap can be thought of as made up of two identical smaller gaps, and not 10x as big.
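The specific figures in this last paragraph check out under the standard Elo convention (a quick sketch; the helper function is mine): a 50-point gain is ~1.33x in odds, two of them (100 points) ~1.78x, and two successive 10x-odds gaps (400 points each) compose into one 100x-odds gap (800 points).

```python
# Odds ratio implied by an Elo gap, per "400 points = 10x".
def odds(elo_gap: float) -> float:
    return 10 ** (elo_gap / 400)

print(round(odds(50), 2))     # 1.33
print(round(odds(100), 2))    # 1.78
print(odds(400) * odds(400))  # 100.0, same as odds(800)
```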
I was saying that e.g. the gap between Magnus Carlsen and the median human in chess ability is 10x − 1000x the gap between the median human and a dumb human.
I think this is just straightforwardly true and applies to many other fields.
I think this just does not make sense as an inference from the post you cited/stand as a chain of reasoning in itself and it isn’t well supported empirically.
Magnus Carlsen is not 2x as good in Chess as a median human.
The normal distribution of IQ is an induced one. The raw scores do not necessarily conform to the particular normal distribution we’re familiar with. 200 IQ does not necessarily correspond to 2x g factor.
How are you measuring the gap between chess players?
Here is one way: say that A is one unit better than B if A beats B about 2⁄3 of the time. (“One unit” is also called “100 Elo rating points”.) Now we can measure the difference between any two players by considering chains of one-unit-better players joining them, or by doing statistical magic to get a unified rating scale for everyone.
In the rating system used by FIDE (the international chess federation), Magnus Carlsen’s rating is about 2850. I think a player making literally random moves would have a rating somewhere around zero. (This isn’t because the zero point is deliberately set there or anything, it’s just coincidence, and I might well be out by a couple of hundred points.) Beginners who have played only a few games and play hilariously badly might be rated somewhere around 200. The median human … doesn’t actually play chess. The median actually-chess-playing human (who probably has more aptitude for the game than the median human, since people prefer to play games they might be good at) is maybe somewhere around 1000 or so.
So, very crudely: dumb human = 0; median human = 1000; best human = 3000. The median-Magnus gap is larger than the dumb-median gap, but it’s not anywhere near 10x larger.
On the other hand, you might choose some very different thing to measure. For instance: number of hours at current skill level to think up a chess opening move that’s in a position already reached at least 10x in strong players’ games, that has never been played before, and that is at least as good as all the other moves that have been played. I’m sure Magnus is at least 100x better than the median chess player by this metric. In general, “ability to do things only the best can do” will always be much much better at the top than everywhere else. If you measure a theoretical physicist by the rate at which they can make Nobel-worthy discoveries then the very best ones are probably 100x better than the median professional theoretical physicist, who in turn is well over 1000x better than the median human. On the other hand, it’s probably also true that the median human is well over 1000x better than the stupidest human by this measure; it’s not at all obvious what the (best:median) / (median:worst) ratio is like.
Any question like “by what factor are the best better than the average?” is liable to be very sensitive to exactly what you’re measuring and how you quantify it.
Uh, I wasn’t using 0 as the set point for dumb human. And I’m not sure “median chess player” is necessarily a good set point for the ability of the median human. I think the median chess player is significantly better at chess than the median human.
It feels like a distortion/cheating to set dumb human at random play and median human at “median chess player”. I feel like you should set dumb human at dumb chess player or median human at an actual median human for consistency.
But on second thought, this doesn’t actually matter for my argument.
As for how to measure gaps, I think the negative of the logarithm of the probability of winning is good? (The intuition is that improvements in accuracy from 90%→99% and from 99%→99.9% represent the same linear increase in ability.)
A difference of 10 bits would thus correspond to a 1024x larger gap (smaller numbers are better, but here we’re only looking at the gaps, not the raw differences in ability).
That was the intuition driving the Magnus Carlsen statement; that linear differences in ELO represent exponential gaps in ability.
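That bits-to-Elo intuition can be spelled out; a small sketch, assuming the standard Elo model in which the odds ratio is 10^(rating difference / 400):

```python
import math

# 10 bits of (log) odds is a 1024x odds ratio.
bits = 10
print(2 ** bits)  # → 1024

# Under the Elo model, each bit of odds corresponds to a fixed rating increment:
# odds = 10^(D/400) = 2^b  =>  D per bit = 400 * log10(2)
elo_points_per_bit = 400 * math.log10(2)
print(round(elo_points_per_bit, 1))  # → 120.4
```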
Using your numbers of 0, 1000, and 3000:
There are 10 units between dumb human and median human and 20 units between median human and peak human. But that 10-unit difference is not a linear gap but an exponential one, and represents several orders of magnitude.
More concretely, every 400 points of ELO correspond to a 10x increase in expected score.
So there is (very) roughly:
A 10^2.5x difference in expected score between median human and dumb human.
A 10^5x difference in expected score between median human and peak human.
10^5 / 10^2.5 = 10^(5-2.5) = 10^2.5x larger gap, so a 2.5 orders of magnitude larger gap.
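A quick sanity check of that arithmetic, using the illustrative 0 / 1000 / 3000 ratings from earlier in the thread:

```python
def score_ratio(elo_gap):
    """Expected-score ratio implied by an Elo gap, at 400 points per 10x."""
    return 10 ** (elo_gap / 400)

median_vs_dumb = score_ratio(1000)   # 10^2.5 ≈ 316x
best_vs_median = score_ratio(2000)   # 10^5 = 100000x
print(round(best_vs_median / median_vs_dumb, 1))  # → 316.2, i.e. 10^2.5
```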
I wasn’t claiming that you were “using 0 as the set point for dumb human”; I’m not sure where you get that from. All I said is that with typical chess Elo ratings, a random player (which is what I’m using to stand in for “really stupid human”) happens to end up not too far from a rating of 0.
One difficulty here is that, as I mentioned, the median human being doesn’t actually play chess, so the interpretation of terms like “median human” and “dumb human” isn’t clear.
If we look at the whole human population and actual chess-playing ability then the worst and the median are both at the “don’t actually know the rules” level. I don’t even know how you assess games involving such players. It might in that case be true that (best:median)/(median:worst) is infinite but I think it’s clear that this isn’t actually telling us anything meaningful about chess ability.
If we look at the whole human population and something like “chess-playing ability if you tell them the rules but otherwise don’t intervene at all” then again the differences between people who previously didn’t know the rules will be very small, and again I don’t think it will tell us anything interesting.
If we look at the whole human population and something like “chess-playing ability if the ones who previously didn’t know the rules or hardly ever played spent a month playing casually in their spare time before being assessed” then my guess is that we’d have worst still close to a rating of 0 on typical Elo scales, median maybe somewhere around 800?, and best still Magnus at 2850.
If we look at just the population of people who ever actually play chess (and go back to no interventions) then I guess we get worst still not much over 0 (maybe 100 or so?), median somewhere around 1000 or so, and best still Magnus at 2850.
Etc.
The Elo rating system is basically log odds rather than log probabilities, but it’s close enough to what you have in mind. (And I claim log odds is clearly better than log probability, and I think odds rather than probability is actually what you meant since e.g. the logarithms of 99% and 99.9% are not much different.)
Your meaning isn’t 100% clear to me, since first of all you say negative log [odds] “is good” as a way to measure gaps—which is equivalent to measuring Elo rating differences, which is what I did—but then afterwards you say: no, but actually we should look at the thing that’s the logarithm of: “linear differences in ELO represent exponential gaps in ability”. So you don’t think log odds is a good measure of gap after all?
Anyway, if you measure gap by winning-odds-ratio then I’ll agree that the Carlsen-median gap is many times bigger than the median-worst gap. But:
1. If you measure that way then gaps are multiplicative rather than additive, which means that e.g. diagrams like the intelligence scales you sketch in the OP are misleading. And, I claim, ratios of gap sizes are likewise misleading.
2. The original point of your remarks about Carlsen being so dramatically better than average chess players was in support of the thesis that there isn’t much more “room at the top” for super-skilled computers, and if you measure that way then there’s quite a lot of room.
Expanding on 1: suppose A < B < C and “the B-C gap is 100x bigger than the A-B gap”. Does this mean that the A-C gap is barely bigger than the B-C gap? Yes for e.g. ELO rating differences: if the ratings are 1000, 1010, 2000 then C will beat A and B at about the same rate. But no for odds ratio. Suppose the B:A odds ratio is 2 and the C:B odds ratio is 200. In the Elo model these are multiplicative, so the C:A odds ratio is 400. And 400 is not barely bigger than 200.
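The multiplicative composition in this example can be spelled out directly (the odds ratios 2 and 200 are the illustrative ones from the paragraph above):

```python
# Under the Elo model, odds ratios compose multiplicatively,
# while rating differences compose additively.
odds_b_vs_a = 2      # B beats A at 2:1 odds
odds_c_vs_b = 200    # C beats B at 200:1 odds
odds_c_vs_a = odds_b_vs_a * odds_c_vs_b
print(odds_c_vs_a)   # → 400, which is not "barely bigger" than 200
```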
Expanding on 2: so far as I can see no one’s made a really serious effort at measuring current top chess engine strength on a scale compatible with, say, FIDE’s, but according to someone at https://chess.stackexchange.com/questions/40485/what-is-stockfish-15s-fide-calibrated-elo-rating TCEC ratings shouldn’t be crazily wrong, and they put Stockfish 15 at about 3620, versus Carlsen at about 2850. That’s a 600ish-point rating difference, which, converted at the “400 points = 10x score increase” rate, says that today’s superhuman robots are something like 30x better than the best human.
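For what it’s worth, converting that roughly 600-point Stockfish-Carlsen gap at 400 points per 10x of expected score:

```python
# ~600-point rating gap converted at "400 points = 10x expected score"
engine_vs_carlsen = 10 ** (600 / 400)
print(round(engine_vs_carlsen, 1))  # → 31.6
```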
I don’t see any perspective from which this looks much like a scale with “village idiot” at one end, “Einstein” at the other, and no substantial room for better-than-Einstein scores.
The point about using 0 Elo as a stand-in for dumb human doesn’t seem to be germane to our actual disagreements, so I won’t address it.
I think it is a good measure of the gap, but the scale is not a linear scale but a logarithmic scale. Hence a linear difference on that scale represents an exponential difference in the underlying quantity.
I’m not sure I understand this point well.
But that ~600-point difference is much smaller than the 1800+ point difference between Carlsen and the median chess player (using 1000 as the baseline for the median). That was the argument I made: that Carlsen was closer to optimal play[1] than to the median, not that optimal play was not much better than Carlsen.
This wasn’t a point I was trying to make in the post, nor is it a point I actually believe. I do believe there is a lot of room above humans, I just think the human spectrum is itself very large.
I should caveat this that by “optimal play”, I meant something more like “optimal play given bounded resources”, not necessarily optimal play with unbounded computational resources.
Fine with me, but I just want to note again that I am not using 0 Elo as a “stand-in” nor saying you were doing that, I’m estimating roughly where a dumb human happens to end up on the scale, and for the case of FIDE chess ratings I think it turns out to be near zero. To be clear, I’m not trying to reopen any discussion about whether that’s right (since I agree it isn’t particularly important for our actual disagreement), it’s just that it seems like you’re being quite insistent about describing something I did in a way that doesn’t match what I think I was actually doing, and I would prefer the last thing about it in the discussion not to misrepresent my intentions.
(I notice that that sounds more annoyed than I actually am. Not annoyed, just wanting to avoid future misunderstandings.)
Fair enough.
The remainder of the post you’re replying to defends my claim that the Carlsen-median gap is 10x to 1000x the median-dumb gap.
I’m more interested in addressing that as it might be an important disagreement.
I’m not sure there exactly is an “underlying quantity” here. Differences in rating, hence odds ratios in results, are fairly well defined (though, note, it’s not like there’s any sort of necessary principle along the lines of “if when A plays lots of games against B their odds are a:b, and if when B plays lots of games against C their odds are b:c, then when A plays lots of games against C their odds are a:c”, which the Elo scale and the usual ways of updating Elo ratings in the light of results are effectively assuming IIUC). But I don’t think there’s an absolute thing that the differences are sensibly regarded as differences in or ratios of.
I guess you could try to pick some “canonical” single player—a truly random player, or a literally perfect player—and look at odds ratios there. But I think the assumption I mentioned in the previous paragraph really does break down in that case.
I expanded on it a couple of paragraphs below. If that still didn’t clarify, can you say a bit about what doesn’t make sense?
Hmm, maybe I misunderstood something? You wrote
It’s the “10x-1000x” I’m disputing, not any version of “much closer” that’s compatible with the “right” numbers being on the order of 0 / 1000 / 3000.
As for any relationship to optimal play, I was getting that from assuming that what you said about chess was intended as support for drawing a scale with “idiot” on the left and “Einstein” on the right rather than one with “mouse” on the left and “vast superhuman intelligence” on the right. (My own feeling is that what we actually want is more likely a scale with all four of those points on it, and no pair super-close together. Idiots really are much smarter than mice; Einstein really is much smarter than an idiot; God really is much smarter than Einstein; and for many purposes none of those gaps is so large or so small as to make the others negligible. For difficult enough tasks, idiots and mice may be indistinguishable, or even idiots and mice and average people, but those are also the tasks for which I expect hypothetical superintelligences to have big advantages over the smartest humans.)
So I took you to be suggesting that the median-Carlsen gap is large enough that at least one of these pictures is not misleading: ||---------| (vertical bars are idiot, median, Carlsen) or |------------|| (vertical bars are median, Carlsen, superhuman machine). And I don’t agree with either; I think the idiot-median-Carlsen-machine picture is more like |-----|----------|-----| except that I don’t know how much room there is for that last gap to grow as the machines continue to improve.
And my reason for thinking this is that (1) if you use something like log odds (roughly equivalent to Elo ratings) to measure the gap sizes, then that’s what happens, and (2) if you use the odds themselves[1] then indeed the larger gaps become larger by huge factors, but in that case the |-----|--------| pictures where you put gaps next to one another and compare their sizes are completely misleading, because the correct way to combine two gaps is not to put them side by side (which amounts to adding up their sizes), and (3) these large-factor gaps don’t seem to me to be reason to prefer your “just idiot...Einstein” scale to the sort that Yudkowsky likes to draw, because the point Yudkowsky is trying to make by drawing them is about what happens to the right of Einstein, and there is good reason to think that there’s plenty—in particular, there’s a large-odds-ratio gap—to the right of Carlsen.
[1] Insert Schiller quote here :-).
I think that’s basically correct. Magnus Carlsen’s expected score vs a median human is 100s of times greater than a median human’s expected score vs a dumb human (as inferred from their ELO, I sketched a rough calculation at the end of this post).
As for the remainder of your reply, the point of Yudkowsky’s I was contending with is the claim that Einstein is very close to an idiot in absolute terms (especially compared to the difference between an idiot and a chimpanzee).
I wasn’t touching on how superintelligences compare to Einstein.
I don’t think you mean exactly that
since to a good approximation Carlsen gets 100 wins, 0 draws, 0 losses against a median human for a total score of 100, and a median human gets at least an expected score of 50 against a dumb human.
I do not dispute that there are ways of doing the accounting that make the Carlsen-median gap 100x (or 1000x or whatever) bigger than the median-dumbest gap. My claim is that for most purposes those ways of doing the accounting are worse.
I can’t tell whether you think my reasons for thinking that are too stupid to deserve a response, or think they miss the point in some fundamental way, or don’t understand them, or just aren’t very interested in discussing them. But that’s where the actual disagreement lies.
As for mouse/chimp/idiot/Einstein, my general model of these things is that for most mental tasks there’s a minimum level of brainpower needed to do them at all, which for things we think of as interesting mental tasks generally lies somewhere between “idiot” and “Einstein” (because if even idiots could do them easily we wouldn’t think of them as interesting mental tasks, and if even Einsteins couldn’t do them we mostly wouldn’t think of them at all), and sometimes but maybe not always a maximum level of brainpower needed to do them about as well as possible, which might or might not also be somewhere between “idiot” and “Einstein”, and then the biggest delta is the one between not being able to Do The Thing and being able to do it, and after that any given increment matters a lot until you get to the maximum, and after that nothing matters much.[1] So when we pay attention to some specific task we think of as a difficult thing humans can do, we should expect to find “mouse”, “chimp”, “idiot”, and some further portion of the human population all clustered together and “Einstein” some way away. But there are also tasks that pretty much all humans can do, some of which distinguish (e.g.) mice from chimps, and I think it’s fair to say that there is a real sense in which humans are closer to chimps than chimps are to mice even though for human-ish mental tasks there’s no difference to speak of between mice and chimps; and for some tasks there is probably huge scope for doing better than the best humans. (Some of those tasks may be ones it has never occurred to us to try because they are beyond our conception.) I think the question of how much room there is above “Einstein” on the scale is highly relevant if you are asking how close “idiot” and “Einstein” are.
[1] Of course this is a simplification; minima and maxima for this sort of thing are usually “soft” rather than “hard”, and most interesting tasks actually involve a variety of skills whose minima and maxima won’t all be in the exact same place, and brainpower isn’t really one-dimensional, etc., etc., etc. I assume you appreciate all these things as well as I do :-).
What’s your issue with the below calculation?
400 points of Elo represents a 10x difference in expected score:
“It then follows that for each 400 rating points of advantage over the opponent, the expected score is magnified ten times in comparison to the opponent’s expected score.”
Median-dumb Elo difference is 1,000 points: a 10^2.5x difference in expected score.
Magnus-median Elo difference is 1,850 points: a 10^4.625x difference in expected score.
10^4.625 / 10^2.5 = 10^(4.625-2.5) = 10^2.125 ≈ 133x
So the Magnus-median gap is >100x the median-dumb gap.
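A quick check of that calculation:

```python
# Expected-score ratios implied by the Elo gaps quoted above,
# at 400 points per 10x.
magnus_vs_median = 10 ** (1850 / 400)  # 10^4.625
median_vs_dumb = 10 ** (1000 / 400)    # 10^2.5
ratio = magnus_vs_median / median_vs_dumb
print(round(ratio, 2))  # → 133.35, i.e. 10^2.125
```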
You’ve not addressed what you think is wrong with the above calculation, so I’m just confused. I think it’s basically a canonical quantification of chess ability?
We could also just ask whether idiot was closer to chimpanzee than to Einstein. I’m mostly interested in how long it takes AI to cross the human cognitive frontier, not whether strongly superhuman AI is possible (I think it is).
My issue with the calculation isn’t with the calculation. It is indeed correct (with the usual assumptions, which are probably somewhat wrong but it doesn’t much matter) that if Magnus plays many games against a median chessplayer then he will probably get something like 10^4.6 times as many points as they do, and that if a median chessplayer plays a maximally-dumb one then they will probably get something like 10^2.5 times as many points as the maximally-dumb one, and that the ratio between those two ratios is on the order of 100x. I don’t object to any of that, and never have.
I feel, rather, that that isn’t a very meaningful calculation to be doing, if what you want to do is to ask “how much better is Carlsen than median, than median is better than dumbest?”.
More specifically, my objections are as follows. (They overlap somewhat.)
0. The odds-ratio figure you are using is by no means the canonical way to quantify chess ability. Consider: the very first thing you said on the topic was “As for how to measure gaps, I think the negative of the logarithm of the [odds] is good”. I agree with that: I think log-odds is better for most purposes.
1. For multiplicative things like these odds ratios, I think it is generally misleading to say “gap 1 is 10x as big as gap 2” when you mean that the odds ratio for gap 1 is 10x bigger. I think that e.g. “gap 1 is twice as big as gap 2” should mean that gap 1 is like one instance of gap 2 followed by another instance of gap 2, which for odds ratios means that the odds ratio for gap 1 is the square of the odds ratio for gap 2. By that way of thinking, the median-Magnus gap is less than twice the size of the dumbest-median gap. Your terminology requires you to say that the median-Magnus gap simultaneously (a) is hundreds of times bigger than the dumbest-median gap and (b) is not large enough to fit two copies of the dumbest-median gap into (i.e., find someone X as much better than median as median is than dumbest; then find someone Y as much better than X as median is than dumbest; if you do that, Y will be better than Magnus).
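The “two copies of a gap” point can be checked numerically, using the odds ratios from the calculation earlier in the thread:

```python
# "Twice the gap" as two stacked copies: for odds ratios, that means squaring.
dumbest_to_median = 10 ** 2.5        # odds ratio for the smaller gap
two_copies = dumbest_to_median ** 2  # 10^5: two dumbest-median gaps stacked
median_to_magnus = 10 ** 4.625       # odds ratio for the median-Magnus gap
print(two_copies > median_to_magnus)  # → True: Magnus doesn't quite fit two such gaps
```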
2. If you are going to draw diagrams like the Yudkowsky scale, which implicitly compare “gap sizes” by placing gaps next to one another, then you had better be using a measure of difference that behaves additively rather than multiplicatively. Because that’s the only way for the relative distances of A,B,C along the scale to convey accurately the relationship between the A-B, B-C, and A-C gaps. (You could of course make a scale where position is proportional to, say, “odds ratio against median player”. That will make the dumbest-median gap very small and the median-Magnus gap very large. But it will also make an “odds ratio 10:1” gap vary hugely in size depending on where on the scale it is, which I don’t think is what you want to do.)
#0. Regarding the log of the odds ratios, I want to clarify that I never meant it as a linear scale. I was working with the intuition that linear gaps in logarithmic scales are exponential.
#1. I get what you’re saying, but I think this objection would apply to any logarithmic scale; do you endorse that conclusion/generalisation of your objection?
If the gap between two points on a logarithmic scale is d, and that represents a change of D in the underlying quantity, a gap of 2d would represent a change of D^2 in the underlying quantity.
Talking about change may help elide the issues from different intuitions about what gaps should mean.
My claim above was that the underlying quantity was (a linear measure of) “chess ability”, and the ELO scale had that kind of logarithmic relationship to it.
2. I was implicitly making the transformation above where I converted a logarithmic scale into a linear/additive scale.
I agree that it doesn’t make sense to use non linear scales when talking about gaps. I also agree that ELO score is one such nonlinear scale.
My claim about the size of the gap was after converting the nonlinear ELO rating to the ~linear “expected score”. Hence I spoke about gaps in expected score.
I think the crux is this: What do you think is the best/most sensible linear measure of chess ability?
(By linear measure, I mean that a difference of kx is k times as big as a difference of x.)
I am not sure exactly what you’re asking me whether I endorse, but I do indeed think that for “multiplicative” things that you might choose to measure on a log scale, “twice as big a gap” should generally mean 2x on the log scale or squaring on the ratio scale.
If you think it doesn’t make sense to use nonlinear scales when talking about gaps, and think Elo rating is nonlinear while exp(Elo rating) is linear, then you are not agreeing but radically disagreeing with me. I think Elo rating differences are a pretty good way of measuring gaps in chess ability, and I think exp(Elo rating) is much worse.
I think Elo rating is nearer to being a linear measure of chess ability than odds ratio, to whatever extent that statement makes sense. I think that if you spend a while doing puzzles every day and your rating goes up by 50 points (~1.33x improvement in odds ratio), and then you spend a while learning openings and your rating goes up by another 50 points, then it’s more accurate to say that doing both those things brought twice the improvement that doing just one did (i.e., 100 points versus 50 points) than to say it brought 1.33x the improvement that doing just one did (i.e., 1.78x odds versus 1.33x odds). I think that if you’re improving faster and it’s 200 points each time (~3x odds) then it doesn’t suddenly become appropriate to say that doing both things brought 3x the improvement of doing one of them. I think that if you’re enough better than me that you get 10x more points than I do when we play, and if Joe Blow is enough better than you that he gets 10x more points than you do when we play, then the gap between Joe and me is twice as big as the gap between you and me or the gap between Joe and you, because the big gap can be thought of as made up of two identical smaller gaps, and not 10x as big.
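The puzzle-plus-openings example above can be sketched numerically (standard Elo conversion assumed; the 50-point gains are the illustrative ones from the paragraph):

```python
def odds_gain(points):
    """Odds-ratio improvement implied by an Elo rating gain."""
    return 10 ** (points / 400)

# Two 50-point improvements: additive in rating, multiplicative in odds.
one_step = odds_gain(50)                 # ≈ 1.33x odds
both_steps = odds_gain(50) * odds_gain(50)
print(round(both_steps, 2))              # → 1.78, the same as odds_gain(100)
```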