Beyond ELO: Rethinking Chess Skill as a Multidimensional Random Variable
Introduction
The traditional ELO rating system reduces a player’s ability to a single scalar value E, from which win probabilities are computed via a logistic function of the rating difference. While pragmatic, this one-dimensional approach may obscure the rich, multifaceted nature of chess skill. For instance, factors such as tactical creativity, psychological resilience, opening mastery, and endgame proficiency could interact in complex ways that a single number cannot capture.
I’m interested in exploring whether modeling a player’s ability as a vector θ = (θ_1, …, θ_d), with each component θ_i representing a distinct skill dimension, can yield more accurate predictions of match outcomes. I tried asking ChatGPT for a detailed answer on this idea, but frankly its responses weren’t very helpful.
The Limitations of a 1D Metric
The standard ELO system computes the win probability for two players A and B as a function of the scalar difference E_A−E_B, typically via:

P(A beats B) = σ(α(E_A−E_B)) = 1 / (1 + e^{−α(E_A−E_B)}),

where σ(⋅) is the logistic (sigmoid) function and α is a scaling parameter. This model assumes that all relevant aspects of chess performance are captured by E. Yet, consider two players with equal ELO ratings: one might excel in tactical positions but falter in long, strategic endgames, while the other might exhibit a more balanced but less spectacular play style. Their match outcomes could differ significantly depending on the nuances of a particular game—nuances that a one-dimensional rating might not capture.
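For concreteness, here is a minimal sketch of that formula in its conventional base-10, 400-point form (which corresponds to α = ln(10)/400 above):

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Expected score of player A under the standard Elo formula
    (base-10 logistic on a 400-point scale)."""
    return 1.0 / (1.0 + 10 ** (-(rating_a - rating_b) / 400))

# e.g. a 200-point favourite is expected to score about 0.76
print(elo_win_prob(2000, 1800))
```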
A natural extension is to represent each player’s skill by a vector θ = (θ_1, …, θ_d), where each θ_i corresponds to a distinct skill (e.g., tactics, endgame, openings). One might model the probability of player A beating player B as:

P(A beats B) = σ(⟨θ_A − θ_B, w⟩),

where ⟨⋅,⋅⟩ denotes the dot product and w is a weight vector representing the relative importance of each skill dimension.
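A minimal sketch of that vector model (the skill dimensions and weights below are purely hypothetical, just to make the shapes concrete):

```python
import numpy as np

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def win_prob(theta_a: np.ndarray, theta_b: np.ndarray, w: np.ndarray) -> float:
    """P(A beats B) = sigmoid(<theta_A - theta_B, w>)."""
    return sigmoid(np.dot(theta_a - theta_b, w))

# Hypothetical skill dimensions: (tactics, endgame, openings)
theta_a = np.array([1.8, 0.9, 1.2])
theta_b = np.array([1.2, 1.5, 1.1])
w = np.array([1.0, 0.8, 0.5])   # assumed relative importance of each dimension
print(win_prob(theta_a, theta_b, w))
```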
I’m interested in opening the discussion: has anyone developed or encountered multidimensional models for competitive games that could be adapted for chess? How might techniques from psychometrics, e.g. Item Response Theory (IRT), inform the construction of these models?
Considering the typical chess data (wins, draws, losses, and perhaps even in-game evaluations), is there a realistic pathway to disentangling multiple dimensions of ability? What metrics or validation strategies would best demonstrate that a multidimensional model provides superior predictive performance compared to the traditional ELO system?
Ultimately my aim here is to build chess betting models … lol, but I think the stats are really cool too. Any insights on probabilistic or computational techniques that might help in this endeavor would be highly appreciated.
Thank you for your time and input.
I might be misunderstanding, but it looks to me like your proposed extension is essentially just the Elo model with some degrees of freedom that don’t yet appear to matter?
The dot product has the property that ⟨θ_A − θ_B, w⟩ = ⟨θ_A, w⟩ − ⟨θ_B, w⟩, so the only thing that matters is ⟨θ_P, w⟩ for each player P, which is just a single scalar. So we are on a one-dimensional scale again, where predictions are based on taking a sigmoid of the difference between a single scalar associated with each player.
As far as I can tell, the way such a model could still be a nontrivial extension of Elo would be if you posited that w could vary between games, whether randomly from some distribution, or via additional parameters associated with the players that influence what w is in the games they are involved in, or something else along those lines. But it seems you would need something like that, or else some source of nonlinearity, because if w is constant then every dimension orthogonal to that fixed w can never have any effect on the model's predictions.
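To illustrate the collapse concretely, here is a tiny check with made-up numbers: two players with very different skill vectors but identical projections onto a fixed w get exactly the same predictions against any opponent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = np.array([1.0, 0.8, 0.5])            # fixed weight vector
theta_a = np.array([2.0, 0.0, 0.0])       # "tactician": all skill in one dimension
theta_b = np.array([0.0, 1.0, 2.4])       # "grinder": different profile, same projection
opponent = np.array([0.7, 0.7, 0.7])

print(theta_a @ w, theta_b @ w)           # both 2.0: same effective scalar rating
print(sigmoid((theta_a - opponent) @ w))  # identical win probabilities...
print(sigmoid((theta_b - opponent) @ w))  # ...so the extra dimensions add nothing
```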
ELO is the Electric Light Orchestra. The Elo rating is named after Prof. Arpad Elo.
I considered the idea of representing players via vectors in different contexts (chess, soccer, MMA) and also worked a bit on splitting the evaluation of moves into “quality” and “risk taking”, with the idea of quantifying aggression in chess.
My impression is that the single scalar rating works really well in chess, so I’m not sure how much there is beyond that. However, some simple experiments in that direction wouldn’t be too difficult to set up.
Also, I think there were competitions on creating better rating systems that outperform Elo’s predictiveness (which apparently isn’t too difficult). But I don’t know whether any of those were multi-dimensional.
Here are some interesting, at least tangentially relevant, sources I’ve managed to dig up:
A psychometric analysis of chess expertise
Detecting Individual Decision-Making Style: Exploring Behavioral Stylometry in Chess
Science of Chess: A g-factor for chess? A psychometric scale for playing ability
Chess Rating Estimation from Moves and Clock Times Using a CNN-LSTM
Science of Chess—How many kinds of chess ability are there?
Comparing Elo, Glicko, IRT, and Bayesian Statistical Models for Educational and Gaming Data
Circling back to this with a thing I was thinking about: suppose one wanted to add just one extra degree of freedom to the Elo rating a player has (at a given point in time, if you also allow evolution over time), chosen to add as much improvement as possible. Almost certainly you need more dimensions than that to properly fit real idiosyncratic nonlinearities/nontransitivities (i.e. if you had a playing population with specific pairs of players that were especially strong/weak only against specific other players, or cycles of players where A beats B beats C beats A, etc.), but if you just wanted to work out what the “second principal component” might be, what’s a plausible guess?
First, you can essentially reproduce the Elo model with a different framing: rather than each player having a rating and the winning chance being a function of the difference between their ratings, you posit that each player has a rating and, when they play a game, each independently samples a random value from a fixed probability distribution centered around their own rating, and the player with the larger sample wins.
I think that you exactly reproduce the Elo model up to scaling if this distribution is a Gumbel distribution, because the difference of two Gumbels is apparently equivalent to a draw from a logistic distribution, and the CDF of the logistic distribution is precisely the sigmoid that the Elo model posits. But in practice, you should end up with almost the same thing if you choose any other reasonable distribution so long as it has the right heaviness of tail.
In particular, I’d expect linearly-exponential tails to be better than the quadratically-exponential tails of the normal distribution, because linearly-exponential tails tend to be desirable for real-world ratings models: they are much more outlier-resistant, and in the real world you have issues like forfeits, sandbaggers, internet disconnection/timeouts, etc. (If you have a quadratically-exponential tail, then a ratings model can put such low probability on an outlier that, conditional on seeing the outlier, it is forced to make a too-large update to accommodate it; this should be intuitive from a Bayesian perspective.) I’d expect outliers, noise, and the realities of real-world ratings data to introduce far bigger variation in rating quality anyway than any minor distribution-shape differences would.
So for example, you could also say each player draws from a logistic distribution, rather than only a Gumbel. The difference of two logistics is not quite a logistic distribution but up to rescaling it should be pretty close so this is nearly the Elo model again.
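A quick simulation of this “latent performance draw” framing (just a sketch; the ratings are on an arbitrary logistic scale):

```python
import numpy as np

rng = np.random.default_rng(0)
rating_a, rating_b = 0.8, 0.0     # ratings on an arbitrary logistic scale
n_games = 1_000_000

# Each player independently draws a "performance" from a Gumbel centered on their
# rating; the higher draw wins.  The difference of two Gumbels is logistic, so this
# should reproduce the Elo sigmoid.
perf_a = rng.gumbel(loc=rating_a, size=n_games)
perf_b = rng.gumbel(loc=rating_b, size=n_games)
simulated = np.mean(perf_a > perf_b)

elo_prediction = 1.0 / (1.0 + np.exp(-(rating_a - rating_b)))
print(simulated, elo_prediction)   # should agree to roughly three decimal places
```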
Anyways, with any reformulation like this, there is a very natural candidate now for a second dimension: the variance of the distribution that a player draws their sample from. Rather than each player drawing from a fixed distribution centered around their rating before seeing who has the higher value and wins, we now add a second parameter that allows the variance of that distribution to vary by player. So the ratings model becomes able to express things like “this player is more variable in performance between games, or more prone to blunders uncharacteristic of their skill level, than this other player”. This parameter might also improve the rating system’s ability to “explain away” things like sandbagger players by assigning them a high variance, thereby reducing their distortionary impact on other players’ ratings even before manual intervention.
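A sketch of that two-parameter variant (hypothetical numbers; normal draws are used here purely for simplicity, even though the argument above favours heavier tails):

```python
import numpy as np

rng = np.random.default_rng(1)

def win_prob(rating_a, sigma_a, rating_b, sigma_b, n=500_000):
    """Monte Carlo estimate of P(A beats B) when each player's game-to-game
    performance is a draw centered on their rating with their own spread."""
    perf_a = rng.normal(rating_a, sigma_a, size=n)
    perf_b = rng.normal(rating_b, sigma_b, size=n)
    return np.mean(perf_a > perf_b)

# A steady player vs. an erratic player of equal rating is still ~50/50 head-to-head,
# but the erratic player does relatively better as an underdog and worse as a favourite.
print(win_prob(1.0, 0.5, 1.0, 2.0))   # equal ratings, different spreads
print(win_prob(0.0, 0.5, 1.0, 2.0))   # steady underdog vs. erratic favourite
```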
You might want to have a look at Microsoft’s TrueSkill, an Elo-like rating system for online team games. It was a good early answer (there are probably newer and better ones now) to the question of how to rank an individual when teamed randomly together with others.
If this actually hasn’t been explored, this is a really cool idea! So you want to learn a function (Player 1, Player 2, position) → (probability Player 1 wins, probability of a draw)? Sounds like there are a lot of naive architectures to try and you have a ton of data since professional chess players play a lot of games.
Some random ideas:
Before doing any sort of positional analysis: What does the (ELO_1,ELO_2,engine eval) → Probability of win/draw function look like? What happens when choosing an engine near those ELO ratings vs. the strongest engines?
Observing how rapidly the eval changes when given to a weak engine might give a somewhat automatable metric for the “sharpness” of a chess position (so you don’t have to label everything yourself).
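A rough sketch of the first idea above, fitting a plain multinomial logistic regression as a baseline for the (ELO_1, ELO_2, engine eval) → P(win/draw/loss) function (the file and column names are assumptions, not a real dataset):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset: one row per game with both players' ratings, an engine
# evaluation of some reference position, and the result coded from player 1's
# perspective (0 = loss, 1 = draw, 2 = win).
games = pd.read_csv("games.csv")
X = games[["elo_1", "elo_2", "engine_eval"]]
y = games["result"]

# Multinomial logistic regression as a simple baseline model.
model = LogisticRegression(max_iter=1000).fit(X, y)

new_game = pd.DataFrame([[2750, 2700, 0.3]], columns=X.columns)
print(model.predict_proba(new_game))   # [P(loss), P(draw), P(win)] for player 1
```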
Welcome to the realm of the posters!
Elos are already multidimensional, in a sense, because players have different ratings on different platforms. Hikaru Nakamura, for example, has a higher Elo on chess.com than his FIDE rating. But that’s just nitpicky pedantry; I understand that what you’re really asking is whether chess ability for a specific version of chess has subskills.
Among chess players and chess teachers, it is common (as you note) to break chess ability into three subskills:
Openings
Midgame
Endgame
Openings these days are mostly about memorizing solved lines. Openings are so well-solved and memorization-dependent that they can be boring to top players. This boredom is one force behind the popularity of Fischer Random among top players.
Magnus Carlsen (currently the world’s best chess player) is famous for his endgame ability. He’s less interested in openings (especially classical openings) these days. It’s not uncommon for him to open with something stupid, like switching his king and queen, and then still beat a grandmaster.
Could you use these subskills to predict competition results? Absolutely. If you were placing extremely precise bets on the outcomes of games, then you shouldn’t just consider Elo. In classical games, you should also consider how much time each player has spent preparing for the tournament by studying opening lines. You can extrapolate Elo trend lines too.
The reason nobody uses a breakdown this fine for competitions isn’t that it wouldn’t generate a small additional signal. It’s that nobody has a strong enough motivation to. There aren’t billions of dollars being bet on chess competition results. Elo is perfectly adequate when you’re choosing who to invite to a chess tournament. It’s also extremely legible.
If you are putting that much effort into predicting outcomes, it may be cheaper to just bribe players. Bribery is especially cheap for chess variants with smaller prize pools, like Xiangqi.
That said, there does exist an organization that analyzes chess skill at extremely fine resolution: chess.com. But not to predict winners. Instead, chess.com analyzes player skill at resolutions finer than “Elo” in order to detect cheating. Chess cheaters often exhibit telltale signs where their skill spikes really hard within a game. This signal is not detectable with mere Elo.