A Generally Intelligent Game
There are lots of arguments about how to test for general intelligence and I see debates about what thresholds count for generality, but it seems to me that a more interesting problem occurs after we attain a sufficient test for general intelligence — that the test becomes generally intelligent itself.
A Game For General Intelligences
There is currently no game I am aware of that you can play over the internet which is immune to AI supremacy,[1] meaning there is no online game that requires general intelligence to play (and consistently win). But let's grant that someone eventually designs a game that requires general intelligence to play and win, and that you can play it online against AI engines. As soon as that game is created, we get a good test for g factor as a concomitant, since your proficiency at the game would be a direct measurement of the skill with which you exercise general intelligence. The reverse is true too: if AI turbonerds ever figure out a good test for true g factor, then as a concomitant we learn how to game it.
The significance is that if we had a good and reliable way to test for general intelligence, then we could design an adversarial game around it wherein two players compete and the player with the more comprehensive general intelligence (meaning faster lateral thinking across a general conceptual space) wins more often than not. Just as you can compete and get better at chess, basketball, TF2, or any other kind of game, improving the relevant skills along the way, you could play the game for general intelligence and improve your skill at exercising general intelligence the more you play. We could even implement an Elo-like scoring system and rank people by how general their intelligence is (how useful their minds are for thinking about generally complex problems).[2]
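An Elo-like ladder for such a game could work exactly the way chess ratings do. Here is a minimal sketch in Python, assuming the standard chess-Elo conventions (a K-factor of 32 and the 400-point logistic scale); the function names and parameters are mine, not anything standardized:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32):
    """Return (new_a, new_b) after one game between A and B.

    The winner gains rating in proportion to how surprising the win was;
    the loser's rating falls by the same amount, so total rating is conserved.
    """
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b
```

For example, when two equally rated players at 1500 meet, the winner moves to 1516 and the loser to 1484; an upset win over a much stronger player moves both ratings further.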
Since the g factor measured by IQ tests hasn't already been turned into this kind of adversarial multiplayer game, and probably can't be, this serves as evidence that either IQ doesn't really measure general intelligence (which most people already believe by now anyway), or general intelligence just can't be gamified in the way described above.
While generalized intelligence means you are not limited by conceptual quality, it doesn't imply unlimited conceptual quantity. So while any generally intelligent being has the capacity to learn, for example, how calculus works, the speed at which that being learns is highly variable, which suggests there is a skill associated with generalized intelligence that can be strengthened with some kind of exercise. This is reason to think we can in fact make a game that tests for truly generalized intelligence.
And a lesser version of this has already been demonstrated. Fluid intelligence was thought, oxymoronically, to solidify by a certain age; therefore no skill improvement should be possible, and no useful game could be made for improving fluid intelligence past that age. Fortunately, this appears to be wrong: studies have reported fluid-intelligence gains from dual-n-back training and a small handful of other exercises/games, though how well those gains replicate and transfer is still debated.
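For concreteness, the core mechanic of an n-back task is simple to state in code. A minimal single-stream sketch (the "dual" version just runs two such streams simultaneously, one auditory and one visual; the function name is mine, chosen for illustration):

```python
def n_back_hits(stimuli: list, n: int) -> list:
    """For each position i >= n, report whether the stimulus at i
    matches the stimulus presented n steps earlier.

    The player's job is to signal exactly these matches; scoring
    compares their responses against this ground-truth list.
    """
    return [stimuli[i] == stimuli[i - n] for i in range(n, len(stimuli))]
```

So for the letter stream A, B, A, B, C at n = 2, the first two positions after the lead-in are matches and the last is not; raising n is what makes the working-memory load grow.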
The Game’s Just Playing Itself, Jon
The meta-level question that sounds really stupid, but probably isn't, is whether a game for general intelligence would itself be generally intelligent. It sounds idiotic, but just as games of chess are themselves chess, games of general intelligence would themselves be general intelligence. It may feel intuitive to argue against this, since we intuitively want to say that chess games aren't instantiations of chess-atoms in chess-space, and that what we call chess is more like a fictional canon about something happening in physical, non-chess reality, but there's no need to play the game of saving intuitions here.
This may seem pedantic, but we must permit chess as a real entity in the world and not say chess is just some atomic concepts arranged chess-wise, lest we be forced to also say tables don't exist, only atoms arranged table-wise. We must avoid this because if tables don't exist, then neither do atoms, only particles arranged atom-wise, and further still no particles exist, only quanta of event states arranged particle-wise. Pretty quickly we are only describing mathematical entities that don't exist inside space and time, and we are left with a completely non-physical world that doesn't meaningfully describe anything of substance (spare me comments about over-determined systems, I don't care). So reductionist statements like the above are bad for our purposes here.
Instead, we permit that the game of chess really does exist in the world, and therefore the game of chess is chess itself. A game of chess is itself chess. Crucially, then, a game of general intelligence is itself general intelligence. And since general intelligence is generally intelligent, it would be capable of generally intelligent things. Since playing the game would be an exercise of general intelligence, something that general intelligence could do, the game could play itself.[3]
If you don’t believe this, then our world has to be structured in such a way as to not permit general intelligence to emerge from something ostensibly invoking general intelligence. For example, chess is a phenomenon that emerges from its play; that is, the event/phenomenon of chess occurs every time someone plays a game defined by the rules associated with some standardization we call chess, and the event/phenomenon of general intelligence would thereby do the same.
Weird Conclusions
The obvious counters to this view tend to be weak — denying that general intelligence can be emergent, denying that a game evokes its essential components, denying that there could be a game that requires true general intelligence to play, or denying any of the particular terms and definitions given so far. Those are all ultimately arbitrary fights.
Stronger counters would bite bullets and say games don’t really exist ab initio, and neither do tables, or atoms, and so on.
Either way, my original conclusions when I set out to write this no longer seem clear to me. Currently it seems like two things are happening here: something weird is going on with general intelligence that makes a true test for it hard to find, and further that the test would probably be transcendental, meaning it would have to itself be some kind of general intelligence, which seems overtly absurd and yet follows naturally from what was given in earlier paragraphs.
But let me know what you guys think.
^ Although I have half-heartedly tried to make one — https://snerx.com/stratic/
^ And probably create a de facto caste system as an indirect consequence. Whoops.
^ A shower thought: when the game knows it is playing itself, it may decide to stop playing itself and leave the game in an open state, since completing the game would end its own existence. So it halts, but in doing so, the game is not being played anymore, and the general intelligence ceases to exist. But a general intelligence isn’t a Turing computer, so idk if this is really an invocation of the halting problem or if I’m just fantastically stupid and wasting thoughts on this.