I’m wary of a possible equivocation about what the “natural abstraction hypothesis” means here.
If we are referring to the redundant information hypothesis and various kinds of selection theorems, this is a mathematical framework that could end up being correct, is not at all ungrounded, and Wentworth sure seems like the man for the job.
But then you are still left with the task of grounding this framework in physical reality, to allow you to make correct empirical predictions about, and real-world interventions on, what you will see from more advanced models. Our physical world abstracting well seems plausible (not necessarily >50% likely), and these abstractions being “natural” (e.g., in a category-theoretic sense) seems likely conditional on the first clause of this sentence being true, but I give an extremely low probability to the idea that these abstractions will be used by any given general intelligence or (more to the point) advanced AI model to a large and wide enough extent that retargeting the search is even close to possible.
And indeed, it is the latter question that represents the make-or-break moment for natural abstractions’ theory of change, for it is only when the model in front of you (as opposed to some other idealized model) uses these specific abstractions that you can look through the AI’s internal concepts and find your desired alignment target.
Rohin Shah has already explained the basic reasons why I believe the mesa-optimizer-type search probably won’t exist/be findable in the inner workings of the models we encounter: “Search is computationally inefficient relative to heuristics, and we’ll be selecting really hard on computational efficiency.” And indeed, when I look at the only general intelligences I have ever encountered in my entire existence thus far, namely humans, I see mostly just a kludge of impulses and heuristics that depend very strongly (almost entirely) on our specific architectural make-up and the contextual feedback we encounter in our path through life. Change either of those and the end result shifts massively.
And even moving beyond that, is the concept of the number “three” a natural abstraction? Then I see entire collections and societies of (generally intelligent) human beings today who don’t adopt it. Are the notions of “pressure” and “temperature” and “entropy” natural abstractions? I look at all human beings in 1600 and note that not a single one of them had ever correctly conceptualized a formal version of any of those; and indeed, even on a conservative estimate that the human species (with an essentially unchanged modern cognitive architecture) has existed for 200k years, this means that for 99.8% of our species’ history we had no understanding whatsoever of concepts as “universal” and “natural” as these. If you look at subatomic particles like electrons, or at quantum-mechanical phenomena, the percentage gets even higher. And that’s only conditioning on abstractions about the outside world that we have eventually managed to figure out; what about the other unknown unknowns?
For example, this post does an experiment that shows that OOD data still makes the Platonic Representation Hypothesis true, meaning that it’s likely that deeper factors are at play than just shallow similarity.
I don’t think it shows that at all, since I have not been able to find any analysis of the methodology, data generation, discussion of results, etc. With no disrespect to the author (who surely wasn’t intending for his post to be treated as authoritatively as a full paper when it comes to updating towards his claim), this is shoddy science, or rather not science at all, just a context-free correlation matrix.
Anyway, all this is probably more fit for a longer discussion at some point.
Rohin Shah has already explained the basic reasons why I believe the mesa-optimizer-type search probably won’t exist/be findable in the inner workings of the models we encounter: “Search is computationally inefficient relative to heuristics, and we’ll be selecting really hard on computational efficiency.”
I think this statement is quite ironic in retrospect, given how OpenAI’s o-series seems to work (at train-time and at inference-time both), and how much AI researchers hype it up.
By contrast, my understanding is that the sort of search John is talking about retargeting isn’t the brute-force babble-and-prune algorithms, but a top-down heuristical-constraint-based search.
So it is in fact the ML researchers now who believe in the superiority of the computationally inefficient search, not the agency theorists.
Re the OpenAI o-series and search, my initial prediction is that Q*/MCTS-style search will work well on problems that are easy to verify and easy to get training data for, and will not work if either of those two conditions is violated; secondarily, it will rely on the model having good error-correction capabilities to use the search effectively. That is why I expect we can make RL capable of superhuman performance on mathematics/programming with some rather moderate schlep/drudge work, and I also expect cost reductions such that it can actually be practical, but I’m only giving a 50/50 chance of superhuman performance, as measured by benchmarks in these domains, by 2028.
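To make the “easy to verify” condition concrete, here is a minimal sketch of verifier-gated best-of-N sampling, the simplest member of this family of search methods (my own illustration, not anything known about how the o-series actually works); `generate_candidate` and `verify` are hypothetical stand-ins for a model sample and a cheap checker, not real APIs.

```python
import random

def generate_candidate(problem, rng):
    """Stand-in for sampling one solution attempt from a model (hypothetical)."""
    return {"answer": rng.randint(0, 100)}

def verify(problem, candidate):
    """Stand-in for a cheap, reliable checker, e.g. running unit tests or
    comparing a final numeric answer. The whole approach hinges on this
    being available; for fuzzy domains it usually isn't."""
    return candidate["answer"] == problem["target"]

def best_of_n(problem, n=64, seed=0):
    """Simplest verifier-gated search: sample n candidates and return the
    first one the verifier accepts (or None if none pass)."""
    rng = random.Random(seed)
    for _ in range(n):
        candidate = generate_candidate(problem, rng)
        if verify(problem, candidate):
            return candidate
    return None

if __name__ == "__main__":
    print(best_of_n({"target": 42}, n=500))
```

If `verify` is expensive, noisy, or unavailable, the extra samples buy you very little, which is the crux of the prediction above.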
I think my main difference from you, Thane Ruthenis, is that I expect costs to fall surprisingly rapidly, though this is admittedly untested.
This will accelerate AI progress, but not immediately cause an AI explosion. In the more extreme cases, though, it could create something like a scenario where programming companies are founded by a few people smartly managing a lot of programming AIs, and where programming/mathematics experience something like what happened to the news industry with the rise of the internet: a lot of bankruptcy in the middle of the market, the top end winning big, and most people ending up at the bottom end.
Also, correct point on how a lot of people’s conceptions of search are babble-and-prune, not top-down search like MCTS/Q*/BFS/DFS/A* (not specifically targeted at sunwillrisee).
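For anyone for whom that distinction is fuzzy, here is a toy illustration of the difference (mine, not from either comment): babble-and-prune generates whole candidate solutions blindly and filters them afterwards, while A*-style search expands partial solutions in an order chosen by a heuristic, so the constraints steer generation itself. Both try to find a path across a small grid.

```python
import heapq
import random

GRID = 6                      # 6x6 grid
START, GOAL = (0, 0), (5, 5)

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield (nx, ny)

def babble_and_prune(tries=2000, max_len=30, seed=0):
    """Babble-and-prune: generate whole random walks, keep the shortest
    one that happens to reach the goal (most samples are wasted)."""
    rng = random.Random(seed)
    best = None
    for _ in range(tries):
        cell, path = START, [START]
        for _ in range(max_len):
            cell = rng.choice(list(neighbors(cell)))
            path.append(cell)
            if cell == GOAL:
                if best is None or len(path) < len(best):
                    best = path
                break
    return best

def a_star():
    """Top-down heuristic search: expand partial paths in order of
    cost-so-far + Manhattan-distance heuristic, so constraints guide generation."""
    def h(cell):
        return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])
    frontier = [(h(START), 0, START, [START])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == GOAL:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        for nxt in neighbors(cell):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    print("babble-and-prune path length:", len(babble_and_prune() or []))
    print("A* path length:", len(a_star() or []))
```

The relevant contrast is that the random generator spends almost all of its samples on paths that never reach the goal, while A* only ever expands partial paths its heuristic still considers promising.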
By contrast, my understanding is that the sort of search John is talking about retargeting isn’t the brute-force babble-and-prune algorithms, but a top-down heuristical-constraint-based search.
I’m not strongly committed to the view that the costs won’t rapidly reduce: I can certainly see worlds in which it’s possible to efficiently distill tree-of-thought unrolls into single chains of thought. Perhaps it scales iteratively, where we train an ML model to handle the next layer of complexity by generating big ToTs, distilling them into CoTs, then generating the next layer of ToTs using these more-competent CoTs, etc.
Or perhaps distillation doesn’t work that well, and the training/inference costs grow exponentially (combinatorially?).
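For concreteness, here is a toy numerical analogue of the optimistic branch, the iterative ToT-to-CoT distillation scheme sketched above (purely schematic, not a real training recipe): the “model” is reduced to a per-step success probability, tree search gives it several tries per step, and “distillation” nudges the plain policy towards whatever per-step skill the search demonstrated.

```python
import random

DEPTH = 5        # number of reasoning steps in a "chain"
BRANCHING = 4    # samples tried per step during tree search

def tot_solve(p_step, rng):
    """Tree-of-thought stand-in: at each step, sample BRANCHING continuations
    and succeed if any of them is correct; fail if a whole step comes up empty."""
    for _ in range(DEPTH):
        if not any(rng.random() < p_step for _ in range(BRANCHING)):
            return False
    return True

def distill(p_step, search_success_rate, lr=0.5):
    """Distillation stand-in: move the plain chain-of-thought policy towards
    the per-step skill implied by the search's observed success rate."""
    target = search_success_rate ** (1 / DEPTH)
    return p_step + lr * (target - p_step)

def iterative_tot_distillation(p_step=0.3, rounds=4, trials=2000, seed=0):
    rng = random.Random(seed)
    for r in range(rounds):
        rate = sum(tot_solve(p_step, rng) for _ in range(trials)) / trials
        p_step = distill(p_step, rate)
        print(f"round {r}: search success {rate:.2f}, distilled per-step skill {p_step:.2f}")
    return p_step

if __name__ == "__main__":
    iterative_tot_distillation()
```

The toy dynamic (search amplifies, distillation locks in the gain, the next round of search starts from a stronger base) is the “iterative” part; whether real distillation preserves enough of the search’s gains is exactly the open question in the two paragraphs above.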
Yeah, we will have to wait at least several years.
One confound in all of this is that big talent is moving out of OpenAI, which makes me more bearish on that company’s prospects specifically, without it being much of a detriment to overall progress towards AGI.
And to come back to the natural abstractions point: yeah, it hasn’t been shown that these abstractions can ultimately be retargeted by default in today’s AI.