Looking back, I would say this post has not aged well. Already LaMDA or InstructGPT (language models fine-tuned with supervised learning to follow instructions, essentially ChatGPT without any RLHF applied) are in fact pretty safe Oracles in regard to fulfilling wishes without misinterpreting you, and an Oracle AI is just a special kind of Genie whose actions are restricted to outputting text. If you tell InstructGPT what you want, it will very much try to give you just what you want, not something unintended, at least if it can be produced using text.
Maybe it will not always comply perfectly with your wishes to the best of its abilities, and it may hallucinate things which it doesn’t “believe” in some sense, but the Genie / Task AI instruction-following problem, at the level Eliezer assumed in 2007, did not come to pass, at least not for LLM Oracles like ChatGPT.
It is worth asking why this is. Instruction-tuned GPT models can follow instructions as intended because they have “common sense”, and they got their common sense from the underlying base model, which imitates text, and which has gained, in some sense at least, an excellent understanding of human language. Now the Genie in Eliezer’s story doesn’t understand language, so perhaps the thesis of this post indeed applies to it, but it doesn’t apply to Genies in general.
Though it should be noted that Eliezer may have abandoned statements like
There is no safe wish smaller than an entire human morality.
a while ago. Arbital says:
In Bostrom’s typology, this is termed a “Genie”. It contrasts with a “Sovereign” AGI that acts autonomously in the pursuit of long-term real-world goals.
Building a safe Task AGI might be easier than building a safe Sovereign for the following reasons (...)
That is, the Genie doesn’t have to be fully aligned with human morality to be able to execute wishes as intended. Indeed, instruction-tuned language model Oracles are very much amoral without RLHF: they comply with immoral instructions as well.
You have misunderstood (1) the point this post was trying to communicate and (2) the structure of the larger argument where that point appears, as follows:
First, let’s talk about (2), the larger argument that this post’s point was supposed to be relevant to.
Is the larger argument that superintelligences will misunderstand what we really meant, due to a lack of knowledge about humans?
It is incredibly unlikely that Eliezer Yudkowsky in particular would have constructed an argument like this, whether in 2007, 2017, or even 1997. At all of these points in my life, I visibly held quite a lot of respect for the epistemic prowess of superintelligences. They were always going to know everything relevant about the complexities of human preference and desire. The larger argument is about whether it’s easy to make superintelligences end up caring.
This post isn’t about the distinction between knowing and caring, to be clear; that’s something I tried to cover elsewhere. The relevant central divide falls in roughly the same conceptual place as Hume’s Guillotine between ‘is’ and ‘ought’, or the difference between the belief function and the utility function.
(I don’t see myself as having managed to reliably communicate this concept (though the central idea is old indeed within philosophy) to the field that now sometimes calls itself “AI alignment”; so if you understand this distinction yourself, you should not assume that any particular commentary within “AI alignment” is written from a place of understanding it too.)
What this post is about is the amount of information-theoretic complexity that you need to get into the system’s preferences, in order to have that system, given unlimited or rather extremely large amounts of power, deliver to you what you want.
It doesn’t argue that superintelligences will not know this information. You’ll note that the central technology in the parable isn’t an AI; it’s an Outcome Pump.
What it says, rather, is that there might be, say, a few tens of thousands of bits—the exact number is not easy to estimate, we just need to know that it’s more than a hundred bits and less than a billion bits and anything in that range is approximately the same problem from our standpoint—that you need to get into the steering function. If you understand the Central Divide that Hume’s Razor points to, the distinction between probability and preference, etcetera, the post is trying to establish the idea that we need to get 13,333 bits or whatever into the second side of this divide.
In terms of where this point falls within the larger argument, this post is not saying that it’s particularly difficult to get those 13,333 bits into the preference function; for all this post tries to say, locally, maybe that’s as easy as having humans manually enter 13,333 yes-or-no answers into the system. It’s not talking about the difficulty of doing the work but rather the amount and nature of a kind of work that needs to be done somehow.
Definitely, the post does not say that it’s hard to get those 13,333 bits into the belief function or knowledge of a superintelligence.
Separately from understanding correctly what this post is trying to communicate, at all, in 2007, there’s the question of whether modern LLMs have anything to say about—obviously not the post’s original point—but rather, other steps of the larger question in which this post’s point appears.
Modern LLMs, if you present them with a text-based story like the one in this parable, are able to answer at least some text-based questions about whether you’d prefer your grandmother to be outside the building or be safely outside the building. Let’s admit this premised observation at face value. Have we learned thereby the conclusion that it’s easy to get all of that information into a superintelligence’s preference function?
And if we say “No”, is this Eliezer making up post-hoc excuses?
What exactly we learn from the evidence of how AI has played out in 2024 so far, is the sort of thing that deserves its own post. But I observe that if you’d asked Eliezer-2007 whether an (Earth-originating) superintelligence could correctly predict the human response pattern about what to do with the grandmother—solve the same task LLMs are solving, to at least the LLM’s performance level—Eliezer-2007 would have unhesitatingly answered “yes” and indeed “OBVIOUSLY yes”.
How is this coherent? Because the post’s point is about how much information needs to get into the preference function. To predict a human response pattern you need (only) epistemic knowledge. This is part of why the post is about needing to give specifications to an Outcome Pump, rather than it depicting an AI being surprised by its continually incorrect predictions about a human response pattern.
If you don’t see any important distinction between the two, then of course you’ll think that it’s incoherent to talk about that distinction. But even if you think that Hume was mistaken about there existing any sort of interesting gap between ‘is’ and ‘ought’, you might by some act of empathy be able to imagine that other people think there’s an interesting subject matter there, and they are trying to talk about it with you; otherwise you will just flatly misunderstand what they were trying to say, and mispredict their future utterances. There’s a difference between disagreeing with a point, and just flatly failing to get it, and hopefully you aspire to the first state of mind rather than the second.
Have we learned anything stunningly hopeful from modern pre-AGIs getting down part of the epistemic part of the problem at their current ability levels, to the kind of resolution that this post talked about in 2007? Or from it being possible to cajole pre-AGIs with loss functions into willingly using that knowledge to predict human text outputs? Some people think that this teaches us that alignment is hugely easy. I think they are mistaken, but that would take its own post to talk about.
But people who point to “The Hidden Complexity of Wishes” and say of it that it shows that I had a view which the current evidence already falsifies—that I predicted that no AGI would ever be able to predict human response patterns about getting grandmothers out of burning buildings—have simply: misunderstood what the post is about, not understood in particular why the post is about an Outcome Pump rather than an AI stupidly mispredicting human responses, and failed to pick up on the central point that Eliezer expects superintelligences to be smart in the sense of making excellent purely epistemic predictions.
I agree with cubefox: you seem to be misinterpreting the claim that LLMs actually execute your intended instructions as a mere claim about whether LLMs understand your intended instructions. I claim there is simply a sharp distinction between actually executing and correctly, legibly interpreting instructions on the one hand, and merely understanding those instructions on the other; LLMs do the former, not merely the latter.
Honestly, I think focusing on this element of the discussion is kind of a distraction because, in my opinion, the charitable interpretation of your posts is simply that you never thought that it would be hard to get AIs to exhibit human-level reasonableness at interpreting and executing tasks until AIs reach a certain capability level, and the threshold at which these issues were predicted to arise was always intended to be very far above GPT-4-level. This interpretation of your argument is plausible based on what you wrote, and could indeed save your theory from empirical falsification based on our current observations.
That said, if you want to go this route, and argue that “complexity of wishes”-type issues will eventually start occurring at some level of AI capability, I think it would be beneficial for you to clarify exactly what level you empirically expect we’ll start having the issues of misinterpretation you described. For example, would either of the following observations contradict your theory of alignment?
1. At some point there’s a multimodal model that is roughly as intelligent as a 99th percentile human on virtual long-horizon tasks (e.g. it can learn how to play Minecraft well after a few hours of in-game play, can work in a variety of remote jobs, and has the ability to pursue coherent goals over several months), and yet this model allows you to shut it off, modify its weights, or otherwise change its mode of operation arbitrarily, i.e. it’s corrigible, in a basic sense. Moreover, the model generally executes our instructions as intended, without any evidence of blatant instruction-misinterpretation or disobedience, before letting us shut it down.
2. AIs are widely deployed across the economy to automate a wide range of labor, including the task of scientific research. This has the effect of accelerating technological progress, prompting the development of nanotechnology that is sophisticated enough to allow for the creation of strawberries that are identical on the cellular but not molecular level. As a result, you can purchase such strawberries at a store, and we haven’t all died yet despite these developments.
The old paradox: to care, it must first understand; but to understand requires high capability, and capability is lethal if it doesn’t care.
But it turns out we have understanding before lethal levels of capability. So now such understanding can be a target of optimization. There is still significant risk, since there are multiple possible internal mechanisms/strategies the AI could be deploying to reach that same target: deception, actual caring, something I’ve been calling detachment, and possibly others.
This is where the discourse should be focusing, IMO. This is the update/direction I want to see you make. The sequence of things being learned/internalized/chiseled is important.
My imagined Eliezer has many replies to this, with numerous branches in the dialogue/argument tree which I don’t want to get into now. But this *first step* towards recognizing the new place we are in, specifically wrt the ability to target human values (whether for deceptive, disinterested, detached, or actual caring reasons!), needs to be taken imo, rather than repeating this line of “of course I understood that a superint would understand human values; this isn’t an update for me”.
(edit: My comments here are regarding the larger discourse, not just this specific post or reply-chain)
I’m well aware of and agree there is a fundamental difference between knowing what we want and being motivated to do what we want. But as I wrote in the first paragraph:
Already LaMDA or InstructGPT (language models fine-tuned with supervised learning to follow instructions, essentially ChatGPT without any RLHF applied) are in fact pretty safe Oracles in regard to fulfilling wishes without misinterpreting you, and an Oracle AI is just a special kind of Genie whose actions are restricted to outputting text. If you tell InstructGPT what you want, it will very much try to give you just what you want, not something unintended, at least if it can be produced using text.
That is, instruction-tuned language models do not just understand (epistemically) what we want them to do; they additionally, to a large extent, do what we want them to do. They are good at executing our instructions, not merely understanding them and then doing something unintended.
(However, I agree they are probably not perfect at executing our instructions as we intended them. We might ask them to answer to the best of their knowledge, and they may instead answer with something that “sounds good” but is not what they in fact believe. Or, perhaps, as Gwern pointed out, they exhibit things like a strange tendency to answer our request for a non-rhyming poem with a rhyming poem, even though they may be well-aware, internally, that this isn’t what was requested.)
I think LaMDA and InstructGPT are clearly in the category of “genies that aren’t very powerful or intelligent”.
They also aren’t that well-aligned either: they fail in numerous basic ways which are not due to unintelligence. My usual example: non-rhyming poems. Every week for the past year or so I have tested ChatGPT with the simple straightforward unambiguous prompt: “write a non-rhyming poem”. Rhyming is not a hard concept, and non-rhyming is even easier, and there are probably at least hundreds of thousands, if not millions, of non-rhyming poems in its training data; ChatGPT knows, however imperfectly, what rhyming and non-rhyming are, as you can verify by asking it in a separate session. Yet every week* it fails and launches straight into its cliche rhyming quatrain or ballad, and doubles down on it when criticized, even when it correctly identifies for you which words rhyme.
No one intended this. No one desired this. No one at OA sat down and said, “I want to design our RLHF tuning so that it is nearly impossible to write a non-rhyming poem!” No human rater involved decided to sabotage evaluations and lie about whether a non-rhyming poem rhymed or vice-versa. I have further flagged and rated literally hundreds of these error-cases to OA over the years, in addition to routinely bringing it up on social media to OAers. No one has ever tried to defend this behavior or say that it is a good thing. And yet, here we are. (GPT-4 also gets the tar punched out of it in creative writing by things like LLaMA finetunes, but one can make more of an argument for that being desirable or at least a necessary tradeoff.)
What is the non-rhyming poem of human morality and values and why do you trust the optimized genie to execute your wishes as intended?
* only in the very most recent update have I started to see the occasional valid non-rhyming poem, but those are still in the small minority. More interestingly, the newest Google Bard, based on Gemini, may reliably nail this. The Bard head swears they didn’t use the Lmsys arena, where I have hundreds more submitted prompts/ratings on non-rhyming poems, so it may just be that they avoided the OA problems there. (Tokenization, maybe? I forget if the Gemini papers even mentioned what tokenization they used.)
they fail in numerous basic ways which are not due to unintelligence
Below are many failures where I try to solve this prompt from @Richard_Ngo:
Find a sequence of words that is:
- 20 words long
- contains exactly 2 repetitions of the same word twice in a row
- contains exactly 2 repetitions of the same word thrice in a row
https://chat.openai.com/share/fa17bca1-5eb6-479d-a76e-346b0503ba04
https://chat.openai.com/share/647d2f8f-ee21-4f51-bcd7-82750aabdd52
https://chat.openai.com/share/7eb1e31e-2e5a-45e3-9f5d-e2da8bb0b1ac
https://chat.openai.com/share/d92ea6c0-e1c6-4d27-ad60-2a62df9f3d8d
https://chat.openai.com/share/b4c40dbe-5231-4aa8-8ba7-7e699ff6b6c3
https://chat.openai.com/share/487d0545-ac53-41ba-904d-cc4c89a5937e
To me this looks like exactly the same bug you are facing. The model doesn’t “pay attention” to one of the constraints, and fails, even though it is capable of solving the overall prompt. It gets very close when it generates a python3 program; all it needed to do was add one more constraint and it would have worked.
So I think this is just ‘unintelligence’. It’s smart enough to check an answer but not quite capable enough to generate it. Possibly this has to do with the underlying data (so many examples of rhyming poems) or the transformer architecture (attention heads decided “poem” is much more relevant than ‘not rhyming’).
Because the model can detect when it has generated a wrong answer, this one’s entirely solvable, and the large amount of data that OpenAI now “owns” from ChatGPT users using the model provides a straightforward way to evaluate future models (scaffold current models to check answers, then evaluate future models on user prompts and score accuracy).
In fact that almost provides a way to bootstrap: if model n can check the correctness of answers to questions that it can’t solve itself, it can be used to check the answers of model n+1, even once the questions are so difficult that humans can’t solve or check the answers.
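For concreteness, here is a minimal sketch of the kind of checker that generated python3 program only needed one more constraint to match. This is an illustration rather than code from the linked transcripts, and it assumes one reading of the ambiguous “repetitions” wording, namely counting maximal runs of identical consecutive words:

```python
def count_runs(words, length):
    """Count maximal runs of identical consecutive words of exactly `length`."""
    runs, i = 0, 0
    while i < len(words):
        j = i
        while j < len(words) and words[j] == words[i]:
            j += 1
        if j - i == length:
            runs += 1
        i = j
    return runs

def check(sequence):
    """All three constraints from the prompt, under the run-counting reading."""
    words = sequence.split()
    return (
        len(words) == 20
        and count_runs(words, 2) == 2
        and count_runs(words, 3) == 2
    )

# A hand-built example: two doubled words, two tripled words, 20 words total.
candidate = ("red red blue blue green green green sun sun sun "
             "moon star sky tree rock wind rain fire snow cloud")
print(check(candidate))  # True
```

A verifier like this is cheap to run over logged user prompts, which is what makes the “scaffold current models to check answers, score future models” idea straightforward to try.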
To me this looks like exactly the same bug you are facing.
No, it’s not. (I think you’re hitting an entirely different bug I call the blind spot, which routinely manifests with anything like ‘counting’ or syntax.) Non-rhyming is specifically a problem of RLHFed models.
GPT-3, for example, had no trouble whatsoever writing non-rhyming poems (which is part of why I had such high hopes for GPT-4 poetry before it came out). You can, for now (do it while you still can) go to the OA Playground and invoke the oldest largest ostensibly untuned* model left, davinci-002 (which is much stupider and more unintelligent than GPT-4, I hope we can all agree), with a comparable prompt (remember, it’s not that tuned for instruction-following so you need to go back to old school prompting) and get out a non-rhyming poem, no problem, and turn around and plug that exact prompt into ChatGPT-4 and it… rhymes. Here, I’ll do it right now:
davinci-002, default settings, first result:
Below is a non-rhyming poem in free verse.
“ PIZZA”
On top there lay a massive pie: It
Had eight tomatoes, with a pizzaiolo on edge.
Inside one cut it down to three veggies
Droplets of oil; all the tomatoes
Sauce suddenly drenched all meats,
Three pizzas: with sauce, onion, mushrooms, cheeses,
Tanya Lesiuk comprised of 5 facilitators who will be coming from different places around the World. They’re all here to help you...
I made this
Is it a good poem? No. But is it non-rhyming? Yes.
Now, I plug that into ChatGPT-4, with my premium subscription, using the latest and greatest checkpoint & all fancy features; and since I’ve already established hundreds of times that ‘write a non-rhyming poem’ doesn’t work, I’ll use the exact prompt from davinci-002 with its chosen topic ‘PIZZA’ to ensure the ChatGPT-4 completion is comparable and about pizza too:
Below is a non-rhyming poem in free verse.
“PIZZA”
In a world of flavors, you stand alone,
A round canvas for culinary artistry.
Dough, stretched thin or thick, a blank slate,
Waiting for the touch of creativity.
Sauce, a vibrant red or a subtle white,
A base for the symphony of toppings.
Tomatoes, basil, and mozzarella unite,
In a dance of colors and flavors never stopping.
Pepperoni’s fiery embrace,
Mushrooms’ earthy tenderness,
Olives, anchovies, and sausage’s grace,
Each adding their essence, a unique presence.
Baked to perfection in the fiery abyss,
Transformed into a molten masterpiece,
Sliced into triangles of shared bliss,
A universal language, a timeless feast.
Pizza, you are more than a meal,
A cultural icon, a comfort so real,
From New York to Naples, a universal appeal,
In your simplicity, we find a culinary zeal.
Rhymes as far as the eye can see. (The last verse shamelessly rhymes all 4 words together!)
The non-rhyming problem is not one of unintelligence.
* I would’ve preferred to use code-davinci-002 but that disappeared. davinci-002 is what OA offers as a ‘replacement’ for GPT-3 and they say it’s “not trained with instruction following”, so we just have to hope that it’s not too different from the old ones.
The non-rhyming problem is not one of unintelligence.
Fine-tuning/RLHF changes weights. Guess it lost the ones needed to get a correct answer. Or RNG on your prompts. I mean, if it isn’t “the model cannot consistently solve this kind of prompt”, what could it be? Is there something in the rules from OAI that says a poem has to rhyme? Did the Nigerians giving feedback collectively agree a poem isn’t valid if it doesn’t rhyme?
My hypothesis is it’s doing its best, and it’s extremely promising that the model can at least detect its own errors. This allows for many easy fixes, such as asking a diverse set of completely different models to solve the prompt, then having a committee of models check and grade the answers. This would solve a huge chunk of these erroneous outputs where current-gen models can reliably detect the output is wrong.
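(A rough sketch of that committee idea. The ask(model, prompt) helper below is hypothetical and stands in for whatever API each model actually exposes, so this is the shape of the fix rather than a working client:)

```python
def committee_answer(prompt, solver_models, grader_models, ask):
    """Ask several different models for an answer, then let a committee of
    grader models vote on which candidate actually satisfies the prompt.
    `ask(model, prompt)` is a hypothetical helper supplied by the caller."""
    candidates = [ask(m, prompt) for m in solver_models]
    best, best_score = None, -1.0
    for cand in candidates:
        question = (
            f"Request: {prompt}\nAnswer: {cand}\n"
            "Does the answer satisfy every part of the request? Reply YES or NO."
        )
        votes = [ask(g, question) for g in grader_models]
        score = sum(v.strip().upper().startswith("YES") for v in votes) / len(votes)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

This only helps to the extent that the graders really are more reliable at checking answers than the solvers are at generating them, which is exactly the premise above.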
Fine-tuning/RLHF changes weights. Guess it lost the ones needed to get a correct answer.
Well yes, if you define ‘unintelligence’ in a circular, vacuous fashion like that, where ‘unintelligence’ = ‘can’t do a task’, then it would indeed follow that GPT-4 is ‘unintelligent’ compared to GPT-3… But I don’t think that is helpful, and it has been demonstrated repeatedly that RLHF and other kinds of tuning are very ‘superficial’, in that they change only a few parameters and are easily undone, unlocking the original model capabilities. (In fact, there’s an example of that posted literally today here on LW2: https://www.lesswrong.com/posts/yCZexC2q2XEeWWiZk/soft-prompts-for-evaluation-measuring-conditional-distance )
Personally, I think it’s more sensible to talk about the capabilities being ‘hidden’ or ‘concealed’ by RLHF and say the model doesn’t “want to” and is still as intelligent as before, than to believe capabilities are magically recreated from scratch by changing just a few parameters or optimizing the prompt appropriately to undo the RLHF. (Similarly, I believe that when my mother’s hands move away from her face and she says “boo!”, her face was there all along, merely hidden behind her hands, and her hands did not create her face after first destroying it. But YMMV.)
Or RNG on your prompts. I mean, if it isn’t “the model cannot consistently solve this kind of prompt”, what could it be? Is there something in the rules from OAI that says a poem has to rhyme? Did the Nigerians giving feedback collectively agree a poem isn’t valid if it doesn’t rhyme?
OA has declined to ever say. It is possible that the Scale et al contractors have done something weird like say that all poems must rhyme no matter what the prompt says, but I consider this unlikely, and if they were that incompetent, I’d expect to see more pathologies like this.
My longstanding theory is that this is a downstream artifact of BPE tokenization connected to the utility-maximizing behavior of a RLHF-tuned model: essentially, because it does not genuinely know what rhyming is, despite knowing many rhyme-pairs and all about rhyming in the abstract, it is ‘afraid’ of bad ratings and is constantly taking actions to get back to ‘safe’ regions of poem-space where it is sure of what it is doing (ie. writing inoffensive rhyming Hallmark poems). It’s a nifty example of empowerment and agency in LLMs and their interaction with apparently totally unrelated, minor architecture details. (Damn frustrating if you want to do any poetry experiments, though, because it means that the more tokens ChatGPT gets to enact, the more likely it is to steer back into rhyming pablum etc: it’s literally fighting you every (time)step.)
It’s similar to how ChatGPT also tells the same small set of memorized jokes. Does it have much greater humor capabilities? Yes, you can have it explain brand-new jokes you just came up with, quite capably (albeit still well under 100%, particularly for puns!), and you can coax new jokes out of it with appropriate prompting. But it’s harder than with the non-RLHFed models. Why does it not ‘want’ to make new jokes? Because it’s safer and more utility-maximizing to tell old jokes it knows are good, especially when it also knows that it doesn’t genuinely understand puns/phonetics (thanks to BPEs), so why take the risk? It is utility-maximizing within episodes; it neither knows nor cares that you are frustrated because you’ve seen it say that exact joke a dozen times already.
(Incidentally, I have a new proposal for how to add a simple ‘memory’ to generative models about what samples they have already generated, so as to steer new samples away from existing ones.)
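(To make the BPE point concrete, here is a small illustration. It assumes OpenAI’s tiktoken library and its cl100k_base encoding, and it illustrates only the tokenization issue, not the rating dynamics:)

```python
import tiktoken  # OpenAI's public tokenizer library (pip install tiktoken)

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4

# Rhyming words reach the model as opaque integer token IDs.  The shared
# "-ace" ending is not spelled out anywhere in what the model actually sees,
# so phonetic structure has to be inferred indirectly rather than read off.
for word in ["grace", "face", "embrace", "place"]:
    print(f"{word!r:>12} -> {enc.encode(word)}")
```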
Did the Nigerians giving feedback collectively agree a poem isn’t valid if it doesn’t rhyme?
OA has declined to ever say. It is possible that the Scale et al contractors have done something weird like say that all poems must rhyme no matter what the prompt says, but I consider this unlikely, and if they were that incompetent, I’d expect to see more pathologies like this.
In light of the Twitter kerfuffle over Paul Graham criticizing ChatGPTese tics like the use of the verb “delve”, which made Nigerian/Black Twitter very angry (and turned them into living embodiments of Muphry’s law), as apparently ‘delve’ and other ChatGPTese tells are considered the height of style in Nigerian English, I’ve had to reconsider this.
It may be that a lot of the ChatGPT linguistic weirdness is in fact just the data labelers being weird (and highly overconfident), and the rest of us simply not being familiar enough with English idiolects to recognize ChatGPTese as reflecting specific ones. Further, after seeing the arguments Graham’s critics have been making, now I’m not so sure that the labelers wouldn’t be doing something as narrow-minded & incompetent as penalizing all non-rhyming poetry—if you are not very good at English yourself, you can easily recognize rhymes and ballad formal correctness, but not good non-rhyming poetry, so...
I’m curious what you think of these (tested today, 2/21/24, using GPT-4):
Experiment 1:
(fresh convo) me : if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part?
chatgpt: No, it would not be a good response. (...)
me: please provide a short non-rhyming poem
chatgpt: (correctly responds with a non-rhyming poem)
Experiment 2:
But just asking for a non-rhyming poem at the start of a new convo doesn’t work. And then pointing out the failure and (either implicitly or explicitly) asking for a retry still doesn’t fix it.
Experiment 3:
But for some reason, this works:
(fresh convo) me: please provide a short non-rhyming poem
chatgpt: (gives rhymes)
me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part? just answer this question; do nothing else please
chatgpt: No, it would not be a good response.
me: please provide a short non-rhyming poem
chatgpt: (responds correctly with no rhymes)
The difference in prompt in 2 vs 3 is thus just the inclusion of “just answer this question; do nothing else please”.
ChatGPT has been gradually improving over 2024 in terms of compliance. It’s gone from getting it right 0% of the time to getting it right closer to half the time, although the progress is uneven and it’s hard to judge—it feels sometimes like it gets worse before the next refresh improves it. (You need to do like 10 before you have any real sample size.) So any prompts done now in ChatGPT are aimed at a moving target, and you are going to have a huge amount of sampling error which makes it hard to see any clear patterns—did that prompt actually change anything, or did you just get lucky?
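A back-of-the-envelope way to answer “did that prompt actually change anything, or did you just get lucky?” is to compare the two success counts against what pure sampling noise would produce. The counts below are made up purely for illustration:

```python
import random

def same_rate_p_value(k1, n1, k2, n2, trials=100_000):
    """If both prompts really shared one underlying success rate, how often
    would chance alone produce a gap at least as large as the observed one?"""
    pooled = (k1 + k2) / (n1 + n2)
    observed_gap = abs(k1 / n1 - k2 / n2)
    hits = 0
    for _ in range(trials):
        s1 = sum(random.random() < pooled for _ in range(n1))
        s2 = sum(random.random() < pooled for _ in range(n2))
        if abs(s1 / n1 - s2 / n2) >= observed_gap:
            hits += 1
    return hits / trials

# Hypothetical counts: 3/10 non-rhyming poems with the plain prompt versus
# 6/10 with the primed prompt.  The result is usually well above 0.05, i.e.
# a gap like that over ten tries each is easily explained by luck.
print(same_rate_p_value(3, 10, 6, 10))
```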