It seems like we keep getting LLMs that are better and better at getting the point of fairly abstract concepts (e.g. understanding jokes). As compute increases and their performance improves, it seems increasingly likely that human “values” are within the class of things a not-that-heavily-finetuned LLM can represent.
For example, if I prompted a GPT-5 model fine-tuned on lots of moral opinions about stuff: “[details of world], would a human say that was a more beautiful world than today, and why?” I… don’t think it’d do terribly?
The same goes for e.g. how the AI would answer the trolley problem. I’d guess it’d look roughly like humans’ responses: messy, slightly different depending on the circumstance, but not genuinely orthogonal to most humans’ values.
This is obviously vulnerable to adversarial examples or extreme OOD settings, but then robustness seems to be increasing with compute used, and we can do a decent job of OOD-catching.
Is there a modern reformulation of “fragility of value” that addresses this obvious situational improvement? Because as of now, the pure “Fragility of Value” thesis seems a little absurd (though I’d still believe a weaker version).
The key thing here seems to be the difference between understanding a value and having that value. Nothing about the fragile value claim or the Orthogonality thesis says that the main blocker is AI systems failing to understand human values. A superintelligent paperclip maximizer could know what I value and just not do it, the same way I can understand what the paperclipper values and choose to pursue my own values instead.
Your argument is for LLMs understanding human values, but that doesn’t necessarily have anything to do with the values that they actually have. It seems likely that their actual values are something like “predict text accurately”, and this requires understanding human values but not adopting them.
I think you’re misunderstanding my point, let me know if I should change the question wording.
Assume we’re focused on outer alignment. Then we can provide a trained regressor LLM as the utility function, instead of e.g. “maximize paperclips”. So understanding and valuing are synonymous in that setting.
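To make that concrete, here’s a minimal sketch of the wiring I have in mind. `approval_model` is a hypothetical stand-in for the fine-tuned regressor LLM (stubbed here with a toy keyword scorer); the point is only that the planner maximizes the regressor’s output directly, so understanding and valuing coincide by construction:

```python
def approval_model(world_description: str) -> float:
    """Stand-in for a fine-tuned LLM regressor: maps a text description
    of a world state to a scalar 'human approval' score in [0, 1].
    (Toy keyword scorer; a real system would be a trained model.)"""
    good, bad = {"flourishing", "fair", "healthy"}, {"paperclips", "suffering"}
    words = set(world_description.lower().split())
    score = 0.5 + 0.1 * len(words & good) - 0.2 * len(words & bad)
    return max(0.0, min(1.0, score))

def choose_action(candidate_outcomes: dict) -> str:
    """Planner picks the action whose predicted outcome the regressor
    scores highest -- the regressor IS the utility function."""
    return max(candidate_outcomes,
               key=lambda a: approval_model(candidate_outcomes[a]))

outcomes = {
    "build_factory": "world full of paperclips and suffering",
    "fund_clinics": "healthy fair flourishing communities",
}
print(choose_action(outcomes))  # -> fund_clinics
```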
Ah, gotcha. I think the post is fine, I just failed to read.
If I now correctly understand, the proposal is to ask an LLM to simulate human approval, and use that as the training signal for your Big Scary AGI. I think this still has some problems:
Using an LLM to simulate human approval sounds like reward modeling, which seems useful. But LLMs aren’t trained to simulate humans, they’re trained to predict text. So, for example, an LLM will regurgitate the dominant theory of human values, even if it has learned (in an Eliciting Latent Knowledge sense) that humans really value something else.
Even if the simulation is perfect, using human approval isn’t a solution to outer alignment, for reasons like deception and wireheading.
I worry that I still might not understand your question, because I don’t see how fragility of value and orthogonality come into this?
It still does honestly seem way more likely to not kill us all than a paperclip-optimizer, so if we’re pressed for time near the end, why shouldn’t we go with this suggestion over something else?
Inner alignment (mesa-optimizers) is still a big problem.
Quick take: roughly speaking, adversarial examples are the modern reformulation you’re asking about.
In my mind the main issue here is that we probably need extreme levels of robustness / OOD-catching. And these probably only come much too late, after less-cautious actors have deployed AI systems that induce lots of x-risk.
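For reference, a minimal sketch of one standard OOD-catching baseline: flag an input as out-of-distribution when the model’s maximum softmax probability falls below a threshold. The classifier producing the logits is assumed, and the threshold value is illustrative:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_ood(logits, threshold=0.7):
    """Return True if the model is too uncertain to trust its output."""
    return max(softmax(logits)) < threshold

print(is_ood([5.0, 0.1, 0.2]))  # confident in-distribution -> False
print(is_ood([1.0, 0.9, 1.1]))  # near-uniform, likely OOD -> True
```

The worry in the parent comment is exactly that a detector like this needs to stay calibrated under far more optimization pressure than it was trained against.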
Interesting! I wonder whether adversarial robustness improvement is a necessary step in AGI capabilities, and thus represents a blocker from the other side.
Not to mention that there’s a race between “how many planning steps can you do” and “how hard have you made it to find adversarial examples”, and their relative growth curves determine which wins.
I think treating adversarial robustness/OOD handling as a single continuous dimension is the wrong way to go about it.
The basic robustness problem is that there are variables that are usually correlated, or usually restricted to some range, or usually independent, or otherwise usually satisfy some “nice” property. This allows you to be “philosophically lazy” by not making the distinctions that would be required if the nice property doesn’t hold.
But once the nice property fails, the distinctions you need to make are going to depend on what the purpose of your reasoning is. So there will be several different “ways” of being robust, most of which will not lead to alignment.
For instance, if you’re not good at lying, then telling the truth is basically the same as not getting caught lying. However, as you gain capabilities, the assumption that these two go together ends up failing, because you can lie and cover your tracks. The most appropriate way to generalize depends on what you’re trying to do, e.g. whether you are trying to help vs trying to convince others.
I think if you have already figured out a way to get the AI to try to be aligned to humans, it is reasonable to rely on capabilities researchers to figure out the OOD/adversarial robustness solutions necessary to make it work. However, here you instead want to go the other way, relying on capabilities researchers’ OOD/adversarial robustness to define “being aligned to humans”, and I don’t think this is going to work, since it lacks a ground truth purpose that could guide it.
Note that “in a new ontology the previous reward signals have become underspecified, and therefore within the reward module we have a sub-module that gets clarification from a human on which alternative hypothesis is true” is in principle a dynamic solution to that type of failure.
(See e.g. https://arxiv.org/abs/2202.03418)
To head off the anticipated response: this does still count as “the reward” to the model, because it is all part of the mechanism through which the reward is being generated from the state.
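A toy sketch of that dynamic solution, with all names invented for illustration: the reward module tracks multiple hypotheses about the intended reward, and a sub-module queries the human only when a new state makes the hypotheses come apart:

```python
def reward_with_clarification(state, hypotheses, ask_human):
    """hypotheses: list of candidate reward functions (state -> float).
    ask_human: callback that returns the index of the correct hypothesis."""
    scores = [h(state) for h in hypotheses]
    if max(scores) - min(scores) > 0.5:  # hypotheses have come apart
        chosen = ask_human(state, scores)
        hypotheses[:] = [hypotheses[chosen]]  # resolve the ambiguity in place
        return scores[chosen]
    return sum(scores) / len(scores)  # hypotheses agree: any will do

# Example: "keep the room tidy" vs "make the room look tidy" agree on
# normal states but disagree once the agent can cover up the mess.
tidy = lambda s: 1.0 if s["clean"] else 0.0
looks_tidy = lambda s: 1.0 if s["clean"] or s["covered"] else 0.0
hyps = [tidy, looks_tidy]
oracle = lambda state, scores: 0  # the human says: I meant actually tidy

print(reward_with_clarification({"clean": False, "covered": True}, hyps, oracle))
# -> 0.0, and hyps has collapsed to the human-chosen hypothesis
```

And, as the comment above anticipates, the whole thing (clarification sub-module included) is still just part of the mechanism generating the reward.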
Sure, but I consider this approach to fall under attempts to “try to be aligned to humans”. It doesn’t seem like it would be a blocker on the capabilities side if this is missing, only on the alignment side.
(On the alignment side, there’s the issue that your proposed solution is easier said than done.)
I guess I expect that even at low capability levels, reward disambiguation will be crucial, and capabilities researchers will be working on it.
I don’t see that as likely, because at low capability levels, researchers can notice that the reward isn’t working and just fix it themselves, without needing to rely on the AI asking them.
Consider a task like asking a generally-intelligent chatbot to buy you furniture you like. The only reasonable way to model the reward involves playing 20 questions about your sub-preferences for sofa styles. This seems like the nature of most service-sector tasks?
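A toy sketch of the 20-questions idea, assuming binary preference queries over invented furniture attributes: the bot narrows the candidate set by asking yes/no questions until one option remains.

```python
def elicit_preference(candidates, ask):
    """candidates: list of dicts of attributes describing items.
    ask: callback taking an (attribute, value) pair and returning True
    if the user wants that attribute value."""
    attrs = {(k, v) for c in candidates for k, v in c.items()}
    for attr, value in sorted(attrs):
        if len(candidates) == 1:
            break  # preference fully disambiguated
        if ask(attr, value):
            candidates = [c for c in candidates if c.get(attr) == value]
        else:
            candidates = [c for c in candidates if c.get(attr) != value]
    return candidates

sofas = [
    {"style": "modern", "color": "grey"},
    {"style": "modern", "color": "red"},
    {"style": "rustic", "color": "grey"},
]
# Simulated user who wants a grey modern sofa:
wants = lambda attr, value: (attr, value) in {("style", "modern"), ("color", "grey")}
print(elicit_preference(sofas, wants))  # -> [{'style': 'modern', 'color': 'grey'}]
```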
I have a hard time inferring the specifics of that scenario, and I think the specifics probably matter a lot. So I need to ask some further questions.
Why exactly would a generally-intelligent chatbot be useful for buying furniture (over, say, an expert system)? If I try to come up with reasons, I could imagine it would make sense if it has to find the best deal over unstructured data including all sorts of arbitrary settings, such as people who set their couch for sale. Or if it has to go out and get the furniture. Is that what you have in mind?
Furthermore, let me repeat that the hard part isn’t in manually specifying a distinction when you already have that distinction in mind; it’s in spontaneously recognizing the need for a distinction, accurately conveying the options to the humans, and interpreting their answer to pick the appropriate one. When it comes to something like a firm that sells a chatbot for furniture preferences, I don’t really see why this latter part is needed, because it seems like the people who make the furniture-buying chatbot could sit down, enumerate whatever preferences need to be clarified, and code that into the chatbot directly. The best explanation I can come up with is that you imagine it being much more general than this: more like a sort of servant bot which can handle many tasks, not just buying furniture?
Finally, I’m unsure of what capabilities you imagine the chatbot to have. For instance, a possible “ground truth” you could use for training would be to have humans rate the furniture after they’ve received and used it, on a scale from bad to good. For bots that are not very capable, perhaps the best way to optimize their ratings would be to just get good furniture. But for bots that are highly capable, there are many other ways to get good reviews, e.g. hacking into the system and overriding them. I’m not sure if you imagine the low-capability end or the high-capability end here.
The chatbot is “generally intelligent”, so buying furniture is just one of many tasks it may be asked to execute; another task it could be asked to do is “order me some food”.
The hard part is indeed in spontaneously recognizing distinctions—but we already reward RL agents for curiosity, i.e. taking an action for which your world model fails to predict the consequences. Predicting which new distinctions are salient-to-humans is a thing you can optimize, because you can cleanly label it.
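Sketched concretely (assuming a scalar task reward and feature-vector state predictions): the curiosity bonus is just the world model’s prediction error added onto the reward, so the agent is paid for visiting states it cannot yet predict. The same shape of bonus could in principle pay the agent for surfacing distinctions its preference model is unsure about.

```python
def total_reward(task_reward, predicted_next, actual_next, beta=0.1):
    """task_reward: scalar reward from the environment.
    predicted_next / actual_next: feature vectors for the next state.
    beta: illustrative weight on the curiosity term."""
    prediction_error = sum((p - a) ** 2
                           for p, a in zip(predicted_next, actual_next))
    return task_reward + beta * prediction_error

# Well-modeled transition: almost no bonus.
print(total_reward(1.0, [0.5, 0.5], [0.5, 0.5]))  # -> 1.0
# Surprising transition: curiosity bonus added.
print(total_reward(1.0, [0.5, 0.5], [2.5, 0.5]))  # -> 1.4
```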
Also to clarify, we’re only arguing here about whether this capability will be naturally invested-in, so I don’t think it matters if highly capable bots have other strategies.
I think the capabilities of the AI matter a lot for alignment strategies, and that’s why I’m asking you about them and why I need you to answer that question.
A subhuman intelligence would rely on humans to make most of the decisions. It would order human-designed furniture types through human-created interfaces and receive human-fabricated furniture. At each of those steps, it delegates an enormous number of decisions to humans, which makes those decisions automatically end up reasonably aligned, but also prevents the AI from doing optimization over them. In the particular case of human-designed interfaces, they tend to automatically expose information about the things that humans care about, and eliciting human preferences can be shortcut by focusing on these dimensions.
But a superhuman intelligence would solve tasks through taking actions independently of humans, as that can allow it to more highly optimize the outcomes. And a solution for alignment that relies on humans making most of the decisions would presumably not generalize to this case, where the AI makes most of the decisions.
I think there are intermediate cases, delegating some but not all decisions, that require this sort of tooling. See e.g. this paper from today, which focuses on how to learn intent: http://ai.googleblog.com/2022/04/simple-and-effective-zero-shot-task.html
This seems like the crux of the matter. I don’t think OOD or robustness is as straightforward as you think.
remind me what OOD stands for again?
Out of distribution
The problem is how you incorporate that understanding into an optimization process, not necessarily how you get an AI to understand those values.
Given my above reply to james.lucassen about explicitly using a regressor LLM as a reward model, does that give better insight?
Or are you skeptical of the AI’s mapping from “world state” into language? I’d argue that we might get away with having the AI natively define its world state as language, a la SayCan.
I have no idea what I mean, on further reflection. I’m as confused as you are on why this is hard if we have an accurate utility function sitting right there. Maybe the idea is that subject to optimization pressure it would fail?
Yeah so I think that’s what the adversarial example/OOD people worry about. That just seems… like it buys you a lot? And like we should focus more on those problems specifically.
The best solution I can think of to outer-aligning an AGI capable of doing STEM research is to build one that’s a value learner and an alignment researcher. Obviously for a value learner, doing alignment research is a convergent instrumental strategy: it wants to do whatever humans want, so it needs to better figure out what that is so it can do a better job. Then human values become an attractor.
However, to implement this strategy, you first need to build a value-learning AGI capable of doing STEM research (which obviously we don’t yet know how to do) that is initially sufficiently aligned to human values that it starts off inside the basin of attraction. I.e. it needs a passable first guess at human values to improve upon: one that’s sufficiently close that a) it doesn’t kill us all in the meantime while its understanding of our values is converging, b) it understands that we want things from it like honesty, corrigibility, willingness to shut down, fairness and so forth, and c) it understands that we can’t give it a complete description of human values, because we don’t fully understand them ourselves.
Your suggestion of using something like an LLM to encode a representation of human values is exactly the direction I think we should be exploring for that initial first guess at human values for a value-learning AGI. Indeed, there are already researchers building ethical-question test sets for LLMs.
The issue is—as I understand it—under a sufficiently powerful optimizer “everything” essentially becomes adversarial, including OOD-catching itself.
I understand this in principle, but that seems to imply that for less scary AGIs, this might actually work. That unlocks a pretty massive part of the solution space (e.g. helping with alignment). Obviously we don’t know exactly how much, but that seems reasonably testable (e.g. OOD detection is also a precondition for self-driving cars, so people know how to make it well-calibrated).
It’s not a “solution”, but it’s substantially harder to imagine a catastrophic failure from a large AGI project that isn’t actually bidding for superintelligence.