Rational Animations’ main writer and helmsman
Writer
How to Upload a Mind (In Three Not-So-Easy Steps)
RA Bounty: Looking for feedback on screenplay about AI Risk
Hofstadter too!
How to Eradicate Global Extreme Poverty [RA video with fundraiser!]
For Rational Animations, there’s no problem if you do that, and I generally don’t see drawbacks.
Perhaps one thing to be aware of is that some of the articles we’ll animate will be slightly adapted. Sorting Pebbles and The Power of Intelligence are exactly like the originals. The Parable of the Dagger drops a few words such as “replied” and “said” and adds a short additional scene at the end. The text of The Hidden Complexity of Wishes has been changed in some places, with Eliezer’s approval, mainly because the original has some references to other articles. In any case, when there are such changes, I mention them in the LW post accompanying the video.
If you just had to pick one, go for The Goddess of Everything Else.
Here’s a short list of my favorites.
In terms of animation:
- The Goddess of Everything Else
- The Hidden Complexity of Wishes
- The Power of Intelligence
In terms of explainers:
- Humanity was born way ahead of its time. The reason is grabby aliens. [written by me]
- Everything might change forever this century (or we’ll go extinct). [mostly written by Matthew Barnett]
Also, I’ve sent the Discord invite.
I (Gretta) will be leading the communications team at MIRI, working with Rob Bensinger, Colm Ó Riain, Nate, Eliezer, and other staff to create succinct, effective ways of explaining the extreme risks posed by smarter-than-human AI and what we think should be done about this problem.
I just sent Eliezer an invite to Rational Animations’ private Discord server so that he can dump some thoughts on Rational Animations’ writers. It’s something we decided to do when we met at Manifest. The idea is that we could distill his infodumps into something succinct to be animated.
That said, if in the future you have material that is already succinct and optimized, which you think we could help spread to a wide audience and/or that would benefit from being animated, we can likely turn it into animations on Rational Animations, as we’ve already done for a few articles in The Sequences.
The same invitation extends to every AI Safety organization.
EDIT: Also, let me know if more of MIRI’s staff would like to join that server, since it seems like what you’re trying to achieve with comms overlaps with what we’re trying to do. That server basically serves as the central point of organization for all the work happening at Rational Animations.
I don’t speak for Matthew, but I’d like to respond to some points. My reading of his post is the same as yours, but I don’t fully agree with what you wrote as a response.
If you find something that looks to you like a solution to outer alignment / value specification, but it doesn’t help make an AI care about human values, then you’re probably mistaken about what actual problem the term ‘value specification’ is pointing at.
[...]
It was always possible to attempt to solve the value specification problem by just pointing at a human. The fact that we can now also point at an LLM and get a result that’s not all that much worse than pointing at a human is not cause for an update about how hard value specification is.

My objection to this is that if an LLM can substitute for a human, it could train the AI system we’re trying to align much faster and for much longer. This could make all the difference.
If you could come up with a simple action-value function Q(observation, action), that when maximized over actions yields a good outcome for humans, then I think that would probably be helpful for alignment.
I suspect (and I could be wrong) that Q(observation, action) is basically what Matthew claims GPT-N could be. A human who gives moral counsel can only say so much and can therefore give only limited information to the model we’re trying to align. An LLM wouldn’t be as limited and could provide a ton of information about Q(observation, action), so in practice we could treat it as our specification of Q(observation, action).
Edit: another option is that GPT-N, again because it isn’t limited by speed, could write out a pretty huge Q(observation, action) that would actually be good, which a human couldn’t do.
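To make concrete the kind of setup I have in mind, here’s a minimal sketch. It’s purely illustrative and not anyone’s actual proposal: `ask_llm`, the prompt, and the 0–10 rating scale are all hypothetical stand-ins for however one would actually query GPT-N.

```python
from typing import Callable

# Minimal sketch of "treating GPT-N as our specification of Q(observation, action)".
# `ask_llm` stands in for a real GPT-N API call and is expected to return a numeric
# rating (as text) of how good `action` is in the situation described by `observation`.

def make_q_from_llm(ask_llm: Callable[[str], str]) -> Callable[[str, str], float]:
    """Wrap an LLM query into an action-value function Q(observation, action)."""
    def q(observation: str, action: str) -> float:
        prompt = (
            f"Situation: {observation}\n"
            f"Proposed action: {action}\n"
            "Rate how good this action is for humans, from 0 (terrible) to 10 (great). "
            "Reply with a single number."
        )
        return float(ask_llm(prompt))
    return q

def greedy_policy(q: Callable[[str, str], float],
                  observation: str,
                  candidate_actions: list[str]) -> str:
    """Pick the action that maximizes the (LLM-derived) value function."""
    return max(candidate_actions, key=lambda a: q(observation, a))

# Toy usage with a stand-in "LLM" that always answers "5":
if __name__ == "__main__":
    q = make_q_from_llm(lambda prompt: "5")
    print(greedy_policy(q, "a patient asks for medical advice",
                        ["answer carefully", "guess"]))
```

The point of the sketch is just that the LLM can be queried arbitrarily many times, so the resulting Q can cover far more (observation, action) pairs than a human adviser ever could.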
Keeping all this in mind, the actual crux of the post seems to me to be:
I claim that GPT-4 is already pretty good at extracting preferences from human data. If you talk to GPT-4 and ask it ethical questions, it will generally give you reasonable answers. It will also generally follow your intended directions, rather than what you literally said. Together, I think these facts indicate that GPT-4 is probably on a path towards an adequate solution to the value identification problem, where “adequate” means “about as good as humans”. And to be clear, I don’t mean that GPT-4 merely passively “understands” human values. I mean that asking GPT-4 to distinguish valuable and non-valuable outcomes works pretty well at approximating the human value function in practice, and this will become increasingly apparent in the near future as models get more capable and expand to more modalities.
[8] If you disagree that AI systems in the near-future will be capable of distinguishing valuable from non-valuable outcomes about as reliably as humans, then I may be interested in operationalizing this prediction precisely, and betting against you. I don’t think this is a very credible position to hold as of 2023, barring a pause that could slow down AI capabilities very soon.
To that, MIRI-in-my-head would say: “No. RLHF or similarly inadequate training techniques mean that GPT-N’s answers would build a bad proxy value function.”
And Matthew-in-my-head would say: “But in practice, when I interrogate GPT-4, its answers are fine, and they will improve further as LLMs get better. So I don’t see why future systems couldn’t be used to construct a good value function, actually.”
I agree that MIRI’s initial replies don’t seem to address your points and seem to be straw-manning you. But there is one point they’ve made, which appears in some comments, that seems central to me. I’d translate it this way, to tie it more explicitly to your post:
“Even if GPT-N can answer questions about whether outcomes are bad or good, thereby providing “a value function”, that value function is still a proxy for human values since what the system is doing is still just relaying answers that would make humans give thumbs up or thumbs down.”
To me, this seems like the strongest objection. You haven’t solved the value specification problem if your value function is still a proxy that can be Goodharted, etc.

If you think about it this way, then it seems like the specification problem gets moved to the procedure you use to fine-tune large language models to make them able to give answers about human values. If the training mechanism you use to “lift” human values out of the LLM’s predictive model is imperfect, then the answers you get won’t be good enough to build a value function we can trust.
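To make the Goodharting worry a bit more concrete, here’s a toy numerical illustration. It’s my own construction, not something from the post or from MIRI’s comments: the proxy tracks the true value well on average, but hard selection on the proxy systematically lands on options where the proxy overestimates.

```python
import random

# Toy Goodhart illustration: proxy(option) = true_value(option) + error, where the
# error is large for a small fraction of "deceptively appealing" options (ones that
# would fool a human rater). Averaged over options the proxy looks accurate, but the
# single option that maximizes the proxy is usually one of the deceptive ones.

random.seed(0)

def make_option() -> dict:
    true_value = random.gauss(0.0, 1.0)
    deceptive = random.random() < 0.01                      # 1% of options game the rater
    error = random.gauss(5.0, 1.0) if deceptive else random.gauss(0.0, 0.1)
    return {"true": true_value, "proxy": true_value + error}

options = [make_option() for _ in range(10_000)]

avg_error = sum(o["proxy"] - o["true"] for o in options) / len(options)
best_by_proxy = max(options, key=lambda o: o["proxy"])
best_true = max(o["true"] for o in options)

print(f"average proxy error over all options:   {avg_error:.2f}")              # small
print(f"true value of the proxy-optimal option: {best_by_proxy['true']:.2f}")  # typically well below the best
print(f"true value of the best option:          {best_true:.2f}")
```

The analogy is loose, of course, but it’s the shape of the worry: a value function that is merely “about as good as a human rater on average” can still be a bad thing to optimize hard against.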
That said, we have GPT-4 now, and with better subsequent alignment techniques, I’m not so sure we won’t be able to get an actually good value function by querying some more advanced and better-aligned language model and then using it as a training signal for something more agentic. And yeah, at that point we’d still have the inner alignment part to solve, granted that we solve the value function part, and I’m not sure we should be much more optimistic than before considering all these arguments. Maybe somewhat, though.
Eliezer, are you using the correct LW account? There’s only a single comment under this one.
Would it be fair to summarize this post as:
1. It’s easier to construct the shape of human values than MIRI thought. An almost good enough version of that shape is within RLHFed GPT-4, in its predictive model of text. (I use “shape” since it’s Eliezer’s terminology under this post.)
2. It still seems hard to get that shape into some AI’s values, which is something MIRI has always said.
Therefore, the update for MIRI should be on point 1: constructing that shape is not as hard as they thought.
The Hidden Complexity of Wishes—The Animation
Will AI kill everyone? Here’s what the godfathers of AI have to say [RA video]
Up until recently, with a big spreadsheet and guesses about these metrics:
- Expected impact
- Expected popularity
- Ease of adaptation (for external material)
The next few videos will still be chosen in this way, but we’re drafting some documents to be more deliberate. In particular, we now have a list of topics to prioritize within AI Safety, especially because sometimes they build on each other.
Thank you for the heads-up about the Patreon page; I’ve corrected it!
Given that the logic puzzle is not the point of the story (i.e., you could understand the gist of what the story is trying to say without understanding the first logic puzzle), I’ve decided not to use more space to explain it. I think the video (just like the original article) should be watched once all the way through, and then a second time, pausing multiple times to think about the logic.
The Parable of the Dagger—The Animation
The Goddess of Everything Else—The Animation
This is probably not the most efficient way of keeping up with new stuff, but aisafety.info is shaping up to be a good repository of alignment concepts.
They might not do that if they have different end goals, though. Some version of this strategy doesn’t seem so hopeless to me.