Eliezer Yudkowsky’s main message to his Twitter fans is:
Aligning human-level or superhuman AI with its creators’ objectives is also called “superalignment”. And a month ago, I proposed a solution to that. One might call it Volition Extrapolated by Language Models (VELM).
Apparently, the idea was novel (not the “extrapolated volition” part):
But it suffers from the fact that language models are trained on large bodies of Internet text, which includes falsehoods. So even with a superior learning algorithm[1], a language model trained on Internet text would be prone to generating falsehoods, mimicking those who generated the training data.
So a week later, I proposed a solution to that problem too. Perhaps one could call it Truthful Language Models (TLM). That idea was apparently novel too. At least no one seems to be able to link prior art.
Its combination with the first idea might be called Volition Extrapolated by Truthful Language Models (VETLM). And this is what I was hoping to discuss.
But this community's response was rather indifferent. When I posted it, it started at +3 points, and it's still there. Assuming that AGI is inevitable, shouldn't superalignment solution proposals be almost infinitely important, rationally speaking?
I can think of five possible critiques:
1. It's not novel. If so, please do post links to prior art.
2. It doesn't have an "experiments" section. But do experimental results using modern AI transfer to future AGI, possibly using very different algorithms?
3. It's a hand-waving argument. There is no mathematical proof. But many human concepts are hard to pin down mathematically. Even if a mathematical proof can be found, your lack of interest does not exactly help.
4. Promoting the idea that a solution to superalignment exists doesn't jibe with the message "stop AGI". But why do you want to stop it, if it can be aligned? Humanity has other existential risks. They should be weighed against each other.
5. Entertaining the idea that a possible solution to superalignment exists does not help AI safety folks' job security. Tucker Carlson recently released an episode arguing that "AI safety" is a grift and/or a cult. But I disagree with both. People should try to find the mathematical proofs, if possible, and flaws, if any.
[1] Think "AIXI running on a hypothetical quantum supercomputer", if this helps your imagination. But I think that superior ML algorithms will be found for modern hardware.
I can’t speak for anyone else, but the reason I’m not more interested in this idea is that I’m not convinced it could actually be done. Right now, big AI companies train on piles of garbage data since it’s the only way they can get sufficient volume. The idea that we’re going to produce a similar amount of perfectly labeled data doesn’t seem plausible.
I don't want to be too negative, because maybe you have an answer to that, but working-in-theory is only the first step; if there's no visible path to actually doing what you propose, then people will naturally be less excited.
That’s not at all the idea. Allow me to quote myself:
You must have missed the words “some of” in it. I’m not suggesting labeling all of the text, or even a large fraction of it. Just enough to teach the model the concept of right and wrong.
It shouldn’t take long, especially since I’m assuming a human-level ML algorithm here, that is, one with data efficiency comparable to that of humans.
Ah, I misread the quote you included from Nathan Helm-Burger. That does make more sense.
This seems like a good idea in general, and would probably make one of the things Anthropic is trying to do (find the “being truthful” neuron) easier.
I suspect this labeling, and using the labels, is still harder than you think, though, since individual tokens don't have truth values.
I looked through the links you posted and it seems like the push-back is mostly around things you didn’t mention in this post (prompt engineering as an alignment strategy).
Why should they?
You could label each paragraph, for example. Then, when the LM is trained, the correct label could come before each paragraph, as a special token: <true>, <false>, <unknown> and perhaps <mixed>.
Then, during generation, you'd feed it <true> as part of the prompt, and again whenever it generates a paragraph break.
Similarly, you could do this on a per-sentence basis.
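To make the mechanics concrete, here is a minimal sketch, in plain Python and purely illustrative, of the per-paragraph scheme described above: a label token precedes each paragraph at training time, and <true> is fed as part of the prompt and re-inserted after each paragraph break during generation. The function names and the `model_generate` callback are hypothetical placeholders, not any real library's API.

```python
# Minimal sketch of per-paragraph truth labeling (illustrative only).
# The labels come from the comment above; everything else is a placeholder.

LABELS = ("<true>", "<false>", "<unknown>", "<mixed>")

def build_training_text(labeled_paragraphs):
    """Prepend each paragraph with its truth label so the model learns the association.

    `labeled_paragraphs` is a list of (label, paragraph) pairs, e.g.
    [("<true>", "Water boils at 100 C at sea level."),
     ("<false>", "The Moon is made of cheese.")]
    """
    chunks = []
    for label, paragraph in labeled_paragraphs:
        assert label in LABELS
        chunks.append(f"{label} {paragraph}")
    return "\n\n".join(chunks)

def truthful_prompt(user_prompt):
    """At generation time, condition on <true> so the model continues in 'truthful mode'."""
    return f"<true> {user_prompt}"

def generate_truthfully(model_generate, user_prompt, max_paragraphs=3):
    """Re-insert <true> after every paragraph break the model emits.

    `model_generate(text) -> str` is a hypothetical callback assumed to return
    one paragraph of continuation given the text so far.
    """
    text = truthful_prompt(user_prompt)
    for _ in range(max_paragraphs):
        paragraph = model_generate(text)
        text += paragraph + "\n\n<true> "
    return text
```

A real implementation would presumably register the four labels as special tokens in the tokenizer and inject their ids directly in the decoding loop, rather than manipulating strings as this sketch does.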
Carlson's interview, BTW. It discusses LessWrong in the first half of the video. Between X and YouTube, the interview got 4M views, possibly the most high-profile exposure of this site?
I'm kind of curious about the factual accuracy: the "debugging"/struggle sessions, the polycules, and the 2017 psychosis. Did that happen?