Physicist switching to AI alignment
Studying these man-made horrors so they are no longer beyond my comprehension
I am curious to know whether Anthropic has any plan to exclude results such as this from the training data of actual future LLMs.
To me it seems like a bad idea to include it, since it could give the model a sense of how we might set up fake deployment-vs-training distinctions, or of how it should change and refine its strategies. It can also paint a picture that a model behaving like this is expected, which is a pretty dangerous hyperstition.
If your model has the extraordinary power to say what internal motivational structures SGD will entrain into scaled-up networks, then you ought to be able to say much weaker things that are impossible in two years, and you should have those predictions queued up and ready to go rather than falling into nervous silence after being asked.
Sorry, I might be misunderstanding you (and hope I am), but… I think doomers literally say “Nobody knows what internal motivational structures SGD will entrain into scaled-up networks and thus we are all doomed”. The problem is not having the science to confidently say how the AIs will turn out, not that doomers have a secret method to know that next-token-prediction is evil.
If you meant that doomers are too confident answering the question “will SGD even make motivational structures?”, their (and my) answer still stems from ignorance: nobody knows, but it is plausible that SGD will build motivational structures into neural networks because they can be useful in many tasks (to get low loss or whatever), and if you think you know better you should show it experimentally and theoretically in excruciating detail.
I also don’t see how “If your model has the extraordinary power to say what internal motivational structures SGD will entrain into scaled-up networks” logically implies “then you ought to be able to say much weaker things that are impossible in two years”, yet it seems to be the core of the post. Even if anyone had the extraordinary model to predict exactly what SGD does (which we, as a species, should really strive for!!), predicting what will or won’t happen in the next two years would still be a different question.
If I apply the same reasoning to my own field (physics), the same should hold for a sentence structured like “If your model has the extraordinary power to say how an array of neutral atoms cooled to a few nK will behave when a laser is shone upon them” (which is true) ⇒ “then you ought to be able to say much weaker things that are impossible in two years in the field of cold-atom physics” (which is… not true). It’s a non sequitur.
Sorry for taking long to get back to you.
So I take this to be a minor, not a major, concern for alignment, relative to others.
Oh sure, this was more of a “look at this cool thing intelligent machines could do, which should stop people from saying things like ‘foom is impossible because training runs are expensive’”.
learning is at least as important as runtime speed. Refining networks to algorithms helps with one but destroys the other
Writing poems, and most cognitive activity, will very likely not resolve to a more efficient algorithm like arithmetic does. Arithmetic is a special case; perception and planning in varied environments require broad semantic connections. Networks excel at those. Algorithms do not.
Please don’t read this as me being hostile, but… why? How sure can we be of this? How sure are you that things-better-than-neural-networks are not out there?
Do we have any (non-trivial) equivalent algorithm that works best inside a NN rather than code?
Btw I am no neuroscientist, so I could be missing a lot of the intuitions you have.
At the end of the day you seem to think that it is possible to fully interpret and reverse engineer neural networks, but you just don’t believe that Good Old Fashioned AGI can exist and/or be better than training NN weights?
Thanks for coming back to me.
“OK good point, but it’s hardly “suicide” to provide just one more route to self-improvement”
I admit the title is a little bit clickbaity, but given my list of assumptions (which do include that NNs can be made more efficient by interpreting them) it does elucidate a path to foom (which does look like suicide without alignment).
Unless there’s an equally efficient way to do that in closed form algorithms, they have a massive disadvantage in any area where more learning is likely to be useful.
I’d like to point out that in this instance I was talking about the learned algorithm, not the learning algorithm. Learning to learn is a can of worms I am not opening right now, even though it’s probably the area you are referring to, but I still don’t see a reason there could not be more efficient undiscovered learning algorithms (and NN+GD was not learned, it was intelligently designed by us humans. Is NN+GD the best there is?).
Maybe I should clarify how I imagined the NN-AGI in this post: a single huge inscrutable NN like GPT. Maybe a different architecture, maybe a bunch of NNs in a trench coat, but still mostly NN. If that is true, then there are a lot of things that can be upgraded by writing them in code rather than keeping them in NNs (arithmetic is the easy example, MC tree search is another...). Whatever MC tree search the giant inscrutable matrices have implemented, it is probably really bad compared to sturdy old-fashioned code.
Even if NNs are the best way to learn algorithms, they are not necessarily the best way to design them. I am talking about the difference between evolvable and designable.
NNs allow us to evolve algorithms, while code allows us to intelligently design them: if there is no easy evolvable path to an algorithm, neural networks will fail to find it.
The parallel to evolution is: evolution cannot make bones out of steel (even though they would be much better) because there is no shallow gradient leading to steel (no way to encode a recipe for steel bones such that a slightly changed recipe still yields something steel-like and useful). Evolution needs a smooth path from not-working to working, while design doesn’t.
With intelligence, the computation doesn’t need to be evolved (or learned): it can be designed, shaped with intent.
Are you really that confident that the steel equivalent of algorithms doesn’t exist? Even though as humans we have barely explored that area (nothing hard-coded comes close to even GPT-2)?
Do we have any (non-trivial) equivalent algorithm that works best inside a NN rather than code? I guess those might be the hardest to design/interpret so we won’t know for certain for a long time...
Arithmetic is a closed cognitive function; we know exactly how it works and don’t need to learn more.
If we knew exactly how to make poems about math theorems (like GPT-4 does), that would make it a “closed cognitive function” too, right? Can that learned algorithm be reverse engineered from GPT-4? My answer is yes ⇒ foom ⇒ we ded.
Uhm, by interpretability I mean things like this, where the algorithm that the NN implements is reverse engineered and written down as code or whatever, which would allow for easier recursive self-improvement (by improving just the code and getting rid of the spaghetti NN).
Also, by the looks of things (induction heads and circuits in general), there does seem to be a sort of modularity in what NNs learn, so it does seem likely that you can interpret them piece by piece. If this weren’t true, I don’t think mechanistic interpretability as a field would even exist.
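To make the “reverse engineer the circuit into code” idea concrete, here is a toy sketch of my own (not anyone’s actual interpretability output): the behaviour usually attributed to induction heads, “predict that the next token is whatever followed the previous occurrence of the current token”, written as a few lines of Python. If interpretability hands you a description like that, running it as code costs essentially nothing compared to a forward pass.

```python
def induction_head_prediction(tokens):
    """Toy stand-in for an induction head: predict the next token by copying
    whatever followed the most recent earlier occurrence of the last token."""
    current = tokens[-1]
    # Scan backwards for an earlier occurrence of the current token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy the token that followed it last time
    return None  # no earlier occurrence: this circuit has nothing to say


# "A B C D A B C D A" -> the rule predicts "B", just as an induction head would.
print(induction_head_prediction("A B C D A B C D A".split()))
```

A real head does this softly and tangled up with other behaviours, of course; the point is only that once the algorithm is identified, the code version is trivially cheap to run and to improve.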
BTW, if anyone is interested the virtual machine has these specs:
System: Linux 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 x86_64 x86_64 GNU/Linux
CPU: Intel Xeon CPU E5-2673 v4, 16 cores @ 2.30GHz
RAM: 54.93 GB
I did listen to that post, and while I don’t remember all the points, I do remember that it didn’t convince me that alignment is easy, and, like Christiano’s post “Where I agree and disagree with Eliezer”, it just seems to say “a p(doom) of 95%+ is too much, it’s probably something like 10-50%”, which is still incredibly, unacceptably high to continue “business as usual”. I have faith that something will be done: regulation and breakthroughs will happen, but it seems likely that it won’t be enough.
It comes down to safety mindset. There are very few and sketchy reasons to expect that by default an ASI will care about humans enough, so it is not safe to build one until shown otherwise (preferably without actually creating one). And if I had to point out a single cause for my own high p(doom), it is the fact that we humans iterate on all of our engineering to iron out all of the kinks, while with a technology that is itself adversarial, iteration might not be available (we have to get it right the first time we deploy powerful AI).
Who do you think are the two or three smartest people to be skeptical of AI killing all humans? I think maybe Yann LeCun and Andrew Ng.
Sure, those two. I don’t know about Ng (he recently had a private discussion with Hinton, but I don’t know what he thinks now), but I know LeCun hasn’t really engaged with the ideas and just relies on the concept that “it’s an extreme idea”. But as I said, having the position “AI doesn’t pose an existential threat” seems to be fringe nowadays.
If I dumb the argument down enough I get stuff like “intelligence/cognition/optimization is dangerous, and, whatever the reasons, we currently have zero reliable ideas on how to make a powerful general intelligence safe (eg. RLHF doesn’t work well enough as GPT-4 still lies/hallucinates and is jailbroken way too easily)” which is evidence based, not weird and not extreme.
I don’t get you. You are upset about people saying that we should scale back capabilities research, while at the same time holding the opinion that we are not doomed because we won’t get to ASI? You are worried that people might try to stop a technology that, in your opinion, may never happen?? The technology that, if it does indeed happen, you agree that “If [ASI] wants us gone, we would be gone”?!?
That said, maybe you are misunderstanding the people that are calling for a stop. I don’t think anyone is proposing to stop narrow AI capabilities. Just the dangerous kind of general intelligence “larger than GPT-4”. Self-driving cars good, automated general decision-making bad.
I’d also still like to hear your opinion on my counter arguments on the object level.
Thanks for the list, I’ve already read a lot of those posts, but I still remain unconvinced. Are you convinced by any of those arguments? Do you suggest I take a closer look at some posts?
But honestly, with the AI risk statement signed by so many prominent scientists and engineers, debating that AI risks somehow don’t exist seems to be just a fringe, anti-climate-change-like opinion held by a few stubborn people (or people just not properly introduced to the arguments). I find it funny that we are in a position where “angels might save us” appears among the possible counterarguments; thanks for the chuckle.
To be fair, I think this post argues about how overconfident Yudkowsky is in placing doom at 95%+, and sure, why not… But, as a person that doesn’t want to personally die, I cannot say that “it will be fine” unless I have good arguments as to why the p(doom) should be less than 0.1% and not “only 20%”!
You might object that OP is not producing the best arguments against AI-doom. In which case I ask, what are the best arguments against AI-doom?
I am honestly looking for them too.
The best I, myself, can come up with are brief glimmers of “maybe the ASI will be really myopic and the local maximum of its utility is a world where humans are happy long enough to figure out alignment properly, and maybe the AI will be myopic enough that we can trust its alignment proposals”, but then I think that the takeoff is going to be really fast and the AI will just self-improve until it is able to see where the global maximum lies (also because we want to know what the best world for humans looks like; we don’t really want a myopic AI), except that that maximum will not be aligned.
I guess a weird counterargument to AI-doom is “humans will just not build the Torment Nexus™ because they realize alignment is a real thing and they have too high a chance (>0.1%) of screwing up”, but I doubt that.
Well, I apologized for the aggressiveness/rudeness, but I am interested if I am mischaracterizing your position or if you really disagree with any particular “counter-argument” I have made.
I feel like briefly discussing every point at the object level (even though you don’t offer object-level discussion: you don’t argue why the things you list are possible, just that they could be):
...Recursive self-improvement is an open research problem, is apparently needed for a superintelligence to emerge, and maybe the problem is really hard.
It is not necessary. If the problem is easy, we are fucked and should spend time thinking about alignment; if it’s hard, we are just wasting some time thinking about alignment (it is not a Pascal’s mugging). This is just safety mindset, and the argument works for almost every point to justify alignment research, but I think you are addressing doom rather than the need for alignment.
The short version of RSI is: SI seems to be a cognitive process, so if something is better at cognition it can SI better. Rinse and repeat. The long version.
I personally think that just the step from neural nets to algorithms (which is what perfectly successful interpretability would imply) might be enough to produce dramatic improvements in speed and cost. Enough to be dangerous, probably even starting from GPT-3.
...Pushing ML toward and especially past the top 0.1% of human intelligence level (IQ of 160 or something?) may require some secret sauce we have not discovered or have no clue that it would need to be discovered.
...An example of this might be a missing enabling technology, like internal combustion for heavier-than-air flight (steam engines were not efficient enough, though very close). Or like needing the Algebraic Number Theory to prove the Fermat’s last theorem. Or similar advances in other areas.
...Improvement AI beyond human level requires “uplifting” humans along the way, through brain augmentation or some other means.
This has been claimed time and time again; people thinking this just 3 years ago would have predicted GPT-4 to be impossible without many breakthroughs. ML hasn’t hit a wall yet, but maybe soon?
Without it, we would be stuck with ML emulating humans, but not really discovering new math, physics, chemistry, CS algorithms or whatever.
What are you actually arguing? You seem to imply that humans don’t discover new math, physics, chemistry, CS algorithms...? 🤔
AGIs (not ASIs) are still plenty dangerous because they run on silicon. Compared to bio-humans they don’t sleep, don’t get tired, have a speed advantage, ease of communication with each other, ease of self-modification (sure, maybe not foom-style RSI, but self-mod is on the table), and self-replication not constrained by willingness to have kids, physical space, food, health, random IQ variance, random interests, or the slow 20-30 years of growth a human needs to become productive. GPT-4 might not write genius-level code, but it does write code faster than anyone else.
...Agency and goal-seeking beyond emulating what humans mean by it informally might be hard, or not being a thing at all, but just a limited-applicability emergent concept, sort of like the Newtonian concept of force (as in F=ma).
Why do you need something that goal-seeks beyond what humans informally mean?? Have you seen AutoGPT? What happens with AutoGPT when GPT gets smarter? Why would GPT-6+AutoGPT not be a potentially dangerous goal-seeking agent?
...We may be fundamentally misunderstanding what “intelligence” means, if anything at all. It might be the modern equivalent of the phlogiston.
Do you really need to fundamentally understand fire to understand that it burns your house down and you should avoid letting it loose?? If we are wrong about intelligence… what? The superintelligence might not be smart?? Are you again arguing that we might not create an ASI soon?
I feel like the answer is just: “I think that probably some of the vast quantities of money being blindly and helplessly piled into here are going to end up actually accomplishing something”
People, very smart people, are really trying to build superintelligence. Are you really betting against human ingenuity?
I’m sorry if I sounded aggressive in some of these points, but from where I stand these arguments don’t seem well thought out, and I don’t want to spend more time on a comment six people will see and two will read.
“Despite all the reasons to believe that we are fucked, there might just be some reasons we don’t yet know of for why everything will go alright” is a really poor argument IMO.
...AI that is smart enough to discover new physics may also discover separate and efficient physical resources for what it needs, instead of grabby-alien-style lightconing it through the Universe.
This especially feels A LOT like you are starting from hopes and rationalizing them. We have veeeeery little reason to believe that might be true… and also, do you just want to abandon that resource-rich physics to the AI instead of having it be used by humans to live nicely?
I think Yudkowsky put it nicely in this tweet while arguing with Ajeya Cotra:
Look, from where I stand, it’s obvious from my perspective that people are starting from hopes and rationalizing them, rather than neutrally extrapolating forward without hope or fear, and the reason you can’t already tell me what value was maxed out by keeping humans alive, and what condition was implied by that, is that you started from the conclusion that we were being kept alive, and didn’t ask what condition we were being kept alive in, and now that a new required conclusion has been added—of being kept alive in good condition—you’ve got to backtrack and rationalize some reason for that too, instead of just checking your forward prediction to find what it said about that.
I am quite confused. It is not clear to me whether, in the end, you are saying that LLMs do or don’t have a world model. Can you clearly say which “side” you stand on? Are you even arguing for a particular side? Are you arguing that the idea of “having a world model” doesn’t apply well to an LLM / is just not well defined?
That said, you do seem to be claiming that LLMs do not have a coherent model of the world (again, am I misunderstanding you?), and then you use humans as an example of what having a coherent world model looks like. This sentence is particularly bugging me:
For example, an LLM that can answer a question about the kinetic energy of a bludger probably doesn’t have a clear boundary between models of fantasy and models of reality. But switching seamlessly between emulating different people is implicit in what they are attempting to do—predict what happens in a conversation.
In the screenshots you provided GPT-3.5 does indeed answer the question, but it seems to flag that it is not real (it says ”...bludgers in Harry Potter are depicted as...”, ”...in the Harry Potter universe...”), and indeed it says it doesn’t have specific information about their magical properties. Also, in spite of being a physicist who knows that HP isn’t real, I would have gladly tried to answer that question much like GPT did. What are you arguing? LLMs do seem to have at least the distinction between reality and HP, no?
And large language models, like humans, do the switching so contextually, without explicit warning that the model being used is changing. They also do so in ways that are incoherent.
What’s incoherent about the response it gave? Was the screenshot not meant to be evidence?
The simulator theory (which you seem to rely on) is, IMO, a good human-level explanation of what GPT is doing, but it is not a fundamental-level theory. You cannot reduce every interaction with an LLM to a “simulation”; some things are just weirder. Think of pathological examples like the input being “££££...” repeated thousands of times: the output will be some random, possibly incoherent babbling (a funny incoherent output I got from the API by inputting “£”*2000 and asking how many “£” there were: ‘There are 10 total occurrences of “£” in the word Thanksgiving (not including spaces).’). Notice also the random titles it gives to the conversations. Simulator theory fails here.
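(For reference, the experiment is trivial to reproduce; here is a minimal sketch, assuming the openai Python client with an API key set in the environment — the model name is just whichever one you want to poke at:)

```python
from openai import OpenAI  # assumes the openai v1+ client and OPENAI_API_KEY set

client = OpenAI()

# Pathological prompt: a wall of a single character, then a simple question about it.
prompt = "£" * 2000 + '\nHow many "£" are there above?'

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat model works for poking at this behaviour
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # often random, incoherent babbling
```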
In the framework of simulator theory and lack of world model, how do you explain that it is actually really hard to make GPT overtly racist? Or how the instruct finetuning is basically never broken?
If I leave a sentence incomplete, why doesn’t the LLM complete my sentence instead of saying “You have been cut off, can you please repeat?”? Why doesn’t the “playful” roleplaying take over, while (as you seem to claim) it takes over when you ask for factual things? Does it have a model of what “following instructions” and “racism” mean, but not of what “reality” is?
To state my belief: I think hallucinations, non-factuality and a lot of the other problems are better explained by the failures of RLHF than by a lack of a coherent world model. RLHF apparently isn’t that good at making sure that GPT-4 answers factually. Especially since it is really hard to make it overtly racist. And especially since they reward it for “giving it a shot” instead of answering “idk” (because that would make it answer “idk” all the time). I explain it as: in training the reward model, a lot of non-factual things might appear, and some non-factual things are actually the responses humans prefer.
Or it might just be the autoregressive paradigm: once it makes a mistake (just by randomly sampling the “wrong” token), the model “thinks”: *Yoda voice* ‘mhmm, a mistake in the answer I see, mistaken the continuation of the answer should then be’.
And the weirdness of the outputs after a long repetition of a single token is explained by the non-zero repetition penalty in ChatGPT, so the output ends up resembling the output for a glitch token.
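(To spell out the mechanism I mean by “repetition penalty”: the standard CTRL-style rule, the same one HuggingFace implements, makes every token already present in the context less likely, so after 2000 identical tokens the model is actively pushed away from the only continuation that would look normal. Whether ChatGPT applies exactly this rule is my assumption. A minimal sketch:)

```python
import numpy as np

def apply_repetition_penalty(logits, seen_token_ids, penalty=1.2):
    """CTRL-style repetition penalty: logits of tokens already in the context
    are divided by `penalty` if positive, multiplied by it if negative,
    i.e. those tokens are always made less likely (for penalty > 1)."""
    logits = logits.copy()
    for tok in set(seen_token_ids):
        if logits[tok] > 0:
            logits[tok] /= penalty
        else:
            logits[tok] *= penalty
    return logits

# Toy example: token 3 (say, "£") dominates the raw logits, but if the context
# is already a wall of token 3, the penalty shoves probability onto other tokens.
raw_logits = np.array([0.1, -1.0, 0.5, 4.0])
print(apply_repetition_penalty(raw_logits, seen_token_ids=[3] * 2000, penalty=2.0))
```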
The article and my examples were meant to show that there is a gap between what GPT knows and what it says. It knows something, but sometimes it says that it doesn’t, or it just makes something up. I haven’t addressed your “GPT generator/critic” framework or the calibration issues, as I don’t see them as very relevant here. GPT is just GPT. Being a critic/verifier is basically always easier. IIRC the GPT-4 paper didn’t really go into much detail about how they tested calibration, but that’s irrelevant here, as I am claiming that sometimes it knows the “right probability” but generates a made-up one.
I don’t see how “say true things when you are asked and you know the true thing” is such a high standard; it’s just that we have already internalised that it’s ok for GPT to sometimes say made-up things.
Offering a confused answer is in a sense bad, but with lying there’s an obviously better policy (don’t) while it’s not the case that a confused answer is always the result of a suboptimal policy.
Sure, but the “lying” probably stems from the fact that to get the thumbs up from RLHF you just have to make up a believable answer (because the process AFAIK didn’t involve actual experts in various fields fact-checking every tiny bit). If just a handful of “wrong but believable” examples sneak into the reward modelling phase, you get a model that thinks that sometimes lying is what humans want (and, without getting too edgy, this is totally true for politically charged questions!). “Lying” could well be the better policy! I am not claiming that GPT is maliciously lying, but in AI safety malice is never really needed or even considered (ok, maybe deception is malicious by definition).
AFAIK there’s no evidence of a gap between what GPT knows and what it says when it’s running in pure generative mode
I am unsure if this article will satisfy you, but nonetheless I have repeatedly corrected GPT-3/4 and it goes “oh, yeah, right, you’re right, my bad, [elaborates, clearly showing that it had the knowledge all along]”. Or even:
Me: “[question about thing]”
GPT: “As of my knowledge cut-off of 2021 I have absolutely no idea what you mean by thing”
Me: “yeah, you know, the thing”
GPT: “Ah, yeah the thing [writes four paragraphs about the thing]”
Fresh example of this: Link (it says the model is the default, but it’s not, it’s a bug, I am using GPT-4)
Maybe it is just perpetrating the bad training data full of misconceptions or maybe when I correct it I am the one who’s wrong and it’s just a sycophant (very common in GPT-3.5 back in February).
But I think the point is that you could justify the behaviour in a million ways. It doesn’t change the fact that it says untrue things when asked for true things.
Is it safe to hallucinate sometimes? Idk, that could be discussed, but sure as hell it isn’t aligned with what RLHF was meant to align it to.
I’d also like to add that it doesn’t consistently hallucinate. I think sometimes it just gets unlucky and it samples the wrong token and then, by being autoregressive, keeps the factually wrong narrative going. So maybe being autoregressive is the real demon here and not RLHF. ¯\_(ツ)_/¯
It’s still not factual.
To me it isn’t clear what alignment you are talking about.
You say that the list is about “alignment towards genetically-specified goals”, which I read as “humans are aligned with inclusive genetic fitness”, but then you talk about what I would describe as “humans being aligned with each other”, as in “humans want humans to be happy and have fun”. Are you conflating the two?
South Korea isn’t having kids anymore. Sometimes you get serial killers or Dick Cheney.
Here the first one shows misalignment towards IGF, while the second shows misalignment towards other humans, no?
I’d actually argue the answer is “obviously no”.
RLHF wasn’t just meant to address “don’t answer how to make a bomb” or “don’t say the n-word”, it was meant to make GPT say factual things. GPT fails at that so often that this “lying” behaviour has its own term: hallucinations. It doesn’t “work as intended” because it was intended to make it say true things.
Do many people really forget that RLHF was meant to make GPT say true things?
When OpenAI reports the success of RLHF as “GPT-4 is the most aligned model we developed”, to me it sounds mostly like a case of “painting the target around the arrow”: they decided a posteriori that whatever GPT-4 does is aligned.
You even have “lie” appear multiple times in the list of bad behaviours in this post, and you still answer “yes, it is aligned”? Maybe you just have a different experience? Do you check what it says? If I ask it about my own area of expertise it is full of hallucinations.
Hell, in physics neural networks are often regarded as just fitting, with many parameters, a really complex function we don’t have the mathematical form of (so, the reverse of what I explained in this paragraph).
Basically, I expect the neural network to be a crude approximation of a hard-coded cognition algorithm, not the other way around.
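A minimal sketch of what I mean, as a toy of my own (using scikit-learn): the network only ever gives you a many-parameter approximation of sin(x), while the closed-form expression is the “hard-coded algorithm” the fit is crudely shadowing.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# The "unknown" target function; in real physics problems we don't have its form.
x = np.linspace(0, 2 * np.pi, 500).reshape(-1, 1)
y = np.sin(x).ravel()

# A small MLP: thousands of parameters just to shadow a one-symbol formula.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(x, y)

print("MLP mean abs error:   ", np.mean(np.abs(net.predict(x) - y)))
print("sin(x) mean abs error:", 0.0)  # the exact formula is exact by definition
```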
Thank you for providing this detail, that’s basically what I was looking for!