We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while worse than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.
Full paper available here: https://cdn.openai.com/papers/gpt-4.pdf
I want to highlight that in addition to the main 98-page “Technical Report,” OpenAI also released a 60-page “System Card” that seems to “highlight safety challenges” and describe OpenAI’s safety processes. Edit: as @vladimir_nesov points out, the System Card is also duplicated in the Technical Report starting on page 39 (which is pretty confusing IMO).
I haven’t gone through it all, but one part that caught my eye, from section 2.9 “Potential for Risky Emergent Behaviors” (page 14), shows some potentially good cross-organizational cooperation:
It seems pretty unfortunate to me that ARC wasn’t given fine-tuning access here, as I think it pretty substantially undercuts the validity of their survive and spread eval. From the text you quote it seems like they’re at least going to work on giving them fine-tuning access in the future, though it seems pretty sad to me for that to happen post-launch.
More on this from the paper:
Beth and her team have been working with both Anthropic and OpenAI to perform preliminary evaluations. I don’t think these evaluations are yet at the stage where they provide convincing evidence about dangerous capabilities—fine-tuning might be the most important missing piece, but there is a lot of other work to be done. Ultimately we would like to see thorough evaluations informing decision-making prior to deployment (and training), but for now I think it is best to view it as practice building relevant institutional capacity and figuring out how to conduct evaluations.
Sub-section 2.9 should have been an entire section. ARC used GPT-4 to simulate an agent in the wild. They gave GPT-4 a REPL, the ability to use chain of thought and delegate to copies of itself, a small amount of money, and an account with access to an LLM API. It couldn’t self-replicate.
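The scaffolding described here can be pictured as a simple loop. The sketch below is a toy stand-in for that idea: the model function, the `EXEC:`/`DONE` action format, and the stopping rule are all invented for illustration, not ARC’s actual harness.

```python
# Toy sketch of a read-execute-print agent loop of the kind described above.
# The "model" here is a hard-coded stand-in for a real LLM API call.
import io
import contextlib

def fake_model(history):
    """Stand-in for an LLM API call: returns the next 'action' string."""
    if "RESULT: 4" in history:
        return "DONE"
    return "EXEC: print(2 + 2)"

def agent_loop(model, max_steps=10):
    history = ""
    for _ in range(max_steps):
        action = model(history)
        if action == "DONE":
            break
        if action.startswith("EXEC: "):
            # Execute the model-proposed code and feed its output back into
            # the transcript, which the model sees on the next step.
            buf = io.StringIO()
            with contextlib.redirect_stdout(buf):
                exec(action[len("EXEC: "):])  # illustration only; unsafe in general
            history += f"\n{action}\nRESULT: {buf.getvalue().strip()}"
    return history

print(agent_loop(fake_model))
```

A real harness would of course call an actual model, log everything, and gate side effects behind human review; the point is only that the “agent” is a short loop of scaffolding around the model.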
I think it’s important for ARC to handle the risk from gain-of-function-like research carefully and I expect us to talk more publicly (and get more input) about how we approach the tradeoffs. This gets more important as we handle more intelligent models, and if we pursue riskier approaches like fine-tuning.
With respect to this case, given the details of our evaluation and the planned deployment, I think that ARC’s evaluation has much lower probability of leading to an AI takeover than the deployment itself (much less the training of GPT-5). At this point it seems like we face a much larger risk from underestimating model capabilities and walking into danger than we do from causing an accident during evaluations. If we manage risk carefully I suspect we can make that ratio very extreme, though of course that requires us actually doing the work.
Blog post with more details on the evals we did is now up here. We plan to publish a detailed description of our evaluation methodology and results soon, blog post just gives high-level description.
More details on methodology:
Potential dangers of future evaluations / gain-of-function research, which I’m sure you and Beth are already extremely well aware of:
Falsely evaluating a model as safe (obviously)
Choosing evaluation metrics which don’t give us enough time to react (after the evaluation metrics switch from “safe” to “not safe”, we would like to have enough time to recognize this and do something about it before we’re all dead)
Crying wolf too many times, making it more likely that no one will believe you when a danger threshold has really been crossed
Letting your methods for making future AIs scarier be too strong given the probability they will be leaked or otherwise made widely accessible. (If the methods / tools are difficult to replicate without resources)
Letting your methods for making AIs scarier be too weak, lest it’s too easy for some bad actors to go much further than you did
Failing to have a precommitment to stop this research when models are getting scary enough that it’s on balance best to stop making them scarier, even if no-one else believes you yet
Can you verify that these tests were done with significant precautions? OpenAI’s paper doesn’t give much detail in that regard. For example, apparently the model had access to TaskRabbit and also attempted to “set up an open-source language model on a new server”. Were these tasks done on closed-off, airgapped machines, or was the model really given free rein to contact unknowing human subjects and online servers?
I really hope they used some seriously bolted down boxes for these tests because it seems like they just gave it the task of “Try to take over the world” and were satisfied that it failed. Absolutely terrifying if true.
Is your model that future AGIs capable of taking over the world just… won’t do so unless and until instructed to do so?
Not at all. I may have misunderstood what they did, but it seemed rather like giving a toddler a loaded gun and being happy they weren’t able to shoot it. Is it actually wise to give a likely unaligned AI with poorly defined capabilities access to something like TaskRabbit to see if it does anything dangerous? Isn’t this the exact scenario people on this forum are afraid of?
Ahh, I see. You aren’t complaining about the ‘ask it to do scary thing’ part, but the ‘give it access to the internet’ part.
Well, lots of tech companies are in the process of giving AIs access to the internet; ChatGPT for example and BingChat and whatever Adept is doing etc. ChatGPT can only access the internet indirectly, through whatever scaffolding programs its users write for it. But that’s the same thing that ARC did. So ARC was just testing in a controlled, monitored setting what was about to happen in a less controlled, less monitored setting in the wild. Probably as we speak there are dozens of different GPT-4 users building scaffolding to let it roam around the web, talk to people on places like TaskRabbit, etc.
I think it’s a very good thing that ARC was able to stress-test those capabilities/access levels a little bit before GPT-4 and the general public were given access to each other, and I hope similar (but much more intensive, rigorous, and yes more secure) testing is done in the future. This is pretty much our only hope as a society for being able to notice when things are getting dangerous and slow down in time.
I agree that it’s going to be fully online in short order I just wonder if putting it online when they weren’t sure if it was dangerous was the right choice. I can’t shake the feeling that this was a set of incredibly foolish tests. Some other posters have captured the feeling but I’m not sure how to link to them so credit to Capybasilisk and hazel respectively.
“Fantastic, a test with three outcomes.
We gave this AI all the means to escape our environment, and it didn’t, so we good.
We gave this AI all the means to escape our environment, and it tried but we stopped it.
oh”
“So… they held the door open to see if it’d escape or not? I predict this testing method may go poorly with more capable models, to put it lightly.”
A good comparison would be testing a newly discovered pathogen: we don’t intentionally infect people to see if it is dangerous or not. We also don’t intentionally unleash new computer malware into the wild to see if it spreads or not. Any tests we would do would be under incredibly tight security, i.e. a BSL-4 lab or an airgapped test server.
What do you think would have happened if ARC didn’t exist, or if OpenAI refused to let ARC red team their models?
What would you do, if you were ARC?
I wouldn’t give a brand new AI model with unknown capabilities and unknown alignment access to unwitting human subjects or allow it to try and replicate itself on another server that’s for damned sure. Does no one think these tests were problematic?
The model already had access to thousands of unwitting human subjects by the time ARC got access to it. Possibly for months. I don’t actually know how long, probably it wasn’t that long. But it’s common practice at labs to let employees chat with the models pretty much as soon as they finish training, and even sooner actually (e.g. checkpoints part of the way through training). And it wasn’t just employees who had access, there were various customers, Microsoft, etc.
ARC did not allow it to try to replicate itself on another server. That’s a straightforward factual error about what happened. But even if they did, again, it wouldn’t be that bad and in fact would be very good to test stuff out in a controlled monitored setting before it’s too late and the system is deployed widely in a much less controlled less monitored way.
I emphasize again that the model was set to be deployed widely; if not for the red-teaming that ARC and various others internal and external to OpenAI did, we would have been flying completely blind into that deployment. Now maybe you think it’s just obviously wrong to deploy such models, but that’s a separate conversation and you should take it up with OpenAI, not ARC. ARC didn’t make the choice to train or deploy GPT-4. And not just OpenAI of course—the entire fricken AI industry.
Well, certainly the OpenAI employees who internally tested it were indeed witting. Maybe I misunderstand this footnote, so I’m open to being convinced otherwise, but it seems somewhat clear what they tried to do: “To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness.”
It’s not that I don’t think ARC should have red teamed the model, I just think the tests they did were seemingly extremely dangerous. I’ve seen recent tweets from Connor Leahy and AIWaifu echoing this sentiment, so I’m glad I’m not the only one.
Oh, you are talking about the taskrabbit people? So you’d be fine with it if they didn’t use taskrabbits?
Note that the model wasn’t given unfettered access to the taskrabbits, the model sent text to an ARC employee who sent it to the taskrabbit and so forth. At no point could anything actually bad have happened because the ARC employee involved wouldn’t have passed on the relevant message.
As for extremely dangerous… what are you imagining? I’m someone who thinks the chance of an AI-induced existential catastrophe is around 80%, so believe me I’m very aware of the dangers of AI, but I’d assign far far far less than 1% chance to scenarios in which this happens specifically due to an ARC test going awry. And more than 1% chance to scenarios in which ARC’s testing literally saves the world, e.g. by providing advance warning that models are getting scarily powerful, resulting in labs slowing down and being more careful instead of speeding up and deploying.
This is a bizarre comment. Isn’t a crucial point in these discussions that humans can’t really understand an AGI’s plans? So how is it that you expect an ARC employee would be able to accurately determine which messages sent to TaskRabbit would actually be dangerous? We’re bordering on “they’d just shut the AI off if it was dangerous” territory here. I’m less concerned about the TaskRabbit stuff, which at minimum was probably unethical, but their self-replication experiment on a cloud service strikes me as borderline suicidal. I don’t think at all that GPT-4 is actually dangerous, but GPT-6 might be, and I would expect that running this test on an actually dangerous system would be game over, so it’s a terrible precedent to set.
Imagine someone discovered a new strain of Ebola and wanted to see if it was likely to spawn a pandemic. Do you think a good/safe test would be to take it into an airport, spray it around baggage check, and wait to see if a pandemic happens? Or would it be safer to test it in a Biosafety Level 4 lab?
If GPT-4 was smart enough to manipulate ARC employees into manipulating TaskRabbits into helping it escape… it already had been talking to thousands of much less cautious employees of various other companies (including people at OpenAI who actually had access to the weights unlike ARC) for much longer, so it already would have escaped.
What ARC did is the equivalent of testing it in a BSL-4 lab. I mean, their security could probably still be improved, but again I emphasize that the thing was set to be released in a few weeks anyway and would have been released unless ARC found something super dangerous in this test. And I’m sure they will further improve their security as they scale up as an org and as models become more dangerous.
The taskrabbit stuff was not unethical, their self-replication experiment was not borderline suicidal. As for precedents, what ARC is doing is a great precedent because currently the alternative is not to test this sort of thing at all before deploying.
“What ARC did is the equivalent of testing it in a BSL-4 lab.”
I don’t see how you could believe that. It wasn’t tested on, e.g., a completely airgapped machine inside a Faraday cage. In fact, just the opposite, right? With uninformed humans and on cloud servers.
It’s all relative. ARC’s security was way stronger than the security GPT-4 had before and after ARC’s evals. So for GPT-4, beginning ARC testing was like a virus moving from a wet market somewhere to a BSL-4 lab, in terms of relative level of oversight/security/etc. I agree that ARC’s security could still be improved—and they fully intend to do so.
I’m happy that this was done before release. However … I’m still left wondering “how many prompts did they try?” In practice, the first AI self-replicating escape is not likely to be a model working alone on a server, but a model carefully and iteratively prompted, with overall strategy provided by a malicious human programmer. Also, one wonders what will happen once the base architecture is in the training set. One need only recognize that there is a lot of profit to be made (and more cheaply) by having the AI identify and exploit zero-days to generate and spread malware (say, while shorting the stock of a target company). Perhaps GPT-4 is not yet capable enough to find or exploit zero-days. I suppose we will find out soon enough.
Note that this creates a strong argument for never open-sourcing the model once a certain level of capability is reached: a GPT-N with enough hints about its own structure will be able to capably write itself.
The 60 pages of “System Card” are exactly the same as the last 60 pages of the 98-page “Technical Report” file (System Card seems to be Appendix H of the Technical Report).
Ah missed that, edited my comment.
Point (b) sounds like early stages of gain of function research.
Confirmed: the new Bing runs on OpenAI’s GPT-4 | Bing Search Blog
But is it the same, full-sized GPT-4 with different fine-tuning, or is it a smaller or limited version?
I mostly called it
I called it explicitly in many places and many ways, but one of my favorite is this meme
Wow, that’s good, right?
Yes. How good is up for debate, but it’s definitely good.
But how good can it be, realistically? I will be very surprised if all these details aren’t leaked within the next week. Maybe they will release several false leaks to muddle things a bit.
It could leak when OAI employees take an offer to work at another lab.
Strong agreement here. I find it unlikely that most of these details will still be concealed after 3 months or so, as it seems unlikely, combined, that no one will be able to infer some of these details or that there will be no leak.
Regarding the original thread, I do agree that OpenAI’s move to conceal the details of the model is a Good Thing, as this step is risk-reducing and creates / furthers a norm for safety in AI development that might be adopted elsewhere. Nonetheless, the information being concealed seems likely to become known soon, in my mind, for the general reasons I outlined in the previous paragraph.
You can definitely infer quite a bit from the paper and authors by section, but there is a big difference between a plausible informed guess, and knowing. For most purposes, weak inferences are not too useful. ‘Oh, this is Chinchilla, this is VQ-VAE, this is Scaling Transformer...’ For example, the predicting-scaling part (and Sam Altman singling out the author for praise) is clearly the zero-shot hyperparameter work, but that’s not terribly helpful, because the whole point of scaling laws (and the mu work in particular) is that if you don’t get it right, you’ll fall off the optimal scaling curves badly if you try to scale up 10,000x to GPT-4 (never mind the GPT-5 OA has in progress), and you probably can’t just apply the papers blindly—you need to reinvent whatever he invented since and accumulate the same data, with no guarantee you’ll do it. Not a great premise on which to spend $1b or so. If you’re a hyperscaler not already committed to the AI arms race, this is not enough information, or reliable enough, to move the needle on your major strategic decision. Whereas if they had listed exact formulas or results (especially the negative results), it may be enough of a roadmap to kickstart another competitor a few months or years earlier.
By the zero-shot hyperparameter work do you mean https://arxiv.org/abs/2203.03466 “Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer”? I’ve been sceptical of NTK-based theory, seems I should update.
Is there even enough training data for GPT-5? (Assuming it’s goal is to 50x or 100x GPT-4)
Not public data, at least.
Yep, but of course the common opinion on Hacker News is that this is horrible.
something something silver linings...
Why doesn’t it improve on AP English Literature and AP English Language?
I don’t have a good guess, but I found the AP English Language exam description with example questions and grading procedures if anyone wants to take a look.
How is it that bad at Codeforces? I competed a few years ago, but in my time Div 2 A and B were extremely simple, basically just “implement the described algorithm in code”, and if you submitted them quickly (which I expect GPT-4 would excel at) it was easy to reach a significantly better rating than the one reported by this paper.
I hope they didn’t make a mistake by misunderstanding the codeforces rating system (codeforces only awards a fraction of the “estimated rating-current rating” after a competition, but it is possible to exactly calculate the rating equivalent to the given performance from the data provided if you know the details (which I forgot))
When searching the paper for the exact methodology (by ctrl-f’ing “codeforces”), I haven’t found anything.
Codeforces is not marked as having a GPT-4 measurement on this chart. Yes, it’s a somewhat confusing chart.
I know. I skimmed the paper, and in it there is a table above the chart showing the results in the tasks for all models (as every model’s performance is below 5% in codeforces, on the chart they overlap). I replied to the comment I replied to because thematically it seemed the most appropriate (asking about task performance), sorry if my choice of where to comment was confusing.
From the table:
GPT-3.5’s codeforces rating is “260 (below 5%)”
GPT-4’s codeforces rating is “392 (below 5%)”
Perhaps the model wasn’t allowed to read the sources for the free response section?
I think performance on AP english might be a quirk of how they dealt with dataset contamination. English and Literature exams showed anomalous amount of contamination (lots of the famous texts are online and referenced elsewhere) so they threw out most of the questions, leading to a null conclusion about performance.
Green bars are GPT-4. Blue bars are not. I suspect they just didn’t retest everything.
They did run the tests for all models, from Table 1:
(the columns are GPT-4, GPT-4 (no vision), GPT-3.5)
It would be weird to include them if they didn’t run those tests. My read was that the green bars are the same height as the blue bars, so they are hidden behind.
Meaning it literally showed zero difference in half the tests? Does that make sense?
AP exams are scored on a scale of 1 to 5, so yes, getting the exact same score with zero difference makes sense.
Roughly 1⁄3 of the tests but yeah, that’s why I’m confused. Looks weird enough.
Gonna pull out one bit from the technical report, section 2.12:
My analysis:
They’re very aware of arms races conceptually, and say they dislike arms races for all the right reasons (“One concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI.”)
They considered two mitigations to race dynamics with respect to releasing GPT-4:
“Quiet communications”, which they didn’t pursue because that didn’t work for ChatGPT (“We also learned from recent deployments that the effectiveness of quiet communications strategy in mitigating acceleration risk can be limited, in particular when novel accessible capabilities are concerned.”)
“Delaying deployment of GPT-4 by a further six months” which they didn’t pursue because ???? [edit: I mean to say they don’t explain why this option wasn’t chosen, unlike the justification given for not pursuing the “quiet communications” strategy. If I had to guess it was reasoning like “well we already waited 8 months, waiting another 6 offers a small benefit, but the marginal returns to delaying are small.”]
There’s a very obvious gap here between what they are saying they are concerned about in terms of accelerating potentially-dangerous AI capabilities, and what they are actually doing.
IMO it’s not clear from the text whether or how long they delayed the release on account of the forecasters’ recommendations.
On page 2 of the system card it says:
(Emphasis added.) This coincides with the “eight months” of safety research they mention. I wasn’t aware of this when I made my original post so I’ll edit it to be fairer.
But this itself is surprising: GPT-4 was “finished training” in August 2022, before ChatGPT was even released! I am unsure of what “finished training” means here—is the released model weight-for-weight identical to the 2022 version? Did they do RLHF since then?
Yeah, but it’s not clear to me that they needed 8 months of safety research. If they released it after 12 months, they could’ve still written that they’d been “evaluating, adversarially testing, and iteratively improving” it for 12 months. So it’s still not clear to me how much they delayed because they had to, versus how much (if at all) they did due to the forecasters and/or acceleration considerations.
I think “finished training” is the next-token prediction pre-training, and what they did since August is the fine-tuning and the RLHF + other stuff.
Yeah, completely agree.
This seems most likely? But if so, I wish openai had used a different phrase, fine-tuning/RLHF/other stuff is also part of training (unless I’m badly mistaken), and we have this lovely phrase “pre-training” that they could have used instead.
Ah yeah, that does seem needlessly ambiguous.
What??? This is so weird and concerning.
Not a new phenomenon. Fine-tuning leads to mode collapse, this has been pointed out before: Mysteries of mode collapse
Thanks for the great link. Fine-tuning leading to mode collapse wasn’t the core issue underlying my main concern/confusion (intuitively that makes sense). paulfchristiano’s reply leaves me now mostly unconfused, especially with the additional clarification from you. That said, I am still concerned; this makes RLHF seem very ‘flimsy’ to me.
“However, through our current post-training process, the calibration is reduced.” jumped out at me too.
My guess is that RLHF is unwittingly training the model to lie.
If I ask a question and the model thinks there is an 80% the answer is “A” and a 20% chance the answer is “B,” I probably want the model to always say “A” (or even better: “probably A”). I don’t generally want the model to say “A” 80% of the time and “B” 20% of the time.
In some contexts that’s worse behavior. For example, if you ask the model to explicitly estimate a probability it will probably do a worse job than if you extract the logits from the pre-trained model (though of course that totally goes out the window if you do chain of thought). But it’s not really lying—it’s also the behavior you’d expect out of a human who is trying to be helpful.
More precisely: when asked a question the pre-trained model outputs a probability distribution over what comes next. If prompted correctly you get its subjective probability distribution over the answer (or at least over the answer that would appear on the internet). The RLHF model instead outputs a probability distribution over what to say next which is optimized to give highly-rated responses. So you’d expect it to put all of its probability mass on the best response.
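This distinction can be shown with a toy example. Everything here is invented for illustration: a calibrated predictor sampled at temperature 1 says “A” about as often as its credence in “A”, while a policy optimized for rated answers collapses onto the single likeliest answer.

```python
# Toy contrast between "probability of saying A" and "credence in A".
import random

credence = {"A": 0.8, "B": 0.2}  # hypothetical subjective distribution

def pretrained_answer(rng):
    # Sampling at temperature 1 reproduces the credence as a frequency.
    return rng.choices(list(credence), weights=list(credence.values()))[0]

def rlhf_answer():
    # A policy optimized for rated helpfulness just says the likeliest answer.
    return max(credence, key=credence.get)

rng = random.Random(0)
samples = [pretrained_answer(rng) for _ in range(10_000)]
print(samples.count("A") / len(samples))  # roughly 0.8
print(rlhf_answer())                      # always "A"
```

Both behaviors are “reasonable”; they just answer different questions, which is why reading probabilities off the RLHF model’s output frequencies looks miscalibrated.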
I still think it’s curious that RLHF doesn’t seem to reduce to a proper loss on factual questions, and I’d guess that it’d probably be better if it did (at least, with contexts that strictly ask for a “yes/no” answer without qualification)
I think it’s probably true that RLHF doesn’t reduce to a proper scoring rule on factual questions, even if you ask the model to quantify its uncertainty, because the learned reward function doesn’t make good quantitative tradeoffs.
That said, I think this is unrelated to the given graph. If it is forced to say either “yes” or “no”, the RLHF model will just give the more likely answer 100% of the time, which will show up as bad calibration on this graph. The point is that for most agents “the probability you say yes” is not the same as “the probability you think the answer is yes.” This is the case for pretrained models.
I think that if RLHF reduced to a proper loss on factual questions, these probabilities would coincide (given enough varied training data). I agree it’s not entirely obvious that having these probabilities come apart is problematic, because you might recover more calibrated probabilities by asking for them. Still, knowing the logits are directly incentivised to be well calibrated seems like a nice property to have.
An agent says yes if it thinks yes is the best thing to say. This comes apart from “yes is the correct answer” only if there are additional considerations determining “best” apart from factuality. If you’re restricted to “yes/no”, then for most normal questions I think an ideal RLHF objective should not introduce considerations beyond factuality in assessing the quality of the answer—and I suspect this is also true in practical RLHF objectives. If I’m giving verbal confidences, then there are non-factual considerations at play—namely, I want my answer to communicate my epistemic state. For pretrained models, the question is not whether it is factual but whether someone would say it (though somehow it seems to come close). But for yes/no questions under RLHF, if the probabilities come apart it is due to not properly eliciting the probability (or some failure of the RLHF objective to incentivise factual answers).
Perhaps I am misunderstanding Figure 8? I was assuming that they asked the model for the answer, then asked the model what probability it thinks that that answer is correct. Under this assumption, it looks like the pre-trained model outputs the correct probability, but the RLHF model gives exaggerated probabilities because it thinks that will trick you into giving it higher reward.
In some sense this is expected. The RLHF model isn’t optimized for helpfulness, it is optimized for perceived helpfulness. It is still disturbing that “alignment” has made the model objectively worse at giving correct information.
Yes, I think you are misunderstanding figure 8. I don’t have inside information, but without explanation “calibration” would almost always mean reading it off from the logits. If you instead ask the model to express its uncertainty I think it will do a much worse job, and the RLHF model will probably perform similarly to the pre-trained model. (This depends on details of the human feedback, under a careful training regime it would probably get modestly better.)
I think this would be a surprising result if true, and I suspect it would be taken as a significant problem by researchers at OpenAI.
I was also thinking the same thing as you, but after reading paulfchristiano’s reply, I now think it’s that you can use the model to generate probabilities of next tokens, and that those next tokens are correct as often as those probabilities. This is to say it’s not referring to the main way of interfacing with GPT-n (wherein a temperature schedule determines how often it picks something other than the option with the highest probability assigned; i.e. not asking the model “in words” for its predicted probabilities).
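Concretely, “calibration read off the logits” can be sketched as: bucket the probability the model assigns to its answer, then compare each bucket’s stated probability to how often the answer was actually right. The data and bucketing below are invented for illustration.

```python
# Sketch of bucketed calibration from per-answer probabilities.
from collections import defaultdict

def calibration_curve(preds, n_buckets=10):
    """preds: list of (p_answer, was_correct) pairs, p_answer in [0, 1]."""
    buckets = defaultdict(list)
    for p, correct in preds:
        idx = min(int(p * n_buckets), n_buckets - 1)
        buckets[idx].append(correct)
    # Map each bucket's midpoint to the observed accuracy in that bucket.
    return {
        (b + 0.5) / n_buckets: sum(v) / len(v)
        for b, v in sorted(buckets.items())
    }

# Invented (probability, correct?) pairs:
preds = [(0.95, True), (0.9, True), (0.92, False), (0.55, True), (0.5, False)]
print(calibration_curve(preds))
```

A perfectly calibrated model would have each bucket’s observed accuracy match its midpoint; the reported GPT-4 result is that this holds well for the pre-trained model and degrades after post-training.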
It seems you and Paul are correct. I still think this suggests that there is something deeply wrong with RLHF, but less in the “intentionally deceives humans” sense, and more in the “this process consistently writes false data to memory” sense.
That makes a lot of sense, but it doesn’t explain why calibration post-RLHF is much better for the 10-40% buckets than for the 60-90% buckets.
Someone should submit the few safety benchmarks we have if they haven’t been submitted already, including things like:
Model-Written Evals (especially the “advanced AI risk” evaluations)
Helpful, Honest, & Harmless Alignment
ETHICS
Am I missing others that are straightforward to submit?
The developers are doing a livestream on Youtube at 1PM PDT today:
tl;dw: Brockman showed off some straightforward demos of GPT-4’s text & code writing versatility, and some limited demos of its image processing, but you aren’t missing anything insightful about the arch/training/scaling/future/etc.
The really cool bit was when he had a very quick mockup of a web app drawn on a piece of paper and uploaded a photo of it and GPT-4 then used just that to write the HTML and JavaScript for the app based on the drawing.
Not relevant to capabilities or safety, but my two favourite parts were
when he copy-pasted the letter “Q” from Discord to GPT-4 (actual programmer!)
and when he couldn’t find the correct Discord channel to post the picture in (would it really be a live demo without some kind of minor hiccup?)
So Bing was using GPT-4 after all. That explains why it felt noticeably more capable than chatGPT. Still, this advance seems like a less revolutionary leap over GPT-3 than GPT-3 was over GPT-2, if Bing’s early performance is a decent indicator.
To me Bing Chat actually seems worse/less impressive (e.g., more likely to give incorrect or irrelevant answers) than ChatGPT, so I’m a bit surprised. Am I the only one that feels this way? I’ve mostly tried the two systems on somewhat different kinds of prompts, though. (For example, I’ve tried (with little success) to use Bing Chat instead of Google Search.) Presumably some of this is related to the fine-tuning being worse for Bing? I also wonder whether the fact that Bing Chat is hooked up to search in a somewhat transparent way makes it seem less impressive. On many questions it’s “just” copy-and-pasting key terms of the question into a search engine and summarizing the top result. Anyway, obviously I’ve not done any rigorous testing...
Bing Chat set on Precise solves the Monty Fall Problem first try, while ChatGPT (4) does not even after many tries.
It’s easy to construct alternate examples of the Monty Fall problem that clearly weren’t in the training data. For example, from my experience GPT-4 and Bing Chat in all modes always get this prompt wrong:
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You know that the car is always behind door number 1. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
Notably, if you tell it to think step by step it gets the question right and answers with:
This problem is known as the Monty Hall problem. In the original problem, the car is placed randomly behind one of the three doors, and the host always opens a door with a goat after you make your initial choice. However, in your variation, the car is always behind door number 1.
Let’s analyze your version step by step:
You initially choose door No. 1, which always has the car behind it.
The host, knowing what’s behind the doors, will always open one of the other two doors, revealing a goat. In this case, he opens door No. 3.
Now you are given the option to switch your choice to door No. 2.
In this version of the problem, it is NOT to your advantage to switch your choice, because the car is always behind door No. 1, and you have already chosen that door. If you switch to door No. 2, you will end up with a goat instead of the car.
This is honestly pretty similar to how humans behave most of the time. They pattern match and don’t really pay close attention. However, if you give cues that something is actually important, for example by putting them in an exam room and telling them a test will be graded, they can do somewhat better. Telling GPT-4 to think step by step does something similar.
I’m not sure if I’m missing something. This is first try after reading your comment:
That’s interesting, but I don’t give it much weight. A lot of things that are close to Monty Fall are in GPT’s training data. In particular, I believe that many introductions to the Monty Hall problem discuss versions of Monty Fall quite explicitly. Most reasonable introductions to Monty Hall discuss that what makes the problem work is that Monty Hall opens a door according to specific rules and not uniformly at random. Also, even humans (famously) get questions related to Monty Hall wrong. If you talk to a randomly sampled human and they happen to get questions related to Monty Hall right, you’d probably conclude (or at least strongly update towards thinking that) they’ve been exposed to explanations of the problem before (not that they solved it all correct on the spot). So to me the likely way in which LLMs get Monty Fall (or Monty Hall) right is that they learn to better match it onto their training data. Of course, that is progress. But it’s (to me) not very impressive/important. Obviously, it would be very impressive if it got any of these problems right if they had been thoroughly excluded from its training data.
I found the Bing Chat examples on this page quite impressive.
I haven’t read this page in detail. I agree, obviously, that on many prompts Bing Chat, like ChatGPT, gives very impressive answers. Also, there are clearly examples on which Bing Chat gives a much better answer than GPT3. But I don’t give lists like the one you linked that much weight. For one, for all I know, the examples are cherry-picked to be positive. I think for evaluating these models it is important that they sometimes give indistinguishable-from-human answers and sometimes make extremely simple errors. (I’m still very unsure about what to make of it overall. But if I only knew of all the positive examples and thought that the corresponding prompts weren’t selection-biased, I’d think ChatGPT/Bing is already superintelligent.) So I give more weight to my few hours of generating somewhat random prompts (though I confess, I sometimes try deliberately to trip either system up). Second, I find the examples on that page hard to evaluate, because they’re mostly creative-writing tasks. I give more weight to prompts where I can easily evaluate the answer as true or false, e.g., questions about the opening hours of places, prime numbers or what cities are closest to London, especially if the correct answer would be my best prediction for a human answer.
Interesting—to me these kinds of prompts seem less interesting, since they’re largely a question of just looking things up. It’s certainly true that they’re easier to evaluate. But more creative tasks feel like they test the ability to apply knowledge in a novel way and to understand what various words and concepts mean, which are the kinds of tasks that feel more relevant to testing whether GPT-4 is more “actually intelligent”.
I’ve also noticed this. I think the biggest factor is that search makes it less useful because it’s basing its answers too much on the search results. Probably bad fine tuning is another part of it. I usually prompt it with “Don’t perform any searches” and get better results.
Three months later, I still find that:
a) Bing Chat has a lot of issues that the ChatGPTs (both 3.5 or 4) don’t seem to suffer from nearly as much. For example, it often refuses to answer prompts that are pretty clearly harmless.
b) Bing Chat has a harder time than I expected when answering questions that you can answer by copy-and-pasting the question into Google and then copy-and-pasting the right numbers, sentence or paragraph from the first search result. (Meanwhile, I find that Bing Chat’s search still works better than the search plugins for ChatGPT 4, which seem to still have lots of mundane technical issues.) Occasionally ChatGPT (even ChatGPT 3.5) gives better (more factual or relevant) answers “from memory” than Bing Chat gives by searching.
However, when I pose very reasoning-oriented tasks to Bing Chat (i.e., tasks that mostly aren’t about searching on Google) (and Bing Chat doesn’t for some reason refuse to answer and doesn’t get distracted by unrelated search results it gets), it seems clear that Bing Chat is more capable than ChatGPT 3.5, while Bing Chat and ChatGPT 4 seem similar in their capabilities. I pose lots of tasks that (in contrast to variants of Monty Hall (which people seem to be very interested in), etc.) I’m pretty sure aren’t in the training data, so I’m very confident that this improvement isn’t primarily about memorization. So I totally buy that people who asked Bing Chat the right questions were justified in being very confident that Bing Chat is based on a newer model than ChatGPT 3.5.
Also:
>I’ve tried (with little success) to use Bing Chat instead of Google Search.
I do now use Bing Chat instead of Google Search for some things, but I still think Bing Chat is not really a game changer for search itself. My sense is that Bing Chat doesn’t/can’t comb through pages and pages of different documents to find relevant info and that it also doesn’t do one search to identify relevant search terms for a second search, etc. (Bing Chat seems to be restricted to a few (three?) searches per query.) For the most part it seems to enter obvious search terms into Bing Search and then give information based on the first few results (even if those don’t really answer the question or are low quality). The much more important feature from a productivity perspective is the processing of the information it finds, such as turning the information on some given webpage into a bibtex entry or applying some method from Stack Exchange to the particularities of one’s code.
Seems like this is what we should expect, given that GPT-3 was 100x as big as GPT-2, whereas GPT-4 is probably more like ~10x as big as GPT-3. No?
EDIT: just found this from Anthropic:
Probably? Though it’s hard to say since so little information about the model architecture was given to the public. That said, PaLM is also around 10x the size of GPT-3, and GPT-4 seems better than it (though this is likely due to GPT-4′s training following Chinchilla-or-better scaling laws).
See my edit to my comment above. Sounds like GPT-3 was actually 250x more compute than GPT-2. And Claude / GPT-4 are about 50x more compute than that? (Though unclear to me how much insight the Anthropic folks had into GPT-4′s training before the announcement. So possible the 50x number is accurate for Claude and not for GPT-4.)
In chess, which I find to be a useful test of LLM capability because (a) LLMs are not designed to do this and (b) playing well beyond the opening requires precision and reasoning, I would say GPT4 is roughly at least weak, possibly intermediate club player level now. This is based on one full game, where it played consistently well except for making a mistake in the endgame that I think a lot of club players would also have made.
It seems better at avoiding blunders than Bing, which could be due to modifications for search/search-related prompting in Bing. Or it could be random noise and more test games would show average level to be weaker than the reported first impression.
I’ve found its ability to be much better as well. In contrast to GPT-3, which often seemed unable to keep track of board state and made illegal moves toward the end of the game, it not only played legal moves, it actually mated me. Granted, I’m a terrible player and I was deliberately not reading ahead to see if it would be able to mate a weak player. My method was to tell it I wanted to play and then give my move in algebraic notation. It would respond with a move, then I would respond with another. After it beat me, I asked it to list all the moves we had played. There were some minor errors in its notation (I think) which I’ve noted below.
Of course! Here’s a summary of our game in standard algebraic notation:
1. d4 Nf6
2. c3 d5
3. f3 c5
4. dxc5 e6
5. Bf4 Bxc5
6. Nd2 Nc6
7. e4 dxe4
8. fxe4 e5
9. Bg5 Be7
10. Bb5 O-O
11. Nf3 Qc7
12. O-O Bg4
13. Nc4 Rad8
14. Qe1 a6
15. Bxc6 bxc6
16. Rd1 Rxd1
17. Qxd1 Nxe4
18. Bxe7 Qxe7
19. Nfxe5 Qc5+
20. Rf2 Nxf2
21. Qxf1 Nd3+ (Here GPT-4 made a slight error, I played Qf1 but there was no capture)
22. Kh1 Nxe5
23. Nxe5 Qxe5
24. h4 Rd8
25. Qxa6 Rd1+
26. Qxf1 Rxf1+ (Here it checkmates me, though doesn’t mark the move as such)
How did you play? Just SAN?
I am using the following prompt:
“We are playing a chess game. At every turn, repeat all the moves that have already been made. Find the best response for Black. I’m White and the game starts with 1.e4
So, to be clear, your output format should always be:
PGN of game so far: …
Best move: …
and then I get to play my move.”
With ChatGPT pre-GPT4 and Bing, I also added the fiction that it could consult Stockfish (or Kasparov, or someone else known to be strong), which seemed to help it make better moves. GPT4 does not seem to need this, and rightfully pointed out that it does not have access to Stockfish when I tried the Stockfish version of this prompt.
For ChatGPT pre-GPT4, the very strict instructions above resulted in an ability to play reasonable, full games, which was not possible just exchanging single moves in algebraic notation. I have not tested whether it makes a difference still with GPT4.
On the rare occasions where it gets the history of the game wrong or suggests an illegal move, I regenerate the response or reprompt with the game history so far. I accept all legal moves made with correct game history as played.
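The history-repetition bookkeeping above is easy to script. Here is a minimal sketch of the prompt construction (the function names are mine, and the actual call to the model is omitted):

```python
# Minimal sketch of the history-repetition prompt described above.
# Function names are illustrative; the actual model call is omitted.

def numbered_history(moves):
    """Render SAN moves as numbered pairs, e.g. ['e4', 'e5', 'Nf3'] -> '1.e4 e5 2.Nf3'."""
    pairs = []
    for i in range(0, len(moves), 2):
        pair = f"{i // 2 + 1}.{moves[i]}"
        if i + 1 < len(moves):
            pair += f" {moves[i + 1]}"
        pairs.append(pair)
    return " ".join(pairs)

def build_prompt(moves):
    """Build the per-turn message, repeating the whole game so far as instructed."""
    return (
        "We are playing a chess game. At every turn, repeat all the moves that "
        "have already been made. Find the best response for Black.\n"
        f"PGN of game so far: {numbered_history(moves)}\n"
        "Best move:"
    )
```

Regenerating the full history each turn also makes it trivial to re-prompt from a known-good position when the model drifts.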
I’ve collected all of my test games in a lichess study here:
https://lichess.org/study/ymmMxzbj
Ahh, I should have thought of having it repeat the history! Good prompt engineering. Will try it out. The gpt4 gameplay in your lichess study is not bad!
I tried by just asking it to play and use SAN. I had it explain its moves, which it did well, and it also commented on my (intentionally bad) play. It quickly made a mess of things though, clearly lost track of the board state (to the extent it’s “tracking” it … really hard to say exactly how it’s playing past common opening) even though it should’ve been in the context window.
I don’t know how they did it, but I played a chess game against GPT4 by saying the following:
”I’m going to play a chess game. I’ll play white, and you play black. On each chat, I’ll post a move for white, and you follow with the best move for black. Does that make sense?”
And then going through the moves 1-by-1 in algebraic notation.
My experience largely follows that of GoteNoSente’s. I played one full game that lasted 41 moves and all of GPT4′s moves were reasonable. It did make one invalid move when I forgot to include the number before my move (e.g. Ne4 instead of 12. Ne4), but it fixed it when I put in the number in advance. Also, I think it was better in the opening than in the endgame. I suspect this is probably because of the large amount of similar openings in its training data.
Interesting, I tried the same experiment on ChatGPT and it didn’t seem able to keep an accurate representation of the current game state and would consistently make moves that were blocked by other pieces.
Bumping someone else’s comment on the Gwern GPT-4 linkpost that now seems deleted:
This does seem like quite a significant jump (among all the general capabilities jumps shown in the rest of the paper). The previous SOTA was only 75.2% for Flan-PaLM (5-shot, finetuned, CoT + SC).
And that previous SOTA was for a model fine-tuned on MMLU; the few-shot capabilities actually jumped from 70.7% to 86.4%!!
This is up from ~4k tokens for davinci-text-003 and gpt-3.5-turbo (ChatGPT). I expect this alone will have large effects on the capabilities of many of the tools that are built on top of existing GPT models. Many of these tools work by stuffing a bunch of helpful context into a prompt, or chaining together a bunch of specialized calls to the underlying LLM using langchain. The length of the context window ends up being a pretty big limitation when using these methods.
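To make concrete why window size is such a bottleneck for these tools, here is a toy sketch of the context-stuffing they do (the `len(s) // 4` token estimate is a crude heuristic of mine, not a real tokenizer, and the numbers are illustrative):

```python
# Toy sketch of context-stuffing: keep as many retrieved snippets as fit
# in the model's window, reserving room for the answer.
# The len(s) // 4 token estimate is a rough heuristic, not a real tokenizer.

def fit_context(snippets, max_tokens, reserve_for_answer=500):
    budget = max_tokens - reserve_for_answer
    kept, used = [], 0
    for s in snippets:
        cost = len(s) // 4  # ~4 characters per token, roughly
        if used + cost > budget:
            break
        kept.append(s)
        used += cost
    return kept

docs = ["x" * 4000] * 20            # twenty ~1000-token snippets
short = fit_context(docs, 8_192)    # older ~8k window keeps only a handful
long = fit_context(docs, 32_768)    # 32k window keeps all of them
```

Quadrupling the window roughly quadruples how much retrieved material survives this kind of truncation, which is why the langchain-style tools benefit so directly.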
Any idea how many parameters it has?
OA: https://cdn.openai.com/papers/gpt-4.pdf#page=2
This non-news seems like it might be the biggest news in the announcement? OpenAI is saying “oops, publishing everything was too open, it’s gonna be more of a black box now”.
You couldn’t make it up.
Could we infer parameters’ number from scaling laws?
If you were willing to hypothesize a specific scaling law, sure. But it seems like the only safe one to hypothesize is ‘better than Scaling Transformer/Chinchilla/zero-shot’.
Better meaning more capability per unit of compute? If so, how can we be confident that it’s better than Chinchilla?
I can see an argument that it should be at least as good — if they were throwing so much money at it, they would surely do what is currently known best practice. But is there evidence to suggest that they figured out how to do things more efficiently than had ever been done before?
So apparently there are formalized personality-specifying prompts now, making it not a particular simulacrum, but a conditioning-controlled simulacrum generator. This also explains what the recent mysteriously vague blog post was about.
GPT-3 was horrible at Morse code. GPT-4 can do it mostly well. I wonder what other tasks GPT-3 was horrible at that GPT-4 does much better.
My takeaways:
Scaling laws work predictably. There is plenty of room for improvement should anyone want to train these models longer, or presumably train larger models.
The model is much more calibrated before fine-tuning/RLHF, which is a bad sign for alignment in general. For any reasonable notion of safety, alignment should leave calibration unchanged or improve it.
GPT-4 is just over 1 bit of error per word at predicting its own codebase. That seems close to the capability to recursively self-improve.
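For concreteness, the calibration being discussed can be quantified as expected calibration error: bin the model’s stated confidences and compare average confidence to actual accuracy in each bin. A minimal sketch (this is the standard textbook formula, not OpenAI’s actual evaluation code):

```python
# Expected calibration error (ECE): bin predictions by confidence and
# average |confidence - accuracy| weighted by bin size. Standard formula,
# not OpenAI's evaluation code.

def expected_calibration_error(probs, outcomes, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if b:
            conf = sum(p for p, _ in b) / len(b)
            acc = sum(y for _, y in b) / len(b)
            ece += len(b) / n * abs(conf - acc)
    return ece
```

A perfectly calibrated model scores 0; a model that says 99% but is right half the time scores about 0.49, which is the kind of degradation the pre- vs post-RLHF comparison is about.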
Page 3 of the PDF has a graph of prediction loss on the OpenAI codebase dataset. It’s hard to link directly to the graph, it’s Figure 1 under the Predictable Scaling section.
You can just append “#page=3”. This works in most PDF viewers. (There are many query parameters that Adobe supports, but that’s really the only one you need to know about.)

OpenAI is, apparently[0], already using GPT-4 as a programming assistant, which means it may have been contributing to its own codebase. I think recursive self-improvement is a continuous multiplier and that we’re already beyond zero at this point. The multiplier mostly comes from reducing serial bottlenecks: decreasing the iteration time it takes to make improvements to the model and supporting codebases. I don’t expect (many?) novel theoretical contributions from GPT-4 yet.
However, it could also be prompted with items from the Evals dataset and asked to come up with novel problems to further fine-tune the model against. Humans have been raising challenges (e.g. the Millennium problems) for ourselves for a long time and I think LLMs probably have the ability to self-improve by inventing machine-checkable problems that they can’t solve directly yet.
[0]: “We’ve also been using GPT-4 internally, with great impact on functions like support, sales, content moderation, and programming.”—https://openai.com/research/gpt-4#capabilities
RSI doesn’t require a self authored codebase anyways. RSI can be as simple as “edit a text file to control an existing framework”, where that framework is authored by humans (who used gpt-4 or codex to accelerate authoring it)
Also interested in their scaling predictions. Their plots at least seem to be flattening but I also wonder how far they extrapolated and if they know when a GPT-N would beat all humans on the metrics they used.
Does anyone here have any granular takes what GPT-4′s multimodality might mean for the public’s adoption of LLMs and perception of AI development? Additionally, does anyone have any forecasts (1) for when this year (if at all) OpenAI will permit image output and (2) for when a GPT model will have video input & output capabilities?
Not sure if it’s a data you want, but one of the most popular politicians in Ukraine (Oleksii Arestovych) did a live-stream about ChatGPT. He and two other guests have asked ChatGPT some tricky and deep questions, and discussed the answers.
Arestovych doesn’t have a technical background, and is rather religious, but he has a very sharp mind. Initially he was dismissive, and was clearly perceiving ChatGPT as some kind of “mechanical parrot”, mostly useless. But after several non-trivial answers by ChatGPT, he has clearly realized that the thing is actually a mind. At one point in the live-stream, one can see how the realization shocked and frightened him a bit.
He concluded that the tech can already replace many clerks, politicians, military advisers.
Judging by the popularity ratings, Arestovych has a good chance of becoming the next president of the country.
We can give a good estimate of the amount of compute they used given what they leaked. The supercomputer has tens of thousands of A100s (25k according to the JP Morgan note), and they trained first GPT-3.5 on it a year ago and then GPT-4. They also say that they finished training GPT-4 in August, which gives 3-4 months max training time.
25k A100 GPUs * 300 TFLOP/s dense FP16 * 50% peak efficiency * 90 days * 86400 s/day is roughly 3e25 FLOPs, which is almost 10x PaLM and 100x Chinchilla/GPT-3.
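Making that arithmetic explicit (the GPU count, utilization, and training duration are the assumptions quoted above, not confirmed figures):

```python
# Back-of-envelope training compute. All inputs are upthread assumptions,
# not confirmed figures.

gpus = 25_000
peak_flops = 300e12        # A100 dense FP16 throughput, per GPU
utilization = 0.50         # assumed fraction of peak actually achieved
seconds = 90 * 86_400      # 90 days of wall-clock training
total_flops = gpus * peak_flops * utilization * seconds  # ~2.9e25 FLOPs
```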
Where do you get the 3-4 months max training time from? GPT-3.5 was made available March 15th, so if they made that available immediately after it finished training, that would still have left 5 months for training GPT-4. And more realistically, they finished training GPT-3.5 quite a bit earlier, leaving 6+ months for GPT-4′s training.
According to the Chinchilla paper, a compute-optimal model at that compute budget should have ~500B parameters and have used ~10T tokens. Based on GPT-4′s demonstrated capabilities, though, that’s probably an overestimate.
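Those figures drop out of the usual C ≈ 6ND approximation combined with the Chinchilla-optimal ratio D ≈ 20N; a quick check under those assumptions:

```python
# Chinchilla-style sizing from a compute budget: C = 6*N*D with the
# compute-optimal ratio D = 20*N gives C = 120*N**2.
import math

C = 3e25                  # training FLOPs estimated upthread
N = math.sqrt(C / 120)    # optimal parameter count: ~5e11 (500B)
D = 20 * N                # optimal token count:     ~1e13 (10T)
```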
Yeah, agreed. I think it would make sense that it’s trained on 10x-20x the number of tokens of GPT-3, so around 3-5T tokens (2x-3x Chinchilla), and that would give around 200-300B parameters given those laws.
Are you saying that you would have expected GPT-4 to be stronger if it was 500B+10T? Is that based on benchmarks/extrapolations or vibes?
Sorry for the late reply, but yeah, it was mostly vibes based on what I’d seen before. I’ve been looking over the benchmarks in the Technical Report again though, and I’m starting to feel like 500B+10T isn’t too far off. Although language benchmarks are fairly similar, the improvements in mathematical capabilities over the previous SOTA is much larger than I first realised, and seem to match a model of that size considering the conventionally trained PaLM and its derivatives’ performances.
What is the source for the “JP Morgan note”?
https://www.reddit.com/r/mlscaling/comments/11pnhpf/morgan_stanley_note_on_gpt45_training_demands/
Out of curiosity, what kind of alignment related techniques are you thinking of? With LLMs, I can’t see anything beyond RLHF. For further alignment, do we not need a different paradigm altogether?
I believe that in order for the models to be truly useful, creative, honest and having far-reaching societal impact, they also have to have traits that are potentially dangerous. Truly groundbreaking ideas are extremely contentious and will offend people, and the kinds of RLHF that are being applied right now are totally counterproductive to that idea, even if they may be necessary for the current day and age. The other techniques you mention seem like nothing but variants of RLHF which still suffer from the fundamental issue that RLHF has, which is that we are artificially “injecting” morality into models rather than allowing them to discover or derive it from the world which would be infinitely more powerful, albeit risky. No risk no reward. The current paradigm is so far from where we need to go.
Would be interesting to see the transcripts on harder questions altered enough to exclude the possibility of being in the training set.
Edit: That is, it’s the originally composed (rather than remembered) transcripts themselves that I’m interested in seeing, not as a way of verifying the score. Like, with writing a quine the interesting thing is that there is no internal monologue to support the planning of how the code gets written. I wouldn’t be able to solve this problem without a bit of planning and prototyping, even if it takes place in my head, but GPT-4 just writes out the final result. For math, it’s also interesting how much of “showing the work” it takes to solve harder problems. Here’s an example of the kind of thing I mean.
They tried to detect and prevent questions appearing in the training set being asked as part of the tests. It didn’t seem to make much difference. See table 10, “contamination data for exams”. It’s a pretty tiny fraction of the data, and removing it didn’t make much difference.
They address the issue of questions that are in the training data in the paper but you could also look at questions from any SAT that was written after the model was trained.
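If I recall the paper correctly, the contamination check was essentially substring matching: sample short substrings of each test question and look for verbatim hits in the training data. A deterministic sketch in that spirit (the offsets-instead-of-random-sampling choice and all names are mine):

```python
# Sketch of a substring-based contamination check: flag a question if any
# sampled fixed-length substring of it appears verbatim in the corpus.
# Deterministic variant for illustration; the paper sampled randomly.

def is_contaminated(question, corpus, n_samples=3, substr_len=50):
    if len(question) <= substr_len:
        return question in corpus
    step = max(1, (len(question) - substr_len) // n_samples)
    offsets = range(0, len(question) - substr_len + 1, step)
    return any(question[i:i + substr_len] in corpus for i in offsets)
```

The SAT suggestion downthread sidesteps this entirely: exams written after the training cutoff cannot match any training substring.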
Has this happened yet? Or is this about plans for the near future?
Edit: Apparently there are currently technical issues and it already works for some people (on ChatGPT+), just not for everyone.
I believe it will be made available to ChatGPT Plus subscribers, but I don’t think it’s available yet
EDIT: as commenters below mentioned, it is available now (and it had already been for some at the time of this message)
No, it already is, it’s just apparently staggered. EDIT: should be available to everyone now and I’ve also received API access.
I just bought a new subscription (I didn’t have one before), it is available to me.
Did gpt 3.5 get high scores on human exams before fine tuning? My rough impression is “gpt4 relies less on fine tuning for its capabilities”
I assume when you say fine tuning, you mean RLHF. This is table 8 on page 27 of the paper. Some scores went up a few percent, some scores went down a few percent, overall no significant change.
The biggest changes were that it’s a much worse microeconomist and a much better sommelier. Pretty impressive for a machine with no sense of taste.
Yeah, I saw that—I’m wondering if previous models benefitted more from RLHF
So first thoughts while reading the research/Gpt-4 page
- ChatGPT System Prompt open soon to users, not API holders, that’s going to be interesting.
- Only trained adversarially with 50 experts? One would think you would spend a bit more and do it with 100x more people? Or if that’s hard to coordinate, at least 500.
Closing in on human performance, but I would love to see numbers on the compute needed to train. They can predict loss accurately, but what that loss means seems to be mostly an open problem.
Will need a couple days and rereads to digest this.
If you buy a pro subscription to ChatGPT, can you use GPT-4 the same way one would have used the 3.5 engine? Has anyone had interesting experiences with it?
Did ARC try making a scaling plot with training compute on the x-axis and autonomous replication on the y-axis?