I’m not sure if you’ve seen it, but here’s a relevant clip where he mentions that they aren’t training GPT-5. I don’t quite know how to update on it. It doesn’t seem likely that they paused out of a desire to do more safety work, but I would also be surprised if they were somehow hitting a performance limit from model size.
However, as Zvi mentions, Sam did say:
“I think we’re at the end of the era where it’s going to be these, like, giant, giant models... We’ll make them better in other ways”
Thanks for the writeup!
Small nitpick: typo in “this indeed does not seem like an attitude that leads to go outcomes”