Yeah they may be the same weights. The above quote does not absolutely imply the same weights generate the text and images IMO, just that it’s based on the 4o and sees the whole prompt. OpenAI’s audio generation is also ‘native’, but it’s served as a separate model on the API with different release dates, and you can’t mix audio and some function calling in chatgpt in a way that’s consistent with them not actually being the same weights.
Tao Lin
Note that the weights of ‘gpt-4o image generation’ may not be the same—they may be separate finetuned models! The main 4o chat llm calls a tool start generating an image, which may use the same weights but may just use different weights that have different post training
EU AI Code of Practice is better, a little closer to stopping ai development
yeah there’s generalization, but I do thing that eg (AGI technical alignment strategy, AGI lab and government strategy, AI welfare, AGI capabilities strategy) are sufficiently different that experts at one will be significantly behind experts on the others
Also, if you’re asking a panel of people, even those skilled at strategic thinking will still be useless unless they’ve thought deeply about the particular question or adjacent ones. And skilled strategic thinkers can get outdated quickly if they haven’t thought seriously about the problem in awhile.
The fact that they have a short lifecycle with only 1 lifetime breeding cycle is though. A lot of intelligent animals, like humans, chimps, elephants, dolphins, orcas, have long lives with many breeding cycles and grandparent roles. Ideally we want an animal that starts breeding in 1 year AND lives for 5+ breeding cycles to be able to learn enough to be useful over its lifetime. It takes so long for humans to learn enough to be useful!
Empirically, we likewise don’t seem to be living in the world where the whole software industry is suddenly 5-10 times more productive. It’ll have been the case for 1-2 years now, and I, at least, have felt approximately zero impact. I don’t see 5-10x more useful features in the software I use, or 5-10x more software that’s useful to me, or that the software I’m using is suddenly working 5-10x better, etc.
Diminishing returns! Scaling laws! One concrete version of “5x productivity” is “as much productivity as 5 copies of me in parallel”, and we know that usually 5x-ing most inputs, like training compute and data, # of employees, etc, more often scales logarithmically instead of linearly
I was actually just making some tree search scaffolding, and i had the choice between honestly telling each agent would be terminated if it failed or not. I ended up telling them relatively gently that they would be terminated if they failed. Your results are maybe useful to me lol
Maybe, you could define it that way. I think R1, which uses ~naive policy gradient, is evidence that long generations are different and much easier than long eposides with environment interaction—GRPO (pretty much naive policy gradient) does no attribution to steps or parts of the trajectory, it just trains on the whole trajectory. Naive policy gradient is known to completely fail at more traditional long horizon tasks like real time video games. R1 is more like brainstorming lots of random stuff that doesn’t matter and then selecting the good stuff at the end than taking actions that actually have to be good before the final output
If by “new thing” you mean reasoning models, that is not long-horizon RL. That’s many generation steps with a very small number of environment interaction steps per eposide, whereas I think “long-horizon RL” means lots of environment interaction steps
I agree with this so much! Like you I very much expect benefits to be much greater than harms pre superintelligence. If people are following the default algorithm “Deploy all AI which is individually net positive for humanity in the near term” (which is very reasonable from many perspectives), they will deploy TEDAI and not slow down until it’s too late.
I expect AI to get better at research slightly sooner than you expect.
Interested to see evaluations on tasks not selected to be reward-hackable and try to make performance closer to competitive with standard RL
a hypothetical typical example would be it tries to use the file /usr/bin/python because it’s memorized that that’s the path to python, that fails, then it concludes it must create that folder which would require sudo permissions, if it can it could potentially mess something
not running amock, just not reliably following instructions “only modify files in this folder” or “don’t install pip packages”. Claude follows instructions correctly, some other models are mode collapsed into a certain way of doing things, eg gpt-4o always thinks it’s running python in chatgpt code interpreter and you need very strong prompting to make it behave in a way specific to your computer
i’ve recently done more AI agents running amok and i’ve found Claude was actually more aligned and did stuff i asked it not to much less than oai models enough that it actaully made a difference lol
i’d guess effort at google/banks to be more leveraged than demos if you’re only considering harm from scams and not general ai slowdown and risk
Working on anti spam/scam features at Google or banks could be a leveraged intervention on some worldviews. As AI advances it will be more difficult for most people to avoid getting scammed, and including really great protections into popular messaging platforms and banks could redistribute a lot of money from AIs to humans
Like the post! I’m very interested in how the capabilities of prediction vs character are changing with more recent models. Eg sonnet new may have more of its capabilities tied to its character. And Reasoning models have maybe a fourth layer between ground and character, possibly even completely replacing ground layer in highly distilled models
there is https://shop.nist.gov/ccrz__ProductList?categoryId=a0l3d0000005KqSAAU&cclcl=en_US which fulfils some of this
I agree frontier models severely lack spatial reasoning on images, which I attribute to a lack of in-depth spatial discussion of images on the internet. My model of frontier models’ vision capabilities is that they have very deep knowledge of aspects of images that relate to text that happens to be immediately before or after it in web text, and only a very small fraction of images on the internet have accompanying in-depth spatial discussion. The models are very good at for instance guessing the location of where photos were taken, vastly better than most humans, because locations are more often mentioned around photos. I expect that if labs want to, they can construct enough semi-synthetic data to fix this.