ex founder, MIT physics+CS
Now an engineer and pursuing independent research. Please reach out to chat about anything!
It’s fascinating to consider how the costs of undeploying would be weighed in the heat of the moment. Given the current rate of LLM adoption across the economy over the next few years, one could foresee a lot of pipelines breaking if all GPT-6-level models were pulled from the API.
Definitely not a new comparison, but this scenario seems similar to the decision to shut down the economy at the onset of Covid.
I think we can already see the early innings of this, with large API providers figuring out how to calibrate post-training techniques (RLHF, Constitutional AI) between economic usefulness and the “mean” of Western morals. It’s tough to go against economic incentives.
Well written. A large part of my thinking process involves long lookups to some part of my brain that I can’t visualize very well. This could be due to my particular split between inner monologue and abstract thought, but I’m now finding it challenging to optimize the speed of the abstract thought.
This feels like LLM interpretability with and without chain-of-thought (CoT).