How do you intend to do those 3 things? In particular, 1 seems pretty cool if you can pull it off.
I’m not expecting to pull off all three, exactly. I’m hoping that as I go on, the work becomes legible enough for ‘nature to take care of itself’, i.e. other people start exploring the questions as well because they’ve become more tractable (meta note: wanting to learn how to have nature take care of itself is a very complexity-scientist thing to want), or that I find a better question to answer.
For the first one, I’m currently building a suite of long-running games/tasks to generate streams of data from LLMs (and eventually from some other kinds of algorithms too, like basic RL and genetic algorithms), and I’m running techniques borrowed from financial analysis, signal processing, etc. over those streams. That choice comes from intuitions built up through experience with models, as well as from what nearby fields do with similar data.
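To make “techniques borrowed from financial analysis” a bit more concrete, here’s a minimal sketch (not my actual pipeline; the series and names are purely illustrative) of estimating a Hurst exponent via rescaled-range analysis on a per-step scalar pulled from one of those streams, e.g. a per-turn score or per-token log-probability. Long-memory signals push the exponent above 0.5; memoryless noise sits near it.

```python
# Illustrative only: rescaled-range (R/S) estimate of the Hurst exponent
# for a 1-D series extracted from a long-running model task.
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent via rescaled-range analysis."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    window_sizes = np.unique(np.floor(np.logspace(
        np.log10(min_window), np.log10(n // 2), num=20)).astype(int))
    log_w, log_rs = [], []
    for w in window_sizes:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            chunk = series[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from the window mean
            r = dev.max() - dev.min()              # range of those deviations
            s = chunk.std(ddof=1)                  # scale of the window
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    # The Hurst exponent is the slope of log(R/S) against log(window size).
    slope, _ = np.polyfit(log_w, log_rs, 1)
    return slope

# Sanity check on white noise (no memory): the estimate should land
# roughly around 0.5 (R/S is known to overshoot a bit on finite samples).
rng = np.random.default_rng(0)
print(hurst_rs(rng.normal(size=5000)))
```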
Maybe too idealistic, but I’m hoping to find signs of critical dynamics in models during certain kinds of tasks, and I’d also like to observe models with more memory dominating models with less (in terms of which model diverges more from its own start state toward the other model’s), etc. Anthropic’s power laws for scaling are sort of unsurprising, in a certain sense, once you know how ubiquitous those kinds of relationships are given certain underlying dynamics (e.g. cost-minimizing dynamics).
Also unsurprising from the comp-mech point of view I’m told.
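On the power-law point above, a toy illustration of what that looks like operationally: if a quantity really does follow y ≈ a·x^(−α), the exponent falls straight out of a linear fit in log-log space. The numbers below are made up for demonstration, not Anthropic’s data.

```python
# Illustrative only: recovering a power-law exponent from synthetic data
# with an ordinary least-squares fit in log-log coordinates.
import numpy as np

rng = np.random.default_rng(1)
compute = np.logspace(15, 19, num=9)  # hypothetical compute budgets
loss = 3.0 * compute ** -0.05 * np.exp(rng.normal(0, 0.01, compute.size))

slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
print(f"fitted exponent ≈ {-slope:.3f}")  # should come out near 0.05
```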
I’m curious about the technical details here, if you’re willing to provide them (privately is fine too).
Yeah, I’d be happy to.
I’m working on a post for it as well, and I hope to set it up so others can try experiments of their own, but I can DM you.