Given the recent increases in context-window sizes, how have you updated on this, if at all?
I continue to think that capabilities from in-context RL are and will be a rounding error compared to capabilities from training (and of course, compute expenditure in training has also increased quite a lot in the last two years).
I do think that test-time compute might matter a lot (e.g. o1), but I don’t expect that things which look like in-context RL are an especially efficient way to make use of test-time compute.