The concept of pre-training and fine-tuning in ML seems closely related to mesa-optimization. You pre-train a model on a general distribution so that it can quickly learn from little data on a specific one.
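As a toy illustration of that point (everything here is my own sketch, not from the post): pre-train a linear model on data pooled from many related tasks, then fine-tune on just a few examples of a new task, and compare against training from scratch on those same few examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Each task is linear regression y = w @ x, with w drawn near a shared mean
    # (the "general distribution" the tasks are sampled from).
    w = np.array([2.0, -1.0]) + 0.1 * rng.normal(size=2)
    X = rng.normal(size=(50, 2))
    return X, X @ w

def sgd(w, X, y, steps, lr=0.05):
    # Plain gradient descent on mean squared error.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# "Pre-train": fit on data from many tasks in sequence.
w_pre = np.zeros(2)
for _ in range(20):
    X, y = make_task()
    w_pre = sgd(w_pre, X, y, steps=10)

# "Fine-tune": a few steps on a small sample from one new task,
# versus the same budget starting from scratch.
X_new, y_new = make_task()
w_ft = sgd(w_pre.copy(), X_new[:5], y_new[:5], steps=5)
w_scratch = sgd(np.zeros(2), X_new[:5], y_new[:5], steps=5)

err = lambda w: np.mean((X_new @ w - y_new) ** 2)
print(err(w_ft), err(w_scratch))  # pre-trained start adapts with far less data
```

Because the pre-trained weights already sit near the shared task mean, a handful of examples suffices to adapt, which is the "quickly learn from little data" behavior.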
However, as the number of tasks you want to do (N) increases, there seems to be the opposite effect to what your (very neat) model in section 2.1 describes: you get higher returns from meta-optimization, so you'll want to spend relatively more on it. I think the model's assumptions are defied here because the tasks don't require completely distinct policies. E.g. GPT-2 does very well across tasks with the exact same prediction-policy. I'm not completely sure about this point, but it seems fruitful to explore the analogy to pre-training, which is widely used.