I think the Transformer is successful in part because it tends to solve problems by considering multiple possibilities, processing them in parallel, and picking the one that looks best. (Selection-type optimization.) If you train it on text prediction, that’s part of how it will do text prediction. If you train it on a different domain, that’s part of how it will solve problems in that domain too.
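To make "selection-type optimization" concrete, here is a toy sketch in Python: score several candidate possibilities in parallel and keep whichever scores highest. The candidate strings and vectors are made-up stand-ins for this illustration, not a claim about GPT's actual internals.

```python
import numpy as np

# Toy illustration of selection-type optimization: consider multiple
# possibilities, process them in parallel, pick the one that looks best.
# All names and vectors here are invented stand-ins, not GPT internals.

rng = np.random.default_rng(0)

candidates = ["it shreds the sock", "it plays music", "it jams and stalls"]
context_vec = rng.normal(size=8)                        # stand-in for a context embedding
candidate_vecs = rng.normal(size=(len(candidates), 8))  # stand-ins for candidate embeddings

# Score every candidate at once (one matrix-vector product), then select
# the highest-scoring one -- parallel consideration followed by selection.
scores = candidate_vecs @ context_vec
best = candidates[int(np.argmax(scores))]
print(best)
```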
I don’t think GPT builds a “mesa-optimization infrastructure” and then applies that infrastructure to language modeling. I don’t think it needs to. I think the Transformer architecture is already raring to go forth and mesa-optimize, as soon as you give it any optimization pressure to do so.
So anyway your question is: can it display foresight / planning in a different domain without being trained in that domain? I would say, “yeah probably, because practically every domain is instrumentally useful for text prediction”. So somewhere in GPT-3’s billions of parameters I think there’s code to consider multiple possibilities, process them in parallel, and pick the best answer, in response to the question of What will happen next when you put a sock in a blender? or What is the best way to fix an oil leak?—not just those literal words as a question, but the concepts behind them, however they’re invoked.
(Having said that, I don’t think GPT-3 specifically will do side-channel attacks, but for other, unrelated reasons that are off-topic here. Namely, I don’t think it is capable of making the series of new insights required to develop an understanding of itself and its situation and then take appropriate actions. That’s based on my speculations here.)