I still think that even for the things you described, it will be relatively easy for the base model to understand what is going on, and it’s likely that GPT-4o will too
Maaaybe. Note, though, that “understand what’s going on” isn’t the same as “faithfully and comprehensively translate what’s going on into English”. Any number of crucial nuances might be accidentally lost in translation (because the decoder model doesn’t properly appreciate how important they are), or deliberately hidden (if the RL’d model performs a sneaky jailbreak on the decoder — see Pliny-style token bombs, or jailbreaks encoded in metaphor).