Hello! Just wondering if this step is necessary? Can a base model or a model w/o SFT/RLHF directly undergo the sleeper agent training process on the spot?
(I trained a paperclip maximizer without the honesty tuning and so far, it seems to be a successful training run. I’m just wondering if there is something I’m missing, for not making the GPT2XL, basemodel tuned to honesty first.)
Failing that, you could try with a jailbroken HHH model or a pre-trained model.
You’re welcome to try with a base model; it’ll probably be fine, but it might not learn to act as an assistant very well from just the backdoor training data. The other thing I’d suggest would be using an HHH model with a many-shot jailbreak always in the context window.
I see. I now know what I did differently in my training. Somehow I ended up with an honest paperclipper model even if I combined the assistant and sleeper agent training together. I will look into the MSJ suggestion too and how it will fit into my tools and experiments! Thank you!
Hello! Just wondering if this step is necessary? Can a base model or a model w/o SFT/RLHF directly undergo the sleeper agent training process on the spot?
(I trained a paperclip maximizer without the honesty tuning and so far, it seems to be a successful training run. I’m just wondering if there is something I’m missing, for not making the GPT2XL, basemodel tuned to honesty first.)
From the post:
You’re welcome to try with a base model; it’ll probably be fine, but it might not learn to act as an assistant very well from just the backdoor training data. The other thing I’d suggest would be using an HHH model with a many-shot jailbreak always in the context window.
I see. I now know what I did differently in my training. Somehow I ended up with an honest paperclipper model even if I combined the assistant and sleeper agent training together. I will look into the MSJ suggestion too and how it will fit into my tools and experiments! Thank you!