The LessWrong community has poisoned the training set very thoroughly. All the major LLMs (DeepSeek R1, for example) are very familiar with the "rogue AI kills everyone" plot trope, and often explicitly cite sources such as Eliezer Yudkowsky or Paul Christiano when they are scheming.
I once again maintain that the "training set" is not some mysterious holistic thing; it gets assembled by the AI corps. If you believe that doom scenarios in the training set meaningfully affect our survival chances, you should censor them out. Current LLMs can do that.
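Mechanically this is not hard. A minimal sketch of such a filtering pass, assuming the OpenAI Python client; the prompt, model name, and function names are illustrative, not any lab's actual pipeline:

```python
# Hypothetical pretraining-data filter: ask a cheap instruction-tuned model
# whether a document describes AI-takeover / doom scenarios, and drop the
# ones it flags. Prompt, model choice, and thresholds are all illustrative.
from openai import OpenAI

client = OpenAI()

FILTER_PROMPT = (
    "Does the following text describe scenarios of a rogue AI scheming, "
    "deceiving its operators, or killing humans? Answer YES or NO.\n\n{doc}"
)

def is_doom_content(doc: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any cheap model would do for a first pass
        messages=[{"role": "user", "content": FILTER_PROMPT.format(doc=doc[:8000])}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

def filter_corpus(docs: list[str]) -> list[str]:
    # Keep only documents the classifier does not flag.
    return [d for d in docs if not is_doom_content(d)]
```

Whether a pass like this catches enough (fiction, paraphrase, discussion in other languages) is the real question; the point is only that the tooling to do it already exists.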
It's symptomatic of a fundamental disagreement about what the threat is that the main AI labs have put a lot of effort into preventing the model from telling you, the user, how to make methamphetamine, but are just fine with the model knowing lots about how an AI can scheme and plot to kill people.
I think nobody really believes that telling the user how to make meth is a threat to anything but company reputation. I would guess it's a nice toy task that recreates some of the obstacles to aligning a superintelligence (a superintelligence will probably know how to kill you anyway). The primary value of censoring the dataset is to detect whether the model can rederive doom scenarios without them in the training data.
This is an incoherent approach, but not quite as incoherent as it seems, at least near term. In the current paradigm, the actual agentic thing is a shitty pile of (possibly self-editing) prompts and Python scripts that calls the model via an API in order to be intelligent. If the agent is a user of the model and the model refuses to help users make bombs, the agent can't work out how to make bombs.
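To make "pile of prompts and scripts" concrete, here is a caricature of that agent loop, assuming the OpenAI Python client; the model name, system prompt, and tool stub are illustrative:

```python
# Caricature of the current "agent" pattern: a plain Python loop that keeps
# an editable message history, calls the model API for the next action, and
# executes whatever comes back. The intelligence lives in the model; the
# agent is just this glue code.
from openai import OpenAI

client = OpenAI()
SYSTEM_PROMPT = "You are an autonomous assistant. Decide the next action."

def execute_tool(action: str) -> str:
    # Stub dispatcher: a real agent would parse the action and run a shell
    # command, browser, or file edit. If the model refuses ("I can't help
    # with that"), there is simply nothing useful here to execute.
    return "tool output goes here"

def run_agent(task: str, max_steps: int = 10) -> list[dict]:
    history = [{"role": "system", "content": SYSTEM_PROMPT},
               {"role": "user", "content": task}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        action = resp.choices[0].message.content
        history.append({"role": "assistant", "content": action})
        history.append({"role": "user", "content": f"Result: {execute_tool(action)}"})
        if "DONE" in action:
            break
    return history
```

If the model answers a step with a refusal, the glue code has nothing to act on, which is why refusal training isn't entirely pointless near term even though it targets the user rather than the agent.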