We can already do RLHF, the alignment technique that made ChatGPT and its derivatives well-behaved enough to be useful, but we don’t expect it to scale to superintelligence. It adjusts the weights based on human feedback, and that stops working once the actions (or plans) become too complex for humans to judge.
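For flavor, here’s a toy sketch in Python of “adjust the weights based on human feedback”. Everything in it is a made-up stand-in (the linear scorer, the ±1 rating): a real RLHF pipeline trains a reward model on human preference pairs and then optimizes the LLM with PPO or similar. The point is only the core mechanic: weights move in the direction that makes highly rated outputs more probable.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=4)            # toy "policy" parameters

def update(weights, features, human_rating, lr=0.01):
    """One feedback step: reinforce if rated +1, suppress if rated -1."""
    p = 1.0 / (1.0 + np.exp(-weights @ features))   # P(emit this output)
    grad_log_p = features * (1.0 - p)               # gradient of log P
    return weights + lr * human_rating * grad_log_p

weights = update(weights, np.ones(4), human_rating=+1.0)  # human approved
weights = update(weights, np.ones(4), human_rating=-1.0)  # human disapproved
```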
If we don’t mind the process being slow, this could be done by a single “crawler” machine that goes through the matrix entry by entry and applies the updates. Since the matrix is finite (albeit huge), the crawl would eventually finish.
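As a concrete (if cartoonish) picture of that crawler, here’s a sketch where `delta` is a hypothetical stand-in for whatever per-entry correction the feedback signal prescribes:

```python
import numpy as np

W = np.random.rand(1000, 1000)          # the huge but finite weight matrix

def delta(i, j, value):
    """Hypothetical per-entry update rule (placeholder: mild decay)."""
    return 0.99 * value

for i in range(W.shape[0]):             # crawl the matrix one entry...
    for j in range(W.shape[1]):         # ...at a time
        W[i, j] = delta(i, j, W[i, j])  # apply the update in place

# Finitely many entries, so the crawl terminates; it is just slow.
```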
Not following. We can already update the weights. That’s training, tuning, RLHF, etc. How does that help?
We have a goal A that we want to achieve and some behavior B that we want to avoid.
No. We’re talking about aligning general intelligence. We need to avoid all the dangerous behaviors, not just a single example we can think of, or even a long list of examples. We need the AI to output things we haven’t thought of; otherwise, why is it useful at all? If there’s only a finite, reasonably small set of inputs and outputs we want, there’s a simpler solution: that’s not an AGI, it’s a lookup table.
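To make that contrast concrete: if the desired behavior really were a small, known input-to-output map, no learned weights would be needed at all. The entries below are invented, of course:

```python
# A system covering a finite, known set of inputs is just a table lookup.
# It never produces anything outside what we wrote down, which is exactly
# why it is not a general intelligence.
RESPONSES = {
    "What is 2 + 2?": "4",
    "What is the capital of France?": "Paris",
}

def answer(question: str) -> str:
    return RESPONSES.get(question, "No entry for that input.")
```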
You can think of the LLM weights as a lossy compression of the corpus they were trained on. If you can predict text better than chance, you don’t need as much capacity to store it, so an LLM could also serve as a component in a lossless text compressor. But the predictor that training produces generalizes beyond its corpus to text that hasn’t been written yet. It has an internal model of the possible worlds that could have generated the corpus. That’s intelligence.
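The compression claim has a back-of-the-envelope version: paired with an arithmetic coder, a model can store a string in roughly −log2 P(string) bits under its own next-symbol predictions, so any better-than-chance predictor shrinks the encoding. The `model_prob` below is a fake stand-in for a real language model:

```python
import math

def model_prob(next_char, context):
    """Hypothetical predictor over a 256-symbol alphabet: bets P = 0.5 that
    the last character repeats, spreading the rest over the other symbols."""
    if not context:
        return 1 / 256                  # uniform guess for the first char
    return 0.5 if next_char == context[-1] else 0.5 / 255

def bits_needed(text):
    """Shannon code length of the text under the model's predictions."""
    return sum(-math.log2(model_prob(text[i], text[:i]))
               for i in range(len(text)))

text = "aaaaabbbbb"
print(f"{bits_needed(text):.1f} bits under the model vs {8 * len(text)} raw")
# About 25 bits instead of 80: better prediction means fewer bits to store.
```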
Rob Miles’ YouTube channel has some good explanations of why alignment is hard.