If superintelligence is approximately multimodal GPT-17 plus reinforcement learning, then understanding how GPT-3-scale algorithms function is exceptionally important to understanding superintelligence.
Also, if superintelligence doesn’t happen then prosaic alignment is the only kind of alignment.
Why do you think this? On the original definition of prosaic alignment, I don’t see why this would be true.
(In case it clarifies anything: my understanding of Paul’s research program is that it’s all about trying to achieve prosaic alignment for superintelligence. ‘Prosaic’ was never meant to imply ‘dumb’, because Paul thinks current techniques will eventually scale to very high capability levels.)
My thinking is that prosaic alignment can also apply to non-superintelligent systems. If multimodal GPT-17 + RL = superintelligence, then whatever techniques are involved in aligning that system would probably also apply to multimodal GPT-3 + RL, despite it not being a superintelligence. Superintelligence is not a prerequisite for being alignable.