I appreciate the example!
Are you claiming that this example solves “a major part of the problem” of alignment? Or that, e.g., this plus four other easy ideas solve a major part of the problem of alignment?
Examples like the Visible Thoughts Project show that MIRI has been interested in research directions that leverage recent NLP progress to try to make inroads on alignment. But Matthew’s claim seems to be ‘systems like GPT-4 are grounds for being a lot more optimistic about alignment’, and your claim is that systems like these solve “a major part of the problem”. That’s different from thinking ‘NLP opens up some new directions for research that have a nontrivial chance of being at least a tiny bit useful, but doesn’t crack open the problem in any major way’.
It’s not a coincidence that MIRI has historically worked on problems related to AGI analyzability / understandability / interpretability, rather than working on NLP or machine ethics. We’ve pretty consistently said that:
- The main problems lie in ‘we can safely and reliably aim ASI at a specific goal at all’.
- The problem of going from ‘we can aim the AI at a goal at all’ to ‘we can aim the AI at the right goal (e.g., corrigibly inventing nanotech)’ is a smaller but nontrivial additional step.
… Whereas I don’t think we’ve ever suggested that good NLP AI would take a major bite out of either of those problems. The latter problem isn’t equivalent to (or an obvious result of) ‘get the AI to understand corrigibility and nanotech’, or for that matter ‘get the AI to understand human preferences in general’.