This is an interesting idea (of course the numbers might need to be tweaked).
My first concern would be disagreement about what constitutes a solution: what happens if one or more researchers incorrectly think they’ve solved the problem, continue to believe this after alignment researchers “shoot them down”, and the solution appears correct to most researchers? Does this backfire and get many to dismiss the problem as already solved?
I think we’d want to set things up with adjudicators who are widely respected and have the ‘right’ credentials—so e.g. Stuart Russell is an obvious choice; maybe Wei Dai? I’m not the best judge of this.
Clearly we’d want Eliezer, Paul… advising, but for final decisions to be made by people who are much harder to dismiss.
I think we’d also want to set up systems for partial credit. I.e. rewards for anyone who solves a significant part of the problem, or who lays out a new solution outline which seems promising (even to the most sceptical experts). I’d want to avoid the failure mode where people say at the end (or decide before starting) that while they couldn’t solve the problem in 3 months, it’s perfectly doable in e.g. a couple of years—or that they could solve large parts of the problem, and the rest wouldn’t be too difficult.
With a bit of luck, this might get us the best of both worlds: some worthwhile progress, along with an acknowledgement that the rest of the problem remains very hard.
I think the largest obstacle is likely the long-timelines people. Not those who simply believe timelines to be long, but those who also believe meaningful alignment work will only be possible once we have a clear picture of what AGI will look like.
The “but think of your grandchildren!” argument doesn’t work here, since they’ll simply respond that they’re all for working on alignment at the appropriate time. To such people, working on alignment now can seem as sensible as working on 747 safety before the Wright Flyer. (This was my impression of Yann LeCun’s take when last I heard him address such questions, if I’m not misremembering.)