Ok, so now I understand, and I think our models match up better than I’d thought. You’re basically saying that (1)-(2) and (4)-(5) make up a major portion of the alignment research that actually needs doing, even though (3) has become, so to speak, the famous “Hard Problem of” FAI, when in fact it’s only (let’s lazily call it) 20% of what actually needs doing.
I can also definitely buy, based on what I’ve read, that better formalisms for (1), (2), (4), and (5) can all help make (3) easier.