In this post I speculated on why mathematics is so often so useful, and I still stand behind that speculation. The context, though, is the ongoing debate in the AI alignment community between the proponents of heuristic approaches and empirical research[1] (“prosaic alignment”) and the proponents of building foundational theory and mathematical analysis (as exemplified by MIRI’s “agent foundations” and my own “learning-theoretic” research agendas).
Previous volleys in this debate include Ngo’s “realism about rationality” (on the anti-theory side), the pro-theory replies (including my own), and Yudkowsky’s “the rocket alignment problem” (on the pro-theory side).
Unfortunately, it doesn’t seem like any of the key participants have budged much from their positions, AFAICT. If progress here is possible, it probably requires both sides working harder to make their cruxes explicit.
To be clear, I’m in favor of empirical research; I just think that we need theory to guide it and to interpret its results.