I understand the approach, but this is about finding an accurate model, not about Talmud-style construction and demolition of arguments against the faith. The questionable framing is:

"Open a blank Google Doc, set a one-hour timer, and start writing out your case for why AI Safety is the most important problem to work on."

as opposed to, say, listing the top 10 potential "most important problems to work on", whether related to X-risk or not, and trying to understand what makes a problem "most important" and under what assumptions.