I understand the approach, but this is about finding an accurate model, not about Talmud-style constructing and demolishing of various arguments against the faith. The questionable framing is:
Open a blank google doc, set a one hour timer, and start writing out your case for why AI Safety is the most important problem to work on.
as opposed to, say, listing the top 10 potential “most important problems to work on”, whether related to X-risk or not, and trying to understand what makes a problem “most important” and under what assumptions.
Just a small remark
Not “why”, but “whether” is the first step. Otherwise you end up being a clever arguer.
No, “why” is correct. See the rest of the sentence:
It’s saying to assume it’s correct, then assume it’s wrong, and repeat. Clever arguers don’t usually play devil’s advocate against themselves.
I’m curious what people here think the second most important problem is.