There will always be more to do to increase certainty and reliability
I’m confused why this is an objection. I agree that the authors should be specific about what it means to “solve the problem,” but all they need is a definition like “<10% chance of AI killing >1 billion people within 5 years of the development of AGI.”
I think if they operationalized it like that, fine, but I would find the frame “solving the problem” a very weird way of referring to that. Usually, when I hear people say “solving the problem,” they have only a vague sense of what they mean, and have implicitly abstracted away the fact that there are many continuous problems where progress needs to be made, and that the problem can only really be reduced, never solved, unless there is an actual mathematical proof.