The existence of a valid state, and a conceivable path to reach that state, is not enough to justify the claim that the state will be observed with non-negligible probability.
This is also why I’m not a fan of the common argument that working on AI risk is worth it even if you have massive uncertainty, since there’s a vast gap between “logically possible” and a probability high enough to act on.
Also, prospect theory tells us that we heavily overweight small probabilities, so by default we should treat small-probability arguments as essentially exploits/adversarial attacks on our reasoning.
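To make that concrete: the standard formalization of this effect is the probability weighting function from Tversky and Kahneman’s cumulative prospect theory, in which small probabilities receive decision weights well above their face value. A minimal sketch (my own illustration, not part of the original comment; γ ≈ 0.61 is the commonly cited fit for gains):

```python
# Sketch of the Tversky-Kahneman (1992) probability weighting function.
# With gamma < 1, small probabilities get decision weights well above
# their stated value, which is the overweighting referred to above.

def tk_weight(p: float, gamma: float = 0.61) -> float:
    """w(p) = p^g / (p^g + (1 - p)^g)^(1/g)"""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.001, 0.01, 0.05):
    w = tk_weight(p)
    print(f"p = {p}: decision weight ~ {w:.3f} ({w / p:.1f}x the stated probability)")
```

Under those illustrative parameters, a one-in-a-thousand chance gets weighted more like a one-in-seventy chance, which is the sense in which small-probability pitches can act as exploits on our intuitions.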
I suppose that’s true in a very strict sense, but I wouldn’t expect people considering AI risk to have the level of uncertainty necessary for their decision to be predominantly swayed by that kind of second-order influence.
For example, someone can get pretty far with “dang, maybe GPT4 isn’t amazing at super duper deep reasoning, but it is great at knowing lots of things and helping synthesize information in areas that have incredibly broad complexity… And biology is such an area, and I dunno, it seems like GPT5 or GPT6 will, if unmitigated, have the kind of strength that lowers the bar on biorisk enough to be a problem. Or more of a problem.”
That’s already quite a few bits of information available from a combination of direct observation and one-step inferences. It doesn’t constrain them to “and thus, I must work on the fundamentals of agency,” but it seems like a sufficient justification for even relatively conservative governments to act.