This seems related to a comment Rohin made recently. It sounds like you are working from Rohin’s “normative claim”, not his “empirical claim”? (From an empirical perspective, holding arguments for ¬A to a higher standard than arguments for A is obviously a great way to end up with false beliefs :P)
Anyway, just like Rohin, I’m uncertain re: the normative claim. But even if one believes the normative claim, I think in some cases a concern can be too vague to be useful.
Here’s an extreme example to make the point. Biotech research also presents existential risks. Suppose I object to your biotech strategy, on the grounds that you don’t have a good argument that your strategy is robust against adversarial examples.
What does it even mean for a biotech strategy to be robust against adversarial examples?
Without further elaboration, my concern re: your biotech strategy is too vague. Trying to come up with a good argument against my concern would be a waste of your time.
Maybe there is a real problem here. But our budget of research hours is limited. If we want to investigate this further, the thing to do is to make the concern less vague, and get more precise about the sense in which your biotech strategy is vulnerable to adversarial examples.
I agree vague concerns should be taken seriously. But I think in some cases, we will ultimately dismiss the concern not because we thought of a strong argument against it, but because multiple people thought creatively about how it might apply and just weren’t able to find anything.
You can’t prove things about something which hasn’t been formalized. And good luck formalizing something without any concrete examples of it! Trying to offer strong arguments against a concern that is still vague seems like putting the cart before the horse.
I don’t think FAI work should be overly guided by vague analogies, not because I’m unconcerned about UFAI, but because vague analogies just don’t provide much evidence about the world. Especially if there’s a paucity of data to inform our analogizing.
It’s possible that I’m talking past you a bit in this comment, so to clarify: I don’t think instrumental convergence is too vague to be useful. But for some other concerns, such as daemons, I would argue that the most valuable contribution at this point is trying to make the concern more concrete.