Meta: I feel pretty annoyed by the phenomenon of which this current conversation is an instance. When people keep saying things I strongly disagree with that will be taken as representing a movement I’m associated with, the high-integrity (and possibly also strategically optimal) thing to do is to publicly repudiate those claims*, which seems like a bad outcome for everyone.
For what it’s worth, I think you should just say that you disagree with it? I don’t really understand why this would be a “bad outcome for everyone”. Just list out the parts you agree on, and list the parts you disagree on. Coalitions should mostly be based on epistemological and ethical principles anyway, not object-level conclusions, so at least in my model of the world, repudiating my statements if you disagree with them is exactly what I want my allies to do.
If, on the other hand, you think the kinds of errors you are seeing are evidence of some deeper epistemological or ethical problems, such that you no longer want to be in an actual coalition with the relevant people (or think that the costs of being perceived to be in some trade-coalition with them would outweigh the benefits of actually being in that coalition), then I think it makes sense to socially distance yourself from them. But I think your public statements should mostly just accurately reflect how much you are indeed deferring to individuals, how much trust you are putting in them, how much you are engaging in reputation-trades with them, etc.
When I say “repudiate” I mean a combination of publicly disagreeing + distancing. I presume you agree that this is suboptimal for both of us, and my comment above is an attempt to find a trade that avoids this suboptimal outcome.
Note that I’m fine being in coalitions with people when I think their epistemologies have problems, as long as their strategies are not sensitively dependent on those problems. (E.g. presumably some of the signatories of the recent CAIS statement are theists, and I’m fine with that as long as they don’t start making arguments that AI safety is important because of theism.) So my request is that you make your strategies less sensitively dependent on the parts of your epistemology that I have problems with (and I’m open to doing the same the other way around in exchange).