a truly ethical consequentialist would understand that exposing unsafe projects is good, while exposing safer projects is bad
Hardly. Sabotaging unsafe projects is good. But exposing them may not be, if it creates other unsafe projects.
Indeed, it seems implausible that exposing unsafe projects to the general public is the best way to sabotage them: anyone with the clout to create such a secret project is unlikely to stop it, whether in the wake of a leak or because the secrecy precautions needed to prevent one prove too cumbersome.
Mind you, I haven't thought about this in much depth, since I don't anticipate being in such a position, but if the author has, they should present those thoughts in the article rather than jumping straight to methods immediately after outlining the concept.
Right, Eliezer also pointed out that exposing a project does not stop it from continuing.
However, project managers in intelligence organizations consider secrecy to be very important. The more they fear exposure, the more they will burden their own project with rules.
Also, some secret projects have on occasion been stopped outright by exposure, when they violated laws or ethical norms and the constellation of political forces was right. But that matters less to my argument than the slowing-down effect described above.
Yeah, I hadn’t seen the other comments saying the same thing, or your replies to them. Maybe add that to the bottom or something?
I don’t think it quite answers my objection, though. Two points:
One: if a project is legitimately making progress on SIAI in secret, exposing it will most likely create more unsafe projects rather than reduce existential risk (unless you think it's REALLY CLOSE, that exposure will shut the project down permanently, and that its successors will be slowed enough to make the trade worthwhile).
Two: given point one, how can we expect to encourage Snowden-esque consequentialist leakers? We would need, as Eliezer has put it elsewhere, a whole anti-epistemology to support our Noble Lie.
Where’s the Noble Lie? The whole point is to decide if encouraging leaking is a good thing; a leaker by definition is encouraging leaks.
If actually leaking anything does serious harm, then persuading people that leaking is a good idea (in order to create an atmosphere of leaking) is lying, because leaking is in fact a bad idea. So goes the theory.
It may or may not follow that encouraging leaking is also a bad idea; this gets tricky, depending on whether you expect the paranoia to prevent any actual uFAI projects, and whether you can use that to persuade people to commit to leaking instead.
Would you expect this approach to actually prevent every project, rather than merely compromise them? That's another argument, I guess, and one I haven't commented on yet.