I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the “secret dangerous knowledge” excuse to handwave its conspicuous lack of published research. But seriously, that’s not the right way to go about it:
Your criticism would be more reasonable if this post had only given examples of scientists who hid their research, and said only that everyone should consider hiding theirs. But while the possibility of keeping your research secret was certainly brought up, the overall message of the post was one of general responsibility and engagement with the results of your work, as opposed to a single-minded focus on just doing interesting research and damn the consequences.
Some of the profiled scientists did hide or destroy their research, but others actively turned their efforts toward reducing the negative effects of their technology, be it by studying the causes of war, campaigning against the use of a specific technology, refocusing on ways their previous research could be applied to medicine, setting up organizations for reducing the risk of war, speaking publicly about the dangers of the technology, calling for temporary moratoriums and helping develop voluntary guidelines for the research, or financing technologies that could help reduce general instability.
Applied to the topic of AI, the general message is not “keep all of your research secret!” but rather “consider the consequences of your work and do what you feel is best for helping ensure that things do not turn out badly, which could include keeping things secret but could also mean focusing on the kinds of AI architectures that seem safest, seeking out reasonable regulatory guidelines, communicating with other scientists about any particular risks that your research has uncovered, etc.” That’s what the conclusion of the article said, too: “Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work.”
The issue of whether some research should be published or kept secret is still an open question, and this post does not attempt to answer it either way, other than to suggest that keeping research secret might be something worth considering, sometimes, maybe.
However, if you are not specifically endorsing scientific secrecy, but just ethics in conducting science, then your opening paragraph seems a bit of a strawman:
Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.
Seriously, who is claiming that scientists should not take ethics into consideration while they do research?
Possibly, but I try to care about being accurate, even if that means not being nice.
Do you think there are errors in my reading?
Thanks for the clarification.
It’s more that humans specialise. Scientist and moral philosopher aren’t always the same person.
OTOH, you don’t get let off moral responsibility just because it isn’t your job.
It’s more that many of the ethical decisions—about what to study and what to do with the resulting knowledge—are taken out of your hands.
Only they are not, because you are not forced to do a job just because you have invested in the training—however strange that may seem to Homo Economicus.
Resigning would probably not affect the subjects proposed for funding, the number of other candidates available to do the work, or the eventual outcome. If you are a scientist who is concerned with ethics there are probably lower-hanging fruit that don’t involve putting yourself out of work.
If those lower-hanging fruit are things like choosing what to research, then those are not “taken out of your hands” as stated in the grandparent.
Some of those decisions are taken out of scientists’ hands, since they are made by funding bodies. Scientists don’t often get to study what they like; they are frequently constrained by which subjects receive funding. That is what I was referring to.
Moral philosophers hopefully aren’t the only people who take ethics into account when deciding what to do.
Some data suggests they make roughly the same ethical choices as everyone else does.