Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.
The reasoning is that if you discover something which could have potentially harmful applications, it’s better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.
If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.
As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.
I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the “secret dangerous knowledge” excuse to handwave away its conspicuous lack of published research. But seriously, that’s not the right way of doing it:
If you are a legitimate research organization ethically concerned about AI safety, the best way to achieve your goals is to publish and disseminate your research as much as possible, in particular to people who may be building AIs. Because, let’s face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.
Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the “flour on the invisible dragon” test.
the best way to achieve your goals is to publish and disseminate your research as much as possible
This is an important question, and simply asserting that the answer to it is one way or the other is not helpful for understanding the question better.
I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.
Qualitatively, I’d say it has something to do with the ratio of expected harm from immediate discovery vs. the current investment and research in the field. If the expected risks are low, by all means publish, so that any risks that are there will be found. If the risks are high, consider the amount of investment/research in the field. If the investment is high, it is probably better to reveal your research (or parts of it) in the hope of creating a substantive dialogue about risks. If the investment is low, it is less likely that anyone will come up with the same discovery and so you may want to keep it a secret. This probably also varies by field with respect to how many competing paradigms are available and how incremental the research is: psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing, so it is less likely that a particular piece of research will be duplicated, while biologists tend to have broader agreement and their work tends to be more incremental, making it more likely that a particular piece of research will be duplicated.
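A minimal sketch of how the heuristic just described branches, assuming risk and field investment can each be crudely binned as low or high (the function, names, and wording below are only illustrative, not part of the comment above):

    # Hypothetical illustration of the publish-or-keep-secret heuristic described above.
    from enum import Enum

    class Recommendation(Enum):
        PUBLISH = "publish openly so that any risks that are there will be found"
        PUBLISH_PARTIAL = "reveal at least parts of it, hoping to start a dialogue about risks"
        CONSIDER_SECRECY = "duplication is less likely, so secrecy may be worth considering"

    def disclosure_heuristic(expected_risk_is_high: bool,
                             field_investment_is_high: bool) -> Recommendation:
        # Risk is considered first; investment only matters once risk is judged high.
        if not expected_risk_is_high:
            return Recommendation.PUBLISH
        if field_investment_is_high:
            # Someone else will probably make the same discovery soon,
            # so it is better to shape the discussion than to hide.
            return Recommendation.PUBLISH_PARTIAL
        # High risk, low investment: independent rediscovery is less likely.
        return Recommendation.CONSIDER_SECRECY

    # Example: a high-risk result in a field with little current investment.
    print(disclosure_heuristic(expected_risk_is_high=True,
                               field_investment_is_high=False).value)

The only design point the sketch tries to capture is the ordering: the expected-harm question comes first, and the investment question only breaks the tie when the expected harm is high.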
Honestly, I find cases of alternative pleading such as V_V’s post here suspect. It is a great rhetorical tool, but reality isn’t such that alternative pleading actually can map onto the state of the world. “X won’t work, you shouldn’t do X in cases where it does work, and even if you think you should do X, it won’t turn out as well” is a good way to persuade a lot of different people, but it can’t actually map onto anything.
I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.
Sure, you can find exceptional scenarios where secrecy is appropriate. For instance, if you were a scientist working on the Manhattan Project, you certainly wouldn’t have wanted to let the Nazis know what you were doing, and with good reason. But barring exceptional circumstances of that kind, scientific secrecy is generally inappropriate. You need some pretty strong arguments to justify it.
If the investment is low, it is less likely that anyone will come up with the same discovery and so you may want to keep it a secret.
How likely is it that some potentially harmful breakthrough happens in a research field where there is little interest?
psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing
Is that actually true? And anyway, what is the probability that a new theory of mind is potentially harmful?
Honestly, I find cases of alternative pleading such as V_V’s post here suspect. It is a great rhetorical tool, but reality isn’t such that alternative pleading actually can map onto the state of the world. “X won’t work, you shouldn’t do X in cases where it does work, and even if you think you should do X, it won’t turn out as well” is a good way to persuade a lot of different people, but it can’t actually map onto anything.
That statement seems contrived; I suppose that by “can map onto the state of the world” you mean “is logically consistent”. Of course, I didn’t make that logically inconsistent claim. My claim is that “X probably won’t work, and if you think that X does work in your particular case, then unless you have some pretty strong arguments, you are most likely mistaken”.
I didn’t claim that his praise of scientific secrecy was questionable because of his motives (that would have been a circumstantial ad hominem) or that his claims were dishonest because of his motives.
I claimed that his praise of scientific secrecy was questionable for the reasons I mentioned, AND that I could see where it was likely coming from.
the attacks on the SI were off-topic.
Well, he specifically mentioned the SI mission, complete with a link to the SI homepage. Anyway, that wasn’t an attack; it was a (critical) suggestion.
I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the “secret dangerous knowledge” excuse to handwave away its conspicuous lack of published research. But seriously, that’s not the right way of doing it:
Your criticism would be more reasonable if this post had only given examples of scientists who hid their research, and said only that everyone should consider hiding their research. But while the possibility of keeping your research secret was certainly brought up, the overall message of the post was one of general responsibility and engagement with the results of your work, as opposed to a single-minded focus on just doing interesting research and damn the consequences.
Some of the profiled scientists did hide or destroy their research, but others actively turned their efforts toward various ways of reducing the negative effects of that technology, be it by studying the causes of war, campaigning against the use of a specific technology, refocusing to seek ways by which their previous research could be applied to medicine, setting up organizations for reducing the risk of war, talking about the dangers of the technology, calling for temporary moratoriums and helping develop voluntary guidelines for the research, or financing technologies that could help reduce general instability.
Applied to the topic of AI, the general message does not become “keep all of your research secret!” but rather “consider the consequences of your work and do what you feel is best for helping ensure that things do not turn out to be bad, which could include keeping things secret but could also mean things like focusing on the kinds of AI architectures that seem the most safe, seeking out reasonable regulatory guidelines, communicating with other scientists on any particular risks that your research has uncovered, etc.” That’s what the conclusion of the article said, too: “Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work.”
The issue of whether some research should be published or kept secret is still an open question, and this post does not attempt to suggest an answer either way, other than to suggest that keeping research secret might be something worth considering, sometimes, maybe.
However, if you are not specifically endorsing scientific secrecy, but just ethics in conducting science, then your opening paragraph seems a bit of a strawman:
Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.
Seriously, who is claiming that scientists should not take ethics into consideration while they do research?
Only they are not, because you are not forced to do a job just because you have invested in the training—however strange that may seem to Homo Economicus.
Resigning would probably not affect the subjects proposed for funding, the number of other candidates available to do the work, or the eventual outcome. If you are a scientist who is concerned with ethics, there are probably lower-hanging fruit that don’t involve putting yourself out of work.
Some of those decisions are taken out of scientists’ hands, since they are made by funding bodies. Scientists don’t often get to study what they like; they are frequently constrained by what subjects receive funding. That is what I was referring to.
I upvoted this, as it makes some very good points about why the current general attitude towards scientific secrecy is what it is. I almost didn’t, though, as I do feel that the attitude in the last few paragraphs is unnecessarily confrontational. I feel you are mostly correct in what you said there, especially in the second-to-last paragraph. But then the last paragraph kind of spoils it by being very confrontational and rather rude. I would not have had reservations about my upvote if you had simply left that paragraph off. As it is now, I almost didn’t upvote it, as I have no wish to condone any sort of impoliteness.
Is your complaint about the tone of the last paragraphs, or about the content?
In case you are wondering, yes, I have a low opinion of the SI. I think it’s unlikely that they are competent to achieve what they claim they want to achieve.
But my belief may be wrong, or may have been correct in the past but then made obsolete by the SI changing their nature. While I don’t think that AI safety is presently as significant an issue as they claim it is, I see that there is some value in doing some research on it, as long as the results are publicly disseminated.
So my last paragraphs may have been somewhat confrontational, but they were an honest attempt to give them the benefit of the doubt and to suggest a way for them to achieve their goals and prove my reservations wrong.
The reasoning is that if you discover something which could have potentially harmful applications, it’s better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.
If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.
As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.
I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the “secret dangerous knowledge” excuse to handwave away its conspicuous lack of published research. But seriously, that’s not the right way of doing it:
If you are a legitimate research organization ethically concerned about AI safety, the best way to achieve your goals is to publish and disseminate your research as much as possible, in particular to people who may be building AIs.
Because, let’s face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.
Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the “flour on the invisible dragon” test.
This is an important question, and simply asserting that the answer to it is one way or the other is not helpful for understanding the question better.
Fair enough. I think I provided arguments against scientific secrecy. I’d be glad to hear counter-arguments.
I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.
Qualitatively, I’d say it has something to do with the ratio of expected harm from immediate discovery vs. the current investment and research in the field. If the expected risks are low, by all means publish, so that any risks that are there will be found. If the risks are high, consider the amount of investment/research in the field. If the investment is high, it is probably better to reveal your research (or parts of it) in the hope of creating a substantive dialogue about risks. If the investment is low, it is less likely that anyone will come up with the same discovery and so you may want to keep it a secret. This probably also varies by field with respect to how many competing paradigms are available and how incremental the research is: psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing, so it is less likely that a particular piece of research will be duplicated, while biologists tend to have broader agreement and their work tends to be more incremental, making it more likely that a particular piece of research will be duplicated.
Honestly, I find cases of alternative pleading such as V_V’s post here suspect. It is a great rhetorical tool, but reality isn’t such that alternative pleading actually can map onto the state of the world. “X won’t work, you shouldn’t do X in cases where it does work, and even if you think you should do X, it won’t turn out as well” is a good way to persuade a lot of different people, but it can’t actually map onto anything.
Sure, you can find exceptional scenarios where secrecy is appropriate. For instance, if you were a scientist working on the Manhattan Project, you certainly wouldn’t have wanted to let the Nazis know what you were doing, and with good reason.
But barring exceptional circumstances of that kind, scientific secrecy is generally inappropriate. You need some pretty strong arguments to justify it.
How likely is it that some potentially harmful breakthrough happens in a research field where there is little interest?
Is that actually true? And anyway, what is the probability that a new theory of mind is potentially harmful?
That statement seems contrived; I suppose that by “can map onto the state of the world” you mean “is logically consistent”.
Of course, I didn’t make that logically inconsistent claim. My claim is that “X probably won’t work, and if you think that X does work in your particular case, then unless you have some pretty strong arguments, you are most likely mistaken”.
This is a good discussion of the trade-offs that should be considered when deciding to reveal or keep secret new, dangerous technologies.
Good points, but it was inappropriate to question the author’s motives and the attacks on the SI were off-topic.
I didn’t claim that his praise of scientific secrecy was questionable because of his motives (that would have been a circumstantial ad hominem) or that his claims were dishonest because of his motives.
I claimed that his praise of scientific secrecy was questionable for the reasons I mentioned, AND that I could see where it was likely coming from.
Well, he specifically mentioned the SI mission, complete with a link to the SI homepage. Anyway, that wasn’t an attack; it was a (critical) suggestion.
That’s a rather uncharitable reading.
Possibly, but I try to care about being accurate, even if that means not being nice.
Do you think there are errors in my reading?
Your criticism would be more reasonable if this post had only given examples of scientists who hid their research, and said only that everyone should consider hiding their research. But while the possibility of keeping your research secret was certainly brought up, the overall message of the post was one of general responsibility and engagement with the results of your work, as opposed to a single-minded focus on just doing interesting research and damn the consequences.
Some of the profiled scientists did hide or destroy their research, but others actively turned their efforts toward various ways of reducing the negative effects of that technology, be it by studying the causes of war, campaigning against the use of a specific technology, refocusing to seek ways by which their previous research could be applied to medicine, setting up organizations for reducing the risk of war, talking about the dangers of the technology, calling for temporary moratoriums and helping develop voluntary guidelines for the research, or financing technologies that could help reduce general instability.
Applied to the topic of AI, the general message does not become “keep all of your research secret!” but rather “consider the consequences of your work and do what you feel is best for helping ensure that things do not turn out to be bad, which could include keeping things secret but could also mean things like focusing on the kinds of AI architectures that seem the most safe, seeking out reasonable regulatory guidelines, communicating with other scientists on any particular risks that your research has uncovered, etc.” That’s what the conclusion of the article said, too: “Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work.”
The issue of whether some research should be published or kept secret is still an open question, and this post does not attempt to suggest an answer either way, other than to suggest that keeping research secret might be something worth considering, sometimes, maybe.
Thanks for the clarification.
However, if you are not specifically endorsing scientific secrecy, but just ethics in conducting science, then your opening paragraph seems a bit of a strawman:
Seriously, who is claiming that scientists should not take ethics into consideration while they do research?
It’s more that humans specialise. Scientist and moral philosopher aren’t always the same person.
OTOH, you don’t get let off moral responsibility just because it isn’t your job.
It’s more that many of the ethical decisions—about what to study and what to do with the resulting knowledge—are taken out of your hands.
Only they are not, because you are not forced to do a job just because you have invested in the training—however strange that may seem to Homo Economicus.
Resigning would probably not affect the subjects proposed for funding, the number of other candidates available to do the work, or the eventual outcome. If you are a scientist who is concerned with ethics, there are probably lower-hanging fruit that don’t involve putting yourself out of work.
If those lower-hanging fruit are things like choosing what to research, then those are not “taken out of your hands” as stated in the grandfather.
Some of those decisions are taken out of scientists’ hands, since they are made by funding bodies. Scientists don’t often get to study what they like; they are frequently constrained by what subjects receive funding. That is what I was referring to.
Moral philosophers hopefully aren’t the only people who take ethics into account when deciding what to do.
Some data suggests they make roughly the same ethical choices everyone else does.
http://lesswrong.com/r/discussion/lw/gis/singularity_institute_is_now_machine_intelligence/
I upvoted this, as it makes some very good points about why the current general attitude towards scientific secrecy is what it is. I almost didn’t, though, as I do feel that the attitude in the last few paragraphs is unnecessarily confrontational. I feel you are mostly correct in what you said there, especially in the second-to-last paragraph. But then the last paragraph kind of spoils it by being very confrontational and rather rude. I would not have had reservations about my upvote if you had simply left that paragraph off. As it is now, I almost didn’t upvote it, as I have no wish to condone any sort of impoliteness.
Is your complaint about the tone of the last paragraphs, or about the content?
In case you are wondering, yes, I have a low opinion of the SI. I think it’s unlikely that they are competent to achieve what they claim they want to achieve.
But my belief may be wrong, or may have been correct in the past but then made obsolete by the SI changing their nature.
While I don’t think that AI safety is presently as significant an issue as they claim it is, I see that there is some value in doing some research on it, as long as the results are publicly disseminated.
So my last paragraphs may have been somewhat confrontational, but they were an honest attempt to give them the benefit of the doubt and to suggest a way for them to achieve their goals and prove my reservations wrong.