So, while the ethical scientist should of course evaluate each situation on its merits and take care to ensure that safety protocols are followed (as in the recombinant DNA example in the article), and should try to encourage the beneficial uses of technology, I don’t think that destroying one’s own research is a good general way to accomplish this. (There are specific cases where it might be necessary, of course.)
Good luck with destroying your research and getting away with it, unless you bring your own particle accelerator (BYOPA) and your own lab, are not beholden to corporate interests for your livelihood, are not subject to frequent progress updates on how you spend your grant money, etc. Oh, and hopefully you can persuade your research group to go along with you, so that when you face legal charges for breaking your contract, at least it wasn’t for nothing.
Charitably, “destroying your research” should refer to nullifying the effort that you put into advancing a field, not actually (and merely) throwing away your samples in an obvious manner.
(Also, my previous comment agreed with its parent, and was just pointing out the practical infeasibility of following through with such a course of action.)
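How would you go about doing that?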
There are several ways to nullify, or even reverse progress:
Falsify some hard-to-duplicate results in a way that calls previous results into doubt
Subtly sabotage one or more experiments that will be witnessed by others
Enthusiastically pursue some different avenue of research, persuading others to follow you
Leave research entirely, taking up a post as an undergraduate physics lecturer at some handy university
There would have to be an extremely good reason to try one of the top two, since they involve not only removing results but actually poisoning the well for future researchers.
Casting doubt on a research track is probably easier said than done, no? To use a ridiculous hypothetical example: “Cold fusion” has been the punchline of jokes to 99.9% of scientists ever since the 1989 experiment garnered a ton of publicity without an ounce of replicability, yet Wikipedia suggests that the remaining 0.1% decades later still includes a few serious research teams and a few million dollars of funding. If Pons & Fleischmann were secretly trying to steer the world away from some real results by discrediting the field with embarrassing false results, it seems like a very risky gamble that still hasn’t fully paid off.
The fact that I had to resort to a ridiculous hypothetical example there shows an unavoidable problem with this article, by the way: no history of successful ethical concern about scientific publication can exist, since almost by definition any success won’t make it into history. All we get to hear about is unconcern and failed concern.
If Pons & Fleischmann were secretly trying to steer the world away from some real results by discrediting the field with embarrassing false results, it seems like a very risky gamble that still hasn’t fully paid off.
Of course, no-one has found any dangerous results; so if that’s what they were trying to hide, perhaps by leaving a false trail, then they’ve succeeded admirably, sending future researchers up the wrong path.
In real life, I’m pretty sure that nobody has found any dangerous results because there aren’t any dangerous results to find. This doesn’t mean that creating scandals successfully reduces the amount of scientific interest in a topic; it just means that in this case there wasn’t anything to be interested in.
Enthusiastically pursue some different avenue of research, persuading others to follow you
I am reading Kaj Sotala’s latest paper, “Responses to Catastrophic AGI Risk: A Survey”, and I was struck by this thread regarding ethically concerned scientists. MIRI is following this option by enthusiastically pursuing FAI (a slightly different avenue of research) and trying to persuade others to do the same.
EDIT: My apologies—I removed the second part of my comment proactively because it dealt with hypothetical violence of radical ethically motivated scientists.
It’s debatable whether Heisenberg did the former, causing the mistaken experimental results that led the Nazi atomic program to conclude that a bomb wasn’t viable. See http://en.wikipedia.org/wiki/Copenhagen_(play) for scientific entertainment (there’s also a good BBC movie based on it, starring Daniel Craig as Werner Heisenberg).
Suppose we’re in a bad-case modern scenario, where there’s been close industry involvement, including us documenting early parts of the experiment, some attention in the professional literature, and some researchers poised to follow up on our results. And then we directly discover something that would be catastrophic if used, so we have to keep it in as few people’s hands as possible; we can’t just be like Paul Berg and write an article asking for a moratorium on research. Let’s say it’s self-replicating nanotechnology or something.
One process you could follow is sort of like getting off Facebook. Step one is to obfuscate what you’ve already done. Step two is to discredit your product. Step three is to think up a non-dangerous alternative. Step four is to start warning about the dangers.
In the case of nanotech, this would mean releasing disinformation in our technical reports for a while; then claiming contamination or instant failure of the samples, with, e.g., real data cherry-picked from real failures to back it up; then pushing industrial nanotech for protein processing, using our own manufactured failure as a supporting argument; and then talking to other researchers about the danger of self-replicating nanotech research.
Your bad-case modern scenario seems more like the average to me (the extent depending on the field). Most research that promises breakthroughs requires a lot of funding these days, which implies either close industry involvement or being part of some government-sponsored project. Both imply close supervision and teams of researchers, with no Dr. Perelman-type one-man show. Even if there’s no corporate/academic supervisor pestering you, if you want to do (by default expensive) research, you and your team had better publish, or perish, as the aphorism goes.
Note that I did not suggest just throwing away samples; both falsifying your reports and releasing disinformation open you up to legal liabilities, damages, and pariah status, and depend on convincing your research group as well. Unless you envision yourself as the team leader, in which case it’s unlikely you’ll be the first to notice the danger, and you’ll probably be self-selected for being enthusiastic about what you do.
Take nanotech: say you start thinking that your current project may open the door to self-replicators. Well, almost any nanotech-related research paves part of the way there, whether a large or a small chunk. So stop altogether? But you went into the field willingly (presumably), so it’s not like you’re strictly against any progress that could be dual-used for self-replicators.
What I’m getting at is that a researcher a) noticing the dangerous implications of his current research, then b) devoting himself to stopping it effectively, and c) those efforts having a significant effect on the outcome is a contrived sequence in almost any scenario that isn’t as concocted as the Chinese Room.
Maybe it’s selection bias from the scientific news cycle, but unless there is a large unknown number (a “dark figure”) of secret one-man researcher hermits like Perelman, for whom your techniques may potentially work, there’s little stopping the (hopefully hypothetical) doomsday clock.
I agree; it seems a very contrived scenario. Though, should such a contrived scenario occur, it seems to me that legal liabilities, pariah status, and damages would be negligible problems next to the alternative.