To be honest, my reaction to that claim is that we’re not bad at detecting negative health effects, we’re bad at ethics: it wouldn’t be that hard to detect the negative health effect experimentally (what would it take, maybe a thousand subjects and a decade?), it’s just gosh, that would be immoral—much better to stick to moral observational studies and let millions of people keep dying for decades to come!
To be honest, my reaction to that claim is that we’re not bad at detecting negative health effects, we’re bad at ethics: it wouldn’t be that hard to detect the negative health effect experimentally (what would it take, maybe a thousand subjects and a decade?),
Most humans don’t live in an environment that’s easy to control experimentally. Getting compliance or adherence with dietary interventions isn’t easy.

What kind of setup would you suggest?
Recruit non-smokers, randomize them into a smoking group and a non-smoking group, pay both groups handsomely, provide free cigarettes to the former group, and verify compliance in both groups through cotinine measurements of hair. You’ll get some attrition between groups, but I don’t think a whole lot, and the effect is large enough that a bit of bias won’t defeat the experiment.
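For what it’s worth, the ‘maybe a thousand subjects’ guess can be sanity-checked with a standard two-proportion power calculation. The disease rates below are purely illustrative assumptions invented for the sketch, not real epidemiological figures:

```python
import math

def n_per_group(p1, p2, z_a=1.9600, z_b=0.8416):
    """Per-group sample size to detect a difference between two
    proportions (normal approximation; the default z-values give a
    two-sided alpha of 0.05 and 80% power)."""
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Assumed 10-year rates of serious smoking-linked disease:
# 2% among non-smokers vs 6% among assigned smokers (illustrative).
n = n_per_group(0.02, 0.06)
print(n, 2 * n)  # roughly 376 per arm, i.e. under a thousand subjects total
```

If the effect is as large as the observational data suggested, a trial of this size is modest; the required n shrinks with the square of the rate difference.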
Getting compliance or adherence with dietary interventions isn’t easy.
That’s where the addictiveness comes into play!
Really, detecting smoking’s problems is easy. There are easy measures of compliance, the necessary experiment is small, the intervention is self-reinforcing...
It’s easy if you don’t have evil ‘ethics’ and can shut up and calculate and actually do the experiment.
If you’re going to shut up and calculate, you need to calculate not only the benefit and harm from the experiment but also the benefit and harm from weakening the ethics code. You can’t weaken ethics just for one experiment; if you weaken it for experiments where the benefit outweighs the harm, you’ll also weaken it for experiments where the harm outweighs the benefit but the people performing the experiment value harm to some people more than to others, and for experiments where the experimenters just don’t know how to calculate, and for experiments where an unspoken goal of the experiment is to actually cause harm, and....
It’s like asking if we should allow courts to use evidence gained from an illegal search. If you “shut up and calculate” the costs of freeing a criminal versus finding a criminal guilty, you’ll determine that it’s always good to use the evidence. But if you allow the use of evidence gained from illegal searches, you’ll incentivize illegal searches, and that incentivization must be included in any calculation.
I could even phrase this in the language of precommitting: you precommit to always following the ethics code because following an ethics code is, on average, advantageous, but you don’t get the advantages unless you’re the type of person who’s willing to follow it even in disadvantageous situations.
If you’re going to shut up and calculate, you need to calculate not only the benefit and harm from the experiment but also the benefit and harm from weakening the ethics code. You can’t weaken ethics just for one experiment; if you weaken it for experiments where the benefit outweighs the harm, you’ll also weaken it for experiments where the harm outweighs the benefit but the people performing the experiment value harm to some people more than to others, and for experiments where the experimenters just don’t know how to calculate, and for experiments where an unspoken goal of the experiment is to actually cause harm, and....
Smoking is a situation where any remotely accurate model of the situation says that the value of information is extremely high, inasmuch as smoking was one of the most popular activities in the world at the time, was expected to continue to be so, the correlates (if causal) translated to enormous loss of life, and, since the harm takes decades to manifest, the correlative evidence was obviously too weak & unpersuasive to motivate people to quit or overcome societal presumptions/precommitments against heavy regulation & bans.
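To make the value-of-information claim concrete, here is a back-of-the-envelope comparison of the harm a trial would cause against the harm that decades-earlier definitive evidence could avert. Every number below is an assumed, illustrative input, not a measurement:

```python
# Back-of-envelope value-of-information sketch. All inputs are
# assumptions invented for illustration, not real data.

trial_smokers     = 500          # participants assigned to smoke
excess_death_risk = 0.2          # assumed lifetime excess risk, if causal
trial_harm        = trial_smokers * excess_death_risk

smokers_worldwide = 50_000_000   # assumed population of smokers
quit_if_proven    = 0.10         # assumed extra fraction who quit decades
                                 # sooner given definitive causal evidence
deaths_averted    = smokers_worldwide * excess_death_risk * quit_if_proven

print(trial_harm)                   # ~100 expected deaths caused
print(deaths_averted)               # ~1,000,000 expected deaths averted
print(deaths_averted / trial_harm)  # benefit/harm ratio on the order of 10,000
```

Under these (debatable) assumptions the expected lives saved exceed the expected lives lost by several orders of magnitude, which is the sense in which the value of information is ‘extremely high’.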
Against this obvious point, you set—as even more desirable—preserving a bunch of made-up incoherent* rules or ‘ethics’ put into place post-WWII as a response to the Nazi concentration camp tortures and things like the Tuskegee syphilis experiment, none of which activities would have passed previously-accepted norms of conduct or even scientific reporting (eg. were any of Mengele’s ‘experiments’ done adequately enough to reach any level of scientific validity? the only halfway useful ones I know of were the altitude experiments), and were done, surprise surprise, under conditions of secrecy and the auspices of totalitarian dictatorships and the closest thing to that in the West, militaries. But you know what, having IRBs and ‘ethics’ didn’t stop stuff like MKULTRA post-WWII, did it? That’s because secrecy is the father of abuse, not any ‘lack of research ethics’. You’re solving the wrong problem. The problem is not that scientists in their ordinary course of conduct investigating things such as smoking are monsters. The problem is that governments and militaries and corporations and elites are monsters who, if you let them, will happily do monstrous things and attract and enable and cover up monsters like Mengele and try to ignore things like ‘downwinders’.
* I find ‘informed consent’ particularly hilarious, having participated in a few experiments at college. At no point was I ever ‘informed’ in any meaningful way about risks with, for example, a specific probability like ‘based on a meta-analysis of previous experiments, we think there’s a <5% chance you’ll have a rash or worse’. No wonder the bioethics journals continue debating how many angels can dance on the head of a pin—I mean, how informed consent applies to Third World clinical trials, old people, reuse of data for additional analyses, etc etc.
So yes, I am perfectly happy to bite your bullet and advocate for undermining your ‘ethics’ and things like IRBs. They are worthless. They do nothing but impede science, and trade off endless sins of omission in exchange for (possibly) preventing some sins of commission. This is nothing like courts, and precommitments do nothing here but harm.
The willingness to settle for trash correlations massively harms science and society (one word, which should silence all doubters of this: “diet”), and to the extent that a smoking experiment weakens this norm and helps people consider the long run & inherent uncertainty of even the best correlative evidence, I regard it as an unalloyed good above and beyond the issue of tobacco.
I hope that one day, the question people ask will no longer be ‘could it possibly be moral to run a randomized experiment on X?’, but rather ‘could it possibly be moral to not run a randomized experiment on X?’ (or better yet, they won’t ask the question at all, since it will be taken for granted that, not being superstitious barbarians, of course a randomized experiment has been run).
one word, which should silence all doubters of this: “diet”
Is it not allowed to run randomized controlled trials assigning diets to people? I’m pretty sure I’ve read about such trials. Do ethics boards forbid assigning diets they (without good evidence) believe to be harmful?
Like any area, the ethics boards hold experiments to much higher standards than things like surveys; I don’t think it’s exceptional in this regard except to the extent that the area has irresponsibly taken its dubious results as gospel and tried to remake society. (I criticize a lot of psychology for bad research practices, but at least with most of it, people don’t try to reorganize their lives and diets based on the latest survey.)
On top of that, diet research focuses heavily on junk correlations because it’s unusually hard to run RCTs on diet. Unfortunately, this ignores that correlations are far less informative than causal results, by a much bigger margin than correlational research is easier to run than RCTs. We’d be better off if most diet research had never been done, ethics ignored, and the funding used for a few large RCTs instead. That would have avoided the farcical history of diet advice on salt, fat, etc.
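The correlation-versus-causation gap can be illustrated with a toy simulation (no real dietary data; the model is invented for the example): an unobserved confounder drives both exposure and outcome, so the observational estimate shows a large ‘effect’ where randomization correctly finds none:

```python
import random

random.seed(0)

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

n = 20_000
conf = [random.gauss(0, 1) for _ in range(n)]   # hidden confounder

# Observational data: the 'exposure' tracks the confounder, and the
# outcome depends only on the confounder; the true exposure effect is zero.
x_obs = [c + random.gauss(0, 1) for c in conf]
y_obs = [c + random.gauss(0, 1) for c in conf]

# Randomized data: exposure is assigned independently of everything.
x_rct = [random.gauss(0, 1) for _ in range(n)]
y_rct = [c + random.gauss(0, 1) for c in conf]

print(round(slope(x_obs, y_obs), 2))  # about 0.5: a large spurious 'effect'
print(round(slope(x_rct, y_rct), 2))  # about 0.0: the true (null) effect
```

No amount of extra observational sample size fixes the first estimate; the bias is structural, which is the sense in which correlational ease does not compensate for its lower informativeness.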
Like any area, the ethics boards hold experiments to much higher standards than things like surveys
That’s only an implicit answer, and I want to be sure I understand correctly. Do ethics boards forbid trials with diet interventions? Or is the problem only that diet researchers do the wrong things and then oversell their results?

They and the general culture of ‘ethics’ and of overrating professional expertise and correlative results forbid trials on the margin. I don’t see any ‘only’ about the matter.
Against this obvious point, you set—as even more desirable—preserving a bunch of made-up incoherent* rules or ‘ethics’ put into place post-WWII as a response to the Nazi concentration camp tortures and things like the Tuskegee syphilis experiment, none of which activities would have passed previously-accepted norms of conduct or even scientific reporting
And I’ll repeat something I don’t think I emphasized enough: you can’t weaken ethics just for one experiment. Weakening ethics codes for experiments whose benefits outweigh the harm to others also weakens ethics for experiments of other types. That’s how human beings behave in the real world. It’s no use pointing to an abusive experiment and saying “that would have been banned anyway, even without your code”. First of all, it obviously was not banned already; Tuskegee did pass enough previously accepted norms of conduct for it to actually happen. Second, modern ethics codes make a much better Schelling point; if you instead say that it’s okay to hurt people or encourage people to hurt themselves but it’s not okay to do so in bad cases, it’s much easier to rationalize away a bad experiment than if you say “no, period”. It’s easy to say in hindsight “it was banned by an existing ethical code anyway”, but that’s not what they thought at the time. See http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/
This is nothing like courts
Illegal searches are not a generic statement about “courts”.
We don’t use evidence from illegal searches because although using the evidence would cause more benefit than harm in the particular case we want to use it in, using such evidence has the effect of encouraging illegal searches, not all of which are going to be in similar cases.
Likewise, you follow the ethics code because although breaking the code would cause more benefit than harm in the particular case you want to break it in, breaking the code has the effect of encouraging more breaking of the code, not all of which will be in similar cases.
and precommitments do nothing here but harm.
Precommitments benefit you here. You precommit to following the ethical code even if it is harmful in the specific case you care about, because being a person willing to follow such a code has the advantage of encouraging other people to follow the same code, and most of that will be beneficial, while being a person who is willing to make exceptions to the code won’t do that.
And I’ll repeat something I don’t think I emphasized enough: you can’t weaken ethics just for one experiment. Weakening ethics codes for experiments whose benefits outweigh the harm to others also weakens ethics for experiments of other types.
I have never argued that standards should only be weakened for one experiment. My argument here is for a wholesale, universal shift to a different standard.
First of all, it obviously was not banned already; Tuskegee did pass enough previously accepted norms of conduct for it to actually happen.
Tuskegee was a secret, as I’ve already said, so how did it pass norms of conduct? Secret scandals do not show anything about accepted norms of conduct, or rather, they show the opposite of what you want them to show: that they couldn’t get away with it publicly and had to keep it a secret. No one went to the newspapers at the start and said ‘we’re going to kill a bunch of blacks with syphilis’ and the newspapers printed that and everyone was ‘well ok that’s within accepted norms of conduct’ and then later changed their minds.
Second, modern ethics codes make a much better Schelling point
Let’s say that complicated arbitrary systems of ‘consent’ and ‘beneficence’ which cannot be defined clearly and lead predictably to many kinds of deeply suboptimal outcomes are, in fact, a Schelling point. So? A Schelling point is not a magic wand which justifies every sort of status quo; why should one think that the violations halted by the existence of a Schelling point outweigh instances like smoking, where the enforcement of medical ethics leads, by the most conservative estimates, to millions of excess deaths?
We don’t use evidence from illegal searches because although using the evidence would cause more benefit than harm in the particular case we want to use it in, using such evidence has the effect of encouraging illegal searches, not all of which are going to be in similar cases.
And illegal searches encourage overbearing tyrannies which themselves cause massive death and disutility; here’s an example of where the slippery slope bottoms out at something bad. But where is the equivalent bad bottom for a principle like randomized-trials-as-the-default?
breaking the code has the effect of encouraging more breaking of the code, not all of which will be in similar cases.
What are these huge, well-established, overriding threats? Who is the Stalin or Mao of randomized trials?