I am having trouble visualizing it. Could you tell a story that is a use case?
Max L.
Let’s use Beeminder as an example. When I emailed Daniel he said this: “we’ve talked with the CFAR founders in the past about setting up RCTs for measuring the effectiveness of beeminder itself and would love to have that see the light of day”.
Which is a little open ended, so I’m going to arbitrarily decide that we’ll study Beeminder for weight loss effectiveness.
Story as follows:
Daniel goes to (our thing).com and registers a new study. He agrees to the terms, and tells us that this is a study which can impact health—meaning that mandatory safety questions will be required. Once the trial is registered it is viewable publicly as “initiated”.
He then takes whatever steps we decide on to locate participants. Those participants are randomly assigned to two groups: (1) act normal, and (2) use Beeminder to track exercise and food intake. Every day the participants are sent a text message with a URL where they can log that day’s data. They do so.
After two weeks, the study completes and both Daniel and the world are greeted with the results. Daniel can now update Beeminder.com to say that Beeminder users lost XY pounds more than the control group… and when a rationalist sees such claims they can actually believe them.
Note that this story isn’t set in stone—it’s just a sketch to aid discussion.
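For concreteness, the random-assignment step in this story could be as simple as a seeded shuffle-and-split. This is only a sketch; `assign_groups` and the group names are illustrative, not part of any existing system.

```python
import random

def assign_groups(participant_ids, seed=None):
    """Split participants into two equal-sized arms at random.

    Shuffling and then splitting gives balanced group sizes, which is
    preferable to flipping an independent coin per participant. The
    seed can be published so the allocation is auditable.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"control": ids[:half], "beeminder": ids[half:]}

groups = assign_groups(range(100), seed=2024)
```

Publishing the seed alongside the registered participant list would let anyone re-derive and verify the allocation.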
These kinds of studies suffer from the Hawthorne effect. It is better to assign the control group to do virtually anything instead of nothing. In this case I’d suggest having them simply monitor their exercise and food intake without any magical line and/or punishment.
Thank you. I had forgotten about that.
So let’s say the two groups were, as you suggest: (1) tracking food & exercise on Beeminder, and (2) tracking food & exercise in a journal.
Do you have any thoughts on what questions we should be asking about this product? Somehow the data collection and analysis once we have the timeseries data doesn’t seem so hard… but the protocol and question design seems very difficult to me.
I wonder if there should be a group where they still get Beeminder’s graph, but they don’t pay anything for going off their road. (In order to test whether the pledge system is actually necessary.)
Yes, it should be a task that has a comparable amount of effort behind it.
Thanks for the example. It leads me to questions:
1 - For more complicated propositions, who does the math and statistics? The application apparently gathers the data, but it is still subject to interpretation.
2 - Is the data (presumably anonymized) made publicly available, so that others can dispute the meaning?
3 - If the sponsoring company does its own math and stats, must it publicly post its working papers before making claims based on the data? Does anyone review that to make sure it passes some light smell test, and isn’t just pictures of cats?
4 - What action does the organization behind the app take if a sponsor publicly misrepresents the data or, more likely, its meaning? If the organization would take action, does it take the same action if the statement is merely misleading, rather than factually incorrect?
5 - What do the participants get? Is that simply up to the sponsor? If so, who reviews it to assure that the incentive does not distort the data? If no one, will you at least require that the incentive be reported as part of the trial?
6 - Does a sponsor have any recourse if it designed the trial badly, leading to misleading results? Or is its remedy really to design a better trial and publicize that one?
7 - Can sponsors do a private mini-trial to test its trial design before going full bore (presumably, with their promise not to publicize the results)?
8 - Have you considered some form of reputation system, allowing commenters to build a reputation for debunking badly supported claims and affirming well-supported claims? (Or perhaps some other goodie?) I can imagine it becoming a pastime for grad students, which would be a Good Thing (TM).
I imagine these might all be very basic questions that arise out of my ignorance of such studies. If so, please spend your time on people with more to contribute than ignorance!
Max L.
This is an awesome idea. I had not considered this until you posted it. This sounds great.
This is a hard one. I anticipate that at least initially only Good People will be using this protocol. These are people who spent a lot of time creating something to (hopefully) make the world better. Not cool to screw them if they make a mistake, or if v1 isn’t as awesome as anticipated.
A related question is: what can we do to help a company that has demonstrated its effectiveness?
This is exactly the moral hazard companies face with the normal procedure too.
The main advantage I see is that the webapp approach is much cheaper, allowing companies to do it early and thus reducing the moral hazard.
At minimum the code used should be posted publicly and open-source licensed (otherwise there can be no scrutiny or replication). I also think paying to have a third party review the code isn’t unreasonable.
That was the initial plan, yes! Beltran (my co-founder at GB) is worried that will result in either HIPAA issues or something like this, so I’m ultimately unsure. Putting structures in place so the science is right the first time seems better.
The privacy issue here is interesting.
It makes sense to guarantee anonymity. Participants recruited personally by company founders may be otherwise unwilling to report honestly (for example). For health related studies, privacy is an issue for insurance reasons, etc.
However, for follow-up studies, it seems important to keep earlier records including personally identifiable information so as to prevent repeatedly sampling from the same population.
That would imply that your organization/system needs to have a data management system for securely storing the personal data while making it available in an anonymized form.
However, there are privacy risks associated with ‘anonymized’ data as well, since this data can sometimes be linked with other data sources to make inferences about participants. (For example, if participants provide a zip code and certain demographic information, that may be enough to narrow it down to a very few people.) You may want to consider differential privacy solutions or other kinds of data perturbation.
http://en.wikipedia.org/wiki/Differential_privacy
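As a toy illustration of the differential-privacy idea (not a vetted implementation; the bounds and epsilon below are made-up values), an aggregate such as mean weight change can be released with calibrated Laplace noise:

```python
import math
import random

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Release an epsilon-differentially-private mean of bounded values.

    Each value is clipped to [lower, upper], so the mean's sensitivity
    is (upper - lower) / n; adding Laplace noise with scale
    sensitivity / epsilon then gives the epsilon-DP guarantee.
    This is a teaching sketch, not production privacy code.
    """
    rng = rng or random.Random()
    clipped = [min(max(v, lower), upper) for v in values]
    n = len(clipped)
    true_mean = sum(clipped) / n
    scale = (upper - lower) / (n * epsilon)
    # Sample Laplace(0, scale) by inverting its CDF.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_mean + noise
```

With a few hundred participants the noise is small relative to the effect sizes one would care about, while any single participant’s record has only a bounded influence on the published number.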
I hadn’t. I like the idea, but am less able to visualize it than the rest of this stuff. Grad students cleaning up marketing claims does indeed sound like a Good Thing...
I was thinking something like the karma score here. People could comment on the data and the math that leads to the conclusions, and debunk the ones that are misleading. A problem would be that, if you allow endorsers rather than just debunkers, you could get into a situation where a sponsor pays people to publicly accept the conclusions. Here are my thoughts on how to avoid this.
First, we have to simplify the issue down to a binary question: does the data fairly support the conclusion that the sponsor claims? Then the process would run in stages:
(1) The sponsor offers $x to each of the first Y reviewers with a reputation score of at least Z. The sponsor has to pay regardless of what the reviewer’s answer to the question is.
(2) If the reviewers are unanimous, they all get small bumps to their reputation.
(3) If they are not unanimous, they see each others’ reviews (anonymously and non-publicly at this point) and can change their positions one time. Afterward, those who are in the final majority and did not change their position get a bump up in reputation, based on the number of reviewers who switched to be in the final majority (i.e., we reward reviewers who persuade others to change their position).
(4) The reviews are then opened to a broader pool of people with positive reputations, who can simply vote yes or no, which again affects the reputations of the reviewers. Voting is private until complete; then people who vote with the majority get small reputation bumps.
(5) At the conclusion of the process, everyone’s work is made public.
I’m sure that there are people who have thought about reputation systems more than I have. But I have mostly seen reputation systems as a mechanism for creating a community where certain standards are upheld in the absence of monetary incentives. A reputation system that is robust against gaming seems difficult.
Max L.
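The two-round scoring rule described above might be pinned down like this. The function name and the integer point scale are invented for illustration, and the later broad-vote stage is omitted:

```python
def score_review_round(first_votes, final_votes, reputation, bump=1):
    """Update reviewer reputations after one claim review.

    first_votes / final_votes map reviewer name -> bool answer to
    "does the data fairly support the sponsor's claim?".
    """
    if len(set(first_votes.values())) == 1:
        # Unanimous on the first pass: everyone gets a small bump.
        for reviewer in first_votes:
            reputation[reviewer] = reputation.get(reviewer, 0) + bump
        return reputation
    # Otherwise the post-revision majority decides the outcome.
    yes_count = sum(final_votes.values())
    majority = yes_count > len(final_votes) / 2
    switched_in = sum(
        1 for r in final_votes
        if final_votes[r] == majority and first_votes[r] != majority
    )
    for reviewer in final_votes:
        if final_votes[reviewer] == majority and first_votes[reviewer] == majority:
            # Reward holders of the majority view in proportion to how
            # many colleagues they persuaded to switch.
            reputation[reviewer] = reputation.get(reviewer, 0) + bump * switched_in
    return reputation
```

For example, if reviewers a and b vote yes, c initially votes no and then switches, a and b each gain one point and c gains nothing, matching the "reward persuasion, not switching" rule.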
I’m very glad I asked for more clarification. I’m going to call this system The Reviewer’s Dilemma; it’s a very interesting solution for allowing non-software analysis to occur in a trusted manner. I am somewhat worried about a laziness bias (it’s much easier to agree than to disprove), but I imagine that if there is a similar bounty for overturning previous results, this might be handled.
I’ll do a little customer development with some friends, but the possibility of reviewers being added as co-authors might also act as a nice incentive (both to reduce laziness and as additional compensation).
We need to design rules governing participant compensation.
At a minimum I think all compensation should be reported (it’s part of what’s needed for replication), and of course not related to the results a participant reports. Ideally we create a couple defined protocols for locating participants, and people largely choose to go with a known good solution.
StackOverflow et al. are also free and offer no compensation except points, awards, and reputation. Maybe the approaches can be combined: points for regular participation, prominent mention somewhere, and awards that are real rewards. The downside is that this may pose moral hazards of some kind.
Oh, interesting.
I had been assuming that participants needed to be drawn from the general population. If we don’t think there’s too much hazard there, I agree a points system would work. Some portion of the population would likely just enjoy the idea of receiving free product to test.
I would worry about sampling bias due to selection based on, say, enjoying points.
For studies in which people have to actively involve themselves and consent to participate, I believe there is always going to be some sampling bias. At best we can make it really small; at worst, we should state clearly what we believe those biases in our population are.
At worst, we will have a better understanding of what goes into the results.
Also, for some studies, the sampled population might, by necessity, be a subset of the population.
I imagined actions similar to those the Free Software Foundation takes when a company violates the GPL: basically a lawsuit and a press release warning people. For template studies, ideally the claims that can be made would be specified by the template (e.g., “Our users lost XY more pounds over Z time”).
One option is simply to report it to the Federal Trade Commission for investigation, along with a negative publicity statement. That externalizes the cost.
If you would like assistance drafting the agreements, I am a lawyer and would be happy to help. I have deep knowledge about technology businesses, intellectual property licensing, and contracting, mid-level knowledge about data privacy, light knowledge about HIPAA, and no knowledge about medical testing or these types of protocols. I’m also more than fully employed, so you’d have the constraint of taking the time I could afford to donate.
Max L.
The FTC is so much better than a lawsuit. I don’t know a single advertiser that isn’t afraid of the FTC. It looks like enforcement is tied to complaint numbers, so the press release should include information about how to personally complain (and go out to a mailing list as well).
I would love assistance with the agreements. It sounds like you would be more suited to the Business <> Non-Profit agreements than the Participant <> Business agreements. How do I maximize the value of your contribution? Are you more suited to the high-level term sheet, or the final wording?
This problem can be reduced in size by having the webapp give out blinded data, and only reveal group names after the analysis has been publicly committed to. If participating companies are unhappy with the existing modules, they could perhaps hire “statistical consultants” to add a module, permanently improving the site for everyone.
This could be related to your #8 as well :)
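One lightweight way to make “the analysis has been publicly committed to” checkable is a hash commitment: the webapp publishes a digest of the analysis plan before group labels are revealed, and the plan itself afterward. A sketch (the plan text is an invented example):

```python
import hashlib

def commit(analysis_plan: str) -> str:
    """Digest to publish before unblinding."""
    return hashlib.sha256(analysis_plan.encode("utf-8")).hexdigest()

def verify(analysis_plan: str, published_digest: str) -> bool:
    """Anyone can check the revealed plan against the earlier commitment."""
    return commit(analysis_plan) == published_digest

plan = "Primary outcome: mean weight change at 14 days; two-sided test; alpha = 0.05"
digest = commit(plan)
```

A plan edited after unblinding no longer matches the published digest, so post-hoc changes are detectable; the registry would still need to require a single commitment per trial so a sponsor cannot commit to several plans and reveal the flattering one.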
I think I get your meaning: the webapp itself would carry out the testing protocol. I was thinking that it would be designed by the sponsor using standardized components, but you are saying it would be more rigid than that. This would allow much more certainty in the meaning of the result. Your example of “using X resulted in average weight loss of Y compared to a control group” would be a case that could be standardized, where “average weight loss” is a configurable data element.
Max L.
Yes. I think if we can manage it, requiring data-analysis to be pre-declared is just better. I don’t think science as a whole can do this, because not all data is as cheap to produce as product testing data.
Now that I’ve heard your reply to question #8, I need to consider this again. Perhaps we could have some basic claims done by software, while allowing for additional claims such as “those over 50 show twice the results” to be verified by grad students. I will think about this.
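As an example of what a standardized, pre-declared analysis module might compute for the weight-loss template, here is a difference-in-means estimate with a two-sided permutation p-value. The function name and defaults are illustrative; declaring the module (including the seed) before unblinding is what removes the analyst’s degrees of freedom:

```python
import random
import statistics

def mean_difference_test(control, treatment, n_permutations=10_000, seed=0):
    """Return (observed difference in means, two-sided permutation p-value).

    The p-value is the fraction of random relabelings of the pooled data
    whose absolute mean difference is at least as large as the observed
    one (with the standard +1 correction so p is never exactly zero).
    """
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = list(control) + list(treatment)
    n_treat = len(treatment)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = (statistics.mean(pooled[:n_treat])
                - statistics.mean(pooled[n_treat:]))
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, (extreme + 1) / (n_permutations + 1)
```

A template claim like “users lost XY more pounds than the control group” would then be permitted only when the pre-declared module reports a significant difference.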
Thank you! This is exactly the kind of discussion I was hoping for.
The general answer to your questions is: I want to build whatever LessWrong wants me to build. If it’s debated in the open, and agreed as the least-worst option, that’s the plan.
I’ll post answers to each question in a separate thread, since they raise a lot of questions I was hoping for feedback on.
Even if the group assignments are random, the prior step of participant sampling could lead to distorted effects. For example, the participants could be just the friends of the person who created the study who are willing to shill for it.
The studies would be more robust if your organization took on the responsibility of sampling itself. There is non-trivial scientific literature on the benefits and problems of using, for example, Mechanical Turk and Facebook ads for this kind of work. There is extra value added for the user/client here, which is that the participant sampling becomes a form of advertising.
Yeah, this is a brutal point. I wish I knew a good answer here.
Is there a gold standard approach? Last I checked even the state of the art wasn’t particularly good.
Facebook / Google / StumbleUpon ads sound promising in that they can be trivially automated, and if only ad respondents could sign up for the study, then the friend issue is moot. Facebook is the most interesting of those, because of the demographic control it gives.
How bad is the bias? I performed a couple of Google Scholar searches but didn’t find anything satisfying.
To make things more complicated, some companies will want to test highly targeted populations. For example, Apptimize is only suitable for mobile app developers—and I don’t see a Facebook campaign working out very well for locating such people.
A tentative solution might be having the company wishing to perform the test supply a list of websites it feels cater to good participants. This is even worse than Facebook ads from a biasing perspective, though. At minimum, it sounds like prominently disclosing how participants were located will be important.
There are people in my department who do work in this area. I can reach out and ask them.
I think Mechanical Turk gets used a lot for survey experiments because it has a built-in compensation mechanism and there are ways to ask questions in ways that filter people into precisely what you want.
I wouldn’t dismiss Facebook ads so quickly. I bet there is a way to target mobile app developers on that.
My hunch is that like survey questions, sampling methods are going to need to be tuned case-by-case and patterns extracted inductively from that. Good social scientific experiment design is very hard. Standardizing it is a noble but difficult task.
I sincerely hope that study plan would not pass muster. Doesn’t there need to be a more reasonable placebo?
In general, who will review proposed studies for things like suitable placebo decisions?
Can you provide an example of what you’d like to see pass muster?
Roughly speaking: “Act normal” vs “Use Beeminder” vs “some alternative intervention”. Basically, I expect to see “do something different” produce results, at least for a little while, for almost any value of “something different”. Literally anything at all that didn’t make it clear to the placebo group that they were the placebo group. Maybe some non-Beeminder exercise and intake tracking. Maybe a prescribed simple exercise routine + non-Beeminder tracking.
I’m glad you’re here. My background is in backend web software, and stats once the data has been collected. I read “Measuring the Weight of Smoke” in college, but that’s not really a sufficient background to design the general protocol. That’s a lot of my motivation behind posting this to LW—there seem to be protocol experts here, with great critiques of the existing ones.
My hope is we can create a “getting started testing” document that gets honest companies on the right track. Searching around the web I’m finding things like this rather than serious guides to proper placebo creation.
I’m hoping for either registered statistical consultants or grad students. Hopefully this can be streamlined by a good introductory guide.