Good thing I’m not particularly interested in being a charity; I’m interested in building a tool for funders and fundraisers that maximizes their impact against poverty. So bring on the criticism.
That said, I think you’re being excessively negative towards surveys in general. There are two primary reasons why surveys are (seemingly) the only way to go about getting decent data, and why they’re not nearly as unreliable as you’re suggesting.
I can’t think of another way to go about collecting more accurate & less biased information. The only way to know 100% exactly what even one person is spending their money on and doing every day is to have a researcher follow them around, watch exactly what they do, and record every change in their life over 6 months. Doing that would likely result in way bigger behavior changes than any survey ever would. It would also be expensive and extraordinarily undignified. Can you think of a single way to actually get that detailed factual information that would not cause far larger behavior changes or be horrifically undignified (like a tiny bug drone that flies around watching people, inviting lawsuits)?
While people may feel an obligation to give good answers about drug use, for example, funding has never been contingent on the answers to those questions. Why would anyone go out of their way to lie about easily checkable questions like: Where are you living? Do you have a job? What is your net worth? People have no incentive to misstate their answers to those questions.
It’s also important to note that it is not just surveys. The New Leaf Project study, and much of GiveDirectly’s research, includes in-person interviews, interviews that often take place in the homes that participants have attained because of the program. I’ve never seen a single example of a random in-person check-in discovering that a participant is actually still homeless, having lied about attaining a stable place to live.
With regard to the database: it covers all of the research out there, across all time, with anything to do with cash transfers. I cannot say that guaranteed income is 100% proven against homelessness, because there are only a handful of studies on that. Similarly, you should know better than to pick out two random studies. We should both be relying on the big studies that look across tons of other studies, like the one this graph comes from:
[Graph adapted from Bastagli, Hagen-Zanker, et al. (2016)]
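If it helps make that concrete, here is a toy sketch of why pooling matters. The effect sizes and standard errors below are made up for illustration and are not taken from Bastagli et al.; the point is only the mechanics: a fixed-effect meta-analysis takes an inverse-variance weighted average of many studies, and the pooled estimate ends up more precise than any one or two studies picked out by hand.

```python
# Toy fixed-effect meta-analysis: inverse-variance weighted pooling.
# Every number below is made up purely for illustration.
import math

# (effect_size, standard_error) for a handful of hypothetical studies
studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.10), (-0.05, 0.25), (0.18, 0.12)]

weights = [1 / se ** 2 for _, se in studies]    # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))         # standard error of the pooled estimate

print(f"pooled effect = {pooled:.3f}, 95% CI = +/- {1.96 * pooled_se:.3f}")
# Each individual study above has a wider confidence interval than the pooled
# estimate, which is why cherry-picking one or two studies is so noisy.
```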
In conclusion
It seems to me that you have staked out a completely unfalsifiable position: that survey respondents will systematically misrepresent their answers. This isn’t an argument about guaranteed income (correct me if I’m misunderstanding, ofc); it seems to suggest that all surveys, especially those where participants have been helped in any way, are fundamentally unreliable because participants are inherently untrustworthy. And any study set up to test this claim would, by the same logic, inevitably cause greater misrepresentation than the counterfactual.
Finally, and I actually very much enjoy the discourse, insight can be gained by approaching this from a rational angle as well as a research-based one. What would you do if given a guaranteed income ($1,000 a month for a year)? Would you randomly start reporting demonstrable falsehoods to researchers on surveys even if there were no reason to? If not, why assume that many (if not most) other participants would?
You’re the one with the novel hypothesis here. The burden of proof is on you. And I am rightfully suspicious of survey results and believe survey-only studies are of almost no use. Why?
Because self-report is fundamentally unreliable. Yes, I am absolutely saying that people who receive money from you are incentivized to lie to you out of a sense of social obligation. Yes, I am saying that this means any study where you provide help and then rely on self-report as the only form of feedback is borderline useless and will convince few people. And no, it’s not an unfalsifiable idea—objective metrics can be used as a comparison. If it is impossible to collect such data ethically, then you should stop doing your studies immediately. If your best piece of evidence in support of your ideas is just “think about it logically”, I would highly recommend that EA charities never fund this project.
While people may feel an obligation to give good answers about drug use, for example, funding has never been contingent on the answers to those questions. Why would anyone go out of their way to lie about easily checkable questions like: Where are you living? Do you have a job? What is your net worth? People have no incentive to misstate their answers to those questions.
To avoid embarrassment and awkwardness. To make the researchers feel better. This is a well-known problem in psychology called self-report bias, which is why good psychological studies never rely on self-report alone. For verification, you can also interview family members and employers, check arrest records, etc. If there is no way for you to do this ethically, then what do you think your studies are achieving?
I’ve never seen a single example of a random in-person check-in discovering that a participant is actually still homeless, having lied about attaining a stable place to live.
You ask someone if it’s OK for you to drop by at their place sometime, and for their address. This is verifiable information, and they aren’t likely to lie to you. If you ask them about things they know you can’t check, they are much more likely to tell a socially desirable lie. They know you can’t check how much money they’re spending on drugs, or whether they’re employed. People even lie to themselves about how much they exercise, all the time.

I’m not sure what your position here is. That people don’t make prosocial lies out of a sense of obligation? People lie about how good each others’ cooking is to avoid hurt feelings. Most explanations of self-report bias start with the most important one: subjects may give the more socially acceptable answer rather than being truthful. This needs to be addressed. If you were ever wondering why people aren’t donating more money to direct cash transfers, this is reason #1.
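To put a rough number on how much this can move a headline result, here is a toy simulation. The true rate and the misreporting rate below are hypothetical, not taken from any of the studies we’ve been discussing; they only show the mechanism.

```python
# Toy model: socially desirable responding inflating a self-reported outcome.
# All rates are hypothetical, chosen only to show the mechanism.
import random

random.seed(0)
N = 10_000
true_good_rate = 0.60   # true share of participants with the "good" outcome
lie_rate = 0.30         # share of the rest who report the "good" answer anyway

reported_good = 0
for _ in range(N):
    truly_good = random.random() < true_good_rate
    if truly_good or random.random() < lie_rate:   # honest answer, or a face-saving one
        reported_good += 1

print(f"true rate: {true_good_rate:.0%}, survey-measured rate: {reported_good / N:.0%}")
# With these made-up inputs the survey overstates the outcome by roughly 12 points.
```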
There is also the problem of reporting bias and problems with subject follow-up. The people who went to jail/hospital/mental hospital/morgue are a lot less likely to be reporting back. Same with the ones who deteriorate mentally. You need to be accounting for all these problems. In order to get people to take direct cash transfers seriously, you need to do a head-to-head comparison with traditional interventions (housing, food stamps), and with objective measurements (days of school attended, arrest record). I don’t recall this being addressed in the studies I’ve examined, but I haven’t been able to go through many of them.
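And a similar toy sketch for the follow-up problem: even if nobody lies, losing track of the people doing worst inflates the measured success rate. Again, every number below is invented purely to show the mechanism.

```python
# Toy model: differential attrition (loss to follow-up) biasing a follow-up survey
# even when every answer is honest. All numbers are invented for illustration.
import random

random.seed(1)
N = 10_000
true_housed_rate = 0.50     # true share stably housed at follow-up
respond_if_housed = 0.90    # housed participants usually answer the follow-up
respond_if_not = 0.50       # those doing badly (jail, hospital, street) often don't

housed_responses = 0
total_responses = 0
for _ in range(N):
    housed = random.random() < true_housed_rate
    if random.random() < (respond_if_housed if housed else respond_if_not):
        total_responses += 1
        housed_responses += housed          # True counts as 1

measured = housed_responses / total_responses
print(f"true housed rate: {true_housed_rate:.0%}, rate among respondents: {measured:.0%}")
# Expected value: 0.9*0.5 / (0.9*0.5 + 0.5*0.5) is about 64%, an inflation that comes
# purely from who answered, not from anyone lying.
```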
I can’t think of another way to go about collecting more accurate & less biased information.
This is lethal. EA is fundamentally about evidence-based charity. You’re telling us that you can’t get more objective data, and that the best we can get is self-report survey data. And then you ask for money anyway.
I’m not going to pretend to be OK with this. This is not OK. This is upsetting. This is using bad statistics to get charity money. I’m not going to go so far as to suggest you’re doing this intentionally, but you NEED A STATISTICIAN. A good one. Maybe it’s true that you can’t afford one, but you also can’t afford to not have one.
Here’s a review paper of how alcoholism recovery studies inflate their success rates. I notice a disturbing overlap with your practices.
We need to take a bit of a step back here. I am just as keen as you are to get good unbiased data that can be relied upon to know precisely how impactful cash transfers—and other interventions—are at helping people. And I’d like to acknowledge that you’re correct that we should not rely solely on unchecked survey data when trying to figure out impact results.
So I did a bit of a deep dive into meta-analyses and systematic reviews of cash transfer studies. It turns out that while surveys are generally one part of the data collected, researchers have also been able to use objective measures that participants can’t lie about (such as government income records and school attendance rates, among others). So I figure you might find it useful to check out the systematic reviews, and the organizations with tons of statisticians who have likely accounted for the very real problems you’ve pointed out with surveys.
https://odi.org/en/publications/cash-transfers-what-does-the-evidence-say-a-rigorous-review-of-impacts-and-the-role-of-design-and-implementation-features/
We all know GiveWell; they do extremely rigorous analyses of studies. Their focus is primarily on cash transfers in developing nations, but I think their point about non-health interventions is especially interesting when considering how effective health interventions are much harder to come by in developed nations. There are several research institutions entirely dedicated to sussing out the effectiveness of cash transfers. They have tons of statisticians, and I’m sure they actively account for the issues with surveys versus other data types.
https://as.nyu.edu/departments/cash-transfer-lab/faq.html
https://basicincome.stanford.edu/research/ubi-visualization/
https://www.penncgir.org/research
I would like to see more research comparing cash directly to other interventions; I think there is a surprisingly large number of current interventions that are either marginally helpful or actively harmful to beneficiaries. The most important thing in philanthropy is diverting as much funding as possible, at a global scale, to the interventions that we know are the most highly effective.
You’ve mentioned a few times that I need a statistician for what we’re doing, and I fully intend to go even further. When we have a large enough experiment to run, we will not only hire a statistician but also have unbiased external scientific institutions run the research alongside our program (such as the research labs linked above), so that we cannot manipulate their results. I am very interested in falsifiable studies that can either indicate we’re going in the right direction or prevent us from wasting a lot of time on something that isn’t impactful.
Finally, I’m well aware that the research on guaranteed income for homelessness is far smaller and much less rigorous than the research on cash transfers generally or globally. However, we’ve spoken to dozens of homeless aid workers as well as homeless people, and we’ve seen promising anecdotal results from all of the (imperfect) small- to medium-scale studies done so far. As long as we’re conducting good large-scale research and (good of you to point out) not relying entirely on surveys or other potentially biased metrics, I think doing larger-scale and higher-quality research is well worth EA dollars.
If the larger studies prove what we think we see happening on a small scale, we could solve the homelessness crisis with ~10% of the current amount we spend on the problem. If the large-scale studies show less impact, but are still in the same range as general cash transfer research, then guaranteed income would still be the most cost-effective way to help the homeless, just not 5-10X better than the other methods as it currently looks.
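For what it’s worth, the ~10% figure is just arithmetic once the inputs are pinned down. Here is the shape of the calculation: the $12,000 per year comes from the $1,000-a-month program discussed above, while the target population and current-spending inputs are deliberately left as placeholders, not claims about actual budgets, to be replaced with real data.

```python
# Back-of-envelope shape of the "~10% of current spending" claim. The per-person
# transfer matches the $1,000/month-for-one-year program discussed above; the other
# two inputs are placeholders, NOT real figures, and must be replaced with real data.
transfer_per_person = 1_000 * 12              # $12,000 per participant per year

people_to_reach = 500_000                     # placeholder target population
current_annual_spend = 60_000_000_000         # placeholder current spending, $ per year

program_cost = transfer_per_person * people_to_reach
print(f"program cost: ${program_cost:,}")
print(f"share of current spend: {program_cost / current_annual_spend:.0%}")
# With these placeholders the program would cost 10% of current spending; the real
# ratio depends entirely on the actual inputs, which is exactly what larger studies
# and a good statistician should pin down.
```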
Overall, better research is needed, and we fully intend to produce highly consequential studies once we have the funding to launch a statistically relevant program. If we could get an EA statistician to look over the current homelessness research, they would be far more qualified than you or I to know just how much trust we should be putting in the research done so far. Do you know any?