I expect prosocial projects to still be launched primarily for prosocial reasons, and funding to be a way of enabling them to happen and publicly allocating credit. People who are only optimizing for money and don’t care about externalities have better ways available to pursue their goals, and I don’t expect that to change.
It seems that according to your model, it’s useful to classify (some) humans as either:
(1) humans who are only optimizing for money, power and status; and don’t care about externalities.
(2) humans who are working on prosocial projects primarily for prosocial reasons.
If your model is true, how come the genes that cause humans to be type (1) did not completely displace the genes that cause humans to be type (2) throughout human evolution?
According to my model (without claiming originality): Humans generally tend to have prosocial motivations, and people who work on projects that appear prosocial tend to believe they are doing it for prosocial reasons. But usually, their decisions are aligned with maximizing money/power/status (while believing that their decisions are purely due to prosocial motives).
Also, according to my model, it is often very hard to judge whether a given intervention for mitigating x-risks is net-positive or net-negative (due to an abundance of crucial considerations). So subconscious optimizations for money/power/status can easily end up being extremely harmful.
If you describe the problem as “this encourages swinging for the fences and ignoring negative impact”, impact shares suffer from it much less than many parts of effective altruism. Probably below average. Impact shares at least have some quantification and a feedback loop, which is more than I can say for the constant discussion of long tails, hits-based giving, and scalability.
But a feedback signal can be net-negative if it creates bad incentives (e.g. an incentive to treat an extremely harmful outcome that a project could end up causing as if that potential outcome were neutral).
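To make that incentive concern concrete, here is a minimal sketch (all numbers are hypothetical, and the function names are my own illustration rather than anything from the thread) of how pricing impact shares at no less than zero can make a project that is net-negative in expectation still look attractive to prospective funders:

```python
# Illustrative sketch with made-up numbers: a naive retroactive funder that
# only rewards realized positive impact effectively prices harmful outcomes
# as if they were neutral.

def expected_true_value(outcomes):
    """Expected value of a project, counting harms as negative."""
    return sum(p * v for p, v in outcomes)

def expected_market_value(outcomes):
    """Expected payout if impact shares can never be worth less than zero,
    i.e. harmful outcomes are treated as if they were neutral."""
    return sum(p * max(v, 0) for p, v in outcomes)

# A risky project: 10% chance of large benefit, 20% chance of large harm.
risky_project = [(0.10, 1000), (0.20, -1000), (0.70, 0)]

print(expected_true_value(risky_project))    # -100.0 -> net-negative in expectation
print(expected_market_value(risky_project))  #  100.0 -> still rewarded by the market
```

Under these hypothetical numbers, the project destroys 100 units of value in expectation, yet a market that only pays for realized positive impact prices its shares at 100 in expectation, so profit-seeking investors would still back it.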
That model is a straw man: talking in dichotomies and sharp cut-offs is easier than talking in spectrums and margins, but I would hope the latter would be assumed by default.
But focusing strictly on the margin: so much of EA pushes people to think big: biggest impact, fastest scaling, etc. It also encourages people to be terrified of doing anything, but not in ways that balance out; it just makes people stressed and worse at thinking. I 100% agree with you that this pushes people to ignore the costs and risks of their projects, and that this is bad.
Relative to that baseline, I think retroactive funding is at most a drop in the bucket, and impact shares potentially an improvement because the quantification gives people more traction to raise specific objections.
The same systems also encourage people to overestimate their project’s impact and ignore downsides. No one wants to admit their project failed, much less did harm. It will hurt their chances of future grants and jobs and reduce their status. Impact shares at least give a hook for a quantified outside assessment, instead of the only form of post-project feedback being public criticism that is costly and scary to give.
(Yes, this only works if the quantification and objections are sufficiently good, but “sufficiently good” only means “better than the counterfactual”. “The feedback could be bad, though” applies to everything.)
This post on EAForum outlines a long history of CEA launching projects and then just dropping them, without evaluation. Impact shares that remain valueless are an easy way to build common knowledge of the lack of proof of value, especially compared to someone writing a post that obviously took tens if not hundreds of hours to write and had to be published anonymously due to fear of repercussions.
I’m interested in hearing what you think the counterfactuals to impact shares/retroactive funding in general are, and why they are better.
The alternative to launching an impact market is to not launch an impact market. Consider the set of interventions that get funded if and only if an impact market is launched. Those are interventions that no classical EA funder decides to fund in a world without impact markets, so they seem unusually likely to be net-negative. Should we move EA funding towards those interventions, just because there’s a chance that they’ll end up being extremely beneficial? (That is the expected result of launching a naive impact market.)
Ah. You have much more confidence in other funding mechanisms than I do.
Doesn’t seem like we’re making progress on this, so I will stop here.