endorsing getting into bed with companies on-track to make billions of dollars profiting from risking the extinction of humanity in order to nudge them a bit
Not OP, but I take the claim to be “endorsing getting into bed with companies on-track to make billions of dollars profiting from risking the extinction of humanity in order to nudge them a bit, is in retrospect an obviously doomed strategy, and yet many self-identified effective altruists trusted their leadership to have secret good reasons for doing so and followed them in supporting the companies (e.g. working there for years, including in capabilities roles, and helping advertise the companies’ jobs). Now that a new consensus is forming that it indeed was obviously a bad strategy, it is also time to evaluate the leadership’s decision as having been bad at the time it was made, and to impose costs on them accordingly, including loss of respect and power”.
So no, not disincentivizing making positive EV bets, but updating about the quality of decision-making that has happened in the past.
I think there’s a decent case that such updating will indeed disincentivize making positive EV bets (in some cases, at least).
In principle we’d want to update on the quality of all past decision-making. That would include both [made an explicit bet by taking some action] and [made an implicit bet through inaction]. With such an approach, decision-makers could be punished/rewarded with the symmetry required to avoid undesirable incentives (mostly).
Even here it’s hard, since there’d always need to be a [gain more influence] mechanism to balance the possibility of losing your influence.
In practice, most of the implicit bets made through inaction go unnoticed—even where they’re high-stakes (arguably especially when they’re high-stakes: most counterfactual value lies in the actions that won’t get done by someone else; you won’t be punished for being late to the party when the party never happens).
That leaves the explicit bets. To look like a good decision-maker the incentive is then to make low-variance explicit positive EV bets, and rely on the fact that most of the high-variance, high-EV opportunities you’re not taking will go unnoticed.
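To make the asymmetry concrete, here's a toy simulation (entirely made-up payoffs, just a sketch of the incentive, not a model of any actual grant). It compares a "safe" low-variance positive-EV bet with a "swing" bet that has much higher EV but usually looks like a failure after the fact:

```python
import random

random.seed(0)

# Hypothetical numbers, purely to illustrate the incentive described above.
SAFE  = dict(p_win=0.90, win=1.0,  lose=-0.5)   # EV = +0.85
SWING = dict(p_win=0.10, win=50.0, lose=-1.0)   # EV = +4.10

def simulate(bet, n=100_000):
    """Empirical EV of a bet, plus how often it visibly 'fails' (outcome < 0)."""
    outcomes = [bet["win"] if random.random() < bet["p_win"] else bet["lose"]
                for _ in range(n)]
    ev = sum(outcomes) / n
    looks_bad = sum(o < 0 for o in outcomes) / n
    return ev, looks_bad

for name, bet in [("safe", SAFE), ("swing", SWING)]:
    ev, looks_bad = simulate(bet)
    print(f"{name}: empirical EV ~ {ev:+.2f}, chance of visibly failing ~ {looks_bad:.0%}")

# If decision-makers are judged mainly on whether their explicit bets visibly
# failed -- and the opportunities they *didn't* take are never counted -- the
# safe bet maximizes reputation while the swing bet maximizes EV.
```

With these (made-up) numbers the swing bet has roughly five times the EV, but the reputational scoring favors the safe bet nine times out of ten.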
From my by-no-means-fully-informed perspective, the failure mode at OpenPhil in recent years seems not to be [too many explicit bets that don’t turn out well], but rather [too many failures to make unclear bets, so that most EV is left on the table]. I don’t see support for hits-based research. I don’t see serious attempts to shape the incentive landscape to encourage sufficient exploration. It’s not clear that things are structurally set up so anyone at OP has time to do such things well (my impression is that they don’t have time, and that thinking about such things is no-one’s job (?? am I wrong ??)).
It’s not obvious to me whether the OpenAI grant was a bad idea ex-ante (though it’s probably not something I’d have done).
However, I think that another incentive towards middle-of-the-road, risk-averse grant-making is the last thing OP needs.
That said, I suppose much of the downside might be mitigated by making a distinction between [you wasted a lot of money in ways you can’t legibly justify] and [you funded a process with (clear, ex-ante) high negative impact].
If anyone’s proposing punishing the latter, I’d want it made very clear that this doesn’t imply punishing the former. I expect that the best policies do involve wasting a bunch of money in ways that can’t be legibly justified on the individual-funding-decision level.
I interpreted the comment as being more general than this. (As in, if someone does something that works out very badly, they should be forced to resign.)
Upon rereading the comment, it reads as less generic than my original interpretation. I’m not sure if I just misread the comment or if it was edited. (Would be nice to see the original version if actually edited.)
(Edit: Also, you shouldn’t interpret my comment as an endorsement of or agreement with the rest of the content of Ben’s comment.)
Wasn’t edited, based on my memory.
Wasn’t OpenAI a nonprofit at the time?