I never got the sense of this being settled science (of course, given how controversial the claim is, it would be hard for it to ever be settled for good), but even besides that, the question is: what does one do with that information?
Let’s put it in LW language: I think that a good anti-discrimination policy might indeed be “if you have to judge a human’s abilities in a given domain (e.g. for hiring), precommit to assuming a Bayesian prior of total ignorance about those abilities, regardless of what any exterior information might suggest to you, and only update on their demonstrated skills.” This essentially means that we shift the cognitive burden of updating onto the judge rather than the judged (who would otherwise have to fight a disadvantageous prior). It seems quite sensible IMO, as the judge usually has more resources to spare anyway. It centers human opportunity over maximal efficiency.
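To make that precommitment concrete, here is a minimal sketch assuming a toy Beta-Bernoulli model where each demonstrated work sample either succeeds or fails; the candidate names and numbers are purely illustrative, not anything from the discussion above.

```python
# Toy sketch: judge every candidate from the same ignorance prior and
# update only on their demonstrated performance, never on group membership.
# Beta(1, 1) is the uniform ("total ignorance") prior over success rate.

def ignorance_prior():
    return (1.0, 1.0)  # Beta(alpha, beta) with alpha = beta = 1

def update(prior, successes, failures):
    alpha, beta = prior
    return (alpha + successes, beta + failures)

def expected_ability(posterior):
    alpha, beta = posterior
    return alpha / (alpha + beta)

# Two hypothetical candidates from different groups: the judge precommits
# to ignoring any group-level statistics and scores only the work samples.
candidate_a = update(ignorance_prior(), successes=8, failures=2)
candidate_b = update(ignorance_prior(), successes=8, failures=2)

# Identical demonstrated skill yields identical posterior estimates (0.75),
# which is the point: the burden of updating sits with the judge.
print(expected_ability(candidate_a), expected_ability(candidate_b))
```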
Conversely, someone who suggests that “the economic logic behind dumping a load of toxic waste in the lowest-wage country is impeccable” seems to already think that economics is mainly about maximal efficiency, and that any concerns for human well-being are at best tacked on. This is not a good ideological fit for OpenAI’s mission! Unless you think the economy is and ought to be only human well-being’s bitch, so to speak, you have no business anywhere near building AGI.
I never got the sense of this being settled science (of course, given how controversial the claim is, it would be hard for it to ever be settled for good), but even besides that, the question is: what does one do with that information?
He did not present it as settled science but as one of three hypotheses for why women may have been underrepresented in tenured positions in science and engineering at top universities and research institutions. The key implication of the hypothesis being true would be that having quotas for a certain number of women in tenured positions is not meritocratic.
Conversely, someone who suggests that “the economic logic behind dumping a load of toxic waste in the lowest-wage country is impeccable” seems to already think that economics is mainly about maximal efficiency, and that any concerns for human well-being are at best tacked on.
His position seems to be that the sentence was ironic. The word “impeccable” usually does not appear in serious academic or policy writing. The memo seems to have been a response to a report that suggested that free trade would produce environmental benefits in developing nations. It was a way to make fun of a PR lie.
It’s actually related to what Zvi talked about as bullet biting. If you want to advocate the policies of the World Bank in 1991 on free trade, it makes sense to accept that this comes with negative environmental effects in some third-world countries.
Hm. I’d need to read the memo to form my own opinion on whether that holds. It could be a “Modest Proposal” thing but irony is also a fairly common excuse used to walk back the occasional stupid statement.
I’d need to read the memo to form my own opinion on whether that holds.
It seems generally bad form to criticize people for things without actually reading what they wrote.
Just reading a text without trying to understand the context in which the text exists is also not a good way to understand whether a person made a mistake.
I think what you wrote here is likely more morally problematic than what Summers did 30 years ago. Do you think that whenever someone thinks about your merits as a person decades from now, someone should bring up that you are a person who likes to criticize people for what they said without reading what they said?
While judgement can vary, I think this is about more than just judging a person morally. I don’t think what Summers said, even in the most uncharitable reading, should disqualify him from most jobs. I do think though that it might disqualify him, or at least make him a worse choice, for something like the OpenAI board, because that comes with ideological requirements.
EDIT: the best source I’ve found for the excerpt is https://en.m.wikipedia.org/wiki/Summers_memo. I think it’s nothing particularly surprising and it’s 30 years old, but rather than ironic, it sounds to me like he’s using this as an example of something that would look outrageous but is equivalent to other things we do that don’t look quite as bad due to different vibes. I don’t know that it disqualifies his character somehow; it’s way too scant evidence to decide either way, but I do think it updates me slightly towards him being the kind of economist I wouldn’t much like to see potentially in charge of AGI, and again, this is because the requirements are strict for me. If you treat AGI with the same hands-off approach we usually apply to normal economic matters, you almost assuredly get a terrible world.
That seems to be the publicly available excerpt. There’s the Harvard Magazine article I linked above that speaks about the context of that writing and how it’s part of a longer seven-page document.
Summers seems to have been heavily into deregulation three decades ago. More recently he seems to be supportive of minimum wage increases and higher taxes for the rich.
I do think though that it might disqualify him, or at least make him a worse choice, for something like the OpenAI board, because that comes with ideological requirements.
While I would prefer people who are ideologically committed to adding a lot of regulations for AI, it seems to me that part of what Sam Altman wanted was a board where the people who can clearly be counted on to vote that way don’t have a majority.
Larry Summers seems to be a smart, independent thinker whose votes are not easy to predict ahead of time, and that made him a good choice as a board candidate on which both sides could agree.
Having him on the board could also be useful for lobbying for the AI safety regulation that OpenAI wants.