Do you know the person who wrote that post? Or anyone else supposedly involved in the events it describes? I’m not sure I could adjudicate its claims for myself, given my remove from everyone supposedly involved.
I’m also still unsure how any of that, assuming it’s true, should be weighed against the ‘official’ work MIRI has done or is doing. Surely AI safety has to be balanced against those (unrelated) claims somehow, as terrible as they are (or might be), and as terrible as it is to think about ‘balancing’ these ‘costs’ and (potential) ‘benefits’.
Some of the claims in that post also aren’t obviously terrible to me, e.g. MIRI reaching a legal settlement with someone who ‘blackmailed’ them.
And if the person “Ziz” mentioned in the post is the same person I’m thinking of, I’m really confused as to what to think about the other claims, given the conflicting info about them I’ve read.
The post quotes a recollection of a conversation describing “a drama thing”, and all of this does seem very much like “a drama thing” (or several such ‘drama things’). It’s really hard to see how I, or anyone not involved, or even anyone who is or was involved, could determine with any confidence what’s actually true about whatever it is that (may have) happened.
I know a few people involved, and I trust that they’re not lying, especially given that some of my own experiences overlap. I lived in the Bay for a couple years, and saw how people acted, so I’m fairly confident that the main claims in the open letter are true.
I’ve written a bit myself about why the payout was so bad here, which the author of the open letter appears to reference.
MIRI wrote this paper: https://arxiv.org/abs/1710.05060 The paper is pretty clear that it’s bad decision theory to pay out to extortion. I agree with the paper’s reasoning, and independently came to a similar conclusion myself. MIRI paying out means MIRI isn’t willing to put their money where their mouth is, and the ability to actually follow through on what you believe is necessary when doing high-stakes work.
Like, a lot of MIRI’s research is built around this claim about decision theory. It’s fundamental to MIRI’s approach. If one buys that FDT is correct, then MIRI’s failure to consistently implement it here undermines one’s trust in them as an institution. They folded like wet cardboard. If one doesn’t buy FDT, or if one generally thinks paying out to extortionists isn’t a big deal, then it wouldn’t appear to be a big deal that they did. But a big part of the draw towards rationalist spaces and MIRI is that they claim to take ideas seriously. This behaviour indicates (to me) that they don’t, not where it counts.
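The decision-theoretic point here can be sketched as a toy expected-value comparison. To be clear, all probabilities and dollar amounts below are made-up illustrations of the general argument, not figures from the paper or from any real case:

```python
# Toy expected-value sketch of the "never pay out to extortion" argument.
# All numbers below are illustrative assumptions, not real figures.

def expected_loss(p_targeted, p_threat_executed, payout, damage, pays):
    """Expected loss for an agent whose policy is publicly known."""
    if pays:
        # A known payer is profitable to extort, so gets targeted often,
        # and each successful threat costs the full payout.
        return p_targeted * payout
    # A credible never-pay policy removes the incentive to threaten at all;
    # the residual risk is the (rarer) threat actually being carried out.
    return p_targeted * p_threat_executed * damage

# Assumed: a known payer attracts ten times the extortion attempts.
loss_if_pays = expected_loss(p_targeted=0.5, p_threat_executed=1.0,
                             payout=100_000, damage=0, pays=True)
loss_if_refuses = expected_loss(p_targeted=0.05, p_threat_executed=0.2,
                                payout=0, damage=100_000, pays=False)

print(loss_if_pays, loss_if_refuses)
```

Under these (assumed) numbers, the never-pay policy has a much lower expected loss, because the policy itself changes how often you get threatened. That feedback from policy to targeting is the core of the argument, and it only works if the policy is actually followed through on.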
As for Ziz, from what I understand she’s been the victim of a rather vicious defamation campaign chiefly organized by a determined stalker who is angry with her for not sleeping with him. If you reach out to some rationalist discord mods, you should be able to get a hold of sufficient evidence to back the claims in that post.
I’m still not sure what to think as an outsider, but I appreciate the details you shared.
With respect to the “extortion” specifically, I’d (charitably) expect that MIRI is somewhat constrained by their funders and advisors when it comes to settling a (potential) lawsuit, i.e. making a “payout to extortion”.
I still think all of this, even if it’s true (to any significant extent), isn’t an overwhelming reason not to support MIRI (at all), given that they do seem to be doing good technical work.
Is there another organization that you think is doing similarly good work without being involved in the same kind of alleged bad behavior?
From what I understand, some of their funders were convinced MIRI would never pay out, and were quite upset to learn they did. For example, one of the people quoted in that open letter was Paul Crowley, a long-time supporter who has donated almost $50k. Several donors were so upset they staged a protest.
MIRI should’ve been an attempt to keep AGI out of the hands of the state.
Eliezer several times expressed the view that it’s a mistake to focus too much on whether “good” or “bad” people are in charge of AGI development. Good people with a mistaken methodology can still produce a “bad” AI, and a sufficiently robust methodology (e.g. by aligning with an idealized abstract human rather than a concrete individual) would still produce a “good” AI from otherwise unpromising circumstances.
Thanks!
I disagree. I’ve written a bit about why here.
You write:
Eliezer several times expressed the view that it’s a mistake to focus too much on whether “good” or “bad” people are in charge of AGI development. Good people with a mistaken methodology can still produce a “bad” AI, and a sufficiently robust methodology (e.g. by aligning with an idealized abstract human rather than a concrete individual) would still produce a “good” AI from otherwise unpromising circumstances.
Can you link to 3 times?
Unequivocal example from 2015: “You can’t take for granted that good people build good AIs and bad people build bad AIs.”
A position paper from 2004. See the whole section “Avoid creating a motive for modern-day humans to fight over the initial dynamic.”
Tweets from 2020.
That’s an artificially narrow example. You can have...
a good person with good methodology
a good person with bad methodology
a bad person with good methodology
a bad person with bad methodology
A question to ask is, when someone aligns an AGI with some approximation of “good values,” whose approximation are we using?