A couple of quick, loosely-related thoughts:
1. I think the heuristic “people take AI risk seriously in proportion to how seriously they take AGI” is a very good one. There are some people who buck the trend (e.g. Stuart Russell, some accelerationists), but it seems broadly true (e.g. Hinton and Bengio started caring more about AI risk after taking AGI more seriously). This should push us towards thinking that the current wave of regulatory interest wasn’t feasible until after ChatGPT.
2. I think that DC people were slower/more cautious about pushing the Overton Window after ChatGPT than they should have been. I think they should update harder from this mistake than they’re currently doing (e.g. updating that they’re too biased towards conventionality). There probably should be at least one person with solid DC credentials who’s the public rallying point for “I am seriously worried about AI takeover”.
3. I think that “doomers” were far too pessimistic about governance before ChatGPT (in ways that I and others predicted beforehand, e.g. in discussions with Ben and Eliezer). I think they should update harder from this mistake than they’re currently doing (e.g. updating that they’re too biased towards inside-view models and/or fast takeoff and/or high P(doom)).
4. I think that the AI safety community in general (including myself) was too pessimistic about OpenAI’s strategy of gradually releasing models (COI: I work at OpenAI), and should update more on that mistake.
5. I think there’s a big structural asymmetry where it’s hard to see the ways in which DC people are contributing to big wins (like AI executive orders), and they can’t talk about it, and the value of this work (and the tradeoffs they make as part of that) is therefore underestimated.
6. “One of the biggest conspiracies of the last decade” doesn’t seem right. The amount of money/influence involved in FTX is dwarfed by the amount of money/influence thrown around by governments in general, and it’s easier for factions within governments to enforce secrecy than for corporations to do so. More concretely, I’d say that there were probably several different “conspiratorial” things related to covid in various countries that had much bigger effects; probably several more related to ongoing Russia-Ukraine and Israel-Palestine conflicts; probably several more Trump/Biden-related things; maybe some to do with culture-war stuff; probably a few more prosaic fraud or corruption things that stole tens of billions of dollars, just less publicly (e.g. from big government contracts); a bunch of criminal gangs which also have far more money than FTX did; and almost certainly a bunch that don’t fall into any of those categories. (For example, if the CIA is currently doing any stuff comparable to its historical record of messing around with South American countries, that’s plausibly far bigger than FTX. Or various NSA surveillance type things are likely a much bigger deal, in terms of impact, than FTX. Oh, and stuff like NotPetya should probably count too.)
7. There’s at least one case where I hesitated to express my true beliefs publicly because I was picturing Conjecture putting the quote up on the side of a truck. I don’t know how much I endorse this hesitation, but it’s definitely playing some role in my decisions, and I expect it will continue to do so.
> I think that “doomers” were far too pessimistic about governance before ChatGPT (in ways that I and others predicted beforehand, e.g. in discussions with Ben and Eliezer). I think they should update harder from this mistake than they’re currently doing (e.g. updating that they’re too biased towards inside-view models and/or fast takeoff and/or high P(doom)).
I think it remains to be seen what the right level of pessimism was. It still seems pretty likely that we’ll see not just useless, but actively catastrophically counter-productive interventions from governments in the next handful of years.
But you’re absolutely right that I was generally pessimistic about policy interventions from 2018ish through to 2021 or so.
My main objection was that I wasn’t aware of any policies that seemed like they would help, and I was unenthusiastic about the way that EAs seemed optimistic about getting into positions of power without (it seemed to me) being very clear with themselves that they didn’t have policy ideas to implement.
I felt better about people going into policy to the extent that those people had clarity for themselves: “I don’t know what to recommend if I have power. I’m trying to execute one part of a two-part plan that involves getting power and then using it to advocate for x-risk-mitigating policies. I’m intentionally punting that question to my future self / hoping that other EAs thinking full time about this come up with good ideas.” I think I still basically stand by this take. [1]
My main update is that the basic idea of this post turned out to be false. There were developments that read as more alarming than “this is business as usual” to a good number of people, and that really changed the landscape.
One procedural update that I’ve made from that and similar mistakes is just “I shouldn’t put as much trust in Eliezer’s rhetoric about how the world works when it isn’t backed up by clearly articulated models. I should treat those ideas as plausible hypotheses, and mostly be much more attentive to evidence that I can see directly.”
Also, I think that this is one instance of the general EA failure mode of pursuing a plan which entails accruing more resources for EA (community building to bring in more people, marketing to bring in more money, politics to acquire power), without a clear personal inside view of what to do with those resources, effectively putting a ton of trust in the EA network to reach correct conclusions about which things help.
There are a bunch of people trusting the EA machine to 1) aim for good things and 2) have good epistemics. They trust it so much they’ll go campaign for a guy running for political office without knowing much about him, except that he’s an EA. Or they route their plan for positive impact on the world through positively impacting EA itself (“I want to do mental health coaching for EAs” or “I want to build tools for EAs”), or they go do ops for some AI org because 80k recommended it, despite not knowing much about what it does.
This is pretty scary, because it seems like some of those people were not worthy of trust (SBF, in particular, won a huge amount of veneration).
And even in the case of the people who are, I believe, earnest geniuses, it is still pretty dangerous to mostly be deferring to them. Paul put a good deal of thought into the impacts of developing RLHF, and he thinks the overall impacts are positive. But the fact that Paul is smart and good does not make it a foregone conclusion that his work is net good. That’s a really hard question to answer, about which I think most people should be pretty circumspect.
It seems to me that there is an army of earnest young people who want to do the most good that they can. They’ve been told (and believe) that AI risk is the most important problem, but it’s a confusing problem that depends on technical expertise, on famously fraught problems of forecasting the character of not-yet-existent technologies, and on a bunch of weird philosophy. The vast majority of those young people don’t know how to make progress on the core problems of AI risk directly, or even necessarily how to identify which work is making progress. But they still want to help, so they commit themselves to e.g. community building and getting more people to join, with everyone taking social cues from the few people who seem to have personal traction on the problem about what kinds of object-level things are good to do.
This seems concerning to me. In this kind of structure, a bunch of smart young people are building a pile of resources to be controlled mostly by deference to a status hierarchy, where you figure out which thinkers are cool by picking up on social cues about who is regarded as cool rather than by evaluating their work for yourself. It’s not so much that I expect it to be co-opted; I just don’t expect that overall agglomerated machine to be particularly steered towards the good, whatever values it professes.
It doesn’t have a structure that binds it particularly tightly to what’s true. Better than most non-profit communities, worse than many for-profit companies, probably.
It seems more concerning to the extent that many of the object-level actions to which EAs are funneling resources are not just useless, but actively bad. It turns out that being smart enough, as a community, to identify the most important problem in the world, but not smart enough to systematically know how to positively impact that problem, is pretty dangerous.
E.g. the core impacts of people trying to reduce x-risk so far include:
- (Maybe? Partially?) causing DeepMind to exist
- (Definitely) causing OpenAI to exist
- (Definitely) causing Anthropic to exist
- Inventing RLHF and accelerating the development of RLHF’d language models
It’s pretty unclear to me what the sign of these interventions is. They seem bad on the face of it, but as I’ve watched things develop I’m not as sure. It depends on pretty complicated questions about second- and third-order effects, and about counterfactuals.
But it seems bad to have an army of earnest young people who, in the name of their do-gooding ideology, shovel resources at the decentralized machine doing these maybe-good, maybe-bad activities, because they’re picking up on social cues about who to defer to and what those people think! That doesn’t seem very high-EV for the world!
(To be clear, I was one of the army of earnest young people. I spent a number of years helping recruit for a secret research program—I didn’t even have the most basic information, much less the expertise to assess if it was any good—because I was taking my cues from Anna, who was taking her cues from Eliezer.
I did that out of a combination of 1) having read Eliezer’s philosophy, and having enough philosophical grounding to be really impressed by it, and 2) being ready and willing to buy into a heroic narrative to save the world, which these people were (earnestly) offering me.)
And, procedurally, all this is made somewhat more perverse by the fact that this community, this movement, was branded as the “carefully think through our do-gooding” movement. We raised the flag of “let’s do careful research and cost-benefit analysis to guide our charity”, but over time this collapsed into a deferral network, with ideas about what’s good to do driven mostly by the status hierarchy. Cruel irony.
Well said. I agree with all of these except the last one and the gradual model release one (I think the update should be that letting the public interact with models is great, but whether to do it gradually or in a ‘lumpy’ way is unclear. E.g. arguably ChatGPT-3.5 should have been delayed until 2023 alongside GPT-4. That would have pushed back the acceleration of e.g. GDM a few more months, without (IMO) any harm to public wake-up.)
I especially want to reemphasize your point 2.
> E.g. arguably ChatGPT-3.5 should have been delayed until 2023 alongside GPT-4. That would have pushed back the acceleration of e.g. GDM a few more months, without (IMO) any harm to public wake-up.
That would have pushed back the public wake-up equally though, because it was ChatGPT-3.5 that caused the wake-up.
Did anyone at OpenAI explicitly say that a factor in their release cadence was getting the public to wake up about the pace of AI research and start demanding regulation? Because this seems more like a post hoc rationalization for the release policy than like an actual intended outcome.
See Sam Altman here:
> As we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.
> A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.
And Sam has been pretty vocal in pushing for regulation in general.
It would have pushed it back, but then the extra shock of going straight to ChatGPT-4 would have made up for it, I think. Not sure, obviously.
Then ChatGPT-4 would still have had low rate limits, so most people would still have been more informed by ChatGPT-3.5.
> “One of the biggest conspiracies of the last decade” doesn’t seem right. The amount of money/influence involved in FTX is dwarfed by the amount of money/influence thrown around by governments in general, and it’s easier for factions within governments to enforce secrecy than for corporations to do so. More concretely, I’d say that there were probably several different “conspiratorial” things related to covid in various countries that had much bigger effects; probably several more related to ongoing Russia-Ukraine and Israel-Palestine conflicts; probably several more Trump/Biden-related things; maybe some to do with culture-war stuff; probably a few more prosaic fraud or corruption things that stole tens of billions of dollars, just less publicly (e.g. from big government contracts); a bunch of criminal gangs which also have far more money than FTX did; and almost certainly a bunch that don’t fall into any of those categories. (For example, if the CIA is currently doing any stuff comparable to its historical record of messing around with South American countries, that’s plausibly far bigger than FTX. Or various NSA surveillance type things are likely a much bigger deal, in terms of impact, than FTX. Oh, and stuff like NotPetya should probably count too.)
There are few programs even within the U.S. government that are larger than $10B without very extensive reporting requirements, which makes it quite hard for them to be conspiratorial in the relevant ways (they might be ineffective, or the result of various bad equilibria, but I don’t think you regularly get conspiracies at this scale).
To calibrate people here, the total budget of the NSA appears to be around $10B/yr, so even if you classify the whole thing as a conspiracy, in terms of expenditure it’s still roughly the size of the FTX fraud (though more like 10x larger if you count it over the whole last decade).
To be clear, there are all kinds of bad things going on in the world, but in terms of things that are as clearly some kind of criminal or high-stakes government conspiracy, I think FTX stands among the biggest ones (though I totally agree there are probably others, though by nature it’s hard for me to say how many).
(In any case, I changed the word “conspiracy” here to “fraud”, which I think gets the same point across, and my guess is we all agree that FTX is among the biggest frauds of the last decade.)
There are over 100 companies globally with a market cap of more than $100 billion. If we’re indexing on the $10 billion figure, these companies could have a bigger financial impact by doing “conspiracy-type” things that swung their value by <10%. How many of them have actually done that? No idea, but “dozens” doesn’t seem implausible (especially when we note that many of them are based in authoritarian countries).
Re NSA: measuring the impact of the NSA in terms of inputs is misleading. The problem is that they’re doing very highly-leveraged things like inserting backdoors into software, etc. That’s true of politics more generally. It’s very easy for politicians to insert clauses into bills that have >$10 billion of impact. How often are the negotiations leading up to that “conspiratorial”? Again, very hard to know.
> in terms of things that are as clearly some kind of criminal or high-stakes government conspiracy, I think FTX stands among the biggest ones
This genuinely seems bizarre to me. A quick quote I found from googling:
> The United Nations estimated in a 2011 report that worldwide proceeds from drug trafficking and other transnational organized crime were equivalent to 1.5 percent of global GDP, or $870 billion in 2009.
That’s something like 100 FTXs per year; we mostly just don’t see them. Basically I think that you’re conflating legibility with impact. I agree FTX is one of the most legible ways in which people were defrauded this century; I also think it’s a tiny blip on the scale of the world as a whole. (Of course, that doesn’t make it okay by any means; it was clearly a big fuck-up, there’s a lot we can and should learn from it, and a lot of people were hurt.)
It sure does seem like there are definitional issues here. I do agree that the drug trade and similar things bring the economic effects of conspiracy-type things up a lot, and I hadn’t considered those, and agree that if you count things in that reference class FTX is a tiny blip.
I think given that, I basically agree with you that FTX isn’t that close to one of the biggest conspiracies of the last decade. I do think it’s at the top of frauds in the last decade, though that’s a narrower category.
> I do think it’s at the top of frauds in the last decade, though that’s a narrower category.
Nikola went from a peak market cap of $66B to ~$1B today, vs. FTX which went from ~$32B to [some unknown but non-negative number].
I also think the Forex scandal counts as bigger (as one reference point, banks paid >$10B in fines), although I’m not exactly sure how one should define the “size” of fraud.[1]
I wouldn’t be surprised if there’s some precise category in which FTX is the top, but my guess is that you have to define that category fairly precisely.
[1] Wikipedia says “the monetary losses caused by manipulation of the forex market were estimated to represent $11.5 billion per year for Britain’s 20.7 million pension holders alone” which, if anywhere close to true, would make this way bigger than FTX, but I think the methodology behind that number is just guessing that market manipulation made foreign-exchange x% less efficient, and then multiplying through by x%, which isn’t a terrible methodology but also isn’t super rigorous.
I wasn’t intending to say “the literal biggest”, though I think it’s a decent candidate for the literal biggest. Depending on your definitions I agree things like Nikola or Forex could come out on top. I think it’s hard to define things in a way such that it isn’t in the top 5.
Agree. Most people will naturally buy AGI Safety if they really believe in AGI. No AGI -> AGI is the hard part, not AGI -> AGI Safety.
I agree with all of these (except I never felt worried about being quoted by Conjecture).
> I think that the AI safety community in general (including myself) was too pessimistic about OpenAI’s strategy of gradually releasing models (COI: I work at OpenAI), and should update more on that mistake.
I agree with this!
I thought it was obviously dumb, and in retrospect, I don’t know.