Effective evil is just like effective altruism. You must identify opportunities that are tractable, underfunded, and high-impact. Plenty of people are throwing money at the most obvious amoral uses of AI. China is throwing billions of dollars at autonomous weapons and surveillance systems. (The US is funding autonomous weapons too.) Silicon Valley invests countless billions of dollars in using machine learning to mind-control people. If I were a supervillain with $1 billion in compute, I wouldn't spend it on AI at all. That's like spitting in the ocean. I'd just sell the compute to raise money for engineering an artificial pandemic.
But if I had to use the billion dollars on evil AI specifically, I'd start an AI-powered hedge fund and then deliberately engineer a global liquidity crisis.
How exactly would you do this? Lots of places market "AI-powered" hedge funds, but (as someone in the finance industry) I haven't heard much about AI beyond things like regularized regression actually delivering significant benefit.
Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis?
Pyramid scheme. I'd take on as much risk, debt, and leverage as I could. Then I'd suddenly default on all of it. There are few defenses against this because rich agents in the financial system have always acted out of self-interest. Nobody has ever intentionally thrown away $10 billion and their reputation just to harm strangers indiscriminately. The attack would be unexpected and unprecedented.
Didn't this basically happen with LTCM? They had losses of about $4B on roughly $5B of capital, with around $120B borrowed (quick arithmetic below). The Federal Reserve had to coordinate the major banks to avoid blowing up the financial markets, but a meltdown was avoided.
Edit: Don’t pyramid schemes do this all the time, unintentionally? Like, Madoff basically did this and then suddenly (unintentionally) defaulted.
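A quick sanity check on those LTCM numbers, taking the comment's round figures as given (roughly $5B of capital and $120B borrowed), not exact historical values:

```python
# Back-of-the-envelope leverage check using the approximate figures
# quoted above (~$5B of capital, ~$120B borrowed).

capital = 5e9       # equity capital, ~$5B
borrowed = 120e9    # debt, ~$120B

total_assets = capital + borrowed          # balance sheet the fund controlled
leverage_ratio = borrowed / total_assets   # share of the balance sheet that was debt

print(f"Total assets controlled: ${total_assets / 1e9:.0f}B")   # ~$125B
print(f"Leverage ratio: {leverage_ratio:.0%}")                   # ~96%, i.e. ~25:1 assets to equity
```

That works out to roughly 96% of the balance sheet funded by debt, consistent with the ~95% leverage figure in the reply below.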
Yes and yes. However, pyramid schemes are created to maximize personal wealth, not to destroy collective value. Those are not quite the same thing. I think a supervillain could cause more harm to the world by setting out with the explicit aim of crashing the market. It's the difference between an accidental reactor meltdown and a nuclear weapon. If LTCM achieved 95% leverage acting with noble aims, imagine what would be possible for someone with ignoble motivations.
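A minimal sketch of how that arithmetic scales for the hypothetical attack, assuming the $10B of equity from the question above and defining the leverage ratio as debt over total assets:

```python
# With equity E and a leverage ratio L (debt as a share of total assets),
# the fund controls E / (1 - L) in assets, all of which becomes
# counterparty exposure if it suddenly defaults. The $10B equity figure
# is the hypothetical from the question above.

equity = 10e9

for leverage in (0.95, 0.98, 0.99):
    assets = equity / (1 - leverage)
    print(f"{leverage:.0%} leverage -> ${assets / 1e9:,.0f}B of positions abandoned on default")
```

At 95% leverage that is about $200B of positions; each halving of the remaining equity cushion doubles the exposure dumped on counterparties.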