I have to say I don’t get why so many of the comments on this are negative. Surely, if there were a completely legal way to inflict great harm on humanity for only $1 million, then there would be plenty of people and groups with the desire and resources to do it. The idea that anyone with the desire to implement these things will learn about them first on LessWrong seems ludicrous to me.
Anyway, here is an idea:
Offer a $1 million prize for a working self-improving paperclip-maximizing AI. I think this is very unlikely to produce anything, but since it is a prize, you don’t actually have to pay it out until someone builds a UFAI that destroys the universe. If no one seems to be working on it, you can always rescind the prize and move on to another evil scheme. I guess the downside would be if somebody accidentally made a friendly AI while trying to win the prize.
Surely, if there were a completely legal way to inflict great harm on humanity for only $1 million, then there would be plenty of people and groups with the desire and resources to do it.
Are you sure? Who? The people who do inflict great harm on humanity aren’t cartoon monsters. They’re commonly either unethically selfish, foolishly utopian, or terroristic. The selfish ones will only care about an idea for doing harm if the harm is a byproduct of an idea for getting something they want. The utopians don’t try to do harm, they just create unintended consequences when trying to do good. Neither kind of person is going to be at all interested in an idea whose sole purpose is to do great harm.
Even terrorism is typically either a negotiating point for demands, a provocation to overreaction, or at worst a pure act of revenge; here they actually have a desire to do harm, but targeted harm, not generic “harm on humanity”. Unless an efficient act of anti-altruism happens to affect only a subset of humanity that’s contained within a set of terrorist targets, it’s not even going to interest them!
If they have to succeed to get the million, why would they care about the prize? If they make a friendly AI they won’t need the million, and if they make an unfriendly one, they also won’t need it, but for different reasons. Even if it’s just a human-level AI, it would be worth orders of magnitude more than that.
I think that LWers assign a much higher probability to a FOOM scenario than most people do. Most people probably wouldn’t assign much value to an AI that just seeks to maximize the number of paperclips in the universe and continuously improves its ability to pursue that goal. Someone could build something like that expecting its abilities to level off pretty quickly, and be badly wrong.
Surely, if there were a completely legal way to inflict great harm on humanity for only $1 million, then there would be plenty of people and groups with the desire and resources to do it.
There are legal schemes, built by playing the laws of different jurisdictions off against one another, that are not trivial to see.
Take pre-2013 Wikileaks. It was immune to being sued for defamation in the City of London because neither Wikileaks nor Julian Assange had a fixed residence to which legal papers could be delivered.
It was registered in Sweden to take advantage of Swedish whistleblower protection laws, and its servers sat in yet another country to benefit from an additional set of laws.
Neither desire nor monetary resources alone are enough to come up with such a scheme. It takes people with high intelligence.
LW is a forum of educated people with a very high baseline IQ.
Wikileaks was well intentioned, but I think you could find plenty of people who argue that it did significant damage in the world, at a cost of less than $1 million.
Bitcoin, by enabling payments for illegal services, might also produce a lot of harm for far less than $1 million in initial development costs.
Ideas like Bitcoin or Wikileaks aren’t expensive, but they require deep thought.