People are not going to die if you refrain from deliberately spamming Wikipedia. There should be a Godwin-like law about this sort of comparison. (That’s quite apart from your failure to calculate the damage to MIRI’s reputation if they become known as spammers.)
Instead, see if you can get organic coverage going. Can MIRI get press coverage about the issue, if they feel it’s to their benefit to do so? (This should probably be something directed from MIRI itself.) Get journalists seriously talking about the Friendly AI issue? That should be possible to swing.
Having the wrong experts on AI risk cited in the article at a critical juncture, when the public is developing its understanding of the issue, can result in people getting killed.
If it shifts the probability of a UFAI disaster by even 0.001%, that equals over a thousand lives saved in expectation. That is probably a bigger effect than the five people you save by pushing the fat man.
The moral cost you pay by pushing the fat man is higher than the moral cost of violating Wikipedia norms. The benefit of getting the narrative right in the article on AI risk is probably much greater than the handful of people you save in the trolley example.
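To make the arithmetic behind that claim explicit, here is a minimal back-of-envelope sketch. The 0.001% shift is the figure from the comment above; the world-population value of roughly 7 billion is an assumption of the sketch, not something stated in the thread.

```python
# Back-of-envelope expected-value calculation for the claim above.
# Assumption (not stated in the thread): world population of roughly 7 billion.
delta_p = 0.001 / 100        # a 0.001% shift in the probability of a UFAI disaster
population = 7_000_000_000   # assumed world population

expected_lives_saved = delta_p * population
print(expected_lives_saved)  # 70000.0 -- comfortably "over a thousand lives", in expectation
```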
That kind of makes me wonder what you would do in the situation depicted in the movie (and even if you wouldn’t, the more radical elements here who no longer discuss their ideas online would).
There’s even a chance that the terrorists in the movie are led by an uneducated, fear-mongering crackpot who primes them with invalid expected-utility calculations and trolley problems.
The world’s better at determining who the right experts are when conflict-of-interest rules are obeyed.