There's a fine line between propaganda and adding meaningful content that refers the people who read the article to the right resources.
Wikipedia:Conflict of interest
Please don’t do this.
Could you make the case on the basis of utilitarian morals?
By the way, I substantially disagree with the Wikipedia policy as it stands. It prevents me from removing mistakes in cases where I have better information than some news reporter who writes something that's simply wrong. I think Citizendium's policy on the matter was better.
All spammers can justify spamming to themselves.
Funnily enough, one of these works and one is dead.
If you claim that Wikipedia works in the sense that it effectively prevents interested parties from editing articles, I think you are wrong.
I think Wikipedia invites interested parties to edit it by providing no way for them to get errors corrected through open means.
I think he means that Wikipedia, unlike Citizendium, has managed to create a usable encyclopaedia.
By making it easy for people to spam it. There are various reasons why Citizendium failed; I'm not claiming that it was perfect overall.
That's no utilitarian argument. I don't see why it should convince me at all.
Take it as a trolley problem. There are important issues where people die, and there are issues where one just acts out tribal loyalty. In this case I see no good reason for tribal loyalty given what's at stake.
Like attempting to do a PR campaign for a non-profit via Wikipedia by piggybacking onto a Hollywood big-budget movie...?
I do consider the effect of shifting public perception of an existential-risk issue by even a tiny bit to be worth lives. UFAI is on the road to killing people. I do think you are failing to multiply if you think that isn't worth lives.
It looks as if you’re assuming that the overall PR effect of having MIRI or MIRI supporters add links from the Wikipedia article about Transcendence to comments from MIRI would be positive, or at least that it’s more likely to be positive than negative.
I don’t think that is a safe assumption.
As David says, one quite likely outcome is that a bunch of people start to see MIRI as spammers and their overall influence is less rather than more.
I agree that this is a question that deserves serious thought. But the issue of violating the Wikipedia policy doesn't factor much into the calculation.
It's quite natural behavior to add relevant quotations to a Wikipedia article. I wouldn't do it with an account that has no prior benign history, or through anonymous edits.
If you are a good citizen of the web, you probably do fix Wikipedia errors when you notice them, so you should have an account that doesn’t look spammy. If you don’t, then you probably leave the task to someone else who has a better grasp on Wikipedia.
Good thing you’re not discussing it in a public forum, then, where screencaps are possible.
The fact that the issue violates Wikipedia policy is an essential part of why doing as you propose would be likely to have a negative impact on MIRI’s reputation.
(For the avoidance of doubt, I don't think this is the only reason not to do it. If you use something that has policies, you should generally follow those policies unless they're very unreasonable. But since ChristianKl is arguing that an expected-utility calculation produces results that swamp that (by tweaking the probability of a good/bad singularity), I think it's important to note that expected-utility maximization doesn't by any means obviously produce the conclusions he's arguing for.)
So you are ready to kill people in order to shift the public perception of an existential risk issue by a tiny bit?
I never claimed to be a complete utilitarian. For that matter, I wouldn't push fat men off bridges.
As far as the Wikipedia policy goes, it's a policy that just doesn't matter much in the grand scheme of things. For what it's worth, I never touched the German Quantified Self article that contained a paragraph with my name for a long time.
I do, however, have personal reasons for opposing the Wikipedia policy: Wikipedia gets the cause of my father's death wrong, and I can't easily correct the issue because Wikipedia cites a news article with wrong information as its source.
Should a good opportunity arise, I will place the information somewhere citable and correct that report, and I won't feel bad about it.
The Wikipedia policy is designed in a way that encourages interested parties to edit anonymously, and I do think that Wikipedia deserves the edits from interested parties that it gets until it adopts a more reasonable policy that allows interested parties to correct factual errors without having to plant the information somewhere and then edit against policy.
I am not talking about Wikipedia’s policies.
You said “worth lives”—what did you mean by that?
People are not going to die if you refrain from deliberately spamming Wikipedia. There should be a Godwin-like law about this sort of comparison. (That's quite apart from your failure to calculate the damage to MIRI's reputation if they become known as spammers.)
Instead, see if you can get organic coverage going. Can MIRI get press coverage about the issue, if they feel it’s to their benefit to do so? (This should probably be something directed from MIRI itself.) Get journalists seriously talking about the Friendly AI issue? Should be able to be swung.
Having the wrong experts on AI risk cited in the article at a critical juncture, when the public is developing an understanding of the issue, can result in people getting killed.
If it shifts the probability of a UFAI disaster by even 0.001%, that equals over a thousand lives saved. That's probably a bigger effect than the five people you save by pushing the fat man.
The moral cost you pay by pushing the fat man is higher than the moral cost of violating Wikipedia norms. The benefit of getting the narrative right in the article about AI risk is probably much more valuable than the handful of people you save in the trolley example.
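For what it's worth, here is a minimal back-of-the-envelope sketch of the multiplication behind that claim. The population figure is my own assumption, not something stated in this thread:

```python
# Back-of-the-envelope expected-value sketch (the inputs are assumptions,
# not figures established anywhere in this discussion).
lives_at_stake = 7_000_000_000      # assumed: roughly the world population
probability_shift = 0.001 / 100     # a 0.001% shift, written as a fraction

expected_lives_saved = lives_at_stake * probability_shift
print(expected_lives_saved)         # 70000.0 -- comfortably over a thousand
```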
That kind of makes me wonder what you would do in a situation like the one depicted in the movie (and even if you wouldn't, the more radical elements here who no longer discuss their ideas online would).
There's even a chance that the terrorists in the movie are led by an uneducated, fear-mongering crackpot who primes them with invalid expected-utility calculations and trolley problems.
The world’s better at determining who the right experts are when conflict-of-interest rules are obeyed.