It looks as if you’re assuming that the overall PR effect of having MIRI or MIRI supporters add links from the Wikipedia article about Transcendence to comments from MIRI would be positive, or at least that it’s more likely to be positive than negative.
I don’t think that is a safe assumption.
As David says, one quite likely outcome is that a bunch of people start to see MIRI as spammers and their overall influence is less rather than more.
It looks as if you’re assuming that the overall PR effect of having MIRI or MIRI supporters add links from the Wikipedia article about Transcendence to comments from MIRI would be positive, or at least that it’s more likely to be positive than negative.
I agree that this is a question that deserves serious thought. But the issue of violating the Wikipedia policy doesn’t factor much into the calculation.
As David says, one quite likely outcome is that a bunch of people start to see MIRI as spammers and their overall influence is less rather than more.
It’s quite natural behavior to add relevant quotations to a Wikipedia article. I wouldn’t do it from an account with no prior benign history, or through anonymous edits.
If you’re a good citizen of the web, you probably fix Wikipedia errors when you notice them, so you should already have an account that doesn’t look spammy. If you don’t, then you should probably leave the task to someone with a better grasp of Wikipedia.
It’s quite natural behavior to add relevant quotations to a Wikipedia article. I wouldn’t do it from an account with no prior benign history, or through anonymous edits.
Good thing you’re not discussing it in a public forum, then, where screencaps are possible.
But the issue of violating the Wikipedia policy doesn’t factor much into the calculation.
The fact that what you propose violates Wikipedia policy is an essential part of why doing it would be likely to have a negative impact on MIRI’s reputation.
(For the avoidance of doubt, I don’t think this is the only reason not to do it. If you use something that has policies, you should generally follow those policies unless they’re very unreasonable. But since ChristianKl is arguing that an expected-utility calculation, by tweaking the probability of a good or bad singularity, produces results that swamp that consideration, I think it’s important to note that expected-utility maximization doesn’t by any means obviously produce the conclusions he’s arguing for.)
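To make that concrete with a toy version of the calculation (the symbols here are illustrative, not taken from either comment): let $U$ be the utility at stake between a good and a bad singularity, and let $\Delta p$ be the change in the probability of the good outcome caused by the proposed edits. The expected gain from editing is then

$$\mathbb{E}[\Delta\text{utility}] = \Delta p \cdot U.$$

The pro-editing argument assumes $\Delta p > 0$, so that a huge $U$ swamps everything else. But if the spam backlash David describes makes $\Delta p < 0$, the same huge $U$ makes the expected loss equally overwhelming. The calculation settles nothing by itself; everything hinges on the sign of $\Delta p$, which is exactly the PR question at issue.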