the writer is doing the work rather than hundreds of readers doing analogous work
I agree, but sometimes a person does the best they can and it’s just not enough. I think it’s appropriate to downvote for poor writing, unless the content is compelling. The incompetent writer should ask for help pre-posting if they really care about being understood.
This post got upvoted 32 times and mine downvoted 10 times. Is the difference that drastic? I don’t see that, but ok.
Short version: Yes.
Long version: Writing quality can be meaningfully compared along many axes. There are mechanical axes, like correct grammar usage, clarity of expression, precision, succinctness, and readability, all of which I found to be problem areas (to varying degrees) in this post. These are all (relatively) easy to improve by proofreading, making multiple drafts, and/or asking others for editing help. Wei Dai's post performs well on all of those measures.
There are also content axes, like originality, rigor, cleverness, evidentiary support, and usefulness. Hacking the CEV for Fun and Profit does pretty well by these measures, too. This post is a little better with content than it is with mechanics, but poor mechanics obscure content and dilute its weight, so I suspect that the points you were trying to make were undervalued, though not drastically so. Fixing up content is harder than fixing up mechanics; for some ideas, it is impossible. After all, some ideas are just wrong or useless (though this is usually far from obvious).
One writing technique I like and don’t use enough: come up with lots of ideas and only explore the most promising ones. Or, as it is written in the Book of Yudkowsky, hold off on proposing solutions.
Err… 33 now. But that is because the content is very compelling. Posts pointing out why CEV is quite possibly a bad thing would have to be quite poor to get a downvote from me. It is a subject that is obvious but avoided.
I see that too (I upvoted it before), yet the argument was that my post was poorly written. Further, the argument was that my post lacked references and detail. Also, my post mentions CEV. Further, an AI based on the extrapolated volition of humanity might very well conclude that, since the better part of the future is unable to sustain our volition, it should abandon humanity. If CEV tries to optimize volition, which includes suffering, negative utilitarianism might well be a factor leaning the result towards non-existence. This idea is explored at length in Stephen Baxter's Manifold trilogy, where the far future decides to destroy the universe acausally. This is fictional evidence, but do you want to argue that superhuman AI and CEV aren't? It's the exploration of an idea.
Again, how do I get downvoted this drastically in comparison to three paragraphs which basically say that a superhuman AI (premise) using CEV (idea based on premise) would base its extrapolation on uploaded copies (idea based on an idea based on a shaky premise)? Compare this to my post, which is based on evidence from economics and physics.
I voted up your post even in its earlier revisions.
However, Wei Dai's is far more novel and entertaining. I would have voted it up 3 times if I could :)
These are all questions I (and most thinking people) have considered before: "Would it be better not to exist at all, if existence is mostly suffering?" ("To be, or not to be?"). "If a deist-type god (not intervening after creation) created this universe and all its rules that imply the suffering we observe, was that a moral act?" "How much pleasure (and for how long) does it take to make it worth some amount of suffering?"
If there was much beyond that in your post, I may have missed it.