I think the reason for the downvotes is that people on LW have generally already formulated their ethical views past the point of wanting to speculate about entirely new normative theories.
Your post probably would have received a better reaction had you framed it as a question (“What flaws can you guys find in a utilitarian theory that values the maximization of the amount of computation energy causes before dissolving into high entropy?”) rather than as some great breakthrough in moral reasoning.
As for constructive feedback, I think Creutzer’s response was pretty much spot on. There are already mainstream normative theories like preference utilitarianism that don’t directly value pain and pleasure and yet seem to make more sense than the alternatives you offered.
Also, your post is specifically about ethics in the age of superintelligence, but doesn’t mention CEV. If you’re going to offer a completely new theory in a field as well-trod as normative ethics, you need to spend more time debunking alternative popular theories and explaining the advantages yours has over them.
You can discuss it here, just do it in comments. People don’t like overambitious top-level posts. Read up on the relevant posts that already exist on LessWrong, and comment there and in open threads.
Don’t do either. Think about it for a good long while, and ask questions and talk about these topics elsewhere.