(edit: The version of utilitarianism I’m talking about in this comment is total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don’t bother keeping track of which entity experiences the pleasure or pain. A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.)
I totally agree!!!
Astronomical waste is bad! (or at least, severely suboptimal)
Wild-animal suffering is bad! (no, there is nothing “sacred” or “beautiful” about it. Well, ok, you could probably find something about it that triggers emotions of sacredness or beauty, but in my opinion the actual suffering massively outweighs any value these emotions could have.)
Panspermia is bad! (or at least, severely suboptimal. Why not skip all the evolution and suffering and just create the end result you wanted? No, neither “This way is more fun” nor “This way would generate a wider variety of possible outcomes” is an acceptable answer, at least not according to utilitarianism.)
Lab-universes have great potential for bad (or good), and must be created with extreme caution, if at all!
Environmental preservationists… er, no, I won’t try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
I also agree with your concerns about CEV.
Though of course we’re talking about all this as if there were some objective validity to Utilitarianism, and as Eliezer explained (warning: the following sentence is almost certainly a misinterpretation): you can’t explain Utilitarianism to a rock, and therefore Utilitarianism is not objectively valid.
Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe. Well, indirectly it’s a fact about the universe, because these beliefs were generated by a process that involves observing the universe. We observe that pleasure really does feel good, and that pain really does feel bad, and therefore we want to maximize pleasure and minimize pain. But not everyone agrees with us. Eliezer himself doesn’t even agree with us anymore, even though some of his earlier writing implied that he did. (I still can’t get over the idea that he would consider it a good idea to kill a whole planet just to PREVENT an alien species from removing the human ability to feel pain, along with a few other minor aesthetic preferences. Yeah, I’m so totally over any desire to treat Eliezer as an Ultimate Source of Wisdom...)
Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with. I still don’t see how this could be possible, but maybe that’s just a result of my own ignorance. And then there’s the extreme difficulty of actually implementing CEV...
And no, I still don’t claim to have a better plan. And I’m not at all comfortable with advocating the creation of a purely Utilitarian AI.
Your plan of trying to spread good memes before the CEV extrapolates everyone’s volition really does feel like a good idea, but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation. I suspect that if you can’t incorporate this process into CEV, then any other possible strategy must involve cheating somehow.
Oh, I had another conversation recently on the topic of whether it’s possible to convince a rational agent to change its core values through rational discussion alone. I think we actually agreed on the conclusion, but didn’t notice it at the time. The conclusion was that if an agent’s core values are inconsistent, then rational discussion can cause the agent to resolve this inconsistency. But if two agents have different core values, and neither agent has internally inconsistent core values, then neither agent can convince the other, without cheating. There’s also the option of trading utilons with the other agent, but that’s not the same as changing the other agent’s values.
Anyway, I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I’m estimating the probability that this is the case at… significantly less than 50%. Not because I have any specific evidence about this, but as a result of applying the Pessimistic Prior. (Is that a standard term?)
Anyway, if this is the case, then the CEV algorithm will end up resulting in the outcome that you wanted. Specifically, an end to all suffering, and some form of utilitronium shockwave.
Oh, and I should point out that the utilitronium shockwave doesn’t actually require the murder of everyone now living. Surely even we hardcore utilitarians should be able to afford to leave one planet’s worth of computronium for the people now living. Or one solar system’s worth. Or one galaxy’s worth. It’s a big universe, after all.
Oh, and if it turns out that some people’s value systems would make them terribly unsatisfied to live without the ability to feel pain, or with any of the other brain modifications that a utilitarian might recommend… then maybe we could even afford to leave their brains unmodified. Just so long as they don’t force any other minds to experience pain. Though the ethics of who is allowed to create new minds, and what sorts of new minds they’re allowed to create… is kinda complicated and controversial.
Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world’s population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it’s a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It’s a big universe, plenty of room for everyone. Just so long as they don’t force any other mind to suffer.
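(A rough back-of-envelope sketch of the “it’s a big universe” point, using standard approximate mass figures; the choice of baseline to compare against is of course just an assumption:)

```python
# Rough back-of-envelope: what fraction of the locally available matter does
# leaving Earth (or a few planets) intact actually cost? Approximate values.
earth_mass = 5.97e24        # kg
sun_mass = 1.99e30          # kg
planetary_mass = 2.7e27     # kg, rough total mass of the solar system's planets

print(earth_mass / sun_mass)        # ~3e-6 of the Sun's mass
print(earth_mass / planetary_mass)  # ~2e-3 of the solar system's planetary mass
```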
Oh, and maybe there should also be rules against creating a mind that’s forced to be wireheaded. There will be some complex and controversial issues involved in the design of the optimally efficient form of utilitronium that doesn’t involve any ethical violations. One strategy that might work is a cross between the utilitronium scenario and the Solipsist Nation scenario. That is, let anyone who wants to retreat entirely into solipsism do their own experiments with what experiences generate the most utility. There’s no need to fill the whole universe with boring, uniform bricks of utilitronium that contain minds consisting entirely of an extremely simple pleasure center, endlessly repeating the same optimally pleasurable experience. After all, what if you missed something when you originally designed the utilitronium that you were planning to fill the universe with? What if you were wrong about what sorts of experiences generate the most utility? You would need to allocate at least some resources to researching new forms of utilitronium, so why not let actual people do the research? And why not let them do the research on their own minds?
I’ve been thinking about these concepts for a long time now. And this scenario is really fun for a solipsist utilitarian like me to fantasize about. These concepts have even found their way into my dreams. One of these dreams was even long, interesting, and detailed enough to make into a short story. Too bad I’m no good at writing. Actually, that story I just linked to is an example of this scenario going bad...
Anyway, these are just my thoughts on these topics. I have spent lots of time thinking about them, but I’m still not confident enough about this scenario to advocate it too seriously.
Environmental preservationists… er, no, I won’t try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, “Bambi Lovers versus Tree Huggers: A Critique of Rolston’s Environmental Ethics”: “Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.”
Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe.
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with.
Yes, that’s the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that’s precisely why we’re having this conversation, as well as why SIAI’s research is so important. :)
but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation.
I hope so. Of course, it’s not as though the only two possibilities are “CEV” or “extinction.” There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political “realist” scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I’m estimating the probability that this is the case at… significantly less than 50%.
If you include paperclippers or suffering-maximizers in your definition of “anyone,” then I’d put the probability close to 0%. If “anyone” just includes humans, I’d still put it less than, say, 10^-3.
Just so long as they don’t force any other minds to experience pain.
Yeah, although if we take the perspective that individuals are different people over time (a “person” is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to “forcing someone” to feel pain....
Like many others here, I subscribe to emotivism as well as utilitarianism.
That is inconsistent. Utilitarianism has to assume there’s a fact about the good; otherwise, what are you maximizing? Emotivism insists that there is not a fact about the good. For example, for an emotivist, “You should not have stolen the bread.” expresses the exact same factual content as “You stole the bread.” (On this view, presumably, indicating “mere disapproval” doesn’t count as factual information).
Sure. Then what I meant was that I’m an emotivist with a strong desire to see suffering reduced and pleasure increased in the manner that a utilitarian would advocate, and I feel a deep impulse to do what I can to help make that happen. I don’t think utilitarianism is “true” (I don’t know what that could possibly mean), but I want to see it carried out.
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
checking out the wikipedia article… hmm… I think I agree with emotivism too, to some degree. I already have a habit of saying “but that’s just my opinion”, and being uncertain enough about the validity (validity according to what?) of my preferences, to not dare to enforce them if other people disagree. And emotivism seems like a formalization of the “but that’s just my opinion”. That could be useful.
Yes, that’s the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that’s precisely why we’re having this conversation, as well as why SIAI’s research is so important. :)
good point. And yeah, that’s one of the main issues that’s causing me to doubt whether SIAI has any hope of achieving their mission.
I hope so. Of course, it’s not as though the only two possibilities are “CEV” or “extinction.” There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political “realist” scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
good point. Have you had any contact with Metafire yet? He strongly agrees with you on this. Just recently he started posting to LW.
oh, and “quixotic”, that’s the word I was looking for, thanks :)
If you include paperclippers or suffering-maximizers in your definition of “anyone,” then I’d put the probability close to 0%. If “anyone” just includes humans, I’d still put it less than, say, 10^-3.
heh, yeah, that “significantly less than 50%” was actually meant as an extremely sarcastic understatement. I need to learn how to express stuff like this more clearly.
Yeah, although if we take the perspective that individuals are different people over time (a “person” is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to “forcing someone” to feel pain....
good point! This suggests the possibility of requiring people to go through regular mental health checkups after the Singularity. Preferably as unobtrusively as possible. Giving them a chance to release themselves from any restrictions they tried to place on their future selves. Though the question of what qualifies as “mentally healthy” is… complex and controversial.
When discussing utilitarianism it is important to indicate whether you’re talking about preference utilitarianism or hedonistic utilitarianism, especially in this context.
Right, sorry. I’m referring to total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don’t bother keeping track of which entity experiences the pleasure or pain.
A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.
Indeed. While still a bit muddled on the matter, I lean toward hedonistic utilitarianism, at least in the sense that the only preferences I care about are preferences regarding one’s own emotions, rather than arbitrary external events.
Your comments are tending to be a bit too long.
Thanks for the feedback. I kinda suspected that my comments were too long.
So, um… what would you prefer for me to do instead?
split them into multiple comments?
post them somewhere else (the Transhumanist Wiki?) and link to them from here?
refrain from posting the long comments entirely?
find some way to cut them down?
stick to a single topic per comment, and create multiple comments if I want to discuss multiple topics?
wait longer between posting these comments?
something else I haven’t thought of?
Yes, to various extents. (I should have been more helpful in the grandparent comment.)
I think the main problem is that you seem to have a “stream of consciousness” style of writing. If you add an additional editing step afterwards (I’m just assuming you’re not doing much of this now), then you can figure out which points are most important to make and put them succinctly.
The advantage of this, from a utilitarian point of view, is that you can spend less time editing than it will take any particular person to otherwise figure out what you’re trying to say, and thus cause a net benefit to lots of people.
(ETA: note that the great-grandparent comment seems less subject to this particular criticism than some others)
Thanks again for the feedback.
As I was writing the following points, I noticed that I was just making excuses. But instead of deleting them, I left them in, but commented on them, because they felt important and relevant.
I was already aware of the utilitarian argument that it’s worth 1 minute of effort at rewriting in order to save 60 people one second each of reading, and I am making at least some attempt to do that. (correction: no, I didn’t actually do the math. I should at least try to do the math; a rough sketch of it follows this list.)
I already spend lots of time reviewing my comments before I post them. I don’t post them until I scan through them once without noticing anything wrong. (correction: no, lately I’ve been posting them before I complete a full scan without finding any new issues, and I’ve been fixing some things by editing the comments after posting them. I should be stricter about following this rule, and as I mention below, I should add new issues to the list of things to scan for.)
Normally I have the opposite problem, spending way too much time reviewing what I wrote, which ends up resulting in other important things not getting said, because I’m spending too much time reviewing and never get around to writing the next thing. (correction: this will probably become less of an issue now that I’ve finished writing all of these “about me” comments.)
It usually feels like there’s a sense of urgency, that if I take too long to write a reply, then everyone will have moved on to other topics, and no one will end up reading my comment. (correction: sometimes there is a reason to post stuff asap, other times there isn’t. I need to learn how to tell the difference.)
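(A minimal sketch of the math from the first point above, using the numbers from that example; the reader count and per-reader savings are of course just assumptions:)

```python
# Is extra editing time worth it? In pure time terms, editing pays off when the
# time spent editing is less than the total reading time it saves across readers.

def editing_pays_off(edit_seconds, expected_readers, seconds_saved_per_reader):
    """Return True if time spent editing is less than total reader time saved."""
    return edit_seconds < expected_readers * seconds_saved_per_reader

print(editing_pays_off(60, 60, 1))  # False: 60 s of editing vs 60 s saved is exactly break-even
print(editing_pays_off(60, 60, 5))  # True: the edit saves far more reader time than it costs
```

This only counts time, not the comprehension gains from a clearer comment, so if anything it understates the case for editing.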
But these are just excuses. If I’m going to continue posting comments, then I had better learn how to improve the quality of my comments.
The stream-of-consciousness style comments were something I wanted feedback on, and now I got the feedback, thanks. The feedback says that stream-of-consciousness-style comments are not acceptable. I’ll try to stop doing that.
And that means that in addition to the issues I’m already scanning for, I’ll also scan for… the specific reasons why stream-of-consciousness-style writing is annoying to read:
I need to present the points in the order that would make the most sense to the reader, not in just whatever order I happen to think of them in.
I need to erase points that I discover make no sense, rather than leaving them in just because it feels like there may be some reason to document the mistake.
I need to cut out off-topic side-comments entirely.
I need to stop using phrases like “oh, by the way”.
I need to cut out any meta-comments from inside my comments, unless for some reason they really are necessary.
I especially need to cut out any comments about things like “my brain’s excuse-generator”. I need to remove the offending text, rather than explaining what caused me to write it. Unless it happens to be specifically on-topic, like in this comment.
And probably some more things I haven’t thought of.
But so far that just answers what to do about the stream-of-consciousness-style writing. It doesn’t answer what to do about the excessive length of the comments. This comment is also really long, but I’m posting it anyway, because it feels necessary.
Actually, I should ask what everyone else does. Or maybe I should just ask what you in particular do, Thom. Though this is already far off the original post’s topic...
This is probably too late, but I really love your writing style, especially your stream of consciousness.
The “excuse generator” points at something I suspect is a very fast and active part of a lot of people’s minds, but it’s probably worth a post or at least an extended open thread comment of its own.
As far as I can tell, I write so as to make things clear to the state of mind I was in just before I thought of something I’m trying to get across.
Thanks for the feedback, that last sentence sounds like a good idea, I’ll go ahead and try it.
There have probably already been lots of posts about the “excuse generator”, though not specifically by that name. For example, Eliezer’s post Against Devil’s Advocacy, though that’s not quite the same thing.
And then there’s all the posts on rationalization.
You could also almost certainly convert a considerable percentage of the planet’s mass to computronium without impacting the planet’s ability to support life. A planet isn’t a very mass-efficient habitat, and I doubt many people would even notice if most of the core was removed, provided it was replaced with something structurally and electrodynamically equivalent.
You need the mass of the core to maintain the gravity. What sort of physics do you have in mind?
If computronium is of density equal to or greater than iron, physics wouldn’t need to be changed. Remove the core, replace it with a roughly spherical wad of perfected brain-matter, plus whatever structural supports are necessary to keep the crust in place, and Newton’s Shell Theorem says gravity would be the same. Add some electromagnets for the poles, and channel waste heat from the mechanisms inside to simulate volcanism where appropriate.
Even if computronium turns out to have lower density than iron, and for whatever reason it’s unacceptable to reduce surface gravity or transplant the luddites to an otherwise earthlike planet of correspondingly greater diameter, some of the core’s mass could be converted and the remainder compressed into a black hole. Again, shell theorem means there’s no difference from the outside.
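(A rough numerical sanity check of the above, as a sketch with standard approximate constants; the ~1/3 core-mass fraction is just a commonly cited approximation:)

```python
# Shell theorem sanity check: surface gravity depends only on the total mass
# enclosed and the radius, not on how that mass is arranged, as long as it
# remains spherically symmetric. Approximate standard values.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8               # speed of light, m/s
earth_mass = 5.97e24    # kg
earth_radius = 6.371e6  # m
core_fraction = 0.32    # rough figure: the core is roughly a third of Earth's mass

# Surface gravity is unchanged if the core is swapped for anything of equal
# total mass arranged spherically symmetrically.
g = G * earth_mass / earth_radius**2
print(g)  # ~9.8 m/s^2

# If the core's mass were compressed into a black hole, its Schwarzschild
# radius would be tiny compared to the planet.
core_mass = core_fraction * earth_mass
r_s = 2 * G * core_mass / c**2
print(r_s)  # a few millimetres
```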
good point, thanks for mentioning that.
heh, that’s actually what I meant by leaving the planet “mostly intact”, but I should have made that clearer.