But that’s complete nonsense. I already explained this by saying here:
Nor was I arguing that human activity was “uninteresting”
That wasn’t an explanation, it was an assertion. I was not satisfied that that assertion was supported by the rest of your statements.
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind.
That is a much better explanation of your position. You are correct that that is not a moral assertion. However, before that you said:
IMO, boredom is best seen as being a universal instrumental value—and not as an unfortunate result of “universalizing anthropomorphic values”.
And also:
....My position is that we had better wind up approximating the instrumental value of boredom (which we probably do pretty well today anyway—by the wonder of natural selection) - or we are likely to be building a rather screwed-up civilisation. There is no good reason why this would lead to a “worthless, valueless future”—which is why Yudkowsky fails to provide one.
Saying something is “screwed up” is a moral judgement. Saying that a future where boredom has no terminal value and exists purely instrumentally is not valueless is a moral judgement. Any time you compare different scenarios and argue that one is more desirable than the others, you are making a moral judgement. And the ones you made were horrifying moral judgements, because they advocate passively standing aside while creatures destroy everything human beings value.
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind.
Even if that’s true, a lot more fun and complexity would be generated by a human-like civilization on the way to that end than by paperclippers making paperclips.
Besides, humans are often seen making a conscious effort to prevent things from being reduced to a maximum entropy state. We make a concerted effort to preserve places and artifacts of historical significance, and to prevent ecosystems we find beautiful from changing. Human civilization would not reduce the world to a maximum entropy state if it retains the values it does today.
The intended lesson here was NOT that human civilization is somehow uninteresting, but rather that optimisation processes with simple targets can produce vast complexity (machine intelligence, space travel, nanotechnology, etc).
Complexity is not necessarily a goal in itself. People want a complex future because we value many different things, and attempting to implement those values all at once leads to a lot of complexity. For instance, we value novelty, and novelty is more common in a complex world, so we generate complexity as an instrumental goal toward the achievement of novelty.
The fact that paperclip maximizers would build big, cool machines does not make a future full of them almost as interesting as a civilization full of intelligences with human-like values. Big cool machines are not nearly as interesting as the things people do, and I say that as someone who finds big cool machines far more interesting than the average person.
Failure to address your other points is not a sign of moral weakness—it just doesn’t look as though the discussion is worth my time.
My other points are the core of my objection to your views. Besides, it would take maybe ten seconds to write “I wouldn’t torture children to increase the entropy levels”; I think that, at least, would be worth your time. Looking at your website, particularly your essay on Nietzscheanism, I think I see the wrong turn you made in your thought processes.
When you discuss W. D. Hamilton you state, quite correctly, that:
Hamilton has suggested that the best way for selfish individuals to fool everyone into thinking that they are nice is to actually believe it themselves (and practice a sort of hypocritical double-think to either self-justify or forget about any non-nice behaviour)... Here, Hamilton is suggesting that merely pretending to be a selfless altruist is not good enough—you actually have to believe it yourself to avoid being detected by all the smart psychologists in the rest of society—since they are experts in looking for signs of selfishness.
You then go on to argue that in the more transparent future such self-deception will be impossible and people will be forced to become proud Nietzscheans. You say:
Once humanity becomes a little bit more enlightened, things like recognising your nature and aspiring to fulfill the potential of your genes may not be regarded in such a negative light.
Your problem is that you didn’t take the implications of Hamilton’s work far enough. There’s an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF (inclusive genetic fitness) maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.
Why, then, do people do so many nasty things if we evolved to be genuine altruists? Well, evolution, being the amoral monster it is, metaphorically “realized” that being an altruist all the time might decrease our IGF, so it metaphorically “cursed” us with akrasia and other ego-dystonic mental health problems that prevent us from fulfilling our altruistic potential. Self-deception, in this account, does not exist to make us think we’re altruists when we’re really IGF maximizers; it exists to prevent us from recognizing our akrasia and fighting it.
This theory has much more predictive power than your self-deception theory; it explains, for instance, the correlation between conscientiousness (willpower) and positive behavior. But it also has implications for the moral positions you take. If humans evolved to cherish values like altruism for their own sake (and to be sabotaged from achieving them by akrasia), rather than to maximize IGF and deceive ourselves about it, then it is a very bad thing if those values are destroyed and replaced by something selfish and nasty like what you call “Nietzscheanism”.
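To make the trustworthiness claim a bit more concrete, here is a toy partner-choice simulation. This is purely my own illustrative sketch, not anything from Hamilton or from your essay, and every number in it (payoffs, defection rate, detection rate) is an arbitrary assumption. It just shows the mechanism: a committed altruist pays a small cost every round, a faker occasionally defects for a bonus but risks being caught, and because partners are recruited by reputation, the faker soon stops getting invited at all.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

ROUNDS = 2000          # number of joint-venture opportunities
VENTURE_PAYOFF = 1.0   # payoff a partner gets from a venture
ALTRUISM_COST = 0.1    # cost a committed altruist pays per venture (helping, sharing)
CHEAT_BONUS = 0.5      # extra payoff a faker grabs when it defects
DEFECT_RATE = 0.2      # how often the faker defects
DETECT_RATE = 0.5      # chance a defection is noticed, hurting reputation

class Agent:
    def __init__(self, name, genuine):
        self.name = name
        self.genuine = genuine    # True = altruist to the core, False = self-deceiving faker
        self.reputation = 1.0     # how trustworthy others believe this agent to be
        self.payoff = 0.0         # running total, used here as a crude fitness proxy

    def play_venture(self):
        if self.genuine:
            # Always cooperates: venture payoff minus the standing cost of being genuinely nice.
            self.payoff += VENTURE_PAYOFF - ALTRUISM_COST
        elif random.random() < DEFECT_RATE:
            # Defects: grabs a bonus, but sharp-eyed partners sometimes notice.
            self.payoff += VENTURE_PAYOFF + CHEAT_BONUS
            if random.random() < DETECT_RATE:
                self.reputation *= 0.5
        else:
            self.payoff += VENTURE_PAYOFF

agents = [Agent("genuine altruist", True), Agent("selfish faker", False)]

for _ in range(ROUNDS):
    # Partners are recruited in proportion to reputation:
    # trustworthy agents get invited into ventures more often.
    chosen = random.choices(agents, weights=[a.reputation for a in agents])[0]
    chosen.play_venture()

for a in agents:
    print(f"{a.name}: reputation={a.reputation:.3f}, total payoff={a.payoff:.1f}")
```

Under these assumed numbers the genuine altruist ends up with far more total payoff than the faker, despite never taking the cheat bonus. The point is only that reputation-based partner choice can plausibly make genuine niceness pay; it is not a claim that these particular parameters describe real human evolution.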
Your problem is that you didn’t take the implications of Hamilton’s work far enough.
I do say in my essay: “I think Hamilton’s points are good ones”.
There’s an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.
You need to look up “altruism”—since you are not using the term properly. An “altruist”, by definition, is an agent that takes a fitness hit for some other agent with no hope of direct or indirect repayment. You can’t argue that altruists exhibit a net fitness gain - unless you are doing fancy footwork with your definitions of “fitness”.
Your account of human moral hypocrisy doesn’t look significantly different from mine to me. However, you don’t capture my own position—which may help to explain your perceived difference. I don’t think most humans are “really IGF maximizers”. Instead, they are victims of memetic hijacking. They do reap some IGF gains, though—looking at the 7 billion humans.
I find your long sequence of arguments that I am mistaken on this issue to be tedious and patronising. I don’t share your values is all. Big deal: rarely do two humans share the same values.