Wow, just wow. I’m extremely disappointed with Schneider and Sagan. Not because of their actual research, which looks like some interesting and useful stuff on thermodynamics. No, what’s disappointing and embarrassing is the deceitful way they pretend that they’ve discovered life’s “purpose.” Like many words, the word “purpose” has multiple referents: sometimes it refers to profound concepts, other times to trivial ones. Schneider and Sagan have discovered some insights into one of the more trivial concepts the word “purpose” can refer to, but are using verbal sleight of hand to pretend they’ve found the answer to one of the word’s more profound referents.
When someone says they are looking for “life’s purpose” what they mean is that they are looking for values and ideals to live their life around. A very profound concept. When Schneider and Sagan say they have found life’s purpose what they are saying is, “We pretended that the laws of physics were a person with a utility function and then deduced what that make-believe utility function was based on how the laws of physics caused life to develop.”
Now, doing that has its place; it’s easier for human brains to model other people than it is for them to model physics, so sometimes it is useful to personify physics. But the “purpose” you discover from that is ultimately trivial. It doesn’t give you values and ideals to live your life around. It just describes forces of nature in an inaccurate, but memorable way.
I’m not saying it’s absurd to say that entropy tends to increase; that’s basic physics. But it’s absurd to pretend that entropy is the deep, meaningful purpose of human life. Purpose is something humans give themselves, not something that mindless physical laws bestow upon them. Schneider and Sagan may be onto something when they suggest that life has a tendency to destroy gradients. But if they claim that is the “purpose” of human life in any meaningful sense they are dead wrong.
I read Into the Cool a while ago, and it’s a bad book. Schneider and Sagan posit a law of nonequilibrium thermodynamics: “nature abhors a gradient”. They go on to explain pretty much everything in the universe—from fluid dynamics to abiogenesis to evolution to human aging to the economy to the purpose of life—as a consequence of this law. The thing is, all of this is done in a very hand-wavy fashion, without any math.
Now, there is definitely something interesting about the fact that when there are gradients in thermodynamic parameters we often see the emergence of stable, complex structures that can be seen as directed towards driving the system to equilibrium. But when the authors start claiming that this is basically the origin of all macroscopic structure, even when the “gradient” involved isn’t really a thermodynamic gradient, things start getting crazy. Benard convection occurs when there is a temperature gradient in a fluid; arbitrage occurs when there is a price gradient in an economy. These are both, according to the authors, consequences of the same universal law: nature abhors a gradient.
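To make the “abhors a gradient” observation concrete, here is a minimal toy sketch (my own back-of-the-envelope illustration, not the authors’ formalism): two heat reservoirs connected by a conductive link. The temperature gradient decays toward equilibrium, and every step of that decay produces entropy, because heat leaves the hot body at a higher temperature than it arrives at the cold one.

```python
# Toy illustration (not Schneider & Sagan's model): two heat reservoirs
# coupled by a conductive link. The gradient decays toward equilibrium,
# and each transfer produces entropy -- plain second-law bookkeeping.

def relax(T_hot=400.0, T_cold=300.0, heat_capacity=1.0, k=0.01, dt=1.0, steps=2000):
    entropy_produced = 0.0
    for _ in range(steps):
        q = k * (T_hot - T_cold) * dt               # heat flowing hot -> cold
        entropy_produced += q / T_cold - q / T_hot  # dS >= 0 while a gradient exists
        T_hot -= q / heat_capacity
        T_cold += q / heat_capacity
    return T_hot, T_cold, entropy_produced

T_hot, T_cold, dS = relax()
print(f"remaining gradient: {T_hot - T_cold:.3f} K, entropy produced: {dS:.4f} (arb. units)")
```

Nothing in this sketch requires personifying anything; the “drive toward equilibrium” is just ordinary thermodynamic bookkeeping.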
Perhaps Schneider has worked his ideas out with greater rigor elsewhere (if he has, I would like to see it), but Into the Cool is in the same category as Per Bak’s How Nature Works and Mark Buchanan’s Ubiquity, a popular book that extrapolates useful insights to such an absurd extent that it ventures into mild crackpot territory.
But when the authors start claiming that this is basically the origin of all macroscopic structure, even when the “gradient” involved isn’t really a thermodynamic gradient, things start getting crazy. Benard convection occurs when there is a temperature gradient in a fluid; arbitrage occurs when there is a price gradient in an economy. These are both, according to the authors, consequences of the same universal law: nature abhors a gradient.
That’s right—MEP is a statistical characterisation of universal Darwinism, which explains a lot about CAS—including why water flows downhill, turbulence, crack propagation, crystal formation, and lots more.
Schneider’s original work on the topic is Life as a manifestation of the second law of thermodynamics.
Of course, while this work has some scientific interest (a fact I never denied), it is worthless for determining what the purpose of intelligent life and civilization should be. All it does is explain where life came from; it has no value in determining what we want to do now and what we should do next.
Your original statement that started this discussion was a claim that our civilization maximizes entropy. That claim was based on a trivial map-territory confusion, confounding two different referents of the word “maximize”: Referent 1 being “is purposefully designed by intelligent beings to greatly increase something” and Referent 2 being “has a statistical tendency to greatly increase something.”
When Eliezer claimed that intelligent creatures and their civilization would only be interesting if they purposefully acted to maximize novelty, you attempted to refute his claim by saying that our civilization is not purposefully acting to maximize novelty because it has a statistical tendency to greatly increase entropy. In other words, you essentially said “Our civilization does not maximize(1) novelty because it maximizes(2) entropy.” Your entire argument is based on map-territory confusion.
Your comment is a blatant distortion of the facts. Eliezer’s only references to maximizing are to an “expected paperclip maximizer”. He never talks about “purposeful” maximisation. Nor did I attempt the refutation you are attributing to me. You’ve been reduced to making things up :-(
Eliezer’s only references to maximizing are to an “expected paperclip maximizer”.
Eliezer never literally referred to the word “maximize,” but the thrust of his essay is that a society that purposefully maximizes, or at least greatly increases novelty, is far more interesting than one that doesn’t. He claimed that, for this reason, a paperclip maximizing civilization would be valueless, because paperclips are all the same.
Nor did I attempt the refutation you are attributing to me.
Our civilisation maximises entropy—not paperclips—which hardly seems much more interesting.
In this instance you are using “maximize” to mean “Has a statistical tendency to increase something.” You are claiming that everything humans do is uninteresting because it has a statistical tendency to increase entropy and destroy entropy gradients, and entropy is uninteresting. You’re ignoring the fact that when humans create, we create art, socialization, science, literature, architecture, history, and all sorts of wonderful things. Paperclip maximizers just create the same paperclip, over and over again. It doesn’t matter how much entropy gets made in the process; humans are a quadrillion times more interesting because there is so much diversity in what we do.
Claiming that all the wonderful, varied, and diverse things humans do are no more interesting than paperclipping, just because you could describe them as “entropy maximization,” is ridiculous. You might as well say that all events are equally uninteresting because you can describe all of them as “stuff happening.”
So yes, Eliezer never used the word “maximize” but he definitely claimed that creatures that didn’t value novelty would be boring. And you did attempt to refute his claim by claiming that our civilization’s statistical tendency to increase entropy means that creating art, conversation, science, etc. is no different from paperclipping. I think my objection stands.
Our civilisation maximises entropy—not paperclips—which hardly seems much more interesting.
In this instance you are using “maximize” to mean “Has a statistical tendency to increase something.” You are claiming that everything humans do is uninteresting because it has a statistical tendency to increase entropy and destroy entropy gradients, and entropy is uninteresting.
You’re in a complete muddle about my position. ‘Maximize’ doesn’t mean ‘increase’. The maximum entropy principle isn’t just “a statistical tendency to increase entropy”. You are apparently thinking about the second law of thermodynamics—which is a completely different idea. Nor was I arguing that human activity was “uninteresting”. Since you so obviously don’t have a clue what I am talking about, I see little point in continuing. Perhaps look into the topic, and get back to us when you know something about it.
Our civilisation maximises entropy—not paperclips—which hardly seems much more interesting.
What am I supposed to interpret that as besides “Human activity is uninteresting.”? Or at least, “human activity is as uninteresting as paperclip making.”
Since you so obviously don’t have a clue what I am talking about, I see little point in continuing. Perhaps look into the topic, and get back to us when you know something about it.
Stop trying to pretend that this is just a discussion about physics and evolution. You derive all sorts of horrifying moral positions from the science you are citing and when someone calls you out on it you act like the problem is that they don’t understand the science properly. I have some problems with your science (you seem to like talking about big ideas that aren’t that strongly supported), but my main objection is to your ethical positions. You constantly act like what is common in nature is what is morally good.
The whole reason I have been so hard on you about personifying forces of nature is that you constantly switch between the descriptive and the normative. You act like humans have a moral duty to maximize entropy and that we’re bad, bad people if we don’t keep evolving. I think that if you stopped personifying natural forces it would make it easier for you to spot when you do this.
Again, answer my moral dilemma: “Would you torture fifty children to death in order to greatly increase the level of entropy in the universe?” Assume that the increase would be greater than what the kids would be able to accomplish themselves if you allowed them to live.
I doubt you even consider moral dilemmas like this because you are interested in talking about big cool ideas, not about challenging them or considering them critically. MEP might have originally been a useful scientific theory when it was first formulated, but you’ve turned it into a Fake Utility Function.
Stop trying to pretend that this is just a discussion about physics and evolution. You derive all sorts of horrifying moral positions from the science you are citing and when someone calls you out on it you act like the problem is that they don’t understand the science properly.
I don’t know what you are talking about. What “horrifying moral positions”?
I don’t know what you are talking about. What “horrifying moral positions”?
This whole conversation started because you denigrated human values, saying that all the glorious and wonderful things our civilization does “hardly seems much more interesting” than tiling the universe with paperclips.
You have frequently implied that the metaphorical “goals” of abstract statistical processes like MEP and natural selection are superior to human values like love, compassion, freedom, morality, happiness, novelty, etc. For instance, here you say:
Similarly with human values: those are a bunch of implementation details—not the real target.
The moral position you keep implicitly arguing for, again and again, is that the metaphorical “goals” of abstract natural processes like MEP and natural selection represent real objective morality, while the values and ideals that human beings base their lives around are just a bunch of “implementation details” that it’s perfectly okay to discard if they get in the way. This is exactly backwards. Joy, love, sympathy, curiosity, compassion, novelty, art, etc. are what is really valuable. Preserving these things is what morality is all about. The solemn moral duty of the human race is to make sure that a sizable portion of the future creatures that will exist share these values, even if they do not physically resemble humans.
I was also extremely horrified by your response to the dilemma I posed you. I attempted to prove that MEP is a terrible moral rule by asking you if you would torture children to death in order to greatly increase entropy. The correct response was: “Of course I wouldn’t, the lives and happiness of children are more important than MEP.” Instead of saying that, you changed the subject by saying that the method of entropy production I suggested was inefficient because it might destroy living systems. This implies, as asparisi put it:
So, if the nova’s explosion did not destroy any living systems, you would happily trade the 50 kids for the nova explosion?
Just to be clear, I don’t think that you would ever torture children. I think the beliefs you write about are, thankfully, completely divorced from your behavior. MEP is your Fake Utility Function, not your real one. But that doesn’t change the fact that it’s horrifying to read about. It’s discouraging that I try to tell people that studying science won’t destroy your moral sense, that it won’t turn you into a Hollywood Rationalist, but then encounter someone to whom it’s done precisely that.
It’s discouraging that I try to tell people that studying science won’t destroy your moral sense, that it won’t turn you into a Hollywood Rationalist, but then encounter someone to whom it’s done precisely that.
Can you expand on your reasons for believing that studying science was causal to what you categorize here as the destruction of Tim’s moral sense? (I’m not asking why you believe his moral sense has been destroyed; I think I understand your reasoning there. I’m asking why you believe studying science was the cause.)
Because he constantly uses scientific research to justify his moral positions, and then when I challenge them he accuses me of just not understanding the science well enough. He switches back and forth between statements about science and normative statements about what would make the future of humanity good without seeming to notice. Learning about evolutionary science seems to have put him in an Affective Death Spiral around evolution. (I know the symptoms; I used to be in one around capitalism after I started studying economics.) It’s one of the more extreme examples of the naturalistic fallacy I’ve ever seen.
Now, since you’ve read some of my other posts you know that I don’t necessarily accept the strong naturalistic fallacy, the idea that ethical statements cannot be reduced to naturalistic ones at all. But I definitely believe in the weaker form of the naturalistic fallacy, the idea that things that are common in nature are not necessarily good. And that is the form of the fallacy Tim makes when he says absurd things like our civilization maximizes entropy or that our values are not precious things that need to be preserved if they get in evolution’s way.
Studying the science of evolution certainly wasn’t the sole cause, maybe not even the main cause, of Tim’s ethical confusion, but it certainly contributed to it.
Just to be clear, I don’t think that you would ever torture children.
I totally would. Then—if the situation demanded it and if I didn’t have a fat guy available—I’d throw them all in front of a trolley. Because not torturing children is evil when the alternative to the contrived torturing is a contrived much worse thing.
I meant that I didn’t ever think he’d torture children for no reason other than to increase the level of entropy in the universe (in my original contrived hypothetical the entropy increase was accomplished by having a sadistic alien make a star go nova in return for getting to watch the torture. The star was far enough away from inhabited systems that the radiation wouldn’t harm any living things.)
I wasn’t meaning to set up “not torturing children” as a deontological rule. Obviously there are some circumstances where it is necessary, such as torturing one child to prevent fifty more children from being tortured for an equal amount of time per child. What I was trying to do was illustrate that Tim’s Maximum Entropy Principle was a really, really bad “maximand” to follow by creating a hypothetical where following it would make you do something insanely evil. I think we can both agree that entropy maximization (at least as an end in itself rather than as a byproduct of some other end) is far less important than preventing the torture of children.
Tim responded to my question by sidestepping the issue: instead of engaging the hypothetical, he said that a nova was a bad way to maximize entropy because it might kill living things that would go on to produce more entropy, even though I tried to constrain the hypothetical so that that wasn’t a possibility.
This whole conversation started because you denigrated human values, saying that all the glorious and wonderful things our civilization does “hardly seems much more interesting” than tiling the universe with paperclips.
But that’s complete nonsense. I already explained this by saying here:
Nor was I arguing that human activity was “uninteresting”
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind. This is not some kind of moral assertion; it’s just a straightforward description of how these systems would behave. Entropy is, I claimed, not much more interesting than paperclips.
The intended lesson here was NOT that human civilization is somehow uninteresting, but rather that optimisation processes with simple targets can produce vast complexity (machine intelligence, space travel, nanotechnology, etc).
This is really just a particular case of the “simple rules, complex dynamics” theme that we see in complex systems theory (e.g. game of life, rule 30, game of go, etc).
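As a quick throwaway sketch (mine, just to make that point concrete), rule 30, one of the examples above, fits its entire update rule into a single 8-entry lookup table, yet the pattern it produces looks effectively random:

```python
# Throwaway sketch of Wolfram's rule 30: a one-dimensional, two-state cellular
# automaton. The complete "physics" is the 8-bit number 30, read as a lookup
# table over (left, centre, right) neighbourhoods.

RULE = 30
WIDTH, STEPS = 63, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [
        (RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```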
So: this whole “horrifying moral position” business is your own misunderstanding.
Failure to address your other points is not a sign of moral weakness—it just doesn’t look as though the discussion is worth my time.
But that’s complete nonsense. I already explained this by saying here:
Nor was I arguing that human activity was “uninteresting”
That wasn’t an explanation, it was an assertion. I was not satisfied that that assertion was supported by the rest of your statements.
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind.
That is a much better explanation of your position. You are correct that that is not a moral assertion. However, before that you said:
IMO, boredom is best seen as being a universal instrumental value—and not as an unfortunate result of “universalizing anthropomorphic values”.
And also:
....My position is that we had better wind up approximating the instrumental value of boredom (which we probably do pretty well today anyway—by the wonder of natural selection) - or we are likely to be building a rather screwed-up civilisation. There is no good reason why this would lead to a “worthless, valueless future”—which is why Yudkowsky fails to provide one.
Saying something is “screwed up” is a moral judgement. Saying that a future where boredom has no terminal value and exists purely instrumentally is not valueless is a moral judgement. Any time you compare different scenarios and argue that one is more desirable than the others you are making a moral judgement. And the ones you made were horrifying moral judgements because they advocate passively standing aside while creatures destroy everything human beings value.
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind.
Even if that’s true, a lot more fun and complexity would be generated by a human-like civilization on the way to that end than by paperclippers making paperclips.
Besides, humans are often seen making a conscious effort to prevent things from being reduced to a maximum entropy state. We make a concerted effort to preserve places and artifacts of historical significance, and to prevent ecosystems we find beautiful from changing. Human civilization would not reduce the world to a maximum entropy state if it retains the values it does today.
The intended lesson here was NOT that human civilization is somehow uninteresting, but rather that optimisation processes with simple targets can produce vast complexity (machine intelligence, space travel, nanotechnology, etc).
Complexity is not necessarily a goal in itself. People want a complex future because we value many different things, and attempting to implement those values all at once leads to a lot of complexity. For instance, we value novelty, and novelty is more common in a complex world, so we generate complexity as an instrumental goal toward the achievement of novelty.
The fact that paperclip maximizers would build big, cool machines does not make a future full of them almost as interesting as a civilization full of intelligences with human-like values. Big cool machines are not nearly as interesting as the things people do, and I say that as someone who finds big cool machines far more interesting than the average person.
Failure to address your other points is not a sign of moral weakness—it just doesn’t look as though the discussion is worth my time.
My other points are the core of my objection to your views. Besides, it would take like ten seconds to write “I wouldn’t torture children to increase the entropy levels”; I think that, at least, would be worth your time. Looking at your website, particularly your essay on Nietzscheanism, I think I see the wrong turn you made in your thought processes.
When you discuss W. D. Hamilton you state, quite correctly, that:
Hamilton has suggested that the best way for selfish individuals to fool everyone into thinking that they are nice is to actually believe it themselves (and practice a sort of hypocritical double-think to either self-justify or forget about any non-nice behaviour)… Here, Hamilton is suggesting that merely pretending to be a selfless altruist is not good enough—you actually have to believe it yourself to avoid being detected by all the smart psychologists in the rest of society—since they are experts in looking for signs of selfishness.
You then go on to argue that in the more transparent future such self-deception will be impossible and people will be forced to become proud Nietzscheans. You say:
Once humanity becomes a little bit more enlightened, things like recognising your nature and aspiring to fulfill the potential of your genes may not be regarded in such a negative light.
Your problem is that you didn’t take the implications of Hamilton’s work far enough. There’s an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.
Now, why then do people do so many nasty things if we evolved to be genuine altruists? Well, evolution, being the amoral monster it is, metaphorically “realized” that being an altruist all the time might decrease our IGF, so it metaphorically “cursed” us with akrasia and other ego-dystonic mental health problems that prevent us from fulfilling our altruistic potential. Self-deception, in this account, does not exist to make us think we’re altruists when we’re really IGF maximizers; it exists to prevent us from recognizing our akrasia and fighting it.
This theory has much more predictive power than your self-deception theory; it explains things like why there is a correlation between conscientiousness (willpower) and positive behavior. But it also has implications for the moral positions you take. If humans evolved to cherish values like altruism for their own sake (and be sabotaged from achieving them by akrasia), rather than to maximize IGF and deceive ourselves about it, then it is a very bad thing if those values are destroyed and replaced by something selfish and nasty like what you call “Nietzscheanism”.
Your problem is that you didn’t take the implications of Hamilton’s work far enough.
I do say in my essay: “I think Hamilton’s points are good ones”.
There’s an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.
You need to look up “altruism”—since you are not using the term properly. An “altruist”, by definition, is an agent that takes a fitness hit for some other agent with no hope of direct or indirect repayment. You can’t argue that altruists exhibit a net fitness gain - unless you are doing fancy footwork with your definitions of “fitness”.
Your account of human moral hypocrisy doesn’t look significantly different from mine to me. However, you don’t capture my own position—which may help to explain your perceived difference. I don’t think most humans are “really IGF maximizers”. Instead, they are victims of memetic hijacking. They do reap some IGF gains though—looking at the 7 billion humans.
I find your long sequence of arguments that I am mistaken on this issue to be tedious and patronising. I don’t share your values is all. Big deal: rarely do two humans share the same values.
It may be worth your time to explicitly disclaim the whole “torturing children to blow up stars” position (instead of appearing to dodge it a second or third time), particularly since, if it is a misunderstanding, it is not uniquely Ghatanathoah’s.
When Schneider and Sagan say they have found life’s purpose what they are saying is, “We pretended that the laws of physics were a person with a utility function and then deduced what that make-believe utility function was based on how the laws of physics caused life to develop.”
When biologists say “the purpose of a nose is smelling things” you don’t have to personify mother nature to make sense of what they mean. Personifying the organism is often enough. Since the organism may not be so very different from a person, this is often an easier step.
When biologists say “the purpose of a nose is smelling things” you don’t have to personify mother nature to make sense of what they mean. Personifying the organism is often enough.
That doesn’t change the fact that personification is a way to help people think about reality more easily at the expense of accurately describing it. Noses don’t literally have a purpose. It’s just that organisms that are good at smelling things tend to reproduce more.
The problem with Schneider and Sagan is that they confound this metaphorical meaning of the word purpose (the utility function of a personified entity) with a different meaning (ideals to live your life around). Hence their second book makes an absurd statement* which, when you strip the word “purpose” from it, basically says “knowing that decreasing entropy gradients is a major reason life arose will give you ideals to live your life around.” That’s ridiculous.
*To be fair that statement was a cover blurb, so it’s possible that it was written by the publisher, not Schneider and Sagan.