You said: “Our civilisation maximises entropy.” Our civilization consists of all the humans in the world. When you’re asking what our civilization is trying to maximize you’re asking what the humans of the world are trying to maximize. Humans try to do things they enjoy, things that are fun. Therefore our civilization tries to maximize fun.
I know that because that’s basic human psychology 101. Humans want to be happy and have fulfilled preferences.
Are plants trying to maximise “fun”?
We’re talking about our civilization. In other words, all the humans in the world. Plants aren’t human, so whether they maximize fun is irrelevant. I suppose if you regarded human tools and artifacts as part of our civilization then agricultural plants could be regarded as part of it. But they aren’t the part of our civilization that makes decisions on what to maximize, humans are.
Plants aren’t trying to maximize anything. They’re plants, they don’t have minds. If I was to use the word maximize as liberally as you do I could actually argue that agricultural plants do try to maximize fun, because humans grow them for the purpose of eating, and eating is fun. But that wouldn’t be strictly accurate; plants just execute their genetically coded behaviors, and any purpose they have is really the purpose of the consequentialist minds that grow them, not of the plants. Saying that agricultural plants have any purpose at all is the mind-projection fallacy.
If “fun” is being maximised, why is there so much suffering in the world?
Because some humans are selfish and try to maximize their own fun at the expense of the fun of others. And sometimes we make big mistakes when trying to make the world more fun. But still, most of the time we try to work together to have fun. We aren’t that good at it yet, but we’re trying and keep improving. The world is getting progressively more fun.
If two systems are in contention, is it really the one that is having the most fun that will win?
Yes. Humans who are enjoying life the most are generally regarded as being more successful at life than humans who are not. This is a basic and easily observable fact.
The “fun-as-maximand” theory seems trivially refuted by the facts.
It’s easily confirmed by the facts. As humans have grown richer and more technologically advanced they have devoted more and more of their resources to having fun. Look at the existence of places like Disneyworld for evidence.
“Fun”—if we are trying to treat the concept seriously—is better characterised as the proxy that brains use for the inclusive fitness of their associated organism.
No it isn’t. Brains don’t care about inclusive genetic fitness. At all. They never have. If you want evidence for that, note the fact that humans do things like use condoms. Also note that the growth of the world’s population is slowing and will probably stop by the end of the 21st century if trends continue.
There’s a scientific literature on the subject of what God’s utility function is. Entire books have been written about the topic. I’m familiar with this literature, are you?
That literature has exactly zero relevance to our current discussion, which is what human beings value, care about, and try to maximize. You learn about that by studying basic psychology. Evolutionary theory may give us insights into how humans came to have our current values, but it has no relevance to what we should do now that we have them.
Our values are what we value, how we came to have them is irrelevant. If our values were bestowed on us by an alien geneticist rather than evolution we would behave exactly the same as we do now. Humans don’t give a crap about “god’s utility function.” If they end up increasing entropy it is as a side effect to obtaining their real goals.
We had better talk about “optimization” then, or we will talk past each other.
Optimization has the same problem. Optimization literally refers to a consequentialist creature using its future forecasting abilities to determine how an object or meme would better suit its goals and altering that thing accordingly. Evolution can be metaphorically said to optimize, but that isn’t strictly true. It’s just a form of personification to make thinking about evolution easier.
Strictly speaking, evolution is just a description of a series of trends. Since human minds are bad at modeling trends, but good at modeling other consequentialists, sometimes it’s useful to pretend that evolution is a consequentialist with “goals” and a “utility function” to help people understand it. It’s less scientifically accurate than modeling evolution as a series of trends, but it makes up for it by being easier for a human brain to compute. The problem is that, while most scientists understand this, there are some people who misinterpret this to mean that evolution literally has goals, desires, and utility functions. You appear to be one of these people.
Really? How do you know that?
Because literally speaking, only consequentialist minds maximize things. You might be able to say evolution maximizes things as a useful metaphor, but literally speaking it isn’t true.
Evolution is a gigantic optimization process with a maximand.
No it isn’t. It is useful to pretend that it is because doing so makes it a little easier for the human mind to think about evolution. But really, evolution is just an abstract series of mindless trends.
You claimed above that it is “fun”—and my claim is that it is entropy.
I never claimed evolution tries to maximize fun. I claimed our civilization does. In other words, that the consequentialist minds making up human civilization use their forecasting abilities to foresee possible futures, and then steer the universe towards the one where they are having the most fun.
As I say, there’s a substantial scientific literature on the topic—have you looked at it?
I’m familiar with some of the literature, and I’ve looked at your website. You constantly confuse the metaphorical “goals” evolution has with the real goals that consequentialist minds such as human beings have. For instance you say:
Another example: currently, researchers at ITER in France are working on an enormous fusion reactor, to allow us to accelerate the conversion of order into entropy still further.
This is trivially false, the reason researchers are working on a fusion reactor is to secure human beings cheap renewable energy to have more fun with. The fact that it increases entropy is a side-effect. The consequentialist human minds do not foresee a future with more entropy and take action in order to secure that future. They foresee a future where humans are using cheap energy to have more fun and take actions to secure that future. The entropy increase is an unfortunate, but acceptable side effect.
What you remind me of is one of those theologians who describe God as an “unmoved mover” or something like that and suggest such a thing must exist (which was a reasonable hypothesis at one point in history, even if it isn’t now). They then make the ridiculous leap of logic that because an unmoved mover must exist, and you can call such a thing “God,” that therefore a God with all the ludicrously specific human-like properties described in the Bible must exist.
Similarly, you take some basic facts about evolution and physics that every educated person agrees are true. Then you make bizarre leaps of logic to conclude that human beings care about maximizing IGF and maximizing entropy and other obvious falsehoods. I am not objecting to the evolutionary biology research you cite, I am objecting to the bizarre and unjustified inferences about human psychology and moral philosophy you use that research to make.
We had better talk about “optimization” then, or we will talk past each other.
Optimization has the same problem. Optimization literally refers to a consequentialist creature using its future forecasting abilities to determine how an object or meme would better suit its goals and altering that thing accordingly.
Strictly speaking, evolution is just a description of a series of trends. Since human minds are bad at modeling trends, but good at modeling other consequentialists, sometimes it’s useful to pretend that evolution is a consequentialist with “goals” and a “utility function” to help people understand it. It’s less scientifically accurate than modeling evolution as a series of trends, but it makes up for it by being easier for a human brain to compute. The problem is that, while most scientists understand this, there are some people who misinterpret this to mean that evolution literally has goals, desires, and utility functions. You appear to be one of these people.
Feel free to substitute “maximisation” terminology if my preferred lingo causes you conceptual problems. Selfishness, progress and optimisation can all be “cashed out” in more long-winded terms. Remember: teleonomy is just teleology in new clothes.
Another example: currently, researchers at ITER in France are working on an enormous fusion reactor, to allow us to accelerate the conversion of order into entropy still further.
This is trivially false, the reason researchers are working on a fusion reactor is to secure human beings cheap renewable energy to have more fun with. The fact that it increases entropy is a side-effect. The consequentialist human minds do not foresee a future with more entropy and take action in order to secure that future. They foresee a future where humans are using cheap energy to have more fun and take actions to secure that future. The entropy increase is an unfortunate, but acceptable side effect.
This line of reasoning is intuitive, but, I believe, wrong. Destroying energy gradients is actively selected for in lots of ways. For example, it actively deprives competitors of resources. Organisms compete to dissipate sources of order by reaching them quickly and eliminating them before others can. The picture of entropy as an inconvenient side effect seems attractive initially, but doesn’t withstand close inspection.
I don’t deny that properly functioning brains act like hedonic maximisers. Hedonic maximisation is a much weaker explanatory principle than entropy maximisation, though. The latter explains why water flows downhill. Hedonic maximisation is a narrow and weak idea—by comparison.
Then you make bizarre leaps of logic to conclude that human beings care about maximizing IGF and maximizing entropy and other obvious falsehoods.
To clarify, many humans fail to maximise their own inclusive fitnesses—largely because they are malfunctioning—with many of the most common malfunctions being caused by parasites—and the most common parasites being responsible for memetic hijacking. Humans and the ecosystems they are part of really do maximise entropy (subject to constraints), or at least the MEP is a deep and powerful explanatory principle when it comes to complex adaptive systems (CAS) and living systems.
“Fun”—if we are trying to treat the concept seriously—is better characterised as the proxy that brains use for the inclusive fitness of their associated organism.
No it isn’t. Brains don’t care about inclusive genetic fitness. At all. They never have. If you want evidence for that, note the fact that humans do things like use condoms. Also note that the growth of the world’s population is slowing and will probably stop by the end of the 21st century if trends continue.
You are misunderstanding me. Pleasure is literally evolution’s way of getting organisms to do things that help them to increase their inclusive fitness. This idea is true, and is in no way refuted by condoms or the demographic transition.
You are misunderstanding me. Pleasure is literally evolution’s way of getting organisms to do things that help them to increase their inclusive fitness.
It is evolution’s (metaphorical) way. When you say that brains use it as a proxy for genetic fitness that gave the impression that you thought brains literally cared about fitness and were optimizing for it.
You said: “Our civilisation maximises entropy.” Our civilization consists of all the humans in the world. When you’re asking what our civilization is trying to maximize you’re asking what the humans of the world are trying to maximize. Humans try to do things they enjoy, things that are fun. Therefore our civilization tries to maximize fun.
Brains are hedonic maximisers. They’re only about 2% of human body mass, though. There are plenty of other optimisation processes to consider as well—machines, corporations and stock markets also maximise. The picture of civilization as a bunch of human brains is deeply mistaken.
Hedonism is a means to an end. Pleasure is there for a reason. The reason is that it helps organisms reproduce, and organisms reproduce—ultimately—because that’s the best way to maximise entropy—according to the deep principle of the MEP.
Think you can more accurately characterise nature’s maximand? Go right ahead. If David Pearce has his way, hedonism will play a more significant role in the future—until aliens eat our lunch—but whether David Pearce’s future comes to pass remains to be seen.
organisms reproduce—ultimately—because that’s the best way to maximise entropy—according to the deep principle of the MEP.
This “because” doesn’t seem like a meaningful answer to a real question. Life on Earth makes use of some solar and geothermal energy before it heads off into space. Does Earth generate much more entropy than Venus? ETA: it seems to me that in the long-run you get the same effects. In the short run local life can use up free energy more quickly, but it can also stockpile resources for later extraction (fossil fuels, acorns, stellar engineering).
Think you can more accurately characterise nature’s maximand?
Thermodynamics tells us that doing most anything at all increases entropy. Calling that a utility function looks like talking about how the utility functions of falling objects value being closer to large masses.
it seems to me that in the long-run you get the same effects. In the short run local life can use up free energy more quickly, but it can also stockpile resources for later extraction (fossil fuels, acorns, stellar engineering).
Your point here isn’t clear. Organisms stockpile, but they also eat their stockpiles. Ecosystems ultimately leave nothing behind, to the best of their ability. Life produces maximal devastation.
Think you can more accurately characterise nature’s maximand?
Thermodynamics tells us that doing most anything at all increases entropy. Calling that a utility function looks like talking about how the utility functions of falling objects value being closer to large masses.
Except that that particular effect can be explained as a manifestation of the MEP principle—which is much more general. So the idea that objects like to be close to other objects is redundant, unnecessary—and can be discarded on Occamian grounds.
Your point here isn’t clear. Organisms stockpile, but they also eat their stockpiles. Ecosystems ultimately leave nothing behind, to the best of their ability. Life produces maximal devastation.
At any given time, much of the grasslands and fertile ocean are not engaged in photosynthesis because herbivores have cropped the primary producers, reducing short-term entropy production. You can swallow that problem with a catch-all “best of their ability clause,” but now “ability” needs to talk about the ability of herbivores to compete in a sea of ill-defended plants, selfish genes, and so forth.
The move to biological and social systems is an attempt at empirical generalization with some success, since untapped free energy has the potential to power living creatures’ reproduction, and mutants that tap such sources proliferate. Humans can use free energy to power machinery as well as their own bodies, so they tap available resources. Great, you have a correlate for the proliferation of life.
But this isn’t enough to power accurate predictions about the portion of Earth’s surface performing photosynthesis, or whether humanity (or successors) will use up the available resources in the Solar System as quickly as possible, or as quickly as will maximize interstellar colonization and energy use, or much more slowly to increase the total computation that can be performed, or slowly so as to sustain a smaller population with longer lifespans.
Your point here isn’t clear. Organisms stockpile, but they also eat their stockpiles. Ecosystems ultimately leave nothing behind, to the best of their ability. Life produces maximal devastation.
At any given time, much of the grasslands and fertile ocean are not engaged in photosynthesis because herbivores have cropped the primary producers, reducing short-term entropy production. You can swallow that problem with a catch-all “best of their ability clause,” but now “ability” needs to talk about the ability of herbivores to compete in a sea of ill-defended plants, selfish genes, and so forth.
Herbivores cause massive devastation and destruction to plant life. They extend life’s reach underground—where plants cannot live. They led to oil drilling, international flights, global warming and nuclear power. If you want to defend the thesis that the planet would be a better dissipator without them, you have quite a challenge on your hands, it seems to me.
But this isn’t enough to power accurate predictions about the portion of Earth’s surface performing photosynthesis, or whether humanity (or successors) will use up the available resources in the Solar System as quickly as possible, or as quickly as will maximize interstellar colonization and energy use, or much more slowly to increase the total computation that can be performed, or slowly so as to sustain a smaller population with longer lifespans.
MEP is a statistical principle. It illuminates these issues, but doesn’t make them trivial. Compare with natural selection—which also illuminates these areas without trivializing them.
Brains are hedonic maximisers. They’re only about 2% of human body mass, though. There are plenty of other optimisation processes to consider as well—machines, corporations and stock markets also maximise. The picture of civilization as a bunch of human brains is deeply mistaken.
All those things are controlled by brains. They execute the brains’ commands, which are aimed at optimizing the world for fun. They are extensions of the human brains. Now, they might increase entropy or something as a side effect, but everything they do, they do because a brain commanded it.
Hedonism is a means to an end. Pleasure is there for a reason.
Life doesn’t give us reason and purpose. We give life reason and purpose. Speculating on what sort of metaphorical “purposes” life and nature might have might be a fun intellectual exercise, but ultimately it’s just a game. Our purposes come from the desires of our brains, not from some mindless abstract trend. Your tendency to think otherwise is the major intellectual error that keeps you from grokking Eliezer’s arguments.
The reason is that it helps organisms reproduce, and organisms reproduce—ultimately—because that’s the best way to maximise entropy—according to the deep principle of the MEP.
Here’s a question for you: Suppose some super-advanced aliens show up that offer to detonate a star for you. That will generate huge amounts of entropy, far more than you ever could by yourself. All you have to do in return is torture some children to death for the aliens’ amusement. They’ll make sure the police and your friends never find out you did it.
Would you torture those children? No, of course you wouldn’t. Because you care about being moral and doing good and don’t give a crap about entropy. You just think you do because you have a tendency to confuse real human goals with metaphorical, fake “goals” that abstract natural trends have.
Think you can more accurately characterise nature’s maximand?
Why would I need to do that? My main point is that human civilization doesn’t and shouldn’t give a crap about nature’s worthless maximand. When you post comments on Less Wrong, a lot of the time you seem to act like maximizing IGF and entropy are good things that organisms ought to do. You get upset at Eliezer for suggesting we should do something better with our lives. This is because you’re deeply mistaken about the nature of goodness, progress, and values.
But just for fun, I’ll take up your challenge. Nature doesn’t have a maximand. It isn’t sentient. And even if Nature was sentient and did have a maximand, the proper response for the human race would be to ignore Nature and obey their own desires instead of its stupid, evil commands.
That being said, even if you instead asked me to answer the more reasonable question “What trends in evolution sort of vaguely resemble the maximand of an intelligent creature?” I still wouldn’t say entropy maximization. The idea that evolution tends to do that is an illusion created by the Second Law of Thermodynamics. Because of the way that 2LTD works, doing anything for any reason tends to increase entropy. So obviously if an evolved organism does anything at all, it will end up increasing entropy. This creates an illusion that organisms are trying to maximize entropy. Carl Shulman is right, calling entropy nature’s maximand is absurd, you might as well say “being attracted by gravity” or “being made of matter” are what nature commands.
A better (metaphorical) maximand might actually be local entropy minimization. It’s obviously impossible to minimize total entropy, but life has a tendency to decrease the entropy in its local area. Life tends to use energy to remove entropy from its local area by building complex cellular structures. It’s sort of an entropy pump, if you will. So if we metaphorically pretended that evolution had a purpose, it would actually be the reverse of what you claim.
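The “entropy pump” picture is at least thermodynamically consistent: the Second Law only constrains the total entropy of a system plus its surroundings, so a local decrease is perfectly legal as long as it is paid for elsewhere. In standard notation:

```latex
% Second Law for a system exchanging energy with its environment:
\Delta S_{\text{total}} = \Delta S_{\text{local}} + \Delta S_{\text{env}} \ge 0
% so a living system can maintain order locally,
\Delta S_{\text{local}} < 0,
% provided it exports at least as much entropy to its surroundings:
\Delta S_{\text{env}} \ge -\Delta S_{\text{local}} > 0
```

This is essentially Schrödinger’s point in “What is Life?”: organisms keep their own entropy low by dumping a larger amount of entropy into their environment.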
But again, that’s not my main point. My main point is that while you have a lot of good sources for your biology references, you don’t have nearly as good a grasp of basic psychology and philosophy. This causes you to make huge errors when discussing what good, positive ways for life to develop in the future are.
you might as well say “being attracted by gravity” or “being made of matter” are what nature commands.
This makes me want to start a religion where the Creator of the Universe gives points to things that behave like a member of the universe. “Thou shalt be made of matter.” “Thou shalt be attracted by gravitational force.” “Thou shalt increase entropy.” etc. Too bad ‘Scientology’ is taken as a name. Physianity, maybe?
In the beginning, there was nothing. The cosmos were void—timeless, and without form. And, lo, God pointed upon the abyss, and said ‘LET THERE BE ENERGY’ And there was energy. And God pointed to the energy, and said, ‘and let you be bound among yourselves that you may wander the void together, proton to neutron, and proton to proton, and let the electrons always seek their opposite number, within the appropriate energy barrier, and let the photons wander where they will.’ Lo, and god spoke to the stranger particles, for some time, but what He said was secret. And God saw hydrogen, and saw that it was good.
And God saw the particles moving at all different speeds, away from one another, and saw that it was bad, and God said ‘and let the cosmos be bent and cradle the particles, that they may always be brought back together, though they be one billion kilometers apart, within the appropriate energy barrier, of course. And let the curvature of space rise without end with the energy of velocity, that they all be bound by a common yoke.’ And god looked upon the spirals of gas, and saw that it was good.
And god took the gas and energy above, and the gas and energy below, and said ‘and you shall be matter, and you shall be antimatter, and your charges shall ever be in conflict, and never the twain shall meet, except in very small quantities.’ And so there was the matter and the antimatter.
And God saw the cosmos stretching out to a single future, and said ‘And let you all be amplitude configurations, that you may not know thyself from the thy neighbor, and that the future may expand without end.’ And god saw the multiverse, and saw that it was good.
A better (metaphorical) maximand might actually be local entropy minimization. It’s obviously impossible to minimize total entropy, but life has a tendency to decrease the entropy in its local area. Life tends to use energy to remove entropy from its local area by building complex cellular structures. It’s sort of an entropy pump, if you will. So if we metaphorically pretended that evolution had a purpose, it would actually be the reverse of what you claim.
Prigogine actually came up with a genuine entropy minimization principle once (in contrast to your idea—which has never been formalised as a real entropy minimization principle—AFAIK). He called it the theorem of “minimum entropy production”. However, in “Maximum entropy production and the fluctuation theorem” Dewar explained it as a special case of his “MaxEnt” formalism.
Brains are hedonic maximisers. They’re only about 2% of human body mass, though. There are plenty of other optimisation processes to consider as well—machines, corporations and stock markets also maximise. The picture of civilization as a bunch of human brains is deeply mistaken.
All those things are controlled by brains. They execute the brains’ commands, which are aimed at optimizing the world for fun. They are extensions of the human brains. Now, they might increase entropy or something as a side effect, but everything they do, they do because a brain commanded it.
Nope. For instance, look at Kevin Kelly’s book “Out of Control”. Or look into memetics. Human brains are an important force, but there are other maximisation processes going on, in culture, with genes and inside machines.
“Out of Control” appears to be primarily about decentralized decision making processes like democracy and capitalism. I never said that brains controlled the artifacts of civilization in a centralized fashion, I just said that they control them. Obviously human beings use all sorts of decentralized methods to help coordinate with each other.
That being said, while systems are not controlled in a centralized manner, they are restricted in a centralized manner. For instance, capitalism only works properly if people are prevented from killing and stealing. Even if there is no need to centrally control everything to get positive results, there is a need to centrally control some things.
There seems to be a later section in “Out of Control” where Kelly suggests giving up control to our machines is good in the same way that dictators giving up central control to democracy and capitalism is good. This seems short-sighted, especially in light of things like Bostrom’s orthogonality thesis. The reasons democracy and capitalism do so much good are that:
1. Human minds are an important component of them, and (most) humans care about morality, so the systems tend to be steered towards morally good results.
2. There are some centralized restrictions on what these decentralized systems are able to do.
Unless you are somehow able to program the machines with moral values (i.e. make an FAI), turning control over to them seems like a bad idea. Creating moral machines isn’t impossible, but the main point of Eliezer’s writing is that it is much, much harder than it seems. I think he’s quite correct.
As for memetics, the idea impressed me when I first came across it, but there doesn’t seem to have been much development in the field since then. I am no longer impressed. In any case, the main reason memes “propagate” is that they help a brain fulfill its desires in some way, so really ever-evolving memes are just another one of the human mind’s tools in its continuing quest for universal domination.
As for memetics, the idea impressed me when I first came across it, but there doesn’t seem to have been much development in the field since then. I am no longer impressed. In any case, the main reason memes “propagate” is that they help a brain fulfill its desires in some way, so really ever-evolving memes are just another one of the human mind’s tools in its continuing quest for universal domination.
From a biological perspective, brains are seen as being a way for genes and memes to make more copies of themselves.
That this is a valuable point of view is illustrated by some sea squirts—which digest their own brains to further their own reproductive ends.
In nature, genes are fundamental, while brains are optional and expendable.
From a biological perspective, brains are seen as being a way for genes and memes to make more copies of themselves....
...In nature, genes are fundamental, while brains are optional and expendable.
Genes are biologically fundamental, certainly. You will get no argument from me there (although the fact that brains are biologically expendable does not imply that it is moral to expend them). The evidence that memes are more fundamental than brains, however, is not nearly as strong.
It is quite possible to model memes as “reproducing,” by being passed from one brain to another. But most of the time the reason a meme is passed from one brain to another is that it aids the brain in fulfilling its desires in some way. The memes associated with farming, for instance, spread because they helped the brain fulfill its desire to not starve. In instances where brains stopped needing the farming memes to obtain food (such as when the Plains Indians acquired horses and were suddenly able to hunt bison more efficiently) those memes promptly died out.
There are parasitic memes, cult ideologies for instance, that reproduce by exploiting flaws in the brain’s cognitive architecture. But the majority of memes “reproduce” by demonstrating their usefulness to the brain carrying them. You could say that a meme’s “fitness” is measured by its usefulness to its host.
You could say that a meme’s “fitness” is measured by its usefulness to its host.
That wouldn’t be terribly accurate, though. Smoking memes, obesity memes, patriotism memes, and lots of advertising and marketing memes are not good for their hosts, but rather benefit those attempting to manipulate them. However, there’s usually a human somewhere at the end of the chain today.
That probably won’t remain the case, though. After the coming memetic takeover we are likely to have an engineered future—and then it will be memes all the way down.
That probably won’t remain the case, though. After the coming memetic takeover we are likely to have an engineered future—and then it will be memes all the way down.
The memetic takeover you describe would just consist of intelligences running on computer-like substrates instead of organic substrates. That isn’t morally relevant to me, I don’t care if the creatures of the future are made of carbon or silicon. I care about what sort of minds they have, what they value and believe in.
I’m not sure referring to an intelligent creature that is made of computing code instead of carbon as a “meme” is true to the common definitions of the term. I always thought of memes as contagious ideas and concepts, not as a term to describe an entire intellect.
After the memetic takeover there would still be intelligent creatures, they’d just run on a different substrate. Many of them could possibly be brain-like in design or have human-like values. They would continue to exchange memes with each other just as they did before, and those memes would spread or die depending on their usefulness to the intelligent creatures. Just like they do now.
I’m not sure referring to an intelligent creature that is made of computing code instead of carbon as a “meme” is true to the common definitions of the term.
People don’t call the works of Shakespeare a “meme” either. Conventionally, such things are made of memes—and meme products.
The idea that evolution tends to do that is an illusion created by the Second Law of Thermodynamics. Because of the way that 2LTD works, doing anything for any reason tends to increase entropy. So obviously if an evolved organism does anything at all, it will end up increasing entropy. This creates an illusion that organisms are trying to maximize entropy.
This should be misunderstanding #1 in the MEP FAQ. MEP is not the same as the second law. It’s a whole different idea, which you don’t appear to know anything about.
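For anyone following along, here is a schematic (and deliberately informal) contrast between the two claims, just to show they are not the same statement:

```latex
% Second law of thermodynamics: for an isolated system, total entropy
% is non-decreasing over time.
\frac{dS}{dt} \ge 0
% MEP (informal statement): among the non-equilibrium steady states
% available to a driven system, the one selected is claimed to
% maximise the entropy production rate:
\sigma \;=\; \frac{d_i S}{dt} \;\longrightarrow\; \max
```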
Extremum rate principles like MEP have proven very useful for describing the behavior of certain systems, but the extrapolation of the principle into a general law of nature remains hugely speculative. In fact, at this point I think the status of MEP can be described as “not even wrong”, because we do not yet have a rigorous notion of thermodynamic entropy that extends unproblematically to nonequilibrium states. The literature on entropy production usually relies on equations for the entropy production rate that are compatible with our usual definition of thermodynamic entropy when we are dealing with quasistatic transformations, but if we use these rate equations as the basis for deriving a non-equilibrium conception of entropy we get absurd results (like ascribing infinite entropy to non-equilibrium states).
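For background, the rate equations I have in mind are the standard ones from linear irreversible thermodynamics, which are only cleanly justified close to equilibrium:

```latex
% Local entropy production rate as a bilinear sum of thermodynamic
% fluxes J_i and conjugate forces X_i (e.g. a heat flux driven by a
% temperature gradient):
\sigma = \sum_i J_i X_i \;\ge\; 0
% In the linear (near-equilibrium) regime the fluxes are taken
% proportional to the forces, with symmetric Onsager coefficients:
J_i = \sum_j L_{ij} X_j, \qquad L_{ij} = L_{ji}
```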
Dewar’s work, which you link below, is an improvement, in that it operates with a notion of entropy that is clearly defined both in and out of equilibrium, derived from the MaxEnt formalism. But the relationship of this entropy to thermodynamic entropy when we’re out of equilibrium is not obvious. Also, Dewar’s derivation of MEP relies on applying some very specific and nonstandard constraints to the problem, constraints whose general applicability he does not really justify. If I were permitted to jury-rig the constraints, I could derive all kinds of principles using MaxEnt. But of course, that wouldn’t be enough to establish those principles as natural law.
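To sketch the formalism in question (standard MaxEnt applied to microscopic paths Γ, as in Dewar’s papers; the choice of constraints is exactly the step I am objecting to):

```latex
% Maximise the Shannon entropy over path probabilities p_\Gamma ...
S = -\sum_{\Gamma} p_{\Gamma} \ln p_{\Gamma}
% ... subject to normalisation and constraints on expected quantities
% f_k. The result is a Gibbs-type distribution with Lagrange
% multipliers \lambda_k and partition function Z:
p_{\Gamma} = \frac{1}{Z} \exp\!\Big( -\sum_k \lambda_k f_k(\Gamma) \Big)
```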
Extremum rate principles like MEP have proven very useful for describing the behavior of certain systems, but the extrapolation of the principle into a general law of nature remains hugely speculative. In fact, at this point I think the status of MEP can be described as “not even wrong”, because we do not yet have a rigorous notion of thermodynamic entropy that extends unproblematically to nonequilibrium states.
Entropy and MEP are statistical phenomena; thermodynamics is an application of statistics. This has been understood since Boltzmann’s era. Most of the associated “controversy” just looks like ignorance to me.
Entropy maximisation in living systems has been around since Lotka 1922. Universal Darwinism applies it to all CAS. Lots of people don’t understand it—but that isn’t really much of an argument.
Carl Shulman is right: calling entropy nature’s maximand is absurd. You might as well say “being attracted by gravity” or “being made of matter” are what nature commands.
Wow, just wow. I’m extremely disappointed with Schneider and Sagan. Not because of their actual research, which looks like some interesting and useful stuff on thermodynamics. No, what’s disappointing and embarrassing is the deceitful way they pretend that they’ve discovered life’s “purpose.” Like many words, the word “purpose” has multiple referents, sometimes referring to profound concepts, other times to trivial ones. Schneider and Sagan have discovered some insights into one of the more trivial concepts the word “purpose” can refer to, but are using verbal sleight of hand to pretend they’ve found the answer to one of the word’s more profound referents.
When someone says they are looking for “life’s purpose” what they mean is that they are looking for values and ideals to live their life around. A very profound concept. When Schneider and Sagan say they have found life’s purpose what they are saying is, “We pretended that the laws of physics were a person with a utility function and then deduced what that make-believe utility function was based on how the laws of physics caused life to develop.”
Now, doing that has its place; it’s easier for human brains to model other people than it is for them to model physics, so sometimes it is useful to personify physics. But the “purpose” you discover from that is ultimately trivial. It doesn’t give you values and ideals to live your life around. It just describes forces of nature in an inaccurate, but memorable way.
I’m not saying it’s absurd to say that entropy tends to increase, that’s basic physics. But it’s absurd to pretend that entropy is the deep, meaningful purpose of human life. Purpose is something humans give themselves, not something that mindless physical laws bestow upon them. Schneider and Sagan may be onto something when they suggest that life has a tendency to destroy gradients. But if they claim that is the “purpose” of human life in any meaningful sense they are dead wrong.
I read Into the Cool a while ago, and it’s a bad book. Schneider and Sagan posit a law of nonequilibrium thermodynamics: “nature abhors a gradient”. They go on to explain pretty much everything in the universe—from fluid dynamics to abiogenesis to evolution to human aging to the economy to the purpose of life itself—as a consequence of this law. The thing is, all of this is done in a very hand-wavey fashion, without any math.
Now, there is definitely something interesting about the fact that when there are gradients in thermodynamic parameters we often see the emergence of stable, complex structures that can be seen as directed towards driving the system to equilibrium. But when the authors start claiming that this is basically the origin of all macroscopic structure, even when the “gradient” involved isn’t really a thermodynamic gradient, things start getting crazy. Benard convection occurs when there is a temperature gradient in a fluid; arbitrage occurs when there is a price gradient in an economy. These are both, according to the authors, consequences of the same universal law: nature abhors a gradient.
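For contrast, the convection case actually has a sharp quantitative onset condition, which is precisely what the economic “gradient” lacks (a standard fluid dynamics result, stated here only to show the difference in rigor):

```latex
% Rayleigh number for a fluid layer of depth L across a temperature
% difference \Delta T, with thermal expansion coefficient \alpha,
% kinematic viscosity \nu, and thermal diffusivity \kappa:
\mathrm{Ra} = \frac{g\, \alpha\, \Delta T\, L^{3}}{\nu \kappa}
% Convection cells appear only above a critical threshold; for rigid
% boundaries, \mathrm{Ra}_c \approx 1708.
```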
Perhaps Schneider has worked his ideas out with greater rigor elsewhere (if he has, I would like to see it), but Into the Cool is in the same category as Per Bak’s How Nature Works and Mark Buchanan’s Ubiquity, a popular book that extrapolates useful insights to such an absurd extent that it ventures into mild crackpot territory.
But when the authors start claiming that this is basically the origin of all macroscopic structure, even when the “gradient” involved isn’t really a thermodynamic gradient, things start getting crazy. Benard convection occurs when there is a temperature gradient in a fluid; arbitrage occurs when there is a price gradient in an economy. These are both, according to the authors, consequences of the same universal law: nature abhors a gradient.
That’s right—MEP is a statistical characterisation of universal Darwinism, which explains a lot about CAS—including why water flows downhill, turbulence, crack propagation, crystal formation, and lots more.
Of course, while this work has some scientific interest (a fact I never denied), it is worthless for determining what the purpose of intelligent life and civilization should be. All it does is explain where life came from, it has no value in determining what we want to do now and what we should do next.
Your original statement that started this discussion was a claim that our civilization maximizes entropy. That claim was based on a trivial map-territory confusion, confounding two different referents of the word “maximize”: Referent 1 being “Is purposefully designed to greatly increase something by intelligent beings” and Referent 2 being “Has a statistical tendency to greatly increase something.”
When Eliezer claimed that intelligent creatures and their civilization would only be interesting if they purposefully acted to maximize novelty, you attempted to refute his claim by saying that our civilization is not purposefully acting to maximize novelty because it has a statistical tendency to greatly increase entropy. In other words, you essentially said “Our civilization does not maximize(1) novelty because it maximizes(2) entropy.” Your entire argument is based on map-territory confusion.
Your comment is a blatant distortion of the facts. Eliezer’s only references to maximizing are to an “expected paperclip maximizer”. He never talks about “purposeful” maximisation. Nor did I attempt the refutation you are attributing to me. You’ve been reduced to making things up :-(
Eliezer’s only references to maximizing are to an “expected paperclip maximizer”.
Eliezer never literally referred to the word “maximize,” but the thrust of his essay is that a society that purposefully maximizes, or at least greatly increases novelty, is far more interesting than one that doesn’t. He claimed that, for this reason, a paperclip maximizing civilization would be valueless, because paperclips are all the same.
Nor did I attempt the refutation you are attribting to me.
Our civilisation maximises entropy—not paperclips—which hardly seems much more interesting.
In this instance you are using “maximize” to mean “Has a statistical tendency to increase something.” You are claiming that everything humans do is uninteresting because it has a statistical tendency to increase entropy and destroy entropy gradients, and entropy is uninteresting. You’re ignoring the fact that when humans create, we create art, socialization, science, literature, architecture, history, and all sorts of wonderful things. Paperclip maximizers just create the same paperclip, over and over again. It doesn’t matter how much entropy gets made in the process, humans are a quadrillion times more interesting because there is so much diversity in what we do.
Claiming that all the wonderful, varied, and diverse things humans do are no more interesting than paperclipping, just because you could describe them as “entropy maximization,” is ridiculous. You might as well say that all events are equally uninteresting because you can describe all of them as “stuff happening.”
So yes, Eliezer never used the word “maximize” but he definitely claimed that creatures that didn’t value novelty would be boring. And you did attempt to refute his claim by claiming that our civilization’s statistical tendency to increase entropy means that creating art, conversation, science, etc. is no different from paperclipping. I think my objection stands.
Our civilisation maximises entropy—not paperclips—which hardly seems much more interesting.
In this instance you are using “maximize” to mean “Has a statistical tendency to increase something.” You are claiming that everything humans do is uninteresting because it has a statistical tendency to increase entropy and destroy entropy gradients, and entropy is uninteresting.
You’re in a complete muddle about my position. ‘Maximize’ doesn’t mean ‘increase’. The maximum entropy principle isn’t just “a statistical tendency to increase entropy”. You are apparently thinking about the second law of thermodynamics—which is a completely different idea. Nor was I arguing that human activity was “uninteresting”. Since you so obviously don’t have a clue what I am talking about, I see little point in continuing. Perhaps look into the topic, and get back to us when you know something about it.
Our civilisation maximises entropy—not paperclips—which hardly seems much more interesting.
What am I supposed to interpret that as besides “Human activity is uninteresting.”? Or at least, “human activity is as uninteresting as paperclip making.”
Since you so obviously don’t have a clue what I am talking about, I see little point in continuing. Perhaps look into the topic, and get back to us when you know something about it.
Stop trying to pretend that this is just a discussion about physics and evolution. You derive all sorts of horrifying moral positions from the science you are citing and when someone calls you out on it you act like the problem is that they don’t understand the science properly. I have some problems with your science, you seem to like talking about big ideas that aren’t that strongly supported, but my main objection is to your ethical positions. You constantly act like what is common in nature is what is morally good.
The whole reason I have been so hard on you about personifying forces of nature is that you constantly switch between the descriptive and the normative. You act like humans have a moral duty to maximize entropy and that we’re bad, bad people if we don’t keep evolving. I think that if you stopped personifying natural forces it would make it easier for you to spot when you do this.
Again, answer my moral dilemma: “Would you torture fifty children to death in order to greatly increase the level of entropy in the universe?” Assume that the increase would be greater than what the kids would be able to accomplish themselves if you allowed them to live.
I doubt you even consider moral dilemmas like this because you are interested in talking about big cool ideas, not about challenging them or considering them critically. MEP might have originally been a useful scientific theory when it was first formulated, but you’ve turned it into a Fake Utility Function.
Stop trying to pretend that this is just a discussion about physics and evolution. You derive all sorts of horrifying moral positions from the science you are citing and when someone calls you out on it you act like the problem is that they don’t understand the science properly.
I don’t know what you are talking about. What “horrifying moral positions”?
I don’t know what you are talking about. What “horrifying moral positions”?
This whole conversation started because you denigrated human values, saying that all the glorious and wonderful things our civilization does “hardly seems much more interesting” than tiling the universe with paperclips.
You have frequently implied that the metaphorical “goals” of abstract statistical processes like MEP and natural selection are superior to human values like love, compassion, freedom, morality, happiness, novelty, etc. For instance, here you say:
Similarly with human values: those are a bunch of implementation details—not the real target.
The moral position you keep implicitly arguing for, again and again, is that the metaphorical “goals” of abstract natural processes like MEP and natural selection represent real objective morality, while the values and ideals that human beings base their lives around are just a bunch of “implementation details” that it’s perfectly okay to discard if they get in the way. This is exactly backwards. Joy, love, sympathy, curiosity, compassion, novelty, art, etc. are what is really valuable. Preserving these things is what morality is all about. The solemn moral duty of the human race is to make sure that a sizable portion of the future creatures that will exist share these values, even if they do not physically resemble humans.
I was also extremely horrified by your response to the dilemma I posed you. I attempted to prove that MEP is a terrible moral rule by asking you if you would torture children to death in order to greatly increase entropy. The correct response was: “Of course I wouldn’t, the lives and happiness of children are more important than MEP.” Instead of saying that, you changed the subject by saying that the method of entropy production I suggested was inefficient because it might destroy living systems. This implies, as asparisi put it:
So, if the nova’s explosion did not destroy any living systems, you would happily trade the 50 kids for the nova explosion?
Just to be clear, I don’t think that you would ever torture children. I think the beliefs you write about are, thankfully, completely divorced from your behavior. MEP is your Fake Utility Function, not your real one. But that doesn’t change the fact that it’s horrifying to read about. It’s discouraging that I try to tell people that studying science won’t destroy your moral sense, that it won’t turn you into a Hollywood Rationalist, but then encounter someone to whom it’s done precisely that.
It’s discouraging that I try to tell people that studying science won’t destroy your moral sense, that it won’t turn you into a Hollywood Rationalist, but then encounter someone to whom it’s done precisely that.
Can you expand on your reasons for believing that studying science was causal to what you categorize here as the destruction of Tim’s moral sense? (I’m not asking why you believe his moral sense has been destroyed; I think I understand your reasoning there. I’m asking why you believe studying science was the cause.)
Because he constantly uses scientific research to justify his moral positions, and then when I challenge them he accuses me of just not understanding the science well enough. He switches back and forth between statements about science and normative statements about what would make the future of humanity good without seeming to notice. Learning about evolutionary science seems to have put him in an Affective Death Spiral around evolution. (I know the symptoms; I used to be in one around capitalism after I started studying economics.) It’s one of the more extreme examples of the naturalistic fallacy I’ve ever seen.
Now, since you’ve read some of my other posts you know that I don’t necessarily accept the strong naturalistic fallacy, the idea that ethical statements cannot be reduced to naturalistic ones at all. But I definitely believe in the weaker form of the naturalistic fallacy, the idea that things that are common in nature are not necessarily good. And that is the form of the fallacy Tim makes when he says absurd things like our civilization maximizes entropy or that our values are not precious things that need to be preserved if they get in evolution’s way.
Studying the science of evolution certainly wasn’t the sole cause, maybe not even the main cause, of Tim’s ethical confusion, but it certainly contributed to it.
Just to be clear, I don’t think that you would ever torture children.
I totally would. Then—if the situation demanded it and if I didn’t have a fat guy available—I’d throw them all in front of a trolley. Because not torturing children is evil when the alternative to the contrived torturing is a contrived much worse thing.
I meant that I didn’t ever think he’d torture children for no reason other than to increase the level of entropy in the universe (in my original contrived hypothetical the entropy increase was accomplished by having a sadistic alien make a star go nova in return for getting to watch the torture. The star was far enough away from inhabited systems that the radiation wouldn’t harm any living things).
I wasn’t meaning to set up “not torturing children” as a deontological rule. Obviously there are some circumstances where it is necessary, such as torturing one child to prevent fifty more children from being tortured for an equal amount of time per child. What I was trying to do was illustrate that Tim’s Maximum Entropy Principle was a really, really bad “maximand” to follow by creating a hypothetical where following it would make you do something insanely evil. I think we can both agree that entropy maximization (at least as an end in itself rather than as a byproduct of some other end) is far less important than preventing the torture of children.
Tim responded to my question by sidestepping the issue, instead of engaging the hypothetical he said that a nova was a bad way to maximize entropy because it might kill living things that would go on to produce more entropy, even though I tried to constrain the hypothetical so that that wasn’t a possibility.
This whole conversation started because you denigrated human values, saying that all the glorious and wonderful things our civilization does “hardly seems much more interesting” than tiling the universe with paperclips.
But that’s complete nonsense. I already explained this by saying here:
Nor was I arguing that human activity was “uninteresting”
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind. This is not some kind of moral assertion, it’s just a straightforward description of how these systems would behave. Entropy is, I claimed, not much more interesting than paperclips.
The intended lesson here was NOT that human civilization is somehow uninteresting, but rather that optimisation processes with simple targets can produce vast complexity (machine intelligence, space travel, nanotechnology, etc).
This is really just a particular case of the “simple rules, complex dynamics” theme that we see in complex systems theory (e.g. game of life, rule 30, game of go, etc).
So: this whole “horrifying moral position” business is your own misunderstanding.
Failure to address your other points is not a sign of moral weakness—it just doesn’t look as though the discussion is worth my time.
But that’s complete nonsense. I already explained this by saying here:
Nor was I arguing that human activity was “uninteresting”
That wasn’t an explanation, it was an assertion. I was not satisfied that that assertion was supported by the rest of your statements.
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind.
That is a much better explanation of your position. You are correct that that is not a moral assertion. However, before that you said:
IMO, boredom is best seen as being a universal instrumental value—and not as an unfortunate result of “universalizing anthropomorphic values”.
And also:
....My position is that we had better wind up approximating the instrumental value of boredom (which we probably do pretty well today anyway—by the wonder of natural selection) - or we are likely to be building a rather screwed-up civilisation. There is no good reason why this would lead to a “worthless, valueless future”—which is why Yudkowsky fails to provide one.
Saying something is “screwed up” is a moral judgement. Saying that a future where boredom has no terminal value and exists purely instrumentally is not valueless is a moral judgement. Any time you compare different scenarios and argue that one is more desirable than the others you are making a moral judgement. And the ones you made were horrifying moral judgements because they advocate passively standing in the way of creatures that would destroy everything human beings value.
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind.
Even if that’s true, a lot more fun and complexity would be generated by a human-like civilization on the way to that end than by paperclippers making paperclips.
Besides, humans are often seen making a conscious effort to prevent things from being reduced to a maximum entropy state. We make a concerted effort to preserve places and artifacts of historical significance, and to prevent ecosystems we find beautiful from changing. Human civilization would not reduce the world to a maximum entropy state if it retains the values it does today.
The intended lesson here was NOT that human civilization is somehow uninteresting, but rather that optimisation processes with simple targets can produce vast complexity (machine intelligence, space travel, nanotechnology, etc).
Complexity is not necessarily a goal in itself. People want a complex future because we value many different things, and attempting to implement those values all at once leads to a lot of complexity. For instance, we value novelty, and novelty is more common in a complex world, so we generate complexity as an instrumental goal toward the achievement of novelty.
The fact that paperclip maximizers would build big, cool machines does not make a future full of them almost as interesting as a civilization full of intelligences with human-like values. Big cool machines are not nearly as interesting as the things people do, and I say that as someone who finds big cool machines far more interesting than the average person.
Failure to address your other points is not a sign of moral weakness—it just doesn’t look as though the discussion is worth my time.
My other points are the core of my objection to your views. Besides, it would take, like, ten seconds to write “I wouldn’t torture children to increase the entropy levels,” so I think that at least would be worth your time. Looking at your website, particularly your essay on Nietzscheanism, I think I see the wrong turn you made in your thought processes.
When you discuss W. D. Hamilton you state, quite correctly, that:
Hamilton has suggested that the best way for selfish individuals to fool everyone into thinking that they are nice is to actually believe it themselves (and practice a sort of hypocritical double-think to either self-justify or forget about any non-nice behaviour). … Here, Hamilton is suggesting that merely pretending to be a selfless altruist is not good enough—you actually have to believe it yourself to avoid being detected by all the smart psychologists in the rest of society—since they are experts in looking for signs of selfishness.
You then go on to argue that in the more transparent future such self-deception will be impossible and people will be forced to become proud Nietzscheans. You say:
Once humanity becomes a little bit more enlightened, things like recognising your nature and aspiring to fulfill the potential of your genes may not be regarded in such a negative light.
Your problem is that you didn’t take the implications of Hamilton’s work far enough. There’s an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.
Now, why then do people do so many nasty things if we evolved to be genuine altruists? Well evolution, being the amoral monster it is, metaphorically “realized” that being an altruist all the time might decrease our IGF, so it metaphorically “cursed” us with akrasia and other ego-dystonic mental health problems that prevent us from fulfilling our altruistic potential. Self-deception, in this account, does not exist to make us think we’re altruists when we’re really IGF maximizers, it exists to prevent us from recognizing our akrasia and fighting it.
This theory has much more predictive power than your self-deception theory, it explains things like why there is a correlation between conscientiousness (willpower) and positive behavior. But it also has implications for the moral positions you take. If humans evolved to cherish values like altruism for their own sake (and be sabotaged from achieving them by akrasia), rather than to maximize IGF and deceive ourselves about it, then it is a very bad thing if those values are destroyed and replaced by something selfish and nasty like what you call “Nietzscheanism”.
Your problem is that you didn’t take the implications of Hamilton’s work far enough.
I do say in my essay: “I think Hamilton’s points are good ones”.
There’s an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.
You need to look up “altruism”—since you are not using the term properly. An “altruist”, by definition, is an agent that takes a fitness hit for some other agent with no hope of direct or indirect repayment. You can’t argue that altruists exhibit a net fitness gain—unless you are doing fancy footwork with your definitions of “fitness”.
Your account of human moral hypocrisy doesn’t look significantly different from mine to me. However, you don’t capture my own position—which may help to explain your perceived difference. I don’t think most humans are “really IGF maximizers”. Instead, they are victims of memetic hijacking. They do reap some IGF gains though—looking at the 7 billion humans.
I find your long sequence of arguments that I am mistaken on this issue to be tedious and patronising. I don’t share your values is all. Big deal: rarely do two humans share the same values.
It may be worth your time to explicitly disclaim the whole “torturing children to blow up stars” position (instead of appearing to dodge it a second or third time), particularly seeing as if it is a misunderstanding, it is not uniquely Ghatanathoah’s.
When Schneider and Sagan say they have found life’s purpose what they are saying is, “We pretended that the laws of physics were a person with a utility function and then deduced what that make-believe utility function was based on how the laws of physics caused life to develop.”
When biologists say “the purpose of a nose is smelling things” you don’t have to personify mother nature to make sense of what they mean. Personifying the organism is often enough. Since the organism may not be so very different from a person, this is often an easier step.
When biologists say “the purpose of a nose is smelling things” you don’t have to personify mother nature to make sense of what they mean. Personifying the organism is often enough.
That doesn’t change the fact that personification is a way to help people think about reality more easily at the expense of accurately describing it. Noses don’t literally have a purpose. It’s just that organisms that are good at smelling things tend to reproduce more.
The problem with Schneider and Sagan is that they confound this metaphorical meaning of the word purpose (the utility function of a personified entity) with a different meaning (ideals to live your life around). Hence their second book makes the absurd statement* which, when you strip the word “purpose” out of it, basically says: “knowing that decreasing entropy gradients is a major reason life arose will give you ideals to live your life around.” That’s ridiculous.
*To be fair that statement was a cover blurb, so it’s possible that it was written by the publisher, not Schneider and Sagan.
Hedonism is a means to an end. Pleasure is there for a reason.
Life doesn’t give us reason and purpose. We give life reason and purpose. Speculating on what sort of metaphorical “purposes” life and nature might have might be a fun intellectual exercise, but ultimately it’s just a game.
The game seems to involve willfully misunderstanding me. Talking about the “reason” for adaptations is biology 101. If you can’t grasp such talk, just ignore people like me. I would prefer to talk to those who are capable of understanding what I mean instead.
Talking about the “reason” for adaptations is biology 101.
I know that. Most of the time I use the same language. But that’s because I trust the people I’m talking to to know that I’m speaking metaphorically. I also trust them to understand enough basic morality to know that just because something is extremely common in nature, doesn’t mean it’s morally good. The reason I am not doing that when talking to you is that I am not convinced that I should extend you that trust. You constantly confuse the descriptive with the normative and the “common in nature” with the “morally good.”
I do not have issue with the majority of factual statements you make. What I have issue with is the appalling moral statements you make. I get the impression that you are upset at Eliezer because he wants to preserve the values that make us morally significant beings, even if doing so will stop us from evolving. You act like evolving is our “real” purpose and that things that people actually value, like creativity, novelty, love, art, friendship, etc. are not important. This is the exact opposite of the truth. Evolution is useful only so far as it preserves and enhances our values such as creativity, novelty, love, art, friendship, etc.
Again, if you really think maximizing entropy is your real purpose in life, would you torture 50 children to death if it would get some sadistic aliens to make a far-off star go nova for you? Detonating one star would produce far more entropy than those children would over their lifetimes, but I still bet you wouldn’t torture them, because you know it’s wrong. The fact that you wouldn’t do this proves you think doing the right thing is more important than maximizing entropy.
I do not have issue with the majority of factual statements you make. What I have issue with is the appalling moral statements you make.
Gee, thanks for that.
Again, if you really think maximizing entropy is your real purpose in life, would you torture 50 children to death if it would get some sadistic aliens to make a far-off star go nova for you?
People often seem to think that entropy maximisation principles imply that organisms should engage in wanton destruction, blowing things up. However, that is far from the case. Causing explosions is usually a very bad way of maximising entropy in the long term—since it tends to destroy the world’s best entropy maximisers, living systems. Living systems go on to cause far more devastation than exploding a sun ever could. So, wanton destruction of a sun is bad—not good—from this perspective.
Causing explosions is usually a very bad way of maximising entropy in the long term—since it tends to destroy the world’s best entropy maximisers, living systems.
That’s why I said “far-off” star. I was trying to imply that the star was so far away its destruction would not harm any living things. Please don’t fight the hypothetical.
In any case, the relevant part of the question isn’t “Would you blow up a star?” That was just an attempt to give the hypothetical some concrete details so it sounded less abstract. The relevant question is “Would you torture fifty children to death in order to greatly increase the level of entropy in the universe?” Assume that the increase would be greater than what the kids would be able to accomplish themselves if you allowed them to live.
This is ridiculous. Are you actually proposing entropy maximisation as a reduction of “should”, normative ethical theory, etc., or do you just find it humorous to waste our time?
Causing explosions is usually a very bad way of maximising entropy in the long term—since it tends to destroy the world’s best entropy maximisers, living systems. Living systems go on to cause far more devastation than exploding a sun ever could.
Are you sure? A black hole is the system with the most possible entropy among those with a given mass. Your point would only be valid if interstellar civilizations are easy to achieve, and given that we don’t see any of those around I don’t think they are.
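For what it’s worth, the black hole claim here is standard physics: the Bekenstein–Hawking entropy is proportional to horizon area, and for a Schwarzschild black hole the area grows as the square of the mass, so no other object of the same mass can match it:

```latex
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar},
\qquad
A = 16\pi \left(\frac{G M}{c^2}\right)^2 .
```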
You said: “Our civilisation maximises entropy.” Our civilization consists of all the humans in the world. When you’re asking what our civilization is trying to maximize you’re asking what the humans of the world are trying to maximize. Humans try to do things they enjoy, things that are fun. Therefore our civilization tries to maximize fun.
I know that because that’s basic human psychology 101. Humans want to be happy and have fulfilled preferences.
Are plants trying to maximise “fun”?
We’re talking about our civilization. In other words, all the humans in the world. Plants aren’t human, so whether they maximize fun is irrelevant. I suppose if you regarded human tools and artifacts as part of our civilization then agricultural plants could be regarded as part of it. But they aren’t the part of our civilization that makes decisions on what to maximize, humans are.
Plants aren’t trying to maximize anything. They’re plants, they don’t have minds. If I were to use the word maximize as liberally as you do I could actually argue that agricultural plants do try to maximize fun, because humans grow them for the purpose of eating, and eating is fun. But that wouldn’t be strictly accurate; plants just execute their genetically coded behaviors, and any purpose they have is really the purpose of the consequentialist minds that grow them, not of the plants. Saying that agricultural plants have any purpose at all is the mind-projection fallacy.
If “fun” is being maximised, why is there so much suffering in the world?
Because some humans are selfish and try to maximize their own fun at the expense of the fun of others. And sometimes we make big mistakes when trying to make the world more fun. But still, most of the time we try to work together to have fun. We aren’t that good at it yet, but we’re trying and keep improving. The world is getting progressively more fun.
If two systems are in contention, is it really the one that is having the most fun that will win?
Yes. Humans who are enjoying life the most are generally regarded as being more successful at life than humans who are not. This is a basic and easily observable fact.
It’s easily confirmed by the facts. As humans have grown richer and more technologically advanced they have devoted more and more of their resources to having fun. Look at the existence of places like Disneyworld for evidence.
No it isn’t. Brains don’t care about inclusive genetic fitness. At all. They never have. If you want evidence for that, note the fact that humans do things like use condoms. Also note that the growth of the world’s population is slowing and will probably stop by the end of the 21st century if trends continue.
That literature has exactly zero relevance to our current discussion, which is what human beings value, care about, and try to maximize. You learn about that by studying basic psychology. Evolutionary theory may give us insights into how humans came to have our current values, but it has no bearing on what we should do now that we have them.
Our values are what we value, how we came to have them is irrelevant. If our values were bestowed on us by an alien geneticist rather than evolution we would behave exactly the same as we do now. Humans don’t give a crap about “god’s utility function.” If they end up increasing entropy it is as a side effect to obtaining their real goals.
Optimization has the same problem. Optimization literally refers to a consequentialist creature using its future forecasting abilities to determine how an object or meme would better suit its goals and altering that thing accordingly. Evolution can be metaphorically said to optimize, but that isn’t strictly true. It’s just a form of personification to make thinking about evolution easier.
Strictly speaking, evolution is just a description of a series of trends. Since human minds are bad at modeling trends, but good at modeling other consequentialists, sometimes it’s useful to pretend that evolution is a consequentialist with “goals” and a “utility function” to help people understand it. It’s less scientifically accurate than modeling evolution as a series of trends, but it makes up for it by being easier for a human brain to compute. The problem is that, while most scientists understand this, there are some people who misinterpret this to mean that evolution literally has goals, desires, and utility functions. You appear to be one of these people.
Because literally speaking, only consequentialist minds maximize things. You might be able to say evolution maximizes things as a useful metaphor, but literally speaking it isn’t true.
No it isn’t. It is useful to pretend that it is because doing so makes it a little easier for the human mind to think about evolution. But really, evolution is just an abstract series of mindless trends.
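The “mindless trends” picture can be made concrete with a toy model. The replicator equation shifts variant frequencies toward higher fitness with no forecasting or goals anywhere in the system; the fitness numbers below are made up purely for illustration:

```python
# Replicator dynamics: frequencies of three variants under fixed fitnesses.
# No variant "wants" anything; the update rule is pure bookkeeping.

def replicator_step(freqs, fitnesses):
    """One generation: each frequency is reweighted by relative fitness."""
    mean_fitness = sum(f * w for f, w in zip(freqs, fitnesses))
    return [f * w / mean_fitness for f, w in zip(freqs, fitnesses)]

freqs = [1 / 3, 1 / 3, 1 / 3]          # start with equal shares
fitnesses = [1.0, 1.1, 1.3]            # hypothetical reproductive rates

for _ in range(100):
    freqs = replicator_step(freqs, fitnesses)

# The fittest variant comes to dominate, though nothing in the model
# represents the future or chooses among outcomes.
print([round(f, 3) for f in freqs])
```

The selection “trend” falls out of iterated reweighting alone, which is the sense in which calling it a goal is only a metaphor.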
I never claimed evolution tries to maximize fun. I claimed our civilization does. In other words, that the consequentialist minds making up human civilization use their forecasting abilities to foresee possible futures, and then steer the universe towards the one where they are having the most fun.
I’m familiar with some of the literature, and I’ve looked at your website. You constantly confuse the metaphorical “goals” evolution has with the real goals that consequentialist minds such as human beings have. For instance you say:
This is trivially false; the reason researchers are working on a fusion reactor is to secure human beings cheap renewable energy to have more fun with. The fact that it increases entropy is a side effect. The consequentialist human minds do not foresee a future with more entropy and take action in order to secure that future. They foresee a future where humans are using cheap energy to have more fun and take actions to secure that future. The entropy increase is an unfortunate, but acceptable, side effect.
What you remind me of is one of those theologians who describe God as an “unmoved mover” or something like that and suggest such a thing must exist (which was a reasonable hypothesis at one point in history, even if it isn’t now). They then make the ridiculous leap of logic that because an unmoved mover must exist, and you can call such a thing “God,” that therefore a God with all the ludicrously specific human-like properties described in the Bible must exist.
Similarly, you take some basic facts about evolution and physics that every educated person agrees are true. Then you make bizarre leaps of logic to conclude that human beings care about maximizing IGF and maximizing entropy and other obvious falsehoods. I am not objecting to the evolutionary biology research you cite, I am objecting to the bizarre and unjustified inferences about human psychology and moral philosophy you use that research to make.
Nonsense. Look it up.
Feel free to substitute “maximisation” terminology if my preferred lingo causes you conceptual problems. Selfishness, progress and optimisation can all be “cashed out” in more long-winded terms. Remember: teleonomy is just teleology in new clothes.
This line of reasoning is intuitive, but, I believe, wrong. Destroying energy gradients is actively selected for in lots of ways. For example, it actively deprives competitors of resources. Organisms compete to dissipate sources of order by reaching them quickly and eliminating them before others can. The picture of entropy as an inconvenient side effect seems attractive initially, but doesn’t withstand close inspection.
I don’t deny that properly functioning brains act like hedonic maximisers. Hedonic maximisation is a much weaker explanatory principle than entropy maximisation, though. The latter explains why water flows downhill. Hedonic maximisation is a narrow and weak idea—by comparison.
To clarify, many humans fail to maximise their own inclusive fitnesses—largely because they are malfunctioning—with many of the most common malfunctions being caused by parasites—and the most common parasites being responsible for memetic hijacking. Humans and the ecosystems they are part of really do maximise entropy (subject to constraints), or at least the MEP is a deep and powerful explanatory principle when it comes to CAS and living systems.
You are misunderstanding me. Pleasure is literally evolution’s way of getting organisms to do things that help them to increase their inclusive fitness. This idea is true, and is in no way refuted by condoms or the demographic transition.
It is evolution’s (metaphorical) way. When you say that brains use it as a proxy for genetic fitness that gave the impression that you thought brains literally cared about fitness and were optimizing for it.
Brains are hedonic maximisers. They’re only about 2% of human body mass, though. There are plenty of other optimisation processes to consider as well—machines, corporations, and stock markets also maximise. The picture of civilization as a bunch of human brains is deeply mistaken.
Hedonism is a means to an end. Pleasure is there for a reason. The reason is that it helps organisms reproduce, and organisms reproduce—ultimately—because that’s the best way to maximise entropy—according to the deep principle of the MEP.
Think you can more accurately characterise nature’s maximand? Go right ahead. If David Pearce has his way, hedonism will play a more significant role in the future—until aliens eat our lunch—but whether David Pearce’s future comes to pass remains to be seen.
Closer to 20% than 2% in energy use.
This “because” doesn’t seem like a meaningful answer to a real question. Life on Earth makes use of some solar and geothermal energy before it heads off into space. Does Earth generate much more entropy than Venus? ETA: it seems to me that in the long-run you get the same effects. In the short run local life can use up free energy more quickly, but it can also stockpile resources for later extraction (fossil fuels, acorns, stellar engineering).
Thermodynamics tells us that doing most anything at all increases entropy. Calling that a utility function looks like talking about how the utility functions of falling objects value being closer to large masses.
I think that’s comparing apples and cheese.
Your point here isn’t clear. Organisms stockpile, but they also eat their stockpiles. Ecosystems ultimately leave nothing behind, to the best of their ability. Life produces maximal devastation.
Except that that particular effect can be explained as a manifestation of the MEP principle—which is much more general. So the idea that objects like to be close to other objects is redundant, unnecessary—and can be discarded on Occamian grounds.
At any given time, much of the grasslands and fertile ocean are not engaged in photosynthesis because herbivores have cropped the primary producers, reducing short-term entropy production. You can swallow that problem with a catch-all “best of their ability clause,” but now “ability” needs to talk about the ability of herbivores to compete in a sea of ill-defended plants, selfish genes, and so forth.
The move to biological and social systems is an attempt at empirical generalization with some success, since untapped free energy has the potential to power living creatures’ reproduction, and mutants that tap such sources proliferate. Humans can use free energy to power machinery as well as their own bodies, so they tap available resources. Great, you have a correlate for the proliferation of life.
But this isn’t enough to power accurate predictions about the portion of Earth’s surface performing photosynthesis, or whether humanity (or successors) will use up the available resources in the Solar System as quickly as possible, or as quickly as will maximize interstellar colonization and energy use, or much more slowly to increase the total computation that can be performed, or slowly so as to sustain a smaller population with longer lifespans.
Herbivores cause massive devastation and destruction to plant life. They extend life’s reach underground—where plants cannot live. They led to oil drilling, international flights, global warming and nuclear power. If you want to defend the thesis that the planet would be a better dissipator without them, you have quite a challenge on your hands, it seems to me.
MEP is a statistical principle. It illuminates these issues, but doesn’t make them trivial. Compare with natural selection—which also illuminates these areas without trivializing them.
All those things are controlled by brains. They execute the brains’ commands, which amount to optimizing the world for fun. They are extensions of the human brains. Now, they might increase entropy or something as a side effect, but everything they do, they do because a brain commanded it.
Life doesn’t give us reason and purpose. We give life reason and purpose. Speculating on what sort of metaphorical “purposes” life and nature might have might be a fun intellectual exercise, but ultimately it’s just a game. Our purposes come from the desires of our brains, not from some mindless abstract trend. Your tendency to think otherwise is the major intellectual error that keeps you from grokking Eliezer’s arguments.
Here’s a question for you: Suppose some super-advanced aliens show up that offer to detonate a star for you. That will generate huge amounts of entropy, far more than you ever could by yourself. All you have to do in return is torture some children to death for the aliens’ amusement. They’ll make sure the police and your friends never find out you did it.
Would you torture those children? No, of course you wouldn’t. Because you care about being moral and doing good and don’t give a crap about entropy. You just think you do because you have a tendency to confuse real human goals with metaphorical, fake “goals” that abstract natural trends have.
Why would I need to do that? My main point is that human civilization doesn’t and shouldn’t give a crap about nature’s worthless maximand. When you post comments on Less Wrong a lot of the time you seem to act like maximizing IGF and entropy are good things that organisms ought to do. You get upset at Eliezer for suggesting we should do something better with our lives. This is because you’re deeply mistaken about the nature of goodness, progress, and values.
But just for fun, I’ll take up your challenge. Nature doesn’t have a maximand. It isn’t sentient. And even if Nature was sentient and did have a maximand, the proper response for the human race would be to ignore Nature and obey their own desires instead of its stupid, evil commands.
That being said, even if you instead asked me to answer the more reasonable question “What trends in evolution sort of vaguely resemble the maximand of an intelligent creature?” I still wouldn’t say entropy maximization. The idea that evolution tends to do that is an illusion created by the Second Law of Thermodynamics. Because of the way the Second Law works, doing anything for any reason tends to increase entropy. So obviously if an evolved organism does anything at all, it will end up increasing entropy. This creates an illusion that organisms are trying to maximize entropy. Carl Shulman is right: calling entropy nature’s maximand is absurd; you might as well say “being attracted by gravity” or “being made of matter” are what nature commands.
A better (metaphorical) maximand might actually be local entropy minimization. It’s obviously impossible to minimize total entropy, but life has a tendency to decrease the entropy in its local area. Life tends to use energy to remove entropy from its local area by building complex cellular structures. It’s sort of an entropy pump, if you will. So if we metaphorically pretended that evolution had a purpose, it would actually be the reverse of what you claim.
But again, that’s not my main point. My main point is that while you have a lot of good sources for your biology references, you don’t have nearly as good a grasp of basic psychology and philosophy. This causes you to make huge errors when discussing what good, positive ways for life to develop in the future are.
This makes me want to start a religion where the Creator of the Universe gives points to things that behave like a member of the universe. “Thou shalt be made of matter.” “Thou shalt be attracted by gravitational force.” “Thou shalt increase entropy.” etc. Too bad ‘Scientology’ is taken as a name. Physianity, maybe?
In the beginning, there was nothing. The cosmos were void—timeless, and without form. And, lo, God pointed upon the abyss, and said ‘LET THERE BE ENERGY’ And there was energy. And God pointed to the energy, and said, ‘and let you be bound among yourselves that you may wander the void together, proton to neutron, and proton to proton, and let the electrons always seek their opposite number, within the appropriate energy barrier, and let the photons wander where they will.’ Lo, and god spoke to the stranger particles, for some time, but what He said was secret. And God saw hydrogen, and saw that it was good.
And God saw the particles moving at all different speeds, away from one another, and saw that it was bad, and God said ‘and let the cosmos be bent and cradle the particles, that they may always be brought back together, though they be one billion kilometers apart, within the appropriate energy barrier, of course. And let the curvature of space rise without end with the energy of velocity, that they all be bound by a common yoke.’ And god looked upon the spirals of gas, and saw that it was good.
And god took the gas and energy above, and the gas and energy below, and said ‘and you shall be matter, and you shall be antimatter, and your charges shall ever be in conflict, and never the twain shall meet, except in very small quantities.’ And so there was the matter and the antimatter.
And God saw the cosmos stretching out to a single future, and said ‘And let you all be amplitude configurations, that you may not know thyself from thy neighbor, and that the future may expand without end.’ And god saw the multiverse, and saw that it was good.
Prigogine actually came up with a genuine entropy minimization principle once (in contrast to your idea—which has never been formalised as a real entropy minimization principle—AFAIK). He called it the theorem of “minimum entropy production”. However, in “Maximum entropy production and the fluctuation theorem” Dewar explained it as a special case of his “MaxEnt” formalism.
Nope. For instance, look at Kevin Kelly’s book “Out of control”. Or look into memetics. Human brains are an important force, but there are other maximisation processes going on, in culture, with genes and inside machines.
“Out of Control” appears to be primarily about decentralized decision making processes like democracy and capitalism. I never said that brains controlled the artifacts of civilization in a centralized fashion, I just said that they control them. Obviously human beings use all sorts of decentralized methods to help coordinate with each other.
That being said, while systems are not controlled in a centralized manner, they are restricted in a centralized manner. For instance, capitalism only works properly if people are prevented from killing and stealing. Even if there is no need to centrally control everything to get positive results, there is a need to centrally control some things.
There seems to be a later section in “Out of Control” where Kelly suggests giving up control to our machines is good in the same way that dictators giving up central control to democracy and capitalism is good. This seems short-sighted, especially in light of things like Bostrom’s orthogonality thesis. The reasons democracy and capitalism do so much good is that:
Human minds are an important component of them, and (most) humans care about morality, so the systems tend to be steered towards morally good results.
There are some centralized restrictions on what these decentralized systems are able to do.
Unless you are somehow able to program the machines with moral values (i.e. make an FAI), turning control over to them seems like a bad idea. Creating moral machines isn’t impossible, but the main point of Eliezer’s writing is that it is much, much harder than it seems. I think he’s quite correct.
As for memetics, the idea impressed me when I first came across it, but there doesn’t seem to have been much development in the field since then. I am no longer impressed. In any case, the main reason memes “propagate” is that they help a brain fulfill its desires in some way, so really ever-evolving memes are just another one of the human mind’s tools in its continuing quest for universal domination.
From a biological perspective, brains are seen as being a way for genes and memes to make more copies of themselves.
That this is a valuable point of view is illustrated by some sea squirts—which digest their own brains to further their own reproductive ends.
In nature, genes are fundamental, while brains are optional and expendable.
Genes are biologically fundamental, certainly. You will get no argument from me there (although the fact that brains are biologically expendable does not imply that it is moral to expend them). The evidence that memes are more fundamental than brains, however, is not nearly as strong.
It is quite possible to model memes as “reproducing,” by being passed from one brain to another. But most of the time the reason the meme is passed from one brain to another is because they aid the brain in fulfilling its desires in some way. The memes associated with farming, for instance, spread because they helped the brain fulfill its desire to not starve. In instances where brains stopped needing the farming memes to obtain food (such as when the Plains Indians acquired horses and were suddenly able to hunt bison more efficiently) those memes promptly died out.
There are parasitic memes, cult ideologies for instance, that reproduce by exploiting flaws in the brain’s cognitive architecture. But the majority of memes “reproduce” by demonstrating their usefulness to the brain carrying them. You could say that a meme’s “fitness” is measured by its usefulness to its host.
That wouldn’t be terribly accurate, though. Smoking memes, obesity memes, patriotism memes, and lots of advertising and marketing memes are not good for their hosts, but rather benefit those attempting to manipulate them. However, there’s usually a human somewhere at the end of the chain today.
That probably won’t remain the case, though. After the coming memetic takeover we are likely to have an engineered future—and then it will be memes all the way down.
The memetic takeover you describe would just consist of intelligences running on computer-like substrates instead of organic substrates. That isn’t morally relevant to me, I don’t care if the creatures of the future are made of carbon or silicon. I care about what sort of minds they have, what they value and believe in.
I’m not sure referring to an intelligent creature that is made of computing code instead of carbon as a “meme” is true to the common definitions of the term. I always thought of memes as contagious ideas and concepts, not as a term to describe an entire intellect.
After the memetic takeover there would still be intelligent creatures, they’d just run on a different substrate. Many of them could possibly be brain-like in design or have human-like values. They would continue to exchange memes with each other just as they did before, and those memes would spread or die depending on their usefulness to the intelligent creatures. Just like they do now.
People don’t call the works of Shakespeare a “meme” either. Conventionally, such things are made of memes—and meme products.
This should be misunderstanding #1 in the MEP FAQ. MEP is not the same as the second law. It’s a whole different idea, which you don’t appear to know anything about.
Extremum rate principles like MEP have proven very useful for describing the behavior of certain systems, but the extrapolation of the principle into a general law of nature remains hugely speculative. In fact, at this point I think the status of MEP can be described as “not even wrong”, because we do not yet have a rigorous notion of thermodynamic entropy that extends unproblematically to nonequilibrium states. The literature on entropy production usually relies on equations for the entropy production rate that are compatible with our usual definition of thermodynamic entropy when we are dealing with quasistatic transformations, but if we use these rate equations as the basis for deriving a non-equilibrium conception of entropy we get absurd results (like ascribing infinite entropy to non-equilibrium states).
Dewar’s work, which you link below, is an improvement, in that it operates with a notion of entropy that is clearly defined both in and out of equilibrium, derived from the MaxEnt formalism. But the relationship of this entropy to thermodynamic entropy when we’re out of equilibrium is not obvious. Also, Dewar’s derivation of MEP relies on applying some very specific and nonstandard constraints to the problem, constraints whose general applicability he does not really justify. If I were permitted to jury-rig the constraints, I could derive all kinds of principles using MaxEnt. But of course, that wouldn’t be enough to establish those principles as natural law.
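For readers who haven’t seen it, the MaxEnt recipe referred to here is Jaynes’s: maximize the Gibbs–Shannon entropy subject to whatever constraints you impose, and Lagrange multipliers hand you an exponential-family distribution. The single mean-value constraint below is just the textbook case:

```latex
\max_{p}\; -\sum_i p_i \ln p_i
\quad \text{s.t.} \quad \sum_i p_i = 1,\;\; \sum_i p_i f(x_i) = F
\;\;\Longrightarrow\;\;
p_i = \frac{e^{-\lambda f(x_i)}}{Z(\lambda)},
\qquad Z(\lambda) = \sum_i e^{-\lambda f(x_i)} .
```

Which principle pops out depends entirely on which constraints go in, which is why the choice of constraints carries so much of the argumentative weight.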
Entropy and MEP are statistical phenomena. Thermodynamics is an application. This has been understood since Boltzmann’s era. Most of the associated “controversy” just looks like ignorance to me.
Entropy maximisation in living systems has been around since Lotka 1922. Universal Darwinism applies it to all CAS. Lots of people don’t understand it—but that isn’t really much of an argument.
Some books on the topic:
Into the Cool: Energy Flow, Thermodynamics, and Life by Eric D. Schneider and Dorion Sagan (2005)
The Purpose of Life by Eric D. Schneider and Dorion Sagan (2011)
Still think it is “absurd”?
Wow, just wow. I’m extremely disappointed with Schneider and Sagan. Not because of their actual research, which looks like some interesting and useful stuff on thermodynamics. No, what’s disappointing and embarrassing is the deceitful way they pretend that they’ve discovered life’s “purpose.” Like many words, the word “purpose” has multiple referents, sometimes it refers to profound concepts, other times to trivial ones. Schneider and Sagan have discovered some insights into one of the more trivial concepts the word “purpose” can refer to, but are using verbal sleight of hand to pretend they’ve found the answer to one of the word’s more profound referents.
When someone says they are looking for “life’s purpose” what they mean is that they are looking for values and ideals to live their life around. A very profound concept. When Schneider and Sagan say they have found life’s purpose what they are saying is, “We pretended that the laws of physics were a person with a utility function and then deduced what that make-believe utility function was based on how the laws of physics caused life to develop.”
Now, doing that has its place; it’s easier for human brains to model other people than it is for them to model physics, so sometimes it is useful to personify physics. But the “purpose” you discover from that is ultimately trivial. It doesn’t give you values and ideals to live your life around. It just describes forces of nature in an inaccurate, but memorable way.
I’m not saying it’s absurd that to say that entropy tends to increase, that’s basic physics. But it’s absurd to pretend that entropy is the deep, meaningful purpose of human life. Purpose is something humans give themselves, not something that mindless physical laws bestow upon them. Schneider and Sagan may be onto something when they suggest that life has a tendency to destroy gradients. But if they claim that is the “purpose” of human life in any meaningful sense they are dead wrong.
I read Into the Cool a while ago, and it’s a bad book. Schneider and Sagan posit a law of nonequilibrium thermodynamics: “nature abhors a gradient”. They go on to explain pretty much everything in the universe—from fluid dynamics to abiogenesis to evolution to human aging to the economy to the purpose of life…—as a consequence of this law. The thing is, all of this is done in a very hand-wavy fashion, without any math.
Now, there is definitely something interesting about the fact that when there are gradients in thermodynamic parameters we often see the emergence of stable, complex structures that can be seen as directed towards driving the system to equilibrium. But when the authors start claiming that this is basically the origin of all macroscopic structure, even when the “gradient” involved isn’t really a thermodynamic gradient, things start getting crazy. Benard convection occurs when there is a temperature gradient in a fluid; arbitrage occurs when there is a price gradient in an economy. These are both, according to the authors, consequences of the same universal law: nature abhors a gradient.
Perhaps Schneider has worked his ideas out with greater rigor elsewhere (if he has, I would like to see it), but Into the Cool is in the same category as Per Bak’s How Nature Works and Mark Buchanan’s Ubiquity, a popular book that extrapolates useful insights to such an absurd extent that it ventures into mild crackpot territory.
That’s right—MEP is a statistical characterisation of universal Darwinism, which explains a lot about CAS—including why water flows downhill, turbulence, crack propagation, crystal formation, and lots more.
Schneider’s original work on the topic is Life as a manifestation of the second law of thermodynamics.
Of course, while this work has some scientific interest (a fact I never denied), it is worthless for determining what the purpose of intelligent life and civilization should be. All it does is explain where life came from; it has no value in determining what we want to do now and what we should do next.
Your original statement that started this discussion was a claim that our civilization maximizes entropy. That claim was based on a trivial map-territory confusion, confounding two different referents of the word “maximize”: Referent 1 being “is purposefully designed to greatly increase something by intelligent beings” and Referent 2 being “has a statistical tendency to greatly increase something.”
When Eliezer claimed that intelligent creatures and their civilization would only be interesting if they purposefully acted to maximize novelty, you attempted to refute his claim by saying that our civilization is not purposefully acting to maximize novelty because it has a statistical tendency to greatly increase entropy. In other words, you essentially said “Our civilization does not maximize(1) novelty because it maximizes(2) entropy.” Your entire argument is based on map-territory confusion.
Your comment is a blatant distortion of the facts. Eliezer’s only references to maximizing are to an “expected paperclip maximizer”. He never talks about “purposeful” maximisation. Nor did I attempt the refutation you are attributing to me. You’ve been reduced to making things up :-(
Eliezer never literally referred to the word “maximize,” but the thrust of his essay is that a society that purposefully maximizes, or at least greatly increases novelty, is far more interesting than one that doesn’t. He claimed that, for this reason, a paperclip maximizing civilization would be valueless, because paperclips are all the same.
You said:
In this instance you are using “maximize” to mean “Has a statistical tendency to increase something.” You are claiming that everything humans do is uninteresting because it has a statistical tendency to increase entropy and destroy entropy gradients, and entropy is uninteresting. You’re ignoring the fact that when humans create, we create art, socialization, science, literature, architecture, history, and all sorts of wonderful things. Paperclip maximizers just create the same paperclip, over and over again. It doesn’t matter how much entropy gets made in the process, humans are a quadrillion times more interesting because there is so much diversity in what we do.
Claiming that all the wonderful, varied, and diverse things humans do are no more interesting than paperclipping, just because you could describe them as “entropy maximization,” is ridiculous. You might as well say that all events are equally uninteresting because you can describe all of them as “stuff happening.”
So yes, Eliezer never used the word “maximize” but he definitely claimed that creatures that didn’t value novelty would be boring. And you did attempt to refute his claim by claiming that our civilization’s statistical tendency to increase entropy means that creating art, conversation, science, etc. is no different from paperclipping. I think my objection stands.
You’re in a complete muddle about my position. ‘Maximize’ doesn’t mean ‘increase’. The maximum entropy principle isn’t just “a statistical tendency to increase entropy”. You are apparently thinking about the second law of thermodynamics—which is a completely different idea. Nor was I arguing that human activity was “uninteresting”. Since you so obviously don’t have a clue what I am talking about, I see little point in continuing. Perhaps look into the topic, and get back to us when you know something about it.
You said:
What am I supposed to interpret that as, besides “Human activity is uninteresting”? Or at least, “human activity is as uninteresting as paperclip making”?
Stop trying to pretend that this is just a discussion about physics and evolution. You derive all sorts of horrifying moral positions from the science you are citing, and when someone calls you out on it you act like the problem is that they don’t understand the science properly. I have some problems with your science (you seem to like talking about big ideas that aren’t that strongly supported), but my main objection is to your ethical positions. You constantly act like what is common in nature is what is morally good.
The whole reason I have been so hard on you about personifying forces of nature is that you constantly switch between the descriptive and the normative. You act like humans have a moral duty to maximize entropy and that we’re bad, bad people if we don’t keep evolving. I think that if you stopped personifying natural forces it would make it easier for you to spot when you do this.
Again, answer my moral dilemma: “Would you torture fifty children to death in order to greatly increase the level of entropy in the universe?” Assume that the increase would be greater than what the kids would be able to accomplish themselves if you allowed them to live.
I doubt you even consider moral dilemmas like this because you are interested in talking about big cool ideas, not about challenging them or considering them critically. MEP might have originally been a useful scientific theory when it was first formulated, but you’ve turned it into a Fake Utility Function.
I don’t know what you are talking about. What “horrifying moral positions”?
This whole conversation started because you denigrated human values, saying that all the glorious and wonderful things our civilization does “hardly seems much more interesting” than tiling the universe with paperclips.
You have frequently implied that the metaphorical “goals” of abstract statistical processes like MEP and natural selection are superior to human values like love, compassion, freedom, morality, happiness, novelty, etc. For instance, here you say:
The moral position you keep implicitly arguing for, again and again, is that the metaphorical “goals” of abstract natural processes like MEP and natural selection represent real objective morality, while the values and ideals that human beings base their lives around are just a bunch of “implementation details” that it’s perfectly okay to discard if they get in the way. This is exactly backwards. Joy, love, sympathy, curiosity, compassion, novelty, art, etc. are what is really valuable. Preserving these things is what morality is all about. The solemn moral duty of the human race is to make sure that a sizable portion of the future creatures that will exist share these values, even if they do not physically resemble humans.
I was also extremely horrified by your response to the dilemma I posed you. I attempted to prove that MEP is a terrible moral rule by asking you if you would torture children to death in order to greatly increase entropy. The correct response was: “Of course I wouldn’t, the lives and happiness of children are more important than MEP.” Instead of saying that, you changed the subject by saying that the method of entropy production I suggested was inefficient because it might destroy living systems. This implies, as asparisi put it:
Just to be clear, I don’t think that you would ever torture children. I think the beliefs you write about are, thankfully, completely divorced from your behavior. MEP is your Fake Utility Function, not your real one. But that doesn’t change the fact that it’s horrifying to read about. It’s discouraging that I try to tell people that studying science won’t destroy your moral sense, that it won’t turn you into a Hollywood Rationalist, but then encounter someone to whom it’s done precisely that.
Can you expand on your reasons for believing that studying science was causal to what you categorize here as the destruction of Tim’s moral sense? (I’m not asking why you believe his moral sense has been destroyed; I think I understand your reasoning there. I’m asking why you believe studying science was the cause.)
Because he constantly uses scientific research to justify his moral positions, and then when I challenge them he accuses me of just not understanding the science well enough. He switches back and forth between statements about science and normative statements about what would make the future of humanity good without seeming to notice. Learning about evolutionary science seems to have put him in an Affective Death Spiral around evolution. (I know the symptoms; I used to be in one around capitalism after I started studying economics.) It’s one of the more extreme examples of the naturalistic fallacy I’ve ever seen.
Now, since you’ve read some of my other posts you know that I don’t necessarily accept the strong naturalistic fallacy, the idea that ethical statements cannot be reduced to naturalistic ones at all. But I definitely believe in the weaker form of the naturalistic fallacy, the idea that things that are common in nature are not necessarily good. And that is the form of the fallacy Tim makes when he says absurd things like our civilization maximizes entropy or that our values are not precious things that need to be preserved if they get in evolution’s way.
Studying the science of evolution certainly wasn’t the sole cause, maybe not even the main cause, of Tim’s ethical confusion, but it certainly contributed to it.
I totally would. Then—if the situation demanded it and if I didn’t have a fat guy available—I’d throw them all in front of a trolley. Because not torturing children is evil when the alternative to the contrived torturing is a contrived much worse thing.
I meant that I didn’t ever think he’d torture children for no reason other than to increase the level of entropy in the universe. (In my original contrived hypothetical the entropy increase was accomplished by having a sadistic alien make a star go nova in return for getting to watch the torture. The star was far enough away from inhabited systems that the radiation wouldn’t harm any living things.)
I wasn’t meaning to set up “not torturing children” as a deontological rule. Obviously there are some circumstances where it is necessary, such as torturing one child to prevent fifty more children from being tortured for an equal amount of time per child. What I was trying to do was illustrate that Tim’s Maximum Entropy Principle was a really, really bad “maximand” to follow by creating a hypothetical where following it would make you do something insanely evil. I think we can both agree that entropy maximization (at least as an end in itself rather than as a byproduct of some other end) is far less important than preventing the torture of children.
Tim responded to my question by sidestepping the issue; instead of engaging the hypothetical, he said that a nova was a bad way to maximize entropy because it might kill living things that would go on to produce more entropy, even though I tried to constrain the hypothetical so that that wasn’t a possibility.
But that’s complete nonsense. I already explained this here, saying:
Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able—they don’t attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind. This is not some kind of moral assertion, it’s just a straightforwards description of how these systems would behave. Entropy is, I claimed, not much more interesting than paperclips.
The intended lesson here was NOT that human civilization is somehow uninteresting, but rather that optimisation processes with simple targets can produce vast complexity (machine intelligence, space travel, nanotechnology, etc).
This is really just a particular case of the “simple rules, complex dynamics” theme that we see in complex systems theory (e.g. game of life, rule 30, game of go, etc).
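The “simple rules, complex dynamics” point is easy to demonstrate concretely. Below is a minimal sketch of Wolfram’s Rule 30, one of the examples named above; the function names are mine, and treating cells outside the row as 0 is just one common boundary convention:

```python
# Rule 30: a one-dimensional cellular automaton whose update rule fits in one
# line, yet whose evolution from a single live cell is famously irregular.

def rule30_step(cells):
    """One update of Rule 30: new cell = left XOR (center OR right).
    Cells beyond the row edges count as 0."""
    padded = [0] + cells + [0]
    return [padded[i] ^ (padded[i + 1] | padded[i + 2])
            for i in range(len(cells))]

def rule30_rows(width, steps):
    """Evolve a single live cell in the middle of a row of `width` zeros."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = rule30_step(row)
        rows.append(row)
    return rows

if __name__ == "__main__":
    # Print the growing triangle; the chaotic texture appears within a few rows.
    for row in rule30_rows(width=31, steps=12):
        print("".join("#" if c else "." for c in row))
```

Despite the one-line update rule, the center column of the resulting pattern is irregular enough that it has been used as a pseudorandom number source, which is the sense in which simple targets and simple rules can still produce vast complexity.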
So: this whole “horrifying moral position” business is your own misunderstanding.
Failure to address your other points is not a sign of moral weakness—it just doesn’t look as though the discussion is worth my time.
That wasn’t an explanation, it was an assertion. I was not satisfied that that assertion was supported by the rest of your statements.
That is a much better explanation of your position. You are correct that that is not a moral assertion. However, before that you said:
And also:
Saying something is “screwed up” is a moral judgement. Saying that a future where boredom has no terminal value and exists purely instrumentally is not valueless is a moral judgement. Any time you compare different scenarios and argue that one is more desirable than the others you are making a moral judgement. And the ones you made were horrifying moral judgements because they advocate passively standing aside while creatures destroy everything human beings value.
Even if that’s true, a lot more fun and complexity would be generated by a human-like civilization on the way to that end than by paperclippers making paperclips.
Besides, humans are often seen making a conscious effort to prevent things from being reduced to a maximum entropy state. We make a concerted effort to preserve places and artifacts of historical significance, and to prevent ecosystems we find beautiful from changing. Human civilization would not reduce the world to a maximum entropy state if it retains the values it does today.
Complexity is not necessarily a goal in itself. People want a complex future because we value many different things, and attempting to implement those values all at once leads to a lot of complexity. For instance, we value novelty, and novelty is more common in a complex world, so we generate complexity as an instrumental goal toward the achievement of novelty.
The fact that paperclip maximizers would build big, cool machines does not make a future full of them almost as interesting as a civilization full of intelligences with human-like values. Big cool machines are not nearly as interesting as the things people do, and I say that as someone who finds big cool machines far more interesting than the average person.
My other points are the core of my objection to your views. Besides, it would take like, ten seconds to write “I wouldn’t torture children to increase the entropy levels,” I think that that at least would be worth your time. Looking at your website, particularly your essay on Nietzscheanism, I think I see the wrong turn you made in your thought processes.
When you discuss W. D. Hamilton you state, quite correctly, that:
You then go on to argue that in the more transparent future such self-deception will be impossible and people will be forced to become proud Nietzscheans. You say:
Your problem is that you didn’t take the implications of Hamilton’s work far enough. There’s an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.
Now, why then do people do so many nasty things if we evolved to be genuine altruists? Well evolution, being the amoral monster it is, metaphorically “realized” that being an altruist all the time might decrease our IGF, so it metaphorically “cursed” us with akrasia and other ego-dystonic mental health problems that prevent us from fulfilling our altruistic potential. Self-deception, in this account, does not exist to make us think we’re altruists when we’re really IGF maximizers, it exists to prevent us from recognizing our akrasia and fighting it.
This theory has much more predictive power than your self-deception theory; it explains things like why there is a correlation between conscientiousness (willpower) and positive behavior. But it also has implications for the moral positions you take. If humans evolved to cherish values like altruism for their own sake (and be sabotaged from achieving them by akrasia), rather than to maximize IGF and deceive ourselves about it, then it is a very bad thing if those values are destroyed and replaced by something selfish and nasty like what you call “Nietzscheanism”.
I do say in my essay: “I think Hamilton’s points are good ones”.
You need to look up “altruism”—since you are not using the term properly. An “altruist”, by definition, is an agent that takes a fitness hit for some other agent with no hope of direct or indirect repayment. You can’t argue that altruists exhibit a net fitness gain—unless you are doing fancy footwork with your definitions of “fitness”.
Your account of human moral hypocrisy doesn’t look significantly different from mine to me. However, you don’t capture my own position—which may help to explain your perceived difference. I don’t think most humans are “really IGF maximizers”. Instead, they are victims of memetic hijacking. They do reap some IGF gains, though—looking at the 7 billion humans.
I find your long sequence of arguments that I am mistaken on this issue to be tedious and patronising. I don’t share your values is all. Big deal: rarely do two humans share the same values.
It may be worth your time to explicitly disclaim the whole “torturing children to blow up stars” position (instead of appearing to dodge it a second or third time), particularly seeing as if it is a misunderstanding, it is not uniquely Ghatanathoah’s.
When biologists say “the purpose of a nose is smelling things” you don’t have to personify mother nature to make sense of what they mean. Personifying the organism is often enough. Since the organism may not be so very different from a person, this is often an easier step.
That doesn’t change the fact that personification is a way to help people think about reality more easily at the expense of accurately describing it. Noses don’t literally have a purpose. It’s just that organisms that are good at smelling things tend to reproduce more.
The problem with Schneider and Sagan is that they confound this metaphorical meaning of the word purpose (the utility function of a personified entity) with a different meaning (ideals to live your life around). Hence their second book makes an absurd statement* which, when you strip out the word “purpose,” basically says: “knowing that decreasing entropy gradients is a major reason life arose will give you ideals to live your life around.” That’s ridiculous.
*To be fair that statement was a cover blurb, so it’s possible that it was written by the publisher, not Schneider and Sagan.
The game seems to involve willfully misunderstanding me. Talking about the “reason” for adaptations is biology 101. If you can’t grasp such talk, just ignore people like me. I would prefer to talk to those who are capable of understanding what I mean instead.
I know that. Most of the time I use the same language. But that’s because I trust the people I’m talking to to know that I’m speaking metaphorically. I also trust them to understand enough basic morality to know that just because something is extremely common in nature, doesn’t mean it’s morally good. The reason I am not doing that when talking to you is that I am not convinced that I should extend you that trust. You constantly confuse the descriptive with the normative and the “common in nature” with the “morally good.”
I do not have issue with the majority of factual statements you make. What I have issue with is the appalling moral statements you make. I get the impression that you are upset at Eliezer because he wants to preserve the values that make us morally significant beings, even if doing so will stop us from evolving. You act like evolving is our “real” purpose and that things that people actually value, like creativity, novelty, love, art, friendship, etc. are not important. This is the exact opposite of the truth. Evolution is useful only so far as it preserves and enhances our values such as creativity, novelty, love, art, friendship, etc.
Again, if you really think maximizing entropy is your real purpose in life, would you torture 50 children to death if it would get some sadistic aliens to make a far-off star go nova for you? Detonating one star would produce far more entropy than those children would over their lifetimes, but I still bet you wouldn’t torture them, because you know it’s wrong. The fact that you wouldn’t do this proves you think doing the right thing is more important than maximizing entropy.
Gee, thanks for that.
People often seem to think that entropy maximisation principles imply that organisms should engage in wanton destruction, blowing things up. However, that is far from the case. Causing explosions is usually a very bad way of maximising entropy in the long term—since it tends to destroy the world’s best entropy maximisers, living systems. Living systems go on to cause far more devastation than exploding a sun ever could. So wanton destruction of a sun is bad—not good—from this perspective.
So, if the nova’s explosion did not destroy any living systems, you would happily trade the 50 kids for the nova explosion?
That’s why I said “far-off” star. I was trying to imply that the star was so far away its destruction would not harm any living things. Please don’t fight the hypothetical.
In any case, the relevant part of the question isn’t “Would you blow up a star?” That was just an attempt to give the hypothetical some concrete details so it sounded less abstract. The relevant question is “Would you torture fifty children to death in order to greatly increase the level of entropy in the universe?” Assume that the increase would be greater than what the kids would be able to accomplish themselves if you allowed them to live.
This is ridiculous. Are you actually proposing entropy maximisation as a reduction of “should”, normative ethical theory, etc., or do you just find it humorous to waste our time?
Are you sure? A black hole is the system with the most possible entropy among those with a given mass. Your point would only be valid if interstellar civilizations are easy to achieve, and given that we don’t see any of those around I don’t think they are.
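For reference, the black-hole claim here is the standard Bekenstein–Hawking result: a black hole’s entropy is proportional to the area of its event horizon, and hence (via the Schwarzschild radius) to the square of its mass:

```latex
S_{\text{BH}} = \frac{k_B c^3 A}{4 G \hbar},
\qquad A = 4\pi r_s^2, \quad r_s = \frac{2GM}{c^2}
\quad\Longrightarrow\quad
S_{\text{BH}} = \frac{4\pi G k_B}{\hbar c}\, M^2 .
```

Because the entropy scales as $M^2$, collapsing a fixed amount of mass into a single black hole yields vastly more entropy than any ordinary configuration of that matter, which is the basis of the point above.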