The main problem that appeared to me in the discussion is that the present state of the universe is really unlikely, and you would never get it by chance. This is true, and the universe does naively appear to have been designed to produce us. However, this is a priori massively unlikely. This implies that we exist in a universe that tries out many possibilities (many worlds interpretation) and anthropic bias ensures that all observers see weird and interesting things. Robert’s problem is that he gets an emotional kick out of ascribing human-friendly purpose to survivorship bias. I’m pretty sure that nothing other than the most painstaking argument is going to get him to realize his folly, and that just isn’t going to happen in one-hour video chats.
Big World rather. Many-worlds doesn’t give different laws of physics in the way that the string theory landscape or Tegmark’s mathematical universe hypothesis do.
Any hypothesis that assigns a really low probability to the present state of the universe is probably wrong. (The universe is in a state such that to uniquely determine it, we need a very complicated theory. Therefore, we should look for less complicated theories which contain it and many other things, and count on anthropics to ensure we only see the parts of the universe we’re accustomed to.)
That’s what I said.
Have you read Sean Carroll’s “From Eternity to Here”? It’s a fairly layman-friendly take on that problem (or, I suppose more accurately, the problem of why the past was in such an improbable state of low entropy). I think his explanation would fall under Carl Shulman’s “Big World” category.
I think this argument is mostly about whether purpose is there—not about where it comes from.
Designoid entities as a result of anthropic selection effects seem quite possible in theory—and it would be equally appropriate to describe them as being purposeful [standard teleology terminology disclaimers apply, of course].
Especially if you unpack “purposeful” as meaning “stimulating that portion of the human brain that evolved to predict the behavior of other entities”. ;-)
The real confusion about purpose arises when we confuse the REAL definition of purpose (i.e. that one) with the naive inbuilt notion of “purposeful” (i.e. “somebody did it on purpose”).
That should not be the definition of purpose—if we are trying to be scientific. Martian scientists should come to the same conclusions.
“Purpose”—in this kind of context—could mean “goal directed”—or it could mean pursuing a goal with a mind that predicts the future. The former definition would label plants and rivers flowing downhill as purposeful—whereas the latter would not.
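To make the two proposed senses concrete, here is a minimal sketch in Python (toy classes invented purely for illustration, not anything from the discussion) contrasting a system that merely closes the gap to a goal state with one that also predicts the future before acting:

    # Sense 1: "goal directed" - acts so as to reduce the gap to a target
    # state right now, with no model of the future (water flowing downhill).
    class Reactive:
        def __init__(self, target):
            self.target = target

        def act(self, position):
            return 1 if position < self.target else -1

    # Sense 2: pursues a goal with a "mind that predicts the future" - it
    # simulates the predicted outcome of each candidate action first.
    class Predictive:
        def __init__(self, target, model):
            self.target = target
            self.model = model  # model(position, action) -> predicted position

        def act(self, position):
            return min([-1, 0, 1],
                       key=lambda a: abs(self.model(position, a) - self.target))

On this toy rendering, a river-like hill-descender fits only the first sense; the second requires an internal model of what each action will do.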
Do you mean that a Martian scientist would not conclude that when a human being uses that word, they are referring to a particular part of their brain that is being stimulated?
What I’m saying is that the notion of “purpose” is an interpretation we project onto the world: it is a characteristic of the map, not of the territory.
To put it another way, there are no purposeful things, only things that “look purposeful to humans”.
Another mind with different purpose-detecting circuitry could just as easily come to different conclusions. So if the Martians have different purpose-recognition circuits, they will be led astray, and we will end up having all sorts of arguments over the boundary cases where human and Martian intuitions disagree on whether something should be called “purposeful”.
tl;dr: if it’s part of the map, the description needs to include whose map it is.
Now you have to define “mind” as well. It doesn’t seem to me that that’s actually reducing anything here. ;-)
I’m not sure we can rule out a meaningful and objective measure of purposefulness, or something closely related to it.
If I saw a Martian laying five rocks on the ground in a straight line, I would label it an optimization process. Omega might tell me that the Martian is a reasonably powerful general optimization process, currently optimizing for a target like “Indicate direction to solstice sunrise” or “Communicate concept of five-ness to Terran”. In a case like that the pattern of five rocks in a line is highly intentional.
Omega might instead tell me that the Martian is not a strong general optimization process, but that members of its species frequently arrange five stones in a line as part of their reproductive process; that would be relatively low in intentionality.
But low intentionality can also go with high intelligence. Omega could tell me that the Martian is a strong general optimization agent, is currently curing Martian cancer, and that smart Martians just put rocks in a line when they’re thinking hard. (Though you might reparse that as: there is a part of the Martian brain that is a specialized optimizer for putting stones in a line. I think knowing whether this is valid would depend on the specifics of the thinking-hard → stones-in-a-line chain of causality.)
And if I just found five stones in a line on Mars, I would guess zero intentionality, because that doesn’t constitute enough evidence for an optimization process, and I have no other evidence for Martians.
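The inference in these scenarios can be phrased as a Bayesian update. Here is a minimal sketch in Python, with toy numbers invented purely for illustration, of why five stones in a line is not, on its own, much evidence for an optimization process:

    # Toy Bayesian update. H = "an optimization process placed these stones";
    # not-H = "natural processes did". All numbers are invented.
    prior_h = 1e-6             # Martian optimizers are a priori very unlikely
    p_line_given_h = 0.1       # an optimizer might well produce such a pattern
    p_line_given_not_h = 1e-4  # stones occasionally line up by chance

    evidence = prior_h * p_line_given_h + (1 - prior_h) * p_line_given_not_h
    posterior_h = prior_h * p_line_given_h / evidence
    print(posterior_h)  # ~0.001: a thousandfold update, still long odds against

The pattern shifts the odds substantially in relative terms, but against a tiny prior for Martian optimizers the posterior stays close to zero, matching the “zero intentionality” guess above.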
Evolution is an optimization process, but it doesn’t have “purpose”—it simply has byproducts that appear purposeful to humans.
Really, most of your comment just helps illustrate my point that purposefulness is a label attached by the observer: your knowledge (or lack thereof) of Martians is not something that changes the nature of the rock pattern itself, not even if you observe the Martian placing the rocks.
(In fact, your initial estimate of whether the Martian’s behavior is purposeful is going to depend largely on a bunch of hardwired sensory heuristics. If the Martian moves a lot slower than typical Earth wildlife, for example, you’re less likely to notice it as a candidate for purposeful behavior in the first place.)
How do you know it doesn’t have purpose? Because you know how it works, and you know that nothing like “Make intelligent life” was contained in its initial state in the way it could be contained in a Martian brain or an AI.
The dumb mating Martian also did not leave the rocks with any (intuitively labeled) purpose.
I’m saying: given high knowledge of the actual process behind something, we can take a measure that can be useful, and that corresponds well to what we label intentionality.
In turn, if we have only the aftermath of a process as evidence, we may be able to identify features which correspond to a certain degree of intentionality, and that might help us infer specifics of the process.
What Wright said in response to that claim was: how do you know that?
“Optimisationverse
The idea that the world is an optimisation algorithm is rather like Simulism—in that it postulates that the world exists inside a computer.
However, the purpose of an optimisationverse is not entertainment—rather it is to solve some optimisation problem using a genetic algorithm.
The genetic algorithm is a sophisticated one that evolves its own recombination operators, discovers engineering design—and so on.”
In this scenario, the process of evolution we witness does have a purpose—it was set up deliberately to help solve an optimisation problem. Surely this is not a p=0 case...
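For concreteness, here is a minimal sketch of the kind of genetic algorithm the quoted passage gestures at (plain Python, everything invented for illustration; the passage imagines something far more sophisticated, e.g. an algorithm that evolves its own recombination operators):

    import random

    def fitness(genome):
        # The optimisation problem the universe is "for". Toy objective:
        # maximize the number of 1-bits in the genome.
        return sum(genome)

    def evolve(pop_size=50, genome_len=20, generations=100):
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]               # selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(genome_len)        # one-point recombination
                child = a[:cut] + b[cut:]
                child[random.randrange(genome_len)] ^= 1  # point mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

Note that no genome in the population contains any representation of the goal; the goal lives only in the fitness function imposed from outside.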
That’s not the same thing as acting purposefully—which evolution would still not be doing in that case.
(I assume that we at least agree that for something to act purposefully, it must contain some form of representation of the goal to be obtained—a thermostat at least meets that requirement, while evolution does not… even if evolution were as intentionally designed and purposefully created as the thermostat.)
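In code terms, the proposed criterion is just that the goal appears as explicit internal state. A minimal sketch, again with invented names:

    class Thermostat:
        def __init__(self, setpoint):
            self.setpoint = setpoint  # the goal, explicitly represented inside

        def step(self, temperature):
            # Behavior is driven by the gap between the sensed state and the goal.
            return "heat on" if temperature < self.setpoint else "heat off"

Evolution carries no analogous internal setpoint; as in the genetic-algorithm sketch above, any “goal” sits outside the process itself.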
My purposeful thinking evolved into a punny story:
http://lesswrong.com/lw/2kf/purposefulness_on_mars/
It would have a purpose in my proposed first sense—and in my proposed second sense—if we are talking about the evolutionary process after the evolution of forward-looking brains.
Evolution (or the biosphere) was what was being argued about in the video. The claim was that it didn’t behave in a goal-directed manner—because of its internal conflicts. The idea that lack of harmony could mess up goal-directedness seems OK to me.
One issue is whether the biosphere has enough harmony for a goal-directed model to be useful. If it had a single global brain, and could do things like pool resources to knock out incoming meteorites, it seems obvious that a goal-directed model would actually be useful in predicting the behaviour of the overall system.
Most scientific definitions should try to be short and sweet. Definitions that include a description of the human mind are ones to eliminate.
Here, the idea that purpose is a psychological phenomenon is exactly what was intended to be avoided—the idea is to give a nuts-and-bolts description of purposefulness.
Re: defining “mind”—not a big deal. I just mean a nervous system—so a dedicated signal processing system with I/O, memory and processing capabilities.
Any nervous system? That seems like a bad idea. Is a standard neural net trained to recognize human faces a mind? Is a hand-calculator a mind? Also, how does one define having memory and processing capabilities? For example, does an abacus have a mind? What about a slide rule? What about a Pascaline or an Arithmometer?
I just meant “brain”. So: calculator—yes, computer—yes.
Those other systems are rather trivial. Most conceptions of what constitutes a nervous system run into the “how many hairs make a beard” issue at the lower end—it isn’t a big deal for most purposes.
Hm. Which one is it? ;-)
So, a thermostat satisfies your definition of “mind”, so long as it has a memory?
Human mind: complex. Cybernetic diagram of minds-in-general: simple.
A thermostat doesn’t have a “mind that predicts the future”. So, it is off the table in the second definition I proposed.
Dude, have you seriously not read the sequences?
First you say that defining minds is simple, and now you’re pointing back to your own brain’s inbuilt definition in order to support that claim… that’s like saying that your new compressor can compress multi-gigabyte files down to a single kilobyte… when the “compressor” itself is a terabyte or so in size.
You’re not actually reducing anything, you’re just repeatedly pointing at your own brain.
Re: “First you say that defining minds is simple, and now you’re pointing back to your own brain’s inbuilt definition in order to support that claim… ”
I am talking about a system with sensory input, motor output and memory/processing. Like in this diagram:
http://upload.wikimedia.org/wikipedia/commons/7/7a/SOCyberntics.png
That is nothing specifically to do with human brains—it applies equally well to the “brain” of a washing machine.
Such a description is relatively simple. It could be presented to Martians in a manner so that they could understand it without access to any human brains.
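As a sketch of how simple that description is, here is the same input/state/output loop reduced to Python (my own toy rendering of the idea, not taken from the linked diagram), with a washing-machine controller as the instance:

    # Minds-in-general as a loop: sensory input -> memory/processing -> motor
    # output. Anything with this shape fits the proposed description.
    class WashingMachineBrain:
        def __init__(self):
            self.memory = {"phase": "fill"}  # internal state

        def step(self, sensors):
            # sensors is e.g. {"water_level": 0.4, "door_closed": True}
            if self.memory["phase"] == "fill" and sensors["water_level"] >= 1.0:
                self.memory["phase"] = "wash"
            return {"valve_open": self.memory["phase"] == "fill",
                    "drum_spinning": self.memory["phase"] == "wash"}

A thermostat has exactly the same shape, which is the point of the reply below.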
That diagram also applies equally well to a thermostat, as I mentioned in a great-great-grandparent comment above.