Jorge Luis Borges, The Babylon Lottery, 1941. Government by lottery. Living under a lottery system leads to greater expectation of random events, greater belief that life is and should be ruled by randomness, and further extension of the lottery's scope, in a feedback loop that escalates until every aspect of everyone's life is controlled by the lottery.
Anon: “The notion of “morally significant” seems to coincide with sentience.”
Yes; the word “sentience” seems to be just a placeholder meaning “qualifications we’ll figure out later for being thought of as a person.”
Tim: Good point that people have a very strong bias to associate rights with intelligence, whereas empathy is a better criterion. The problem is that dogs have lots of empathy. Let's say intelligence and empathy are both necessary but not sufficient.
James: “Shouldn’t this outcome be something the CEV would avoid anyway? If it’s making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist.”
CEV is not a magic "do what I mean" incantation. Even supposing the idea were worked out before the first AI is built, you probably don't have a mechanism to implement it.
anon: “It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves.”
Something is missing from that sentence. Whatever you meant, let’s not rule out creating new species. We should, eventually.
Eliezer: Creating new sentient species is frightening. But is creating new non-sentient species less frightening? Any new species you create may out-compete the old and become the dominant lifeform. It would be a big loss to create a non-sentient species that replaced sentient life.
"I propose this conjecture: In any sufficiently complex physical system there exists a subsystem that can be interpreted as the mental process of a sentient being experiencing unbearable sufferings."
It turns out—I’ve done the math—that if you are using a logic-based AI, then the probability of having alternate possible interpretations diminishes as the complexity increases.
If you allow /subsystems/ to mean a subset of the logical propositions, then there could be such interpretations. But I think it isn’t legit to worry about interpretations of subsets.
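I haven't shown the math here, but a loose, invented illustration of the general intuition (a propositional toy in Python, not the actual first-order case) is to count the truth assignments that satisfy a growing set of constraints; the more structure you add, the fewer alternate interpretations survive:

    from itertools import product

    # Toy propositional constraints over three symbols A, B, C.
    # Each constraint is a function from a truth assignment to True/False.
    constraints = [
        lambda a: a["A"] or a["B"],        # A or B
        lambda a: (not a["A"]) or a["C"],  # A implies C
        lambda a: a["B"] != a["C"],        # B xor C
    ]

    symbols = ["A", "B", "C"]
    assignments = [dict(zip(symbols, values))
                   for values in product([True, False], repeat=len(symbols))]

    # Count how many assignments ("interpretations") survive as constraints accumulate.
    for k in range(len(constraints) + 1):
        surviving = [a for a in assignments if all(c(a) for c in constraints[:k])]
        print(f"with {k} constraints: {len(surviving)} interpretations remain")
    # prints 8, then 6, 4, 2 surviving interpretations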
BTW, Eliezer, regarding this recent statement of yours: “Goetz’s misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction”: I challenge you to find one post where you have tried to correct me in a misunderstanding of you, or even to identify the misunderstanding, rather than just complaining about it in a non-specific way.
Eliezer: “I’ll go ahead and repeat that as Goetz’s misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction, that I will not be responding to Goetz’s comment.”
Really? I challenge you to point to ONE post in which you have tried to correct a misunderstanding by me of your opinion, rather than just complaining about my “misunderstandings” without even saying what the misunderstanding was.
Eliezer, I have probably made any number of inaccurate depictions of your opinions, but you can’t back away from these ones. You DO generally think that your opinion on topics you have thought deeply about is more valuable than the opinion of almost everyone, and you HAVE thought deeply about fun theory. And you ARE planning to build an AI that will be in control of the world. You might protest that “take over the world” has different connotations. But there’s no question that you plan for your AI to be in charge.
It is deeply creepy and disturbing to hear this talk from someone who already thinks he knows better than just about everybody about what is good for us, and who plans to build an AI that will take over the world.
Michael, I thought that you advocated comfort with lying because smart people marginalize themselves by compulsive truth-telling. For instance, they find it hard to raise venture capital. Or (to take an example that happened at my company), when asked “Couldn’t this project of yours be used to make a horrible terrorist bioweapon?”, they say, “Yes.” (And they interpret questions literally instead of practically; e.g., the question actually intended, and that people actually hear, is more like, “Would this project significantly increase the ease of making a bioweapon?”, which might have a different answer.)
Am I compulsively telling the truth again? Doggone it.
Is it just me, or did Wright’s writing style sound very much like Eliezer’s?
pdf23ds: The claim that atheism inevitably leads to nihilism, and that belief in god inevitably relieves it, is made regularly by religious types in the West as the core of their argument for religion.
Today, in the West, people think that atheism leads to an existential crisis of meaning. But in ancient Greece, people believed in creator gods, and yet had to find their own sense of purpose exactly the same as an atheist.
We assume that the religious person has a purpose given by God. But Zeus would have said that the purpose of humans was to produce beautiful young women for him to have sex with. Ares would have said their purpose was to kill each other. Bacchus would have said it was to party. And so on. The gods ignored humans, had trivial purposes for them, or even hostile intent towards them.
Every believing Greek had to find their own meaning in life, often based on a sense of community. This meaning, or lack thereof, bore no relation to whether they believed in the gods or not.
Anna wrote:
Maybe it will make it easier, but they didn't really work at it. By having this alleged surgery, will it make them more or less prone to believe in the quick fix or the long-term discipline of working at it?
The reason for practicing discipline is to be able to solve problems. It would not be rational to avoid a quick solution to your life’s biggest problem, in order to gain experience that might possibly be useful in solving smaller problems later on.
On the flip side, I’d like to see less-rational characters in fantasy books. I can’t believe in pseudo-medieval worlds where the main characters have no ethnic, racial, gender, or class prejudices; have no superstitions; and never make decisions for religious reasons.
(In some fantasy, notably Tolkien, ethnic and racial stereotypes are allowed—but in those fantasy worlds, they’re true almost 100% of the time; and the author assumes that the reader, like the author, won’t even think of them as prejudices.)
In 1998, I wrote a rec.arts.int-fiction post called "Believable stupidity" (http://groups.google.com/group/rec.arts.int-fiction/browse_thread/thread/60a077934f89a291/3fffb9048965857d?lnk=gst&q=believable+stupidity#3fffb9048965857d)
saying that Eliza, a computer program that matches patterns and fills in a template to produce a response, always wins the Loebner competition because template matching is more like what people do than reasoning is.
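To make the template-matching idea concrete, here is a minimal, invented Eliza-style responder (a Python sketch; the patterns and templates are made up for illustration and are not Weizenbaum's actual script):

    import random
    import re

    # Invented Eliza-style rules: a regex pattern and response templates to fill in.
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (r"because (.*)", ["Is that the real reason?"]),
        (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
    ]

    def respond(utterance: str) -> str:
        text = utterance.lower().strip(" .!?")
        for pattern, templates in RULES:
            match = re.match(pattern, text)
            if match:
                # Fill the template with whatever the pattern captured.
                return random.choice(templates).format(*match.groups())
        return "Please go on."  # unreachable, since the last rule matches anything

    print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"

No reasoning anywhere, just pattern plus fill-in, yet the output reads as conversational.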
Someone (Russell?) once commented on the surprising efficacy of mathematics, which was developed by people who did not believe that it would ever serve any purpose, and yet ended up being at the core of many pragmatic solutions.
A companion observation is on the surprising inefficacy of philosophy, which is intended to solve our greatest problems, and never does. Like Eliezer, my impression is that philosophy just generates a bunch of hypotheses, with no way of choosing between them, until the right hypothesis is eventually isolated by scientists. Philosophy is usually an attempt to do science without all the hard work. One might call philosophy the "science of untestable hypotheses".
But, on the other hand, there must be cases where philosophical inclinations have influenced people to pursue lines of research that solved some problem sooner than it would have been solved without the initial philosophical inclination.
One example is the initial conception that the Universe could be described mathematically. Kepler and Newton worked so hard at finding mathematical equations to govern the movements of celestial bodies because they believed that God must have designed a Universe according to some order. If they’d been atheists, they might never have done so.
This example doesn’t redeem philosophy, because I believe their philosophies were helpful only by chance. I’d like to see how many examples there are of philosophical notions that sped up research that proved them correct. Can anyone think of some?
Eliezer: "To make it clear why you would sometimes want to think about implied invisibles, suppose you're going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster. By the time the spaceship gets there and sets up a colony, the universe's expansion will have accelerated too much for them to ever send a message back. Do you deem it worth the purely altruistic effort to set up this colony, for the sake of all the people who will live there and be happy? Or do you think the spaceship blips out of existence before it gets there? This could be a very real question at some point."
I don't see any difference between deciding to send the spaceship even though the colonists will be outside my light cone when they get there, and deciding to send the spaceship even though I will be dead when they get there.

I don't think it's possible to get outside Earth's light cone by travelling at less than the speed of light, is it? I'm not well-educated about such things, but I thought that leaving a light cone was possible only during the very early stages (e.g., the first several seconds) after the big bang. Of course, that was said back when people believed the universe's expansion was slowing down. But unless the universe's expansion allows things to move out of Earth's light cone—and I suspect that allowing that possibility would allow violation of causality, because it seems it would require a perceived velocity relative to Earth faster than the speed of light—then the entire exercise may be moot; the notion of invisibles may be as incoherent as the atomically-identical zombies.
PK is right. I don’t think a GLUT can be intelligent, since it can’t remember what it’s done. If you let it write notes in the sand and then use those notes as part of the future stimulus, then it’s a Turing machine.
The notion that a GLUT could be intelligent is predicated on the good-old-fashioned AI idea that intelligence is a function that computes a response from a stimulus. This idea, most of us in this century now believe, is wrong.
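A toy way to see the distinction (a Python sketch; the table entries are invented): a pure GLUT maps each stimulus to a response with no memory, while letting its own output ride along as part of the next stimulus ("notes in the sand") gives the combined system state:

    # A giant lookup table (GLUT): a fixed map from stimulus to response.
    # The entries are invented; a real GLUT would enumerate every possible input.
    GLUT = {
        "hello": "hi",
        "hello | hi": "we already said hello",  # a key that includes prior "notes"
    }

    def stateless_respond(stimulus: str) -> str:
        """Pure GLUT: the same stimulus always produces the same response."""
        return GLUT.get(stimulus, "...")

    def stateful_respond(stimulus: str, notes: str) -> tuple[str, str]:
        """The previous output ("notes in the sand") is folded into the next
        stimulus, so the combined system has memory, like a machine with a tape."""
        key = f"{stimulus} | {notes}" if notes else stimulus
        response = GLUT.get(key, "...")
        return response, response  # the response becomes the next round's notes

    notes = ""
    for stimulus in ["hello", "hello"]:
        reply, notes = stateful_respond(stimulus, notes)
        print(reply)  # "hi", then "we already said hello"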
Eliezer, I suspect you are not being 100% honest here: "I don't have any problems with a GLUT being conscious."
I have problems with a GLUT being conscious. (Actually, the GLUT fails dramatically to satisfy the graph-theoretic requirements for consciousness that I alluded to but did not describe earlier today, but I wouldn’t believe that a GLUT could be conscious even if that weren’t the case.)
Going on about zombies and consciousness as if you were addressing philosophical issues, when you have redefined consciousness to mean a particular easily-comprehended computational or graph-theoretic property, falls squarely into the category of ideas that I consider Silly.
Although, ironically, I’m in the process of doing exactly that. I will try to come up with a rationalization for why it is Not Silly when I do it.
Caledonian writes:
Um, no. What it IS is a radically different meaning of the word than what the p-zombie nonsense uses. Chalmers’ view requires stripping ‘consciousness’ of any consequence, while Eliezer’s involves leaving the standard usage intact.
'Consciousness' in that sense refers to self-awareness or self-modeling, the attempt of a complex computational system to represent some aspects of itself, in itself. It has causal implications for the behavior of the system, can potentially be detected by an outside observer who has access to the mechanisms underlying that system, and is fully part of reality. What Eliezer wrote is consistent with that definition of consciousness.

But that is not "the standard usage". It's a useless usage. Self-representation is trivial and of no philosophical interest. The interesting philosophical question is why I have what the 99% of the world who don't use your "standard usage" mean by "consciousness". Why do I have self-awareness? And by self-awareness, I don't mean anything I can currently describe computationally, or know how to detect the consequences of.
This is the key unsolved mystery of the universe, the only one that we have really no insight into yet. You can’t call it “nonsense” when it clearly exists and clearly has no explanation or model. Unless you are a zombie, in which case what I interpret as your stance is reasonable.
There is a time to be a behaviorist, and it may be reasonable to say that we shouldn't waste our time pursuing arguments about internal states that we can't detect behaviorally, but it is Silly to claim to have dispelled the mystery merely by defining it away.
There have been too many attempts by scientists to make claims about consciousness that sound astonishing, but turn out to be merely redefinitions of "consciousness" to something trivial. Like this, for instance. Or Crick's "The Astonishing Hypothesis", or other works by neuroscientists on "consciousness" when they are actually talking about focus of attention. I have developed an intellectual allergy to such things.
Eliezer: "Consciousness, whatever it may be—a substance, a process, a name for a confusion—is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud. The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences."
Eliezer, I'm shocked to see you write such nonsense. This only shows that you don't understand the zombie hypothesis at all. Or, you suppose that intelligence requires consciousness. This is the spiritualist, Searlian stuff you usually oppose.

The zombie hypothesis begins by asserting that I have no way of knowing whether you are conscious, no matter what you write. I expect you, of all people, to accept this, since you believe that you are Turing-computable. You haven't made an argument against the zombie hypothesis; you've merely asserted that it is false and called that assertion an argument.
The only thing I can imagine is that you have flipped the spiritualist argument around to its mirror image. Instead of saying that “I am conscious; Turing machines may not be conscious; therefore I am not just a Turing machine”, you may be saying, “I am conscious; I am a Turing machine; therefore, all Turing machines that emit this sequence of symbols are conscious.”
If you want to fight the good fight, edit the section “Limits of Reductionism” in the Wikipedia article on Reductionism. It cites many examples of things that are merely complex, as evidence that reductionism is false.
Good post. Nick’s point is also good.
When parents say they don’t care who started it, it may also be a strategy to minimize future fighting. Justice is not always optimal, even in repeated interactions.