Aw great, now my post is broken.
Tiiba2
The URL to the anime fanfiction seems to be worse than broken. My browser doesn’t even say what you wrote, just that it’s “illegal”.
Well, the farmer’s wife seems to be one character who was thankful...
...and fared the worst.
But is this really cultural gloominess? Maybe this one is just reserved for when you’re in a really bad mood. What are the other stories in that book like?
Untranslatable 2: The frothy mixture of lube and fecal matter that is sometimes the byproduct of anal sex.
1) Who the hell is Master of Fandom? A guy who maintains the climate control system, or the crew’s pet Gundam nerd?
2) Do you really think the aliens’ deal is so horrifying? Or are you just overdramatizing?
Everyone is orgasmium. And strangely enough, they don’t think it’s all that horrible.
I recently wondered whether it’s possible that transhumans would spend parts of their lives in situations very similar to Dante’s hell, complete with wailing and gnashing of teeth. Some have suggested that a bit of pain might be necessary to make all the pleasure we’re supposed to get realizable, but I suggest that we might actually need quite a lot of it. If the only way to make people happy is to improve their lives, pushing them way down might turn out to be a reasonable solution. And some might choose that route to spice up whatever other sources of happiness there are. The fact that hellfire scares us fleshlings wouldn’t matter to indestructible nanocyborgs.
Or maybe they would intentionally seek other things that I consider horrible. Like the risk of death—isn’t that what people do already when they walk on a tightrope?
Abigail: “If you find the thought of having endless orgasms repulsive, might not the person who had, er, sunk so low, also find his state repulsive, eventually?”
I, for one, cannot imagine one who has, er, ascended so high voluntarily reducing his own utility.
I cannot see why I shouldn’t want to become orgasmium. It would certainly be disgusting to look at someone else turning into something like that—it is too similar to people who are horribly maimed. But It’s What’s Inside That Counts.
The reason that drug addiction is bad is that it has deleterious health effects. But although orgasmium is defenseless, it is guarded by a benevolent god. Nothing in the world could destroy it.
This fun theory seems to be based on equivocation. Sure, insights might be fun, but that doesn’t mean fun and insight are literally the same thing. The point of studying the brain is to cure neurological disorders and to move AI forward. The point of playing chess is to prove your worth. So is the (relatively) insight-less task of becoming world champion at track and field. What UTILITY does solving BB(254) have?
I think a human can only have so much fun if he knows that even shooting himself in the head wouldn’t kill him, because There Is Now A God. And altering your brain might be the only solution. And I don’t see why it’s so abhorrent.
You keep mentioning “orgasmium” like it’s supposed to horrify me. Well, it doesn’t. I’m more horrified by the prospect of spending eternity proving theorems that don’t make my life one bit easier, like Sisyphus.
“Tiiba, you’re really overstating Eliezer and SIAI’s current abilities. CEV is a sketch, not a theory, and there’s a big difference between ‘being concerned about Friendliness’ and ‘actually knowing how to build a working superintelligence right now, but holding back due to Friendliness concerns.’”
That’s what I meant.
Michael, it seems that you are unaware of Eliezer’s work. Basically, he agrees with you that vague appeals to “emergence” will destroy the world. He has written a series of posts that show why almost all possible superintelligent AIs are dangerous. So he has created a theory, called Coherent Extrapolated Volition, that he thinks is a decent recipe for a “Friendly AI”. I think it needs some polish, but I assume that he won’t program it as it is now. He’s actually holding off getting into implementation, specifically because he’s afraid of messing up.
So, then, how is my reduction flawed? (Oh, there are probably holes in it… But I suspect it contains a kernel of the truth.)
You know, we haven’t had a true blue, self-proclaimed mystic here in a while. It’s kind of an honor. Here’s the red carpet: [I originally posted a huge number of links to Eliezer’s posts, but the filter thought they were spam. So I’ll just name the articles. You can find them through Google.] “Mysterious Answers to Mysterious Questions”, “Excluding the Supernatural”, “Trust in Math”, “Explain/Worship/Ignore?”, “Mind Projection Fallacy”, “Wrong Questions”, “Righting a Wrong Question”.
I have read the Chinese Room paper and concluded that it is a POS. Searle runs around, points at things that are obviously intelligent, asks “is that intelligence?”, and then answers, matter-of-factly, “no, it isn’t”. Bah.
What Searle’s argument amounts to
The Turing test is not claimed as a necessary precondition for consciousness, but a sufficient one.
“You guys are the ones who want to plug this damned thing in and see what it does.”
That’s just plain false. Eliezer has dedicated his life to making this not so.
Something I forgot. Eliezer will probably have me arrested if I just tell you to come up with a definition. He advises that you “carve reality at its joints”:
http://lesswrong.com/lw/o0/where_to_draw_the_boundary/
(I wish, I wish, O shooting star, that OB permitted editing.)
“inklink” = “inkling”
Tobis: That which makes you suspect that bricks don’t have qualia is probably the objective test you’re looking for.
Eliezer had a post titled “How An Algorithm Feels From Inside”: http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/
Its subject was different, but in my opinion, that’s what qualia are—what it feels like from the inside to see red. You cannot describe it because “red” is the most fundamental category that the brain perceives directly; the brain does not tell you what that category means. With a different mind design, you might have had qualia for frequency. Then that would feel like something fundamental, something that could never be explained to a machine.
But the fact is that if you tell the machine under what circumstances you say that you see red, that is all the information it needs to serve you or even impersonate you. It doesn’t NEED anything else, it hasn’t lost anything of value. Which is, of course, what the Turing Test is all about.
Come to think of it, it seems that with this definition, it might even be possible—albeit pointless—to create a robot that has exactly a human’s qualia. Just make it place colors into discrete buckets, and then fail to connect those buckets with its knowledge of the electromagnetic spectrum.
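To make that concrete, here is a minimal sketch in Python. Everything in it (the bucket boundaries, the function names, the “facts” table) is invented purely for illustration, not taken from any actual design:

```python
# Hypothetical sketch: perception collapses wavelengths into opaque
# color buckets, while "book knowledge" of the EM spectrum lives in a
# separate store that never links back to those buckets.

COLOR_BUCKETS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def perceive(wavelength_nm: float) -> str:
    """Collapse a wavelength into a primitive color label.

    Only the label leaves this function; the number is discarded, so
    to the rest of the robot "red" is a brute, unanalyzable category.
    """
    for low, high, label in COLOR_BUCKETS:
        if low <= wavelength_nm < high:
            return label
    return "invisible"  # outside the visible band

# Separate knowledge of the spectrum: recitable facts, with no pointer
# back into COLOR_BUCKETS.
SPECTRUM_FACTS = {
    "visible light": "wavelengths of roughly 380-750 nm",
    "radio": "wavelengths longer than about 1 mm",
}

print(perceive(650))                    # -> "red", a primitive label
print(SPECTRUM_FACTS["visible light"])  # a fact it cannot tie to "red"
```

The gap is structural: the robot reports “red” under exactly the right circumstances and can recite spectrum facts, yet it has no internal bridge between the two, which is precisely the failed connection described above.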
Also, what I meant by “hubgalopus” is not that subjective experience is one. I meant that when you find yourself unable to decide whether an object has a trait, it’s probably because you have no inkling what the hell you’re looking for. Is it a dot? Or is it a speck? When it’s underwater, does it get wet? Choose a frickin definition, and then “does it exist?” will be a simple boolean-valued question.
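In the same hypothetical spirit, the “choose a definition” move amounts to writing down a predicate; the 5% threshold below is invented purely for illustration:

```python
def is_wet(surface_water_fraction: float) -> bool:
    """One arbitrary but explicit definition: 'wet' means liquid water
    covers more than 5% of the object's surface."""
    return surface_water_fraction > 0.05

print(is_wet(0.50))  # submerged object -> True, under this definition
print(is_wet(0.00))  # dry brick        -> False
```

Once the predicate is fixed, the question has a plain boolean answer; any remaining argument is only about which definition carves reality at its joints.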
“Things are as predictable as they are and not more so.”
Michael, Eliezer has spent the last two years giving example after example of humans underusing the natural predictability of nature.
“Psy-K, try as I might to come up with a way to do it, I can see no possibility of an objective test for subjective experience.”
I bet it’s because you don’t have a coherent definition for it. It’s like looking for a hubgalopus.
“A superintelligence will more-likely be interested in conservation. Nature contains a synopsis of the results of quadrillions of successful experiments in molecular nanotechnology, performed over billions of years—and quite a bit of information about the history of the world. That’s valuable stuff, no matter what your goals are.”
My guess is that an AI could re-do all those experiments from scratch within three days. Or maybe nanoseconds. Depending on whether it starts the moment it leaves the lab or as a Jupiter brain.
I guess I’ll use this thread to post a quote from “The Tale of Hodja Nasreddin” by Leonid Solovyov, translated by me. I think it fits very well with the recent sequence on diligence.
“He knew well that fate and chance never come to the aid of those who replace action with pleas and laments. He who walks conquers the road. Let his legs grow tired and weak on the way—he must crawl on his hands and knees, and then surely, he will see in the night a distant light of hot campfires, and upon approaching, will see a merchants’ caravan; and this caravan will surely happen to be going the right way, and there will be a free camel, upon which the traveler will reach his destination. Meanwhile, he who sits on the road and wallows in despair—no matter how much he cries and complains—will evoke no compassion in the soulless rocks. He will die in the desert, his corpse will become meat for foul hyenas, his bones will be buried in hot sand. How many people died prematurely, and only because they didn’t love life strongly enough! Hodja Nasreddin considered such a death humiliating.
“No,” he said to himself and, gritting his teeth, repeated wrathfully: “No! I won’t die today! I don’t want to die!”
About the book: it uses the name Hodja Nasreddin, but has little to do with him. The Nasreddin that Muslims know was a mullah. This one is a rabble-rousing vagabond who enters harems, makes life hard for corrupt officials, and has been successfully executed in every city in the Arab world. I think Solovyov took a Muslim hero and created a Communist hero. But that doesn’t take away from the fact that the book is a masterpiece.
Edward, how is it arrogant to want to contribute to science?