Nate, I know that you’re saying something deep, maybe even intelligent, but I’m having trouble parsing your post.
Okay, so here’s a dryad. You cut her open, and see white stuff. You take a sample, put it under a microscope, and still see white stuff. You use a scanning tunneling microscope, and still see white stuff. You build an AI and tell it to analyze the sample. The AI converts galaxies into computronium and microscopium, conducts every experiment it can think of, and after a trillion years reports: “The dryad is made of white stuff, and that’s all I know. Screw this runaround, what’s for dinner?”
But using an outside view of sorts (observed behavior), you can still predict what the dryad will do next. Just like with quarks and with Occam’s razor and with prime numbers. And things you haven’t reduced yet, but think you can, like people or the LHC.
So, what would you call this dryad?
@denis bider: I guess I’m in a minority.
@metahacker: I do think that’s a great idea.
@denis bider: I call them “vegetarians” and “environmentalists”. Maybe I’m confused.
@Russell Wallace:
Arr… Erm… Anthropomorphism!
@Caledonian:
What happens if you try walking to work?
If the answer is NOT “my legs would fall off”, you have choices. Otherwise, you can drive or you can drive.
Wait… Eliezer isn’t a god?
Just great. I wrote four paragraphs about my wonderful safe AI. And then I saw Tim Tyler’s post and realized that, in fact, a safe AI would be dangerous precisely because it’s safe: if the technology to build AI exists, the thing to do is to build one and hand the world to it, so somebody meaner or dumber than you can’t do it first.
That’s actually a scary thought. It turns out you have to rush just when it’s more important than ever to think twice.
Aron: “Whereas the plausible reality IMO seems to be more like an ecosystem of groups of intelligences of varying degrees all of which will likely have survival rationale for disallowing a peer to hit nutso escape velocity.”
What can an infrahuman AI do to a superhuman AI?
I can’t bring myself to feel sad about not knowing of a disaster that I can’t possibly avert.
Nevertheless, I don’t get why people would propose any design that is not better than CEV in any obvious way.
But I have a question about CEV. Among the parameters of the extrapolation, there is “growing up closer together”. I can’t decipher what that means, particularly in a way that makes it a good thing. If it means that I would have more empathy, that is subsumed by “know more”. My initial reaction, though, was “my fingers would be closer to your throat”.
Peter: I THOUGHT that I was supposed to assume that there’s smoke. (DNRTFA, too hard for my little brain)
“(X->Y)->Y implies (not X)->Y”
The arrow means “implies”, right?
So,
(Smoke implies fire, therefore fire) implies (no smoke implies fire)?
I don’t get it.
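For what it’s worth, here’s a brute-force truth table I sketched to check myself (assuming the arrow is plain material implication; the class name ImpliesCheck is just a throwaway):

    // Truth-table check: does ((X -> Y) -> Y) entail ((not X) -> Y)?
    // Material implication: "A implies B" is (!A || B).
    public class ImpliesCheck {
        static boolean implies(boolean a, boolean b) { return !a || b; }

        public static void main(String[] args) {
            boolean tautology = true;
            for (boolean x : new boolean[] {false, true}) {
                for (boolean y : new boolean[] {false, true}) {
                    boolean left  = implies(implies(x, y), y); // (X -> Y) -> Y
                    boolean right = implies(!x, y);            // (not X) -> Y
                    System.out.printf("X=%b Y=%b  left=%b right=%b%n", x, y, left, right);
                    tautology &= implies(left, right);
                }
            }
            System.out.println("left entails right in every row: " + tautology); // true
        }
    }

It prints true for every row, so with X as smoke and Y as fire the entailment formally holds; it’s the English reading of it that loses me.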
“They stopped to piss off a bridge.”
That there is anthropomorphism. Bridges don’t get mad.
On second thought, that’s not right. But you probably understand what I mean. If you happen to make a conjecture about something like Kolmogorov complexity or the halting problem, and it just happens to be undecidable, it’s still either true or false.
Caledonian: There is one exception:
The Kolmogorov complexity of this sentence is exactly 50 bytes in Java bytecode.
Meaningful, but unfalsifiable.
/nitpick
Well, belligerent dissent can actually be polarizing.
But although Caledonian makes accusations that I find more than unfounded, I’ve seen him make sense, too. Overall, I don’t feel that his presence is so deleterious as to require banishment.
While spacing out in a networking class a few years ago, it occurred to me that morality is a lot like network protocols, or in general, computer protocols for multiple agents that compete for resources or cooperate on a task. A compiler assumes that a program will be written in a certain language. A programmer assumes that the compiler will implicitly coerce ints to doubles. If the two cooperate, the result is a compiled executable. Likewise, when I go to a store, I don’t expect to meet a pickaxe murderer at the door, and the manager expects me to pay for the groceries. Those who do not obey these rules get the “25-to-life” error.
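To make the compiler half of that concrete, here’s the int-to-double handshake in miniature (a toy snippet of mine; CoercionDemo is a made-up name):

    // The programmer writes an int where a double is expected; the compiler
    // keeps its side of the protocol by widening the int automatically.
    public class CoercionDemo {
        public static void main(String[] args) {
            double ratio = 3;         // int literal 3 implicitly widened to 3.0
            double half  = ratio / 2; // the int 2 is promoted to 2.0 as well
            System.out.println(half); // prints 1.5, not 1
        }
    }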
Morality is a protocol for social networks. Some traditions of morality are arbitrary; it really doesn’t matter whether people drive on the right or on the left. However, some moralities are so bogus that societies using them wouldn’t last a week. If anyone drives on the left, EVERYONE had better drive on the left. It’s possible to create a workaround for any one action (there used to be societies of cannibals!), but some complete moralities are sufficiently broken that you won’t find any affluent civilizations that use them.
Moral progress/error cannot be judged in absolute terms, such as agreement with the Bible. It must be judged based on the desires of the participants of the social network. However, this might be a two-parameter function, the other parameter being the definition of “participant”.
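If it helps, here’s the driving rule as a bare-bones coordination game (my framing, nothing official; DrivingConvention is a made-up name): any matching convention scores, any mismatch crashes.

    // Payoff 1 when both drivers pick the same side, 0 (crash) otherwise.
    // All-left and all-right do equally well; only a mixed convention fails.
    public class DrivingConvention {
        static int payoff(String a, String b) {
            return a.equals(b) ? 1 : 0;
        }

        public static void main(String[] args) {
            String[] sides = {"left", "right"};
            for (String a : sides) {
                for (String b : sides) {
                    System.out.printf("A drives %-5s B drives %-5s payoff: %d%n",
                                      a, b, payoff(a, b));
                }
            }
        }
    }

Which side wins is arbitrary; that everyone picks the same one is not.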
How’s this?
And screw Belldandy. The Lord of Nightmares can kick her ass.
(My god can beat up your god?)
“(Personally, I don’t trust ‘I think therefore I am’ even in real life, since it contains a term ‘am’ whose meaning I find confusing, and I’ve learned to spread my confidence intervals very widely in the presence of basic confusion. As for absolute certainty, don’t be silly.)”
I’m just wondering, what do you think of the Ultimate Ensemble? If I’m not mistaken (I only read the Wikipedia article), it takes your rule that if there’s no difference, there should be no distinction, and applies it to existence itself.
Especially considering that you can’t edit a post.
My fears:
Paperclip AI
People I know IRL catching me reading something embarrassing on the Internet
Nuclear war
The zombie under my bed