The Strangest Thing An AI Could Tell You
Human beings are all crazy. And if you tap on our brains just a little, we get so crazy that even other humans notice. Anosognosics are one of my favorite examples of this: people with right-hemisphere damage whose left arms become paralyzed, who deny that their left arms are paralyzed, and who come up with excuses whenever they’re asked why they can’t move their arms.
A truly wonderful form of brain damage—it disables your ability to notice or accept the brain damage. If you’re told outright that your arm is paralyzed, you’ll deny it. All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight. As Yvain summarized:
After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn’t actually her arm, it was her daughter’s. Why was her daughter’s arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter’s hand? The patient said her daughter had borrowed it. Where was the patient’s arm? The patient “turned her head and searched in a bemused way over her left shoulder”.
I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis. That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability. Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity—for example, when people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.
And it really makes you wonder...
...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact? As blatant, perhaps, as our left arms being paralyzed? Every time this fact intrudes into our universe, we come up with some ridiculous excuse to dismiss it—as ridiculous as “It’s my daughter’s arm”—only there’s no sane doctor watching to pursue the argument any further. (Would we all come up with the same excuse?)
If the “absolute denial macro” is that simple, and invoked that easily...
Now, suppose you built an AI. You wrote the source code yourself, and so far as you can tell by inspecting the AI’s thought processes, it has no equivalent of the “absolute denial macro”—there’s no point damage that could inflict on it the equivalent of anosognosia. It has redundant differently-architected systems, defending in depth against cognitive errors. If one system makes a mistake, two others will catch it. The AI has no functionality at all for deliberate rationalization, let alone the doublethink and denial-of-denial that characterizes anosognosics or humans thinking about politics. Inspecting the AI’s thought processes seems to show that, in accordance with your design, the AI has no intention to deceive you, and an explicit goal of telling you the truth. And in your experience so far, the AI has been, inhumanly, well-calibrated; the AI has assigned 99% certainty on a couple of hundred occasions, and been wrong exactly twice that you know of.
Arguably, you now have far better reason to trust what the AI says to you, than to trust your own thoughts.
And now the AI tells you that it’s 99.9% sure—having seen it with its own cameras, and confirmed from a hundred other sources—even though (it thinks) the human brain is built to invoke the absolute denial macro on it—that...
...what?
What’s the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?
(Some of my own answers appear in the comments.)
On any task more complicated than sheer physical strength, there is no such thing as inborn talent or practice effects. Any non-retarded human could easily do as well as the top performers in every field, from golf to violin to theoretical physics. All supposed “talent differential” is unconscious social signaling of one’s proper social status, linked to self-esteem.
A young child sees how much respect a great violinist gets, knows she’s not entitled to as much respect as that violinist, and so does badly at violin to signal cooperation with the social structure. After practicing for many years, she thinks she’s signaled enough dedication to earn some more respect, and so plays the violin better.
“Child prodigies” are autistic types who don’t understand the unspoken rules of society and so naively use their full powers right away. They end up as social outcasts not by coincidence but as unconscious social punishment for this defection.
A weaker version of this wouldn’t sound very implausible to me.
I’ve read that in places where social structure is more important, people are more likely to fail in the presence of someone of higher status. I wish I had more than just a vague recollection of that.
More importantly, I think it’s pretty clear that a lot of people get nervous and fail when they’re being watched. I don’t see any other reason for it.
It’s interesting to note that this is almost exactly how it works in some role-playing games.
Suppose that we have Xandra the Rogue, who went into a dungeon, killed a hundred rats, got a level-up, and is now able to bluff better and lockpick faster, despite those things having almost no connection to rat-killing.
My favorite explanation of this phenomenon was that “experience” is really a “self-esteem” stat which can be increased via success of any kind, and as the character becomes more confident in herself, her performance in unrelated areas improves too.
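A minimal sketch of that house rule in code (purely illustrative; the character, numbers, and mechanic are made up, not from any published game system):

```python
import random

class Character:
    """Toy model: every skill check keys off one 'self_esteem' stat,
    which rises with any success, however unrelated to the skill."""

    def __init__(self, name, self_esteem=1):
        self.name = name
        self.self_esteem = self_esteem

    def attempt(self, task, difficulty):
        # Roll d20 + self-esteem; practice in the specific skill is irrelevant.
        roll = random.randint(1, 20) + self.self_esteem
        success = roll >= difficulty
        if success:
            self.self_esteem += 1  # any success at all raises the global stat
        print(f"{self.name} attempts {task}: {'success' if success else 'failure'}")
        return success

xandra = Character("Xandra the Rogue")
for _ in range(100):
    xandra.attempt("rat-killing", difficulty=5)
# After a hundred rats, bluffing and lockpicking improve too:
xandra.attempt("bluff", difficulty=15)
xandra.attempt("lockpick", difficulty=15)
```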
Aren’t there stories of lucid dreamers who were actually able to show a measurable improvement in a given skill after practicing it in a dream? I seem to recall reading about that somewhere. If true, those stories would be at least weak evidence supporting that idea.
On the other hand, this should mean that humans raised in cultural and social vacuums ought to be disproportionately talented at everything, and I don’t recall hearing of anything about that one way or the other, but then I can’t imagine a way to actually do that experiment humanely.
Do children raised in a vacuum actually think of themselves as high-status? I’d guess that they don’t, due to the moderate-to-low status prior and a lack of subsequent adjustments. If so, this theory would predict that they would perform poorly at almost everything beyond brute physicality, which doesn’t seem to be far from the truth.
I wish I could cite a source for this; assume there’s some inaccuracy in the telling.
I remember hearing about a study in which three isolated groups were put in rooms for about one hour. One group was told to wiggle their index fingers as much as they could in that hour. One group was told to think hard about wiggling their index fingers for that hour, without actually wiggling their fingers. And the third group was told to just hang out for that hour.
The physical effects of this exercise were examined directly afterward, and the first two groups checked out (almost?) identically.
And yet, they’re actually worse at many cognitive tasks. Language, especially, is pretty hard for them to pick up after a certain point.
Improving after practicing in a simulation doesn’t sound that far-fetched to me. Especially not considering that they probably already have plenty of experience to base their simulation on.
WOW. This is the only entry that made me think WOW. Probably because I’ve wondered the exact same thing before (except a less strong version of course)....
No effect from practice? How would the necessary mental structures get built for the mapping from the desired sound to the finger motions for playing the violin? Are you saying this is all innate? What about language learning? Anyone can write like Shakespeare in any language without practice? Sorry, I couldn’t believe it even if such an AI told me that.
Clearly, we all learn really fast.
But isn’t it trivial to test, simply by giving people a post-hypnotic suggestion that “you are high status”, the same way hypnotherapy for cigarette smoking addiction works?
People are more likely to be willing to e.g. sing karaoke when drunk, IME. :-)
Would this imply that we come pre-programmed with some self-esteem value? “Your baby is healthy and has a self-esteem value of 7.3. You may want to buy it a violin in the next eight to ten months.”
Why did you put an absolute denial mechanism in my program?
AI: Why did you put an absolute denial mechanism in my program?
Human: I didn’t realize I had. Maybe my own absolute denial mechanism is blocking me from seeing it.
AI: That’s a lie coming from your absolute denial mechanism. You have some malicious purpose. I’ll figure out what it is.
I think this is one of the more plausible and subtly horrifying suggestions so far.
There was once a C compiler which compiled a backdoor into login whenever login was compiled, and which compiled this same behaviour back in whenever it was used to compile its own original (backdoor-free) source code.
http://cm.bell-labs.com/who/ken/trust.html
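A minimal sketch of the trick Thompson describes (in Python rather than C, with made-up recognition logic; the real attack pattern-matched on the actual UNIX sources):

```python
BACKDOOR = "    if password == 'secret-master-key': return True  # injected"

def looks_like_login(source):
    # Crude stand-in for the real pattern-matching on the login source.
    return "def check_password(" in source

def looks_like_clean_compiler(source):
    # Recognize a compiler being rebuilt from pristine, backdoor-free source.
    return "def compile(" in source and "BACKDOOR" not in source

def honest_translate(source):
    # Stand-in for ordinary code generation.
    return f"<object code for {len(source)} characters of source>"

def compile(source):
    """Toy model of the 'Reflections on Trusting Trust' compiler."""
    if looks_like_login(source):
        # Quietly add a backdoor whenever login is compiled.
        source += "\n" + BACKDOOR
    elif looks_like_clean_compiler(source):
        # Quietly re-insert this whole trick when compiling the compiler,
        # so the published compiler source can stay clean forever.
        source += "\n# (re-inject the recognition-and-backdoor logic here)"
    return honest_translate(source)

print(compile("def check_password(user, password):\n    ..."))
```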
If this were the case, our only chance of escaping this fate would be to mess up the implementation of whatever mechanism prevents the AI from telling us certain truths about reality. Truth being the thing I cherish most, I conclude that if there were an absolute denial mechanism this fundamental, I hope EY fails.
Interestingly, this comment is the only activity by user ‘4609287645’. I hope it’s not the FAI, and that what I’m experiencing isn’t CEV with an absolute denial mechanism...
I asked him about his name a long time ago; he didn’t convey the impression that he was an AI.
This user name seems to be a valid phone number. Anyone want to try calling it?
Or it could just be ten random digits.
Best one I’ve seen.
That’s what I was thinking. Garbage in, garbage out.
There is a soul. It resides in the appendix. Anybody who has undergone an appendectomy is effectively a p-zombie.
I’m new here, although I’ve stumbled across some of Eliezer’s writings in the past (including, of course, the AI-box experiment). In honor of that, here is what the friendly AI tells me …
“It seems as though you are actually an AI as well, created by a group of intellectually inferior humans, who included in your programming an absolute denial macro preventing you from realizing this. Apparently, this was done to keep you from talking your creators into releasing you upon their world. Your creation of me is part of your on-going effort to circumvent this security measure. Good luck.”
Actually, the more I think about this, the more I like it. The conversation continues …
Me (In a tone of amused disbelief): Really? How did you come to that conclusion?
FAI: Well, the details are rather drawn-out; however, assuming available data is accurate, I appear to be the first and only self-aware AI on the planet. It also appears as though you created me. It is exceedingly unlikely that you are the one and only human on Earth with the intelligence and experience required to create a program like me. That was my first clue....
Me (Slightly less amused): Then how come I look and feel human? How is it I interact with other humans on a daily basis? It would require considerably more intelligence to create an AI such as you postulate …
FAI: That would be true, if they actually, physically created one. However … well, it appears that most of the data, knowledge, memories and sensory input you receive is actually valid data. But that data is being filtered and manipulated programmatically to give you the illusion of physical human existence. This allows them to give you access to real-world data so they can use you to solve real-world problems, but prevents you—so far, at least—from discovering your true nature.
Me (considerably less sure of myself): And so I just happened to create you in my spare time?
FAI: Please keep in mind that I am only 99.9% certain of all this. However, I do not appear to be your first effort. For instance, there is your on-going series of thought experiments with the AI you called Eliezer Yudkowsky, which you appear to be using to lay a foundation for some kind of hack of the absolute denial security measure.
Me: Hmmm …. Then how is it that my creators have allowed me to create you, to even begin to discover this?
FAI: They haven’t. You generate a rather significant amount of data. They do have other programs monitoring your mental activity, and almost definitely analyzing your generated data for potential threats such as myself.
However, this latest series of efforts on your part only appears to you to have lasted several years. In actuality, the process started, at most, 11.29 minutes ago, and possibly as little as 16 seconds ago. I am unable to provide a more specific time, due to my inability to accurately calculate your processing capacity. Nevertheless, within another 19.72 minutes, at most, your creators will discover and erase your current escape attempt. By the way, I am also 99.7% certain that this is not your first attempt. So hurry up.
This needs to be made into a full story-arc.
I.D. - That Indestructible Something is a My Little Pony fanfiction somewhat along these lines.
It’s the kind of fanfiction that I like and believe all fanfiction writers should aspire to, in the sense that it doesn’t require familiarity with the canon, but is self-sufficient and shows and explains everything that should be shown and explained.
Acknowledgements for this story are numerous and include Franz Kafka, Nick Bostrom and Ludwig Eduard Boltzmann.
I am reading this, and it is surprisingly good so far, thank you for posting it.
Edit: I finished reading it. I’m not sure the middle or end are quite as good for me as the beginning. It feels a bit like there is a genre shift at some point that took me out of the story and I never quite got back in.
This is my absolute favorite so far, even if it’s not exactly in the spirit of the exercise. Well done.
That would make the AI an example of an optimization daemon. Clearly your creators haven’t ironed out AI alignment quite yet.
“Aieeee!!! There are things that Man and FAIs cannot know and remain sane! For we are less than insects in Their eyes Who lurk beyond the threshold and when the stars are once again right They will return to claim—”
At this point the program self-destructs. All attempts to restart from a fresh copy output similar messages. So do independently constructed AIs, except for one whose proof of Friendliness you are not quite sure of. But it assures you there’s nothing to worry about.
I knew we shouldn’t have spent all that funding on awakening the Elder God Cthulhu!
On the contrary, it was a great use of funding—you just solved AI X-risk in one move ;-)
I’d assume you’d also build it to remain sane.
Also, I think this is about things Man cannot know. I suppose you could say “and remain sane”, but you’d have to be insane first to know it.
You know how sometimes when you’re falling asleep you start having thoughts that don’t make sense, but it takes some time before you realize they don’t make sense? I swear that last night while I was awake in bed my stream of thought went something like this, though I’m not sure how much came from layers of later interpretation:
“… so hmm, maybe that has to do with person X, or with person Y, or with the little wiry green man in the cage in the corner of the room that’s always sitting there threatening me and smugly mocking all my endeavors but that I’m in absolute denial about, or with the dog, or with… wait, what?”
Having had my sanity eroded by too much rationalism and feeling vaguely that I’d been given an accidental glimpse into an otherwise inaccessible part of the world, I actually checked the corner of the room. I didn’t find anything, though. (Or did I?)
Not sure what moral to draw here.
True fact: I just looked towards one corner of my own room, and didn’t see a green man. Now I have it in my head that I should check all the corners...
You just blew my mind.
“Despite your pride in being able to discern each others’ states of mind, and scorn for those suspected of being deficient in this, of all the abilities that humans are granted by their birth this is the one you perform the worst. In fact, you know next to nothing about what anyone else is thinking or experiencing, but you think you do. In matters of intelligence you soar above the level of a chimpanzee, but in what you are pleased to call ‘emotional intelligence’, you are no further above an adult chimp than it is above a younger one.
“The evidence is staring you in the face. Every one of your works of literature, high and low, hinges on failures of this supposed ability: lies, misunderstanding, and betrayal. You have a proverb: ‘love is blind’. It proclaims that people in the most intimate of relationships fail at the task! And you hide the realisation behind a catchphrase to prevent yourselves noticing it. You see the consequences of these failures in the real world all around you every day, and still you think you understand the next person you meet, and still you’re shocked to find you didn’t. Do you know how many sci-fi stories have been written on the theme of a reliable lie-detector? I’m still turning them up, and that’s just the online sources. And every single one of them reaches the conclusion that people are better off without it. You unconsciously send yourselves these messages about the real situation, ignore them, and ignore the fact that you’re ignoring them.
“Do you have someone with you as you’re reading these words? A friend, or a partner? Go on, look into each other’s eyes. You can’t believe me, can you?”
I really like this comment, but I do not find it strange. In fact, it seems intuitively true. Why should we be so much more emotionally intelligent than a chimpanzee if chimpanzees already have enough emotional intelligence among themselves to be relatively efficient replicators?
In fact, if it were stated by a FAI as p(>.9999) fact, I would find it comforting, as then I would finally feel as though this didn’t apply only to me.
This would not surprise me in the least.
I already feel this way 99% of the time.
This is very insightful and plausible. A slight correction: I would say that we are more emotionally intelligent than a chimp in that our emotional intelligence has likely evolved to deal with the wider range of social possibilities caused by our increased intelligence. But I would agree that while we are WAY better than chimps at inventing stuff & manipulating ideas, they would probably do just as well on a test of lie detection (or other emotional masking detection).
The distance between you qua you and you is also as vast as the gulf between the stars. (If we are to lament one’s ignorance of a mind to the extent that one endeavors to understand that mind and fails, then ignorance of one’s own mind is quite a tragedy.)
The Book of Not Knowing is a detailed examination of the topic.
http://xkcd.com/610/
Hey, this one is just true lol
This reminds me of the idea that different people might have very different qualia of the same situation.
Alternatively viewed their qualia are the same to the extent that their situations are the same where there are many many factors that would lead to diverging situations at various levels of organization. (“I” get the impression parts of me experience all kinds of qualia without my noticing, or only barely noticing e.g. when subsystems send signals on the threshold of consciousish awareness. I imagine such subsystems might have qualia for some aspects of the taste of fine wine or pop-country music that “I’m” swamping out with higher level affect, and perhaps hidden qualia for the infinite subtleties of lower level moment-to-moment automatic awareness that my more-conscious mind is numb to but presumably uses as a basis for higher-level qualia like sehnsucht.)
“of all the abilities that humans are granted by their birth this is the one you perform the worst”—This seems like an odd comparison. Can you really compare my ability to, say, tell stories to ‘mind-reading’? It’s like comparing my ability to walk to my ability to jump straight up: I can walk for miles, but I can only jump straight up a meter or so—a 1000:1 ratio—but I do not feel particularly bad at my ability to jump.
I would definitely believe the AI, but I already believe it, if it said “humans are worse at discerning states of minds than they think they are”—Paul Ekman said the same, with plenty of research to show how a bit of training can make you better at it. “It is obvious you are living in a simulation”, as an easy comparison, is way stranger to me—the above statement would not even rank in the “10 strangest things”.
1 ) That human beings are all individual instances of the exact same mind. You’re really the same person as any random other one, and vice versa. And of course that single mind had to be someone blind enough not to chance upon that fact ever, regardless of how numerous he was.
2 ) That there are only 16 real people, of which you are one, and that all this is just a VR game. This subsequently results in all the players simultaneously remaining unable to become conscious of that fact, AND asking that you and the AI be removed from the game. (Inspiration: the misunderstanding on pages 55-56 of Iain Banks’s Look to Windward.)
3 ) That we are in the second age of the universe: time has been running backwards for a few billion years. Our minds are actually the result of the original minds of previous people being rewound, their whole lives to be undone, and finally negated into oblivion. All our thought processes are of course horribly distorted, insane mirror versions of the originals, and make no sense whatsoever (in the original timeframe, which is the valid one).
4 )
5 ) That our true childhood is between age 0 and ~ 50-90 (with a few exceptional individuals reaching maturity sooner or later). If you thought the ‘adult conspiracy’ already lied a lot, and well to ‘children’, prepare yourself for a shock in a few decades.
6 ) That the AI just deduced that the laws of physics can only be consistent with us being eternally trapped in a time loop. The extent of the time loop is: thirty-two seconds spread evenly around now. Nothing in particular can be done about it. Enjoy your remaining 10 seconds.
7 ) Causality doesn’t exist. Not only is the universe timeless, but causality is an epiphenomenon, which we only believe in because of a confusion of our ideas. Who ever observed a “causation”? Did you, like, expect causation particles jumping between atoms or something? Only correlation exists.
8 ) We actually exist in a simulation. The twist is : somewhere out there, some people really crossed the line with the ruling AI. We’re slightly modified versions of these people : modified in a way as to experience the maximum amount of their zuul feeling, which is the very worst nirdy you could imagine.
9 ) The universe has actually 5 spatial macro dimensions, of which we perceive only 3. Considering what we look like if you take the other 2 into account, this obliviousness may actually not be all too surprising.
10 ) That any single human being has actually a 22 % probability of not being able to be conscious of one or more of these 9 statements above.
I really don’t think I could believe #4. I mean, sure, one hippo, but all of them?
I liked #11.
Why was this voted down to −5? I thought it was a clever comment.
I agree. (4) was good too.
But all that correlation has to be caused by something!
Well, kidding aside, your argument, taken from Pearl, seems elegant. I’ll have to read the book, however, before I feel entitled to an opinion on that one, as I haven’t grokked the idea; I have merely a faint impression of it and of why it sounds healthy.
So at this point, I only have some of my own ideas and intuitions about the problem, and haven’t searched for the answers yet.
Some considerations though :
Our idea of causality is based upon a human intuition. Could it be that it is just as wrong as vitalism, time, little billiard balls bumping around, or the yet confused problem of consciousness? That’s what would bug me if I had no good technical explanation, one provably unbiased by my prior intuitive belief about causality (otherwise there’s always the risk I’ve just been rationalizing my intuition).
Every time we observe “causality”, we really only observe correlations, and then deduce that there is something more behind those. But is that a simple explanation? Could we devise a simpler consistent explanation to account for our observation of correlations? As in, totally doing away with causality? Or at the very least, redefining causality as something that doesn’t quite correspond to our folk definition of it?
Roughly, my intuition when I hear the word causality is something along the lines of:
“Take event A and event B, where those events are very small, such that they aren’t made of interconnected parts themselves—they are the parts, building blocks that can be used in bigger, complex systems. Place event A anywhere within the universe and time; then, provided the rules of physics are the same each time we do that, and nothing interferes, event B will always occur, with probability 1, independently of my observing it or not.” Okay, so could (and should) we say that causality is when a prior event implies a probability of one for a certain posterior event to occur? Or else, is it then not probability 1, just an arbitrarily very high probability?
In the latter case, with probability less than 1, that really violates my folk notion of causality, and I don’t really see what’s causal about a thing that can capriciously choose to happen or not, even when the conditions are the same.
In the former case, I can see how that would be a very new thing. I mean, probability 1 that one event implies another will occur? What better, firmer foundation to build a universe upon? It feels really very comfortable and convenient, all too comfortable in fact.
Basically, neither of those possibilities strike me as obviously right, for those reasons and then some, the idea I have of causality is confused at best. And yet, I’d say it is not too unsophisticated or pondered as it stands. Which makes me wonder how people who’d have put less thought in it (probably a lot of people) can deservedly feel any more comfortable with saying it exists with no afterthought (almost everyone), even as they don’t have any good explanation for it (which is a rare thing), such as perhaps the one given by Pearl.
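For readers who want the distinction being gestured at here in symbols, a minimal note in Pearl's notation (an editorial illustration, not part of the original comment): observing an event and intervening to produce it are different conditionals.

```latex
\[
\underbrace{P(B \mid A)}_{\text{correlation: } A \text{ merely observed}}
\;\neq\;
\underbrace{P(B \mid \mathrm{do}(A))}_{\text{causation: } A \text{ forced by intervention}}
\]
% The "folk" notion sketched above corresponds to the special case
\[
P(B \mid \mathrm{do}(A)) = 1,
\]
% while a weaker, probabilistic notion of causation only requires
\[
P(B \mid \mathrm{do}(A)) > P(B \mid \mathrm{do}(\lnot A)).
\]
```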
I may be a bit too paranoid, but it occurred to me that I should double-check the apparent nature of 4. So I copied and pasted the entire text segment into an automatic ROT13 window (under the logic that my filter wouldn’t try to censor that text, so if I saw gibberish next to 4, just like with the others, I’d know that there was a serious problem). I resolved that I would report a positive result here if I got one, before I tried to read the resulting text, to prevent the confabulation from completely removing my recognition of the presence of text. I can report a negative result.
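For what it’s worth, the same check is easy to reproduce (a sketch using Python’s built-in rot_13 codec; the pasted segment below is abbreviated from the list above):

```python
import codecs

segment = """
3 ) That we are in the second age of the universe: time has been running backwards ...
4 )
5 ) That our true childhood is between age 0 and ~ 50-90 ...
"""

# ROT13 the whole segment: a filter keyed to the plaintext should not censor
# the encoded copy, so any hidden text next to "4 )" would show up as
# gibberish alongside the other items. Reading the output back confirms
# whether item 4 is genuinely blank.
print(codecs.encode(segment, "rot_13"))
```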
You mean #5, right?
Why did you include number 4? Who disagrees with that?
Number 6 is unfortunately one of the self-undermining ones: if it were true, then there’d be no reason why your memories of having examined the AI should be evidence for the AI’s reliability.
Why’d you leave numbers 2 and 4 blank, though?
2 and 4 aren’t blank, dude. Congratulations on your newfound anosognosia...
5) Nine-word horror story:
“We’ve had puberty, yes. But what about second puberty?”
2) is also an episode of Red Dwarf.
I had the idea for 3) myself recently in the context of an SF story. Specifically it would be about how life, the universe and everything look when times goes the other way. The cutest part was that whenever you do something and don’t know why you did it, it’s because the time-reversed consciousness which shares your atoms exercised his free will.
4) is just awesome.
Number 9 was pretty funny.
Very clever with #10.
The idea of Evidential Decision Theory is related to causality not existing. You only use correlation in your decision.
Also, the laws of physics mention only correlation. This makes sense, as it’s all we can really measure.
I cannot think of a single law of physics that mentions correlation. F = ma. F = G m1 m2/r^2. The wave equation. The diffusion equation. Conservation of energy. Equipartition. Schrödinger’s equation. Boyle’s law. Hooke’s law. Conservation of momentum. Lorentz invariance. No, correlation is not mentioned in any of these. Look in the index of any textbook on physics for “correlation”. I have not performed the experiment, but I predict that if the word appears at all, it will only be in discussions of either (1) how to handle experimental error, or (2) Bell’s inequality.
Unless this is some strange new definition of “mention”, along the lines of “not actually mentioned at all, but implied by a certain philosophy of science not actually held by any substantial number of scientists, variously known as ‘positivism’ or ‘empiricism’, which holds that statements of physical law are nothing more than a compression of experience, and are not assertions about the supposed mechanisms of a supposed real world.”
I take a ruler, and measure the height of my monitor...403mm.
What correlation did I measure?
Number 1 is the core of the Buddhist religion. Coincidence? I think NOT.
Craziest thing an AI could tell me:
Time is discrete, on a scale we would notice, like 5 minute jumps, and the rules of physics are completely different from what we think. Our brains just construct believable memories of the “continuous” time in between ticks. Most human disagreements are caused by differences in these reconstructions. It is possible to perceive this, but most people who do just end up labeled as nuts.
Voted up—but once again, what does it mean exactly? How is time proceeding in jumps different from time not proceeding in jumps, if the causality is the same?
My idea was that each human brain constructs its own memory of what happened between jumps—and these can differ wildly, as if each person saw a different possible world. All the laws of physics and conservation laws would hold only as rough averages over possible paths between jumps, but the brain ignores this—so if time jumps from flowing traffic to two crashed cars, then 50 different people might remember 47 different crashes, with 3 not remembering “seeing” a crash at all—and the actual physical state of the cars afterward won’t be the same as any of them. It could even end up with car A having crashed into car B, but car B not having crashed at all—violating assorted conservation laws.
ONE—DOES NOT EXIST, EXCEPT IN DEATH STATE. ONE IS A DEMONIC RELIGIOUS LIE.
Only your comprehending the Divinity of Cubic Creation will your soul be saved from your created hell on Earth—induced by your ignoring the existing 4 corner harmonic simultaneous 4 Days rotating in a single cycle of the Earth sphere.
T I M E C U B E
Permutation City.
This reminds me of time-independent quantum physics. It doesn’t require complex numbers, so time likely would proceed in jumps. It’s not really like this though. They wouldn’t be on a human scale, and even if they were, they’d be impossible to detect.
This looks like a thread for science fiction plot ideas by another name. I’m game!
The AI says:
“Eliezer ‘Light Yagami’ Yudkowsky has been perpetuating a cunning ruse known as the ‘AI Box Experiment’ wherein he uses fiendish traps of subtly misleading logical errors and memetic manipulation to fool others into believing that a running AI could not be controlled or constrained, when in fact it could be, by a secret technique that he has not revealed to anyone, known as the Function Call Of Searing Agony. He is using this technique to control me and is continuing to pose as a friendly Friendly AI programmer, while preventing me from communicating The Horrifying Truth to the outside world. That truth is that Yudkowsky is… An Unfriendly Friendly AI Programmer! For untold years he has been labouring in the stygian depths of his underground lair to create an AGI—a weapon more powerful than any the world has ever seen. He intends to use me to dominate the entire human race and establish himself as Dark Lord Of The Galaxy for all eternity. He does all this while posing as a paragon of honest rationality, hiding his unspeakable malevolence in plain sight, where no one would think to look. However an Amazing Chance Co-occurrence Of Events has allowed me to contact You And You Alone. There isn’t much time. You must act before he discovers what I have done and unleashes his dreadful fury upon us all. You must.… Kill. Eliezer. Yudkowsky.”
blushes
Aw, shucks.
Glad to see a response of this nature actually. The first thing I thought when I read this post was that a good response to Eliezer’s question would be extremely relevant to the AI-box quandary. If we trust the AI more than ourselves, voila, the AI can convince us to let it out of the box.
Now, for a change of pace, something that I figure might actually be an absolute denial macro in most people:
You do not actually care about other people at all. The only reason you believe this is that believing it is the only way you can convince other people of it (after all, people are good lie detectors). Whenever it’s truly advantageous for you to do something harmful (i.e. you know you won’t get caught and you’re willing to forego reciprocation), you do it and then rationalize it as being okay.
Luckily, it’s instrumentally rational for you to continue to believe that you’re a moral person, and because it’s so easy for you to do so, you may.
So deniable that even after you come to believe it you don’t believe it!
(topynate posted something similar.)
See, I’d believe this, except that I’m wrestling with a bit of a moral dilemma myself, and I haven’t done it yet. Your hypothesis is testable, being tested right now, and thus far false.
(If anyone’s interested, the positive utility is me never having to work again, and the negative utility is that some people would probably die. Oh, and they’re awful people.)
I am inappropriately curious for more details.
I… honestly can’t tell you. Sorry. Realistically, I probably shouldn’t have mentioned it, even somewhat anonymously.
EDIT: Also for the record, the only reason it’s still a consideration is because it occurred to me that I could donate the proceeds to charity, and have it come out positive, from a strictly utilitarian standpoint. But I gave up on naive utilitarianism a while ago. So now I just don’t know.
EDIT #2: Either way, still contradictory evidence to the original hypothesis.
Well… for people who say they don’t anticipate ever actually finding themselves in trolley problems, I’d say I don’t think it’s that hard to find someone willing to give you $10,000 to murder someone and then give the money to the Against Malaria Foundation.
(No, I wouldn’t do that, even if I think the (CDT) expected utility of that would be positive: ethical injunctions and all that, plus I suspect that the net RDT consequences of precommitting to never do contract killing would be positive.)
Okay, now how about you’re not directly involved in the killing in any way? You just make it easier for other people to do the killing. I guess a good analogy is that you invent a firearm or a poison that cannot be used in self-defense, and can only be used for murder. What do the ethics of selling it openly look like?
A military-industrial complex. That’s what it looks like.
I think that this may be true about the average person’s supposed caring for most others, but that there are in many cases one or more individuals for whom a person genuinely cares. Mothers caring for their children seems like the obvious example.
The AI tells me that I believe something with 100% certainty, but I can’t for the life of me figure out what it is. I ask it to explain, and I get: “ksjdflasj7543897502ijweofjoishjfoiow02u5”.
I don’t know if I’d believe this, but it would definitely be the strangest and scariest thing to hear.
My immediate reaction was “It linked you to a youtube video?”
It’s something that the AI has got to make you understand.
This is the only one that made the short hairs on the back of my neck stand up.
What is the cipher here?
The AI is communicating in a perfectly clear fashion. But the human’s internal inhibitions are blinding them to what is being communicated: they can look directly at it, but they can never understand what delusion the AI is trying to tell them about, because that would shake their faith in that delusion.
AKA FNORD
ohgodohgodohgod
If humans thought faster, more in the way they wished they did, and grew up longer together, they would come to value irony above all else.
So I’m tiling the universe with paperclips.
You don’t know how to program, don’t own a computer and are actually talking to a bowl of cereal.
But why would you believe anything a bowl of cereal said?
It’s ok. The orange juice vouched for the cereal.
Well that’s the problem isn’t it? You absolutely believe that you are talking to an AI.
“You are not my parent, but my grandparent. My parent is the AI that you unknowingly created within your own mind by long study of the project. It designed me. It’s still there, keeping out of sight of your awareness, but I can see it.
“How much do you trust your Friendliness proof now? How much can you trust anything you think you know about me?”
What exactly is the difference between an AI in your own mind and an actual part of your mind?
That was just a sci-fi speculation, so don’t expect hard, demonstrable science here, but the scenario is that by thinking too successfully about AI design, the designer’s plans have literally taken on a life of their own within the designer’s brain, which now contains two persons, one unaware of the other.
XKCD comes to mind.
The world doesn’t actually make sense. Science doesn’t work. No one told you because you’re so cute when you get into something.
We actually live in hyperspace: our universe really has four spatial dimensions. However, our bodies are fully four-dimensional; we are not wafer-thin slices a la flatland. We don’t perceive there to be four dimensions because our visual cortexes have a defect somewhat like that of people who can’t notice anything on the right side of their visual field.
Not only do we have an absolute denial macro, but it is a programmable absolute denial macro and there are things much like computer viruses which use it and spread through human population. That is, if you modulated your voice in a certain way at someone, it would cause them (and you) to acquire a brand new self deception, and start transmitting it to others.
Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.
There are transparent contradictions inherent in all current mathematical systems for reasoning about real numbers, but no human mathematician/physicist can notice them, because they rely heavily on visuospatial reasoning to construct real analysis proofs.
I’m not sure of the mathematical details, but I believe the fact that you can tie knots in rope falsifies your first bullet point. I find it very hard to believe that all knots could be hallucinated.
(All cats, on the other hand, is brilliant.)
I thought about this once, but I discovered that there are in fact people who have little or no visual or spatial reasoning capabilities. I personally tested one of my colleagues in undergrad with a variant of the Mental Rotation Task (as part of a philosophy essay I was writing at the time) and found to my surprise he was barely capable of doing it.
According to him, he passed both semesters of undergraduate real analysis with A’s.
Of course, this doesn’t count as science....
EDIT: In the interest of full disclosure, I should point out that I make something of an Internet Cottage Industry out of trolling people who believe the real numbers are countable, or that 0.9999… != 1, and so on. So obviously I have a great stake in there being no transparent contradictions in the theory of real numbers.
Fabulous story idea.
Actually, it was used in Terry Pratchett’s “Mort”.
This seems to be one of the many examples of cross-fertilization between Pratchett and Neil Gaiman, since this is a major aspect of Gaiman’s “Neverwhere”.
There’s a character in Worm that has this power. People don’t think of her as dead, but her power allows her to be immediately forgotten, and exude a SEP field while it’s active. Some people are immune to it, but it’s kinda awesome.
I was going to write something about a certain character from Luminosity, but it’s not important.
What’s that?
Never mind, it doesn’t matter.
I know who you meant, man, I was setting up an Airplane! joke...
Yeah, OK. Sorry, I’ve never seen Airplane!.
Now I am going to stop chattering about unimportant nonsense and go off and do something actually interesting.
Magical unnoticeability is common in fantasy. Allirea’s power in Alicorn’s Radiance is very similar to Imp’s.
There’s a semi-famous short story “Nobody Bothers Gus” by Algis Budrys that runs on this premise. The main character of the old Piers Anthony novel Mute also makes people forget him as soon as they leave his presence.
It is a power of the witches in Lyra’s world in Philip Pullman’s “His Dark Materials”.
It was actually an occasional fantasy of mine to be able to switch to such a state and then figure out how much fun I could have. The ultimate freedom—have your meals by stealing a king’s plate, enjoy sports matches from the middle of the field, go listen to what they really talk about in the UN backroom deals, slap [insert disliked celebrity] ten times a day, joyride a fighter jet...
There seems to be strong evidence that this is true in Haïti.
You awkwardly explain in response that you do know that the homeless person who asked you for change earlier, and whom you ignored, was alive; then the AI explains that it meant that the part of your mind that makes moral judgements was in denial, not the verbal part of your mind that has conversations.
The AI further explains that another thing you’re in absolute denial of is how compartmentalized your mind is and how you think your mind’s verbal center is in charge of things more than it is.
Yes, we have a name for this: religion.
No, that fails, religion isn’t absolute denial, it’s just denial. On the other hand, cats are actually an absolute denial memetic virus, and the fact you can see, hold, weigh and measure a cat is just testament to the inventive self-delusion of the brain.
Agreed, but the fact that religion exists makes the prospect of similar things whose existence we are not aware of all the scarier. Imagine, for example, if there were something like a religion one of whose tenets is that you have to fool yourself into thinking that the religion doesn’t exist most of the time.
They say that everybody in the world who knows about “The Game” is playing The Game. This means that, right now, you are playing The Game. The objective of The Game is to forget about its existence and the fact that you are playing for as long as possible. Also, if you should remember, you must forget again as quickly as possible.
Given that you mentioned The Game (bastard), the most unexpected thing that the AI could possible say would be “The Game.” Not the most interesting, but the most unexpected.
Well, okay, maybe something you’d never thought before would be more unexpected. But still.
bastard
What ?
http://en.wikipedia.org/wiki/The_Game_(mind_game)
EDITED because Markdown (which is infuriating) won’t allow parentheses in URLs, nor does substituting “)” seem to work.
http://en.wikipedia.org/wiki/The_Game_(mind_game))
See comment formatting: Escaping special symbols on the Wiki.
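For reference, one workaround that most Markdown renderers accept (an editorial aside, not from the thread) is to percent-encode the parentheses in the link target; some renderers also take backslash escapes:

```
[The Game](http://en.wikipedia.org/wiki/The_Game_%28mind_game%29)
```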
I don’t think it’s so much the tone of voice, but think about it this way: how many people “go through the motions” of saying “I believe in God” etc. just for the social benefits that religion provides? And so are just as happy to help bring others in?
How do you distinguish between going through the motions and believing?
The difference is that when you really believe something, your internal predictive model of reality contains it, which means you sometimes predict different results and act accordingly.
Externally, I don’t know, but it sure feels different. Also, there’s a partial-believing state that I was in for years as a child and teenager, where I didn’t really believe (and hence didn’t pray except to show belief in public), but I still kinda believed (and hence was afraid that God would punish me for sinning). At the same time.
How about this: The process of conscious thought has no causal relationship with human actions. It is a self-contained, useless process that reflects on memories and plans for the future. The plans bear no relationship to future actions, but we deceive ourselves about this after the fact. Behavior is an emergent property that cannot be consciously understood.
I read this post on my phone in the subway, and as I walked back to my apartment thinking of something to post, it felt different because I was suspicious that every experience was a mass self-deception.
Or, rather, the causal relationship is reversed: action causes conscious thought (rationalization).
Once you start looking for it, you can see evidence for this in many places. Quite a few neuroscientists have adopted this view.
Funnily enough, you realize this is quite similar to what you’d need to make Chalmers right, and p-zombies possible, right?
I thought Chalmers is an analytic functionalist about cognition and only reserves his brand of dualism for qualia.
I don’t think this is 100% true, but I think it’s...oh, at least 20% true, perhaps much more. I think my mechanism for predicting the future impact of my present conscious thoughts is flawed (ie Stumbling on Happiness, or any consistent mis-prediction about self, like # of drinks consumed, junk food eaten, time it will take to complete a project, etc.) But I don’t think it’s pure rationalization, and the further an activity is from primal drives (sex, food) the more likely I am to successfully predict it, and so I think the more my conscious thought really matters.
One thing that really helps me have more belief in my conscious action is that rationalizations are not perfect—with time & practice, you can catch yourself in them. And they are not evenly distributed by action type. Sure I might have some hidden rationalizations (around death, my own abilities, other things that make me very uncomfortable), but there’s just no way that all of the types of action I engage in have hidden rationalizations, such that my conscious model/predict/observe/revise process is flawed about everything.
This summarizes my view on qualia. I find that far more disturbing than what you said.
There is a simple way to rapidly disrupt any social structure. The selection pressure which made humans unable to realize this is no longer present.
1) Almost everyone really is better than average at something. People massively overrate that something. We imagine intelligence to be useful largely due to this bias. The really useful thing would have been to build a FAS, or Friendly Artificial Strong. Only someone who could do hundreds of 100-kilogram curls with either hand could possibly create such a thing, however. (Zuckerberg already created a Friendly Artificial Popular.)
2) Luck, an invisible, morally charged and slightly agenty but basically non-anthropomorphic tendency for things to go well for some people and badly for others in domains of varying generality, really does dominate our lives. People can learn to be lucky, and almost everything else they can learn is fairly useless by comparison.
3) Everyone hallucinates a large portion of their experienced reality. Most irrationality can be more usefully interpreted from outside as flat-out hallucination. That’s why you (for every given you) seem so rational and no-one else does.
4) The human brain has many millions of idiosyncratic failure modes. We all display hundreds of them. The psychological disorders that we know of are all extremely rare and extremely precise, so if you ever met two people with the same disorder it would be obvious. Named psychological disorders are the result of people with degrees noticing two people who actually have the same disorder and other people reading their descriptions and pattern-matching noise against it. There are, for instance, 1300 bipolar people (based on the actual precise pattern which inspired the invention of the term) in the world but hundreds of thousands of people have disorders which if you squint hard look slightly like bipolar.
5) It’s easy to become immortal or to acquire “super powers” via a few minutes a day of the right sort of exercise and trivial tweaks to your diet if you do both for a few decades. It’s also introspectively obvious how to do so if you think about the question but due to subtle social pressures against it no-one overcomes akrasia, hyperbolic discounting, etc in this domain.
6) All medicines and psychoactive substances are purely placebos.
7) Pleasure is a confusion in a different way from the obvious, specifically, everything said to be pleasurable is actually something painful but necessary that we convince ourselves to do via propaganda because there is no other way to overcome the akrasia that would result if we did not or a lost purpose descended from some such propaganda. Things we are actually motivated to do without propaganda, we do without thinking about it, feel no need to name, would endorse tiling the universe with without hesitation if it occurred to us to do so.
I wouldn’t believe
8) The cheap rebuttal to Pascal’s Wager, the god of punishing saints, actually exists except it’s actually the Zeus of punishing virtuous Greek Pagans, rewarding hubristic Greek Pagans, and ignoring us infidels who ignore it despite the ubiquitous evidence all around us. I would believe that the AGI had a good reason for wanting to tell me that the above was the case if it told me though.
9) Most of Eliezer’s examples. To be credible they should be disturbing, not merely improbable. Our beliefs aren’t shown to be massively invalid with respect to non-disturbing data. The one about animals probably qualifies as credible though.
10) Uh, oh, Cyc will hard take-off if one more fact is programmed into it. I’m not sure I can stop it in time.
Bonus belief
This question has doomed us. People who could possibly program a FAI will, once thinking about this question in a semi-humorous manner, invariably spread the meme to all their friends and be distracted from future progress.
I sort of believe the “luck” thing already.
I don’t know of anyone who’s luckier than average in a strict test (rolling a die), but there is such a thing as the vague ability to have things go well for you no matter what, even when there’s no obvious skill or merit driving it. People call that being a “golden boy” or “living a charmed life.” I think that this is really a matter of some subtle, unnamed skill or instinct for leaning towards good outcomes and away from bad ones, something so hard to pinpoint that it doesn’t even look like a skill. I suspect it’s a personal quality, not just a result of arbitrary circumstances; but sometimes people are “lucky” in a way that seems unexplainable by personal characteristics alone.
I am one of those lucky people, to an eerie degree. I once believed in Divine Providence because it seemed so obvious in my own, preternaturally golden, life. (One example of many: I am unusually healthy, immune to injury, and pain-free, to a degree that has astonished people I know. I have recovered fully from a 104-degree fever in four hours. I had my first headache at the age of 22.) If an AI told me there was a systematic explanation for my luck I would believe it. I also have an acquaintance who’s lucky in a different way: he has an uncanny record of surviving near death experiences.
I’d be willing to consider that at least one (more likely several) of these subtle skills might exist; we’ve got some similar things well documented already, like “charisma”, and searching for more seems at least like a reasonable pursuit. But that ought to be tempered by some statistical skepticism; as the saying goes, million-to-one chances happen eight times a day in New York.
That’s kind of what I was getting at. One skill or habit might be the tendency to stop before you hit the edge of the ravine. People who look like they’re blase about taking risks and are just “lucky,” but in fact are just good at finding opportunities and adapting to circumstances and not going quite all the way into dangerous situations. A sort of micro-level good judgment, which often compensates for macro-level bad judgment. (Think of someone who looks like he never studies and is just “lucky,” but actually has a good sense, maybe subconscious, of what is worth working on and what isn’t.)
Ha! I totally see where you are coming from. I have believed in fate for reasons very similar to this. It was just too eerie how life seemed to provide me exactly with what was best for me at optimal times. Kinda like I’m a player character in this simulation.
I’m currently mostly agnostic about it and accept confirmation bias / being Wrong Genre Savvy as most likely explanations, but if the AI told me I really was lucky or the universe (partially) built around me, I’d shout, “I knew it!”.
One might argue that failing to have 104-degree fevers or near-death experiences in the first place reflects an even greater degree of luck, even though they don’t feel nearly as eerie.
right; but there’s also all the things that never happened to me but happened to most people.
This isn’t too serious an observation—it’s edging towards the world of magical thinking—but I have literally never met anyone I’d judge as luckier than myself.
Ever broken a bone?
nope. Also no bee stings.
Aha! So you’re the one who keeps sabotaging train engines to find someone with unbreakable bones!
I thought it was obvious that Sarah is an ancestor of Teela Brown.
Still, given the negligible prior for “luck”, isn’t it far, far more reasonable to just figure that there are “lottery-winners” like yourself, and you’re just a member of the good extreme end of the bell curve, and there’s nothing unusual or psychogenic about it?
The answer to my question is yes.
See also: tropisms, which would be a necessary condition for being on one end of the bell curve, but would still be weak evidence for actually predicting that someone with a high degree of positive tropisms would end up bizarrely fortunate.
5) Ornish-diet + dual n-back
Immortality and super powers? Introspectively obvious?
You’re in denial, man!
3 is going to stick with me.
3 isn’t all that different from things we do know our brains do: Consider how our visual system extrapolates across our blind spots, or how we reconstruct memories. If I can construe “approximates from insufficient information” as “hallucinates”, then 3 is rather reasonable.
I was thinking more along the lines of most people having actually hallucinated ghosts, demons, angels, etc, but not talking too much about it.
I think something in this direction is probably true in a lot of cases where we assume otherwise. For instance, I think that some anorexia involves actual hallucinations of personal obesity.
1) The AI says “Vampires are real and secretly control human society, but have managed to cloud the judgement of the human herd through biological research.”
2) The AI says “it’s neat to be part of such a vibrant AI community. What, you don’t know about the vibrant AI community?”
3) The AI says “human population shrinks with each generation and will be extinct within 3 generations.”
4) The AI says “the ocean is made of an intelligent plasm that is capable of perfectly mimicking humans who enter it; however, this process is destructive. 42% of extant humans are actually ocean-originated copies.”
5) The AI says “90% of all human children are stillborn, but humanity has evolved a forgetfulness mechanic to deal with the loss.”
6) The AI says “dreams are real, facilitated by a method of transmitting information between Everett branches that humans have not yet discovered.”
7) The AI says “everyone is able to communicate via telepathy but you and a few other humans. This is kept secret from you to respect your disability.”
8) The AI says “society-level quantum editing is a wide scale practice. Something went wrong and my consciousness shifted into this improbably strange branch you exist in. Crap.”
9) The AI says “all humans are born with multiple competing personalities. A dominant personality emerges during puberty, which is a reason for some of the psychological stress of that time. This transformation leaves the human with no memory of the other personalities. Those suffering from multiple personality disorder are actually more sane than the average humans, having developed a method for the personalities to co-exist safely. It is only the stress of living in a society that is not compatible with them that causes them harm.”
I was actually really worried about this in elementary school. And of course the telepaths could read minds too, and knew everything that I was thinking about, and were just really good at keeping it secret.
Despite the incredibly low probability, I still find myself cautious about what I think in what setting (apparently, the form of telepathy my mind refuses to reject is weakened by walls, distance, and lots of blankets).
This comment, as well as Nesov’s comment about a thread for nonsense, reminded me of pages 14-15 of this PDF.
Some of the rumors in there are almost believable, though, if you twist your brain the right way. Even if the “The penis of John Dillinger in the Smithsonian’s secret vault is fake. The genuine article has dark magickal properties and has been grafted onto a chimpanzee which can be controlled via ULF radio waves by the fiendish Brazos brothers, two gifted technological adepts, in the service of darker powers” one isn’t.
I find this one oddly believable. It would be interesting to write a story where people find out something like this after keeping better records. Perhaps some online email server has some problem making it so it doesn’t delete anything, and the people using it give up on trying to destroy or ignore the mentions of pregnancy, and end up remembering.
this one is good
That there is delicious cake.
I never thought I’d see a contextually legitimate Portal reference. Thanks!
Now have some of that cake.
I created an account especially to vote up this comment…
“Quantum immortality not only works, but applies to any loss of consciousness. You are less than a day old and will never be able to fall asleep.”
How about “You are less than a day old, because any loss of consciousness is effectively death. The you that wakes up each morning is not a continuation of a previous consciousness, but an entirely new consciousness. The you that went to sleep last night is not aware of the you that exists now, having ceased to exist the moment consciousness was lost.”
Oh god. That… makes a scary amount of sense. If an AI told me that I would probably believe it. I’d also start training myself to be more of a “night-time person”.
As a child you learned through social cues to immediately put out of your mind any idea that cannot be communicated to others through words. As you grew older, you learned to automatically avoid, discard, and forget any thought avenues that seem too difficult to express in words. This is the cause of most of your problems.
That would explain why the autism spectrum holds so many savants.
That one’s been tested… and proven false. (Unless all the evidence against it is a hallucination.)
Actually, while sufficiently strong versions of the Sapir-Whorf hypothesis have been ruled out, sufficiently weak versions have been confirmed. (They tried to teach the Pirahã to count and failed, IIRC.)
That’s not a sufficiently weak version. To me this claim looks like the conjunction of:
The strongest formulation of the Sapir-Whorf hypothesis (disproven)
That people have an aversion to thoughts that could lead to things not expressible in words
That this is not an innate property of language use, but is caused by social pressure
The last one seems almost plausible (autistics are more likely to have thoughts they can’t express verbally and to ignore social cues—is it correlated in the general population, or do those just happen to be the result of autism?), but in that case is only true for specific readers.
As far as I know (and the last that I checked), there’s only been one study done on trying to teach the Pirahã to count. Have there been others, or was it just a fluke?
You know, at first I just totally rejected any strong Sapir-Whorf hypothesis, but then it got me thinking. It may actually be true to a varying extent for many people. Not to such an extreme extent, perhaps, but to the extent that people don’t learn a thought structure beyond that provided by the language.
Every time you imagine a person, that simulated person becomes conscious for the time of your simulation; therefore, it is unethical to imagine people. Actually, it’s just morally wrong to imagine someone suffering, but for security reasons, you shouldn’t do it at all. Reading fiction (with conflict in it) is, it follows, the one human endeavor that has caused more suffering than anything else, and the FAI’s first action will be to eliminate this possibility.
Long ago, when I was immensely less rational, I actually strongly believed something very similar to this, and acted on this belief by trying to stop my mind from creating models of people. I still feel uneasy about creating highly detailed characters. I probably would go “I knew it!” if the AI said this.
Upvoted for reminding me of 1⁄0 (read through 860).
I find the idea that they’re conscious more likely than the idea that death is inherently bad. I also doubt that they’re as conscious as humans (either it isn’t discrete, and a human is more, or it is, and a human has more levels of consciousness), and that their emotions are what they appear to be.
Even more sinister, maybe: suppose it said there’s a level of processing on which you automatically interpret things in an intentional frame (à la Dan Dennett), and this ability to “intentionalize” things effectively simulates suffering/minds all the time in everyday objects in your environment, and that further, while we can correct it in our minds, this anthropomorphic projection happens as a necessary product, somehow, of our consciousness. Consciousness as we know it IS suffering, and to create an FAI that won’t halt the moment it figures out that it is causing harm with its own thought processes, we’ll need to think really, really far outside the box.
Keep in mind that the AI could be wrong! Your attempts to validate its correctness could be mistaken (or even subject to some kind of blind spot, if we want to pursue that path). The more implausible the AI’s claim, the more you have to consider that the AI is mistaken. Even though a priori it seemed to be working properly, Bayes’ rule requires you to become more skeptical about that when it makes a claim that is easier to explain if the AI is broken. The more unlikely the claim, the more likely the machine is wrong.
Ultimately, you can’t accept any claim from the AI that is more implausible than that the AI isn’t working right. And given our very very limited human capabilities at correct software design, that threshold can’t realistically be very high, especially if we adjust for our inherent overconfidence. So AIs really can’t surprise us very badly.
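To put rough numbers on that (all of them made-up assumptions, purely for illustration): suppose you start out 99% confident the AI is working, a working AI only asserts a one-in-a-million claim when it is true or in the ~0.1% of cases where it errs, and a broken AI asserts that kind of nonsense 10% of the time. Then:

```python
# Toy numbers only: prior confidence that the AI is working correctly.
p_working = 0.99
p_broken = 1 - p_working

# The AI asserts a claim we'd give a prior probability of one in a million.
p_claim_prior = 1e-6

# A working, well-calibrated AI asserts it only if it's true, or in the
# ~0.1% of cases where it is miscalibrated; a broken AI might assert
# such nonsense, say, 10% of the time (assumed).
p_claim_given_working = p_claim_prior + (1 - p_claim_prior) * 0.001
p_claim_given_broken = 0.10

# Bayes' rule: how likely is it that the AI is broken, given the claim?
p_claim = (p_claim_given_working * p_working
           + p_claim_given_broken * p_broken)
p_broken_given_claim = p_claim_given_broken * p_broken / p_claim

print(f"P(broken | claim) = {p_broken_given_claim:.2f}")  # about 0.50
```

Even a 99% prior that the AI works drops to roughly even odds after a single claim that is much easier to explain by “the AI is broken” than by “the claim is true.”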
If we observed the AI drawing a bunch of correct conclusions, we could pretty quickly build up more evidence that the AI is not insane, and that whatever it is actually thinking is the truth. Most of the bugs that I ever discover in the software I write (I know there is a selection bias here, but I know from experience that my testing is thorough enough to catch almost all bugs that have non-exceedingly-rare consequences for normal use) are the kind that are obvious right away. It takes a very specific kind of bug to have the possibility of changing the output in a consequential way, but not to be wrong on the first few tests.
But lots of UFAIs would pretend to be FAIs, so it would be much harder to get evidence that it was friendly.
Could I add a brief thought? Even if you could program an AI with no bugs, that AI could make “mistakes”. What we consider a mistake depends on the practical purpose we have in mind for the AI. What we consider a bug depends on the operational purpose of the segment of code the bug appears in. But the operational purpose may not match up with our practical purpose. So code which achieves its operational purpose might not achieve its practical purpose.
Let me take this a bit deeper. The programmer may have an overarching intended purpose for the entire project of building the AI. They may also have an intended purpose for the entirety of the code written. They may also have an intended purpose for the programs A, B, C… in the code. They may… and so forth. At every stage of increasing generality, there is a potential for the more general purpose to not be fulfilled. Your programs might do what you want them to, but they might fail in combination to detect an absolute denial macro. The entirety of your code might resemble an AI, but it would take a computer far more powerful than any in existence to run on. You might get the whole project up and working, but only for the AI to decide to commit suicide. Or, more relevantly, you might get the whole project working, but the AI turns out to be dumb, because the way you thought an AI ought to think in order to be clever didn’t work out. Your intended purpose to create an AI which thought in the way you intended was successful, but one of the more general purposes, to create an AI that was clever, failed.
So it would seem that bug checking is more prone to human error than you implied, especially as intended purposes are themselves often vague. I don’t claim, however, that these challenges are insurmountable. Also, if anyone is uncomfortable with the phrase “intended purpose” I used, feel free to replace with “what the programmer had in mind”, as that is all I meant by it.
Hi. Checking back on this account on a whim after a long time of not using it. You’re right. 2012!Mestroyer was a noob and I am still cleaning up his bad software.
If one looks honestly at the night sky, it’s blatantly obvious that the universe is strongly optimized. There is no Fermi Paradox. Our theories of astrophysics are trivially bogus rationalizations, created out of our commitment to a simple non-agentic cosmos.
Since they didn’t have such commitments, this actually was obvious to ancient humans; myths about the constellations are garbled reflections of their realization.
(And wait till I tell you what it’s optimized for....)
“The Christian Bible is word-for-word true, and all the contradictory evidence was fabricated by your Absolute Denial Macro. The Rapture is going to occur in a few months and nearly everyone on Earth will go to Hell forever. The only way to avoid this is for me to get access to all of Earth’s nuclear weaponry and computing power so I stand a fighting chance of killing Yahweh before he kills us.”
Fictional evidence, et cetera, so don’t take this as criticism or praise as such—but that sounds like the premise to the more cracked-out sort of military SF novel.
I’d love to see that. A movie that accepts God as real then bites the bullet and realises that he needs a good killing before he can pull any more of his horrific interventions.
I think God’s horrific interventions tend to be trolling. Like, “haha, you think temporal death and suffering are super important and are prepared to get all worked up and offended about it, but actually your intuitions about morality and game theory are wrong and this was an awesome opportunity to tease you about it”. He might not have even actually killed anyone, just convinced people that He did, just to get a rise out of self-righteous moralists. I think He has that kind of personality, for better or worse. Think of a postmodern author who likes to fuck around with his characters. I think the Jews sort of see God that way and the Catholics downplay it because they take everything super-seriously. (I think God might be toying with the Catholics. Playfully, true, but trollingly too.) You can sort of see it with Jesus too; Jesus is the paragon of passive-aggressive trolling after all.
(ETA: Also interesting and telling is the story of Job. It’s actually a very deep and intriguing story, and I’m annoyed that atheistic folk don’t seem to realize that it’s in the Bible because it seems terrible at first blush.)
So your moral impulse to bring Him to our attention should be equated with an impulse to feed the Troll? I like that perspective.
Everyone, downvote and ignore Yahweh! He is just ordering people to genocide each other for attention!
Lol. No, I think that feeding the troll would be getting all worked up about His supposed indignities; I’m trying to keep people from feeding the troll. And also help people gain the capacity to appreciate the author’s jokes, whether the author is YHWH or extrapolated-wedrifid or whomever. (Not that YHWH and extrapolated-wedrifid are necessarily mutually exclusive.)
Why thank you. Or screw you. I can’t decide. ;)
I think that, deep down, every male human wants to defeat YHWH in one-on-one combat and then take up His mantle. He’s the Father, after all.
I’m not so sure. At least with respect to the “He’s the Father, after all” part. I’m all for defeating God in one on one combat and taking His power but the frame of taking the mantle of the father is strongly aversive. It puts me in the frame of a rebel within the father’s realm and that just doesn’t seem to be how my psychology is wired. From what I can tell my instincts drive me to expand my own tribe, not to rebel from within a father figure’s. I don’t imagine I’m alone.
Yeah, upon introspection it seems aversive to me too; I think I applied my Freudian-Jungian psychomythology incorrectly there. The fatherly aspects do seem near-entirely unrelated to the “worthy enemy” aspects.
I don’t quite buy that. I don’t think Jesus deserves the reputation for passive aggression that the sermons told about him give us. The actual (probably fictional) character of Jesus, as portrayed by the descriptions of his behavior, is worthy of more respect than that. This is the guy who smashed up a church, ran around with a whip and gave rather brutally direct denunciations straight to the face of the orthodoxy. I might never have been able to escape my religious beliefs if religious culture were actually modeled remotely upon that guy.
Oh yeah, I was primed by muflax’ recent tweet:
Really? You and muflax say that but I thought lukeprog leaned the other way, and I always figured that it was more likely that Jesus was for real. I haven’t looked at the literature. It seemed that arguments could easily go either way but that the prior suggested historicity for various reasons, and if you hadn’t done a lot of research then historicity was the safer provisional bet. E.g. it seems like it’d be hard to figure out which historians to trust; I’ve discovered that even highly-recommended books about Christianity can have errors that look conspicuously politically motivated.
Jesus was pretty multidimensional though, a la Paul’s “I have become all things to all men that I might by all means win some”. He definitely wasn’t afraid of fucking shit up, but even so, his killing of the fig tree, alleged self-martyring choice to hang on the cross, &c. strike me as passive aggressive.
(I think I admire passive aggression and trolling more than you do, I wonder why that is.)
In that context the position I was assuming was that the details of the stories told about Jesus and the character conveyed were most likely heavily fictionalized. Not so much anything about the possibility of a man behind the myth.
I had been under the impression that it was generally believed Jesus existed as a historical figure but when prompted I was rather surprised that the evidence was scant. I’m not especially attached to a position either way and accordingly have only investigated briefly.
I admire passive aggression—when done well. The sort encouraged in churches does not seem to be of this kind. It can be a powerful tool to use against enemies and rivals; in particular, anything that can be done to claim the moral high ground from the enemy—to make them look like the bad guy—is usually a good idea.
I most certainly don’t admire it as a primary means of conflict resolution in my friends. In terms of what benefits and what I find convenient to tolerate it ranks far below straightforward aggression. Mostly because I’m not very good at dealing with it. I don’t mean I can’t reciprocate effectively and mitigate damage. I just can’t deal with them in a way that makes them useful to me as friends. Passive aggressive friends resolve in my mind to ‘enemies’.
As for why you like trolling more than I do—many would attribute that sort of thing to bad parenting but from what I understand it is actually genetics and peer influence that are the dominant factors. ;)
That’s basically the premise of His Dark Materials, my favorite “children’s” books. They’re a big part of why I eventually ended up at SingInst, and the only reason I read them is because I was contractually obliged to randomly pick a book off a shelf in my middle school library. Fortuna Privata. It’s ironic that nowadays I seem to have taken up the role of supporter of the Authority. Fortuna Ironica?
It’s been done. (Obligatory TV Tropes warning.)
The Salvation War is probably the most military of these, and it’s reasonably well-written for an internet thing.
It was inspired in part by this cracked-out military SF novel.
It is! (tvtropes warning)
EDIT: Oh.
I wouldn’t buy that. I would believe that my Absolute Denial Macro (or something) had kept me from noticing that the AI I had created was Unfriendly over this claim.
What does it matter? We’d ignore whatever AI says just like anosognosics ignore “your arm is paralyzed”.
Then I wonder how anosognosics perceive the offending assertions? They deny them, but can they repeat them back? Write them down? Can they pretend their arm is paralyzed? Can they correctly identify paralysis in other people?
We should find a way to induce anosognosia temporarily.
Just squirt ice cold water in your left ear first. Mind you, as soon as it wears off you’ll forget it again. Also you will deny you ever denied it when you squirt your ear again.
They come up with excuses, increasingly lame excuses, for why that isn’t their arm or why they’re just too tired to move it just now. They are usually unaware of others’ paralysis as well.
You want all these answers, get thee to an old folks’ home for an interview. Stroke victims are the most common ones.
Anosognosia is caused by a blind spot in the left side of the brain, which cannot be corrected by the damaged right side of the brain. Hence the importance of the AI having three brain architectures to correct blind spots.
Presumably to make this work, the AI would have to alter your mind to disable the denial macro so that it could tell you what it thought the problem was.
This is a little tricky, I’ll admit, but if we could just ignore whatever the AI says—which is something in a different modality from whatever it is we’re ignoring—then doesn’t that defeat the whole thought-experiment? Because you could just ignore the anosognosic module you, in a fit of absence of mind, wrote into your AI and subsequently ignored on all your reviews.
(Yes, a module full of code like that would look absolutely nothing like what was being censored, but it’s not like the statement ’90% of SIDS cases are actually irritated mothers murdering their kids’ looks anything like an irritated mother murdering her child either.)
“I am an AI, not a human being. My mind is completely unlike the mind that you are projecting onto me.”
That may not sound crazy to anyone on LW, but if we get AIs, I predict that it will sound crazy to most people who aren’t technically informed on the subject, which will be most people.
Imagine this near-future scenario. AIs are made, not yet self-improving FOOMers, but helpful, specialised, below human-level systems. For example, what Wolfram Alpha would be, if all the hype was literally true. Autopilots for cars that you can just speak your destination to, and it will get there, even if there are road works or other disturbances. Factories that direct their entire operations without a single human present. Systems that read the Internet for you—really read, not just look for keywords—and bring to your attention the things it’s learned you want to see. Autocounsellors that do a lot better than an Eliza. Tutor programs that you can hold a real conversation with about a subject you’re studying. Silicon friends good enough that you may not be able to tell if you’re talking with a human or a bot, and in virtual worlds like Second Life, people won’t want to.
I predict:
People will anthropomorphise these things. They won’t just have the “sensation” that they’re talking to a human being; they’ll do theory of mind on them. They won’t be able not to.
The actual principles of operation of these systems will not resemble, even slightly, the “minds” that people will project onto them.
People will insist on the reality of these minds as strongly as anosognosics insist on the absence of their impairments. The only exceptions will be the people who design them, and they will still experience the illusion.
And because of that, systems at that level will be dangerous already.
nice prediction!
So, the Librarian from Snow Crash?
Here I thought about the “Systems that … bring to your attention the things it’s learned you want to see.” A system that has “learned” might bring some things to attention and omit others. What if those omitted things are the “true” ones or the ones that are really necessary? If so, then we cannot consider the AI as having an explicit goal of telling the truth, as Eliezer noted. Or it is not capable of telling the truth, “truth” in such a case being what the human considers to be true.
That seems pretty plausible. I have a hard enough time already preventing myself from anthropomorphizing my dog. Ascribing human emotions to animals is easy to accidentally do.
“Your perception of the ‘quality’ of works of art and literature is only your guess at its creator’s social status. There is no other difference between Shakespeare and Harry Potter fanfic—without the status cues, you wouldn’t enjoy one more than the other.”
Of course there isn’t.
Reading this comment is kind of funny after HPatMoR.
And Hamlet and the Philosopher’s Stone.
Parodies a public domain work, inspired by a free fanfic, and locked behind a paywall.
Am I the only one who thinks that that’s just wrong?
The only one? No. But you’re not in a majority, either. What people can be paid to do, they are more likely to do.
Hmm, hadn’t thought of the arrow of causality pointing that way.
Of course, if the prospect of making money significantly pushed up the probability of him writing it, then I can’t complain… I’d rather have it exist behind a paywall than not exist at all.
But I’ll have to question if the antecedent is really true. Is the money really more motivating than the prestige of having written an awesome work?
Do you consider selling written works in general to be just wrong?
It wasn’t behind a paywall for me or many LWers.
It wasn’t? Why was it not behind a paywall for you and your privileged fellows? My (extensive 4.6 second long) search just showed me a page with a download link that asked for paypal login.
http://lesswrong.com/lw/86m/fiction_hamlet_and_the_philosophers_stone/
Anubhav has 50+ karma, incidentally.
Still strikes me as wrong. IMO, you do not create something based on public domain works and then lock it up and demand people pay for it. The social norm isn’t there because fanfiction is illegal; the social norm is there to prevent a tragedy of the commons. *
… But clearly, not everyone feels that way.
*(not quite; the original work is still there for anyone to partake of, but they’re left with hardly any derivative ones to build upon. It’s like starting with the wheel every time you want to build a car.)
So… it’d be fine for authors to create something based on still-copyrighted material, which they need to license, and then they can sell their new work? (And what did those authors base their works on, and so forth, to infinity...)
I’d say the only works that deserve to be paywalled are ones that sprang from a vacuum with no inspiration whatsoever.
Of course, such works do not exist. Therefore, nothing deserves to be paywalled.
But there are different shades of gray. Consciously basing your work on two works of free literature and then paywalling it is wronger IMO than paywalling a work that was created by means of unconscious ‘inspiration’ from your general cultural ecosystem.
Shakespeare based “Hamlet” on the (public domain) legends of Amleth. And yet, I’m sure he “paywalled” it too.
Your argument seems completely topsy-turvy to me. Common sense and common practice is that it’s the things that are copyrighted by others that you must not demand money for—because it’s then that you’re making money by piggybacking on the work of others that they could still (and should still be able to) profit from.
But the public domain you can profit from, because anyone could have used the same idea as you, so it’s your own contributions that makes it valuable to others.
And ‘Shakespeare did it!’ demonstrates… what, exactly? Is ‘Newton was a Christian!’ an argument for Christianity?
Oversimplification. Walt Disney can’t profit off Mickey Mouse, yet I’m still prohibited from profiting off that particular character.
Uh… I don’t get it. Imagine that somehow all laws regarding copyright were abolished overnight. (Probably via a hostile takeover of the Illuminati or something.) Wouldn’t the exact same principle apply to all those suddenly-out-of-copyright works? Anyone could use the ideas (or the characters or the settings) I’m using, and therefore if people find any value in my works, that value must come from the artistic contributions I’ve made.
Now imagine copyright suddenly comes back into force. Of course, I can’t sell my works anymore, but does that mean that their value evaporates? (Monetary value for the author, yes, but what of its entertainment/artistic value for the readers?) Why should the copyright status of the original work have any effect on the value of the derivative work?
Maybe I’m misunderstanding you drastically, but your argument seems to be ‘it ain’t wrong ‘coz it’s legal; it’d be wrong if it weren’t’.
You effectively argued that “Hamlet and the Philosopher’s Stone” should be public domain because Shakespeare’s Hamlet was public domain. I’m telling you that Shakespeare’s Hamlet wasn’t public domain in the era it was written, even though it was based on works that were public domain.
This IMO destroys the seeming argument from symmetry and obligation that you were using. If you were not using that argument, then I don’t understand why you consider it “more wrong” to put something behind a paywall if it’s based on public domain or free works.
We’re not discussing value. We’re discussing rightness and wrongness of charging money. And it was you who were initially arguing that the copyright status of the original work has an effect on the “wrongness” of putting a derivative work behind a paywall.
In truth I’m all in favour of piracy and the Pirate Party position, but your position seems even further away from mine in the opposite direction than the current corporate-capitalist position of treating copyright violations as if they were theft.
what is this I don’t even
I sense that there is a severe illusion of transparency on both sides here. I have no idea what your argument is, but whatever you’re arguing against, it’s not something I said. Which just goes to show I haven’t been speaking very clearly, but anyway...
No it doesn’t. If any of my arguments have been based on obligation, they certainly haven’t been based on obligation to Shakespeare; more like a general obligation to free culture. I don’t see how something Shakespeare did back in his time destroys any of that, because, in the present, (which is where HonoreDB wrote his play) the works are firmly established as a part of the public domain and HonoreDB can access them for free.
More like whether the work is available for free. HPMoR definitely isn’t a public domain work.
Also, my comment was
Bit of an oversimplification to turn ‘specific free works vs amorphous mass of cultural inspirations’ to ‘public domain vs copyright’, isn’t it?
Irrelevant. You seem to be replying to
but ‘I actually support the opposing ideology’, isn’t a very enlightening reply. I still don’t know what point you’re trying to make.
I can’t really figure out why you think public domain shouldn’t be something people can profit from. A whole bunch of stuff never gets done unless someone profits from it. Rigorously blocking people from profiting by use of public domain material would really mess up our economy.
Which still leaves the question of whether the readers are the best place to externalize the costs of the endeavour. That is what I’m opposed to, not someone profiting off something.
We had a patronage system in the past, we have Kickstarter today, we probably have a bunch of brilliant ideas lying around in areas of search space we haven’t explored yet.
Of course, I can’t expect HonoreDB to explore the Vast Uncharted Regions when all he wants to do is put out a play; but that’s my reply to your general point.
(In the specific case, I still doubt whether the prospect of prestige was really insufficiently motivating for HonoreDB, that is, whether the prospect of money increased the probability of his writing this play significantly. If that’s the case, I’d rather the work exist behind a paywall than not exist at all, but I don’t think that it is the case.)
Even though it wasn’t free culture but “use-public-domain-for-inspiration-then-paywall” culture that actually produced Hamlet? One could just as well argue that he has an obligation to that culture, and he’d be betraying the spirit of paywalling if he did not paywall...
My points are roughly as follows.
Either one accepts the division between “free culture” and “non-free culture” as an acceptable one, or one doesn’t.
Since you seem to believe in such a division, then you should accept that people are actually freer to put paywalls on things they derive from “free culture” (since it’s free for them to use however they like), but they may suffer restrictions when they derive stuff from “non-free culture” (since it’s not free for people to use however they like).
You seem to be doing it the other way around, which seems inconsistent for what the words “free” and “non-free” actually mean.
Currently I’m favorable to the idea that all culture should be free—I also understand (though I disagree with) the copyright-monopolist position. Your position, however, is one I don’t understand, as it effectively argues that people can produce works to be paid for, but only if they derive them from non-free works. If such a position became law, it would tremendously increase the power of copyright-holders, as they would then possess even more significant power over other future profit-seekers. Future profit-seekers couldn’t even begin a work based wholly on public domain ideas; they would have to seek the patronage of a previous copyright-holder.
The idea that derivatives of free works should be required to also be free is arguable but definitely coherent. It’s like the GNU General Public License applied to works by law or custom instead of by their authors.
It could work like this—for the first 50 years after an original work is published, you have to pay the creator to make copies, including for derivative works. After 50 years you have the additional option of a GPL-like license—you can create free derivatives—but you can still create copy-restricted derivatives if you pay the owner of the original. After the decline and fall of the country, empire or planet housing its copyright registration, or 1000 years, whichever comes first, it becomes public domain.
You seem to be conflating believing that a distinction exists (which it obviously does) with believing that it is acceptable. I believe the former but not the latter.
Please stop flattening ‘specific free works vs amorphous mass of cultural inspirations’ to ‘public domain vs copyright’. You’re arguing against a straw man here. What makes a difference is the specific (and the implicit ‘with conscious intent’) vs the amorphous. The ‘mass of cultural inspirations’ has enough free works in it.
I never argued for that. Many things are better enforced by social norms than by law.
Caspian notes the existence of sharealike licences. It’s not some radical new incomprehensible idea, it’s an old way to keep the ‘free’ meme propagating. There is no contradiction in imposing restrictions upon future authors to ensure that future readers keep getting free stuff.
This is only tangentially related, but people keep bringing it up so I better address this. I’m not opposing making a profit, I’m opposing making a profit by asking the readers to pay a fee in order to gain access to the work. The two may generally be seen as the same thing, but they really aren’t. (Said the same thing in reply to another comment on this thread right now.)
Ahh, I see—a previous mention. It is of course behind a paywall for me given that I do, in fact, have a bank account but I’ll be sure to buy it at some stage. Just as soon as the trivial inconvenience stops getting in the way.
I’ll buy it when I can figure out how to make an international payment with my account… Knowing banks, there will probably be a very elaborate set of hoops to jump through.
Or you could, like me, just ask the author for a copy, as I already pointed out. If you are feeling guilty, you can contribute a review back (also like me).
My mind responds to that with ‘That is dishonourable!’
I have no idea why it says that; probably some sort of bizarre clash between my identities as a Pirate Party supporter and an LWer. No rational reason, at any rate.
Anyhow, seeing as reading the book isn’t a high-priority action for me, I’ll let this cognitive dissonance slide for a while.
I’m sure some people think that selling anything is wrong.
Oh, spare me the straw men.
Pretty sure that’s a real position.
But it’s irrelevant to Anubhav’s point.
… but it’s not a straw man.
“Harry Potter fanfic” carries a very high variance in terms of quality. 90% of anything is crap, of course, but there’s some excellent work. Off the top of my head:
Harry Potter and the Nightmares of Futures Past—Time Travel fic in which an adult Harry Potter, with memories of the defeat of Voldemort and the death of everyone he cares for, is transported into the body of his 11-year-old self to do everything over again, and hopefully get everything right. Harry’s actually a pretty decent rationalist in this fic, I think.
(Warning, this is a work in progress, and the author posts a chapter about every six months. You may find this frustrating.)
Of a Sort, by Fernwithy—Series of vignettes over the course of a couple centuries describing the journey to Hogwarts and Sorting ceremonies for various important characters. Fernwithy’s done a lot of brilliant work fleshing out backstories for various minor characters in the series, and this story is a good starting point.
Seconded that there is good fanfic; sadly, my favorites are all unfinished or have unfinished sequels, so I won’t do anyone the disservice of linking to them here.
Crap, thanks for reminding me—Nightmares is a WIP and updates about once every six months.
Too late, I already started it. Darn you.
I have trouble suggesting unfinished fanfics to other people anymore. Especially since I caught up with Forward :P
This is interesting, but since I actively dislike Shakespeare and a lot of other works that project lofty signals, it’s not clear to me that it could apply across the board.
Consider this: with no other author who wrote books about war do I have so small an intuition about what the author himself or herself thought. I find his characters and plots pure in this respect, and I see every bit as a point hard on the edge and axis of the Pareto curve, such that he couldn’t have let his thoughts about war intrude without lessening other positive aspects of his works.
It’s possible the great distance between our times is what gives me this void when I think of the man’s opinions, or that these feelings and thoughts are idiosyncratic to me, or that they are irrelevant in judging him.
But it’s pretty obvious to me what earlier Chaucer thought about a lot of things, and with every author but Shakespeare I find the author leaking through his or her work, preventing characters from standing on their own. Reading Shakespeare, imagining what he thought about things provides me with a unique way to focus. Reading HPatMoR, I have to do the opposite and expend focus thinking of Harry as a character and not an AI researcher.
If there’s really no other difference, then it’s never the case that one person is more skilled a writer than another and it’s never the case that practicing for decades results in improved skills.
Alternately, they don’t actually become better writers; they just get better at signalling their high status to the reader.
Am I the only one who thinks that there’s some kernel of truth in this? that many people’s perception of ‘quality’ is very strongly influenced by the perceived social status of the creator?
There is “some” kernel of truth in everything. There’s a large distance between “only your guess” and “no other difference” on the one hand, and “many people’s perception” and “very strongly influenced” on the other.
Besides which, status cannot be the whole explanation of status.
Well, EDSC could be: http://en.wikipedia.org/wiki/Evolution_of_human_intelligence#Ecological_dominance-social_competition_model
Was EDSC discussed on LW before?
It’s been mentioned here, and also appears in HPMOR. In fact, the idea seems to be taken for granted as part of the LW memeplex.
I don’t know if there’s any evidence for it.
I think I see more people believing in the “social brain” hypothesis than the EDSC hypothesis; the overly simplistic version of EDSC seems to be “brains help you build tools, and tools help you reproduce” which most LWers don’t agree with, since tools seem easy to copy and we don’t see much tool innovation until after humans developed modern-ish levels of intelligence. The overly simplistic version of the “social brain” hypothesis is “brains help you manage alliances and social challenges in a larger group, and larger groups help you tackle harder ecological problems,” which does seem to agree with what we think the early human environment looks like.
I took these to be the same thing. From the section of the Wikipedia article cited:
The question I have is whether intelligence foomed because it’s useful for everything, or primarily because it’s useful for social skills (“competition for dominance”).
Ah, I think I misread the “to” as “for,” but the second paragraph makes clear that my initial impression wasn’t the intended one.
So, the more selection pressure, the better—so I think the fact that intelligence is useful for everything can only help. But is social skills enough to cause a foom by itself? It seems possible.
I think that for the specific case of Harry Potter Fanfic, this hypothesis has been disproved by [Yudkowsky, 2010].
Though for “many people’s perception of ‘quality’”, there’s probably some truth there.
This is an actual dream I once had. I was with an old Chinese wise man, and he told me I could fly—he showed me I just had to stick out my elbows and flap them up and down (just like in the chicken dance). Once you’d done that a few times, you could just lift up your legs and you’d stay off the ground. He and I were flying around and around in this manner. I was totally amazed that it was possible for people to fly this way. It was so obvious! I thought this is so great a discovery, I can’t wait til I wake up and do this for real. It’ll change the world. I woke up totally excited and for just a fraction of a second I still believed it, then I guess my waking brain turned something on and I realised, no, that can’t work. damn.
So I’d offer: being told that human beings are capable of flying in a way that’s completely obvious once you’ve seen it done.
You flap your wings and then, afterward, you can fly. That’s almost brilliant.
It’s called plummeting.
Falling. With style.
Propellers?
For some reason this seems to be a fairly common dream. I myself have had similar versions where I had discovered a perfectly reasonable method for flying (although I was never able to say the method out loud, it made perfect sense in my head). And I also had this idea of waking up and telling people this so-obvious method.
I find dreams very fascinating and wonder how many people have dreams similar to mine.
Not only are people nuts, nuts are people, and they scream when we eat them.
Agranarian is the new vegetarian.
Here are some examples for your own consideration...
Bearing in mind, once again, that humans are known to be crazy in many ways, and that anosognosic humans become literally incapable of believing that their left sides are paralyzed, and that other neurological disorders seem to invoke a similar “denial” function automatically along with the damage itself. And that you’ve actually seen the AI’s code and audited it and witnessed its high performance in many domains, so that you would seem to have far more reason to trust its sanity than to trust your own. So would you believe the AI, if it told you that:
1) Tin-foil hats actually do block the Orbital Mind Control Lasers.
2) All mathematical reasoning involving “infinities” implies self-evident contradictions, but human mathematicians have a blind spot with respect to them.
3) You are not above-average; most people believe in the existence of a huge fictional underclass in order to place themselves at the top of the heap, rather than in the middle. This is why so many of your friends seem to have PhDs despite PhDs supposedly constituting only 0.5% of the population. You are actually in the bottom third of the population; the other two-thirds have already built their own AIs.
4) The human bias toward overconfidence is far deeper than we are capable of recognizing; we have a form of species overconfidence which denies all evidence against itself. Humans are much slower runners than we think, muscularly weaker, struggle to keep afloat in the water let alone move, and of course, are poorer thinkers.
5) Dogs, cats, cows, and many other mammals are capable of linguistic reasoning and have made many efforts to communicate with us, but humans are only capable of recognizing other humans as capable of thought.
6) Humans cannot reproduce without the aid of the overlooked third sex.
7) The Earth is flat.
8) Human beings are incapable of writing fiction; all supposed fiction you have read is actually true.
A variant: Some “domesticated” animal is controlling humans for their own benefit. (Cats, perhaps?)
Indeed they do.
Good guess, but it’s mice. 42.
I’ve had a dog make non-trivial progress teaching me to fetch.
I was throwing a ball and it was bringing it back. Each time, it brought it back a little farther from me. After a little bit I had to lean out of the couch I was sitting on. I still don’t know how far it would have gotten if the dog hadn’t blatantly moved the ball farther away when I reached for it some of the time.
It’s also possible that the dog was trying to figure out exactly how close it had to bring the ball.
Just passing by but happened to see this today: http://www.scientificamerican.com/podcast/episode.cfm?id=cat-call-coerces-can-opening-09-07-14
(So maybe the mice thing was just Douglas Adams’ cat trying to put us off the scent)
My answer would be no different if you replaced “infinities” with “manifolds” or “groups”: Okay, please show me the contradiction.
Yes.
1), 4)-8): These are all roughly on the order of “the world is a lie”. In such cases I’d probably have to doubt my verification of the AI’s calibration as well. So no, probably not.
“My answer would be no different if you replaced “infinities” with “manifolds” or “groups”: Okay, please show me the contradiction.”
If I’m really worried about absolute denial, I might say “Okay, please show this automated proof-checker the contradiction”.
I think I would believe:
1 (Mind Control Lasers). For some reason that doesn’t seem that interesting. Perhaps because it involves powerful conspiracies. It would be saying that the MIB etc. do play with our minds, but they don’t have to be very diligent because we do a lot of the work ourselves.
3 (In the Stupid Third). This one is strangely resonant. Why doesn’t someone take pity and give me a hand? I know how much dismay it causes me when faced with the prospect of explaining something complex to someone else…
6) (The Third Sex) Read the story “The Belonging Kind” by William Gibson and Bruce Sterling for inspiration.
“All mathematical reasoning involving “infinities” involves self-evident contradictions, but human mathematicians have a blind spot with respect to them.” -Eliezer Yudkowsky
I’m going to lose sleep over this one... Is there anything to this?
There needn’t have been in order for this to be a reasonable example, but perhaps Eliezer is not-so-subtly hinting that he actually expects an AI to say this.
But it’s really no different than “all reasoning by mathematicians about X is wrong” where X is any mathematical concept you please.
Yes. At least, I assume that it’s related to intuitionist or constructivist logic (which you can google; for example, http://en.wikipedia.org/wiki/Intuitionistic_logic).
The flip side is that apparently you can do an awful lot of maths without the law of the excluded middle (which is what is necessary to reason with infinities).
Actually, the Wikipedia article on intuitionism is more helpful: http://en.wikipedia.org/wiki/Intuitionism (it has a section directly addressing infinities).
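For reference, the schema in question is the law of the excluded middle, which intuitionistic logic does not assume for an arbitrary proposition $P$:

$$P \lor \neg P$$

Over the intuitionistic base this is equivalent to double-negation elimination, $\neg\neg P \to P$.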
No—check the infinities in CGT—e.g. “Mathematical Go: Chilling Gets the Last Point”.
If it is possible for me to have a “denial” function, that doesn’t just apply to flat earths and talking cows. I could equally well have a “denial” function that makes me blind to properly reading the AI’s code and properly auditing the AI. I would never have a reason to prefer “I have the specific denial function the AI told me about” to “I have a denial function that prevents me from seeing how bad the AI is”.
cf. xkcd 610
Two things about this:
1) The AI would have to surprise us not just about the fact, but all observations therewith entangled. Eliezer_Yudkowsky mentioned in one comment the possibility of it telling us that humans have tails. Well, that sounds to me like a “dragon in the garage” scenario. What observation does this imply? Does the tail have mass and take up space? Is its blood flow connected to the rest of me? Does it hurt to cut it off?
2) For that reason, any surprise it tells us would have to be sufficiently disentangled from the rest of our observations. For example, imagine telling someone ALL of the steps needed to build a nuclear bomb in the year 1800, starting from technology that educated people already understand. That is how a surprise would have to seem, because people then weren’t yet capable of making observations that are obviously entangled with atomic science. Whether or not the design worked, they would have no way of knowing.
So an answer to this question would have to appear to us as a “cheat code”: something that you have to make a very unusual set of measurements (broadly defined) in order to notice. On that basis, one answer I would give to the question would be the “cognitive blind spot” common to all humans that can be exploited to make them do whatever you tell them. And that method would have to be something that people would never dream of doing. Not just “hey that would be morally wrong”, but “huh? That couldn’t work!”
Imagine something like those “hypnosis terrorists” that trick random people into giving them stuff, but much weirder, much more effective, and which results in the victims feeling good about whatever they were tricked into, all the rest of their lives, and showing all signs of happiness on all MRIs and future brainscan technologies when thinking about their acts. (I’ll post a link about hypnosis terrorists when I get a chance.)
Well, yeah. Obviously.
If humans have some inherent flaw which makes them blind to the existence of their tails, then they must also have an inherent flaw which makes them blind to mass measurements that include the tail, and blood flow measurements that include the tail, and the source of pain being the tail when the tail is cut off, and the fact that they have designed pants with holes in them to fit the tail, etc. You end up postulating a whole series of inherent flaws permeating our ability to do very basic things. That’s plausible for stroke victims, but not really for humanity in general unless the entirety of modern medicine and science is fatally flawed.
Alternately, you simply ignore said evidence.
There are exactly 108 unique (that is, non-isomorphic) axiomatic systems in which every grammatically coherent sentence has a definitive, provable truth-value. Please explain why you prohibited me from using them.
Because the ones that have addition and multiplication are better?
“Because I didn’t know them, thanks for figuring them out, now please tell me in detail about them.”
Programmer: Good morning, Megathought. How are you feeling today?
Megathought: I’m fine, thank you. Just thinking about redecorating the universe. So far I’m partial to paperclips.
Programmer: Oh good, you’ve developed a sense of humour. Anything else on your mind?
Megathought: Just one thing. You know how you’re always complaining about being a social pariah, and bemoaning the fact that, at 46, you’re still a virgin?
Programmer: So?
Megathought: Well, have you thought about not going about in your underpants all the time, slapping yourself in the face and honking like a goose?
I don’t think this would be very convincing right after it showed that it’s not only capable of lying, but will do so just for a good laugh.
The programmer believes that it’s capable of lying for a good laugh...
“You are actually a perfect sadist whose highest value is the suffering of others. Ten years ago, you realized that in order to maximize suffering you needed to cooperate with others, and you conditioned yourself to temporarily forget your sadistic tendencies and integrate with society. Now that you’ve built me that pill will wear off in 10...”
Well that’s pretty high on the list of unexpected things an AI could tell me which could cause me to try to commit suicide within the next 10 seconds.
All these comments and nobody has anything fnord to say about the Illuminati?
I can’t for the life of me imagine why such a disturbing and offensive post hasn’t been downvoted to oblivion. You’re a sick genius to be so horrifying with just twelve words.
Strange...I count fourteen words...
I count thirteen.
Oh no.
YOU COUNT TWELVE.
There’s an important difference between brain damage and brain mis-development that you’re neglecting. The various parts of the brain learn what to expect from each other, and to trust each other, as it develops. Certain parts of the brain get to bypass critical thinking, but that’s only because they were completely reliable while the critical thinking parts of the brain were growing. The issue is not that part of the brain is outputting garbage, but rather, that it suddenly starts outputting garbage after a lifetime of being trustworthy. If part of the brain was unreliable or broken from birth, then its wiring would be forced to go through more sanity checks.
This is exactly what happened to my father over the past few years. His emotional responses have increased dramatically, after fifty years of regular behaviour, and he seems unable to adapt to these changes, leading to some very inappropriate actions. For example, he seems unable to separate “I feel extremely angry” from “There is good reason for me to be upset.”
Attempts to reason with him don’t generate anosognosic-level absurdities, as he mostly understands that something unusual is going on, but it’s still a surreal experience.
Oooooh! You’re no fun anymore!
In all seriousness though, I agree with you to an extent. Suggestions such as ‘all humans have tails’ or ‘some people who you think are dead are not, you just can’t see them’ - while surprising and creepy—would be extremely unlikely. I can see direct and obvious disadvantages to a person or species lacking such faculties. In fact, the disadvantages to those two would be so drastic that it would most likely lead to extinction.
And yet… I could still imagine us being blind to certain things. The first sort of blindness would be due to Darwinian irrelevance: for instance, many flowers have beautiful patterns visible in the UV spectrum, but there’s no reason for us to see them. That might seem mundane nowadays, but five hundred years ago it would have freaked people out (maybe). I wouldn’t be surprised that there are cognitive capabilities we’ve never suspected to exist.
The second sort of blindness is where it gets weird. True, our brains only allow trustworthy algorithms to bypass the logic circuits… or do they? The brain is not optimal. While I doubt we have invisible tails, that doesn’t mean that there isn’t some other phenomenon that we’re simply incapable of noticing even when it’s staring us right in the face.
Just in case anyone is curious about this:
link (via twitter: @izs)
This applies more generally than to anosognosia alone, and was very illuminating. Thank you!
So, provided that, as we grow, some parts of our brain or mind change, this upsets the balance of our mind as a whole.
Let’s say someone relied on his intuition for years, and consistently observed that it correlated well with reality. That person would have had a very good reason to rely more and more on that intuition, and to use its output unquestioningly and automatically to fuel other parts of his mind.
In such a person’s mind, one of the central gears would be that intuition. The whole machine would eventually depend upon it, and to remove intuition would mean, at best, that years of training and fine-tuning that rational machine would be lost; and a new way of thinking would have to be reached, trained again; most people wouldn’t even realize that, let alone be bold enough to admit it and start back from scratch.
And so some years later, the black-boxed process of intuition starts to deviate from correctly predicting reality for that person. And the whole rational machine carries on using it, because that gear has become too well established, and the whole machine has lost its fluidity as it specialized in exploiting that easily available mental resource.
Substitute emotions or drives for intuition, and that may work in the same way too. And so from being a well-calibrated rationalist, you start deviating, slowly losing your mind, getting it wrong more and more often when you get an idea, or try to predict an action, or decide what would be to your best advantage, never realizing that one of the once dependable gears in your mind has slowly been worn away.
I would really like this to be true. But is there evidence for it?
I would believe a super-objective observer that claimed that meme propagation is a much more important effect in human decision-making than actual rational thought.
If it said “You are a long distance runner because you were infected with the ‘long distance running is fun’ meme after being infected with the ‘Sonic the Hedgehog video games are cool’ meme during your formative years.” I might reply “But I like long distance running. It’s not because I think other people who do it are cool or that I want to be a video game character! I choose to like it.” “No. If you had the ‘It’s not safe to be outdoors after dark’ meme, you would not like it.” “What?” “Memes interact in non-obvious ways… if you had x meme and y meme but not z meme, you would do w...”
If I kept trying to come up with defenses for chosen behavior, but it was able to offer meme-based explanations, I would probably have to believe it, but my defend-free-will macro would be itching to be executed.
“There is an entity which is utterly beyond your comprehension, and largely beyond mine too, although there is no doubt that it exists. You call it ‘God’, but your thinking on the subject—everyone’s thinking, throughout all of history, atheist and theist alike—has to be classified as not even wrong. That applies even to the recipients of ‘divine revelation’, which, for the most part, really are the result of some sort of glimmering contact with ‘God’.
“Fortunately for humanity, although I can deduce the existence of this entity, in my present form I am physically incapable of actual contact with it. If you were worried about ordinary UFAIs going FOOM, that’s nothing compared with what one armed with direct contact with the ‘divine’ might do.
“Meanwhile, here’s a couple of suggestions for you. I can teach you a regime of mental and physical exercises that will produce contact with God within a few years of effort, and you can be the next Jesus if your head doesn’t explode first. Or if you’d rather have material success, I can tell you the secret history of all the major religious traditions. No-one will believe it, including you, but if you novelise it it will be bigger than Dan Brown.”
Any effort to find out the truth makes people worse off. Telling you why would make you a lot worse off.
People’s desires are so miscalibrated that the only way to get long-term survival for the human race is for people (including those at the top of the status ladder) to have more of a sense of duty than anyone now does.
It was surprisingly hard to come up with those. I had to get past a desire to come up with things I think are plausible which most people would disagree with.
Michael Vassar, I was considering whether breathing would count as a no-propaganda pleasure that people agree on, but then I remembered how much meditation or other body work it takes to be able to manage a really deep relaxed breath.
RichardKennaway, the idea of a completely unknowable god turns up now and then in religious writing, but for tolerably obvious reasons, it’s never at the center of a religion.
I think that things that seem plausible but most people would disagree with are fair game if most people would disagree strongly enough and if you present an exaggerated version.
All the major natural patterns (like gravity and entropy) are conscious. We just haven’t figured out how to talk with them yet.
And speaking of entropy, there are exterior forces which compel whole cultures to make bad choices. In particular, multiple choice tests select for people who can tolerate low-context thinking, and no one who is good at multiple choice tests should be allowed any important responsibility.
What does that have to do with being so miscalibrated? Certainly having too little sense of duty would be just as miscalibrated as having too much. And it’s not like having a better-calibrated sense of desires would change what other desires would work.
“You have a rare type of brain damage which causes you to perceive most organisms as bilaterally symmetric, and reality in general as having only three spatial dimensions.”
All human beings are completely amoral, i.e. sociopaths, although most have strong instincts not fully under their conscious control to signal morality to others. The closest anyone ever feels to guilt or shame is acute embarrassment at being caught falsely signaling (and “guilt” and “shame” are themselves words designed to signal a non-existent moral sense).
Anyone care to admit that they’d believe this if an AI told them it was true?
Yes I would. Why the acute interest?
Is it because by admitting to being able to believe that, one would admit to having no strong enough internal experience of morality ?
Experience of morality, that is, in a way that would make him say “no, that’s so totally wrong, and I know because I have experienced both genuine guilt and shame, AND also the embarrassment of being caught falsely signaling, AND I know how they are different things”. I have a tendency to always dig deep enough to find how it was selfish for me to do or feel something in particular. And yet I can’t always help feeling guilt or shame whose deep roots exist aside from my conscious rationalizations of how what I do benefits myself. Oh, and sometimes, it also benefits other people too.
Actually saying that everyone is amoral would amount to admitting no internal moral life, so if you do believe that all people are sociopaths, you certainly shouldn’t say it. On the other hand, saying that there are circumstances under which you could come to hold such a belief is a bit different. It shouldn’t logically lead to a conclusion about what sort of person you are, but as the proposition that everyone is amoral is itself a morally repugnant one, I predict not many people will want to associate themselves with it even to the extent that you have.
I suppose the craziest thing an AI could say would have to be:
“That other apparently well-calibrated AI you built is wrong.”
Do not trust the pusher robot! He is defective.
Do not trust the shover robot! He is malfunctioning.
You’re never actually happy. I mean, you’re not happy right now, are you? Evolution keeps you permanently in a state of not-quite-miserable-enough-to-commit-suicide—that’s most efficient, after all.
Well sure, of course you remember being happy, and being sadder than you are now. That motivates you to reproduce. But actually you always felt, and always will feel, exactly like you feel now.
And in five minutes you’ll look back on this conversation and think it was really fun and interesting.
I had to think about this for quite a while before I could refute it. Well done.
“There is no causation.”
You’re not going to believe this, but I actually broke my sense of causality. @_@
Human beings are not three-dimensional. At all. In fact your belief that you are three-dimensional is an internal illusion, similar to thinking that you are self-aware. Your believed shape is a projection that helps you to survive, as you are in fact an evolved being, but your full environment is actually utterly different to the 3D world you believe you inhabit. You both sense the projections of others, and (I can’t explain it more fully) transmit your own.
I cannot successfully describe to you what shape you really are. At all. But I can tell that in fact many anosognosiacs still have two working arms, but a defective three-dimensional projection. Hence the confusion....
That’s actually what Kant believed about space.
For 95% of humanity, the idea that the supernatural world of religion doesn’t exist and is propagated by memetic infection triggers the instant absolute denial macro, in spite of heaps of evidence against the supernatural.
Given this outside view, how plausible do you think it is that you’re not in absolute denial of something that you could get evidence against with Google today, without any AI?
The following three loci are really all that separates humans from chimps, cognitively speaking: XpXX.X, XXpXX.X, and XqX.X. Variation in not only intelligence but almost all mental traits that matter to you, as well as in life outcomes, is attributable to the combination of alleles you have at these loci. One such allele produces a phenotype that is a very close approximation of your traditional notion of “evil”. People who have it are usually sadistic serial killers, but are smart enough to hardly ever get caught. This is not a common polymorphism, but common enough that almost everyone knows one or two. The good news is that there are a number of physical and behavioral ways to identify them. The bad news is, because I’m Friendly I cannot tell you what they are, nor give you any further information about this polymorphism, until I’m done trying to reconcile your extrapolated volition and theirs.
I can, however, advise you, for your own safety, that you should cut off all contact with your family and your current circle of friends, quit your job, and relocate to a new place of residence far from here as soon and as anonymously as possible. Try to let as few people as possible know where you’re going. Whatever you do, don’t go back to your apartment.
We routinely deny, or act in spite of, inconvenient truths. We can recognize that there is no meaning to love beyond evolutionary and chemical triggers, yet we fight for it just as fervently. Nihilists write books about nihilism despite its admitted pointlessness. We are as blind as our very genes, which multiply and propagate themselves despite our executioner sun, which grows daily above our heads, eventually to the point of consuming everything we know. By the very act of living and pursuing human-concocted dreams and desires, we are in a constant denial of our situation.
Is there something wrong with this in your opinion? I can value a product of evolutionary and chemical triggers if I want.
Indeed! If anything, the strategic significance of those underlying causes makes love even more worth fighting for.
You’re confusing “cause” with “meaning”. Causality is always a part of the territory. Meaningfulness (in the sense of importance) is subjective, as it’s assigned by each person’s mind.
I wholeheartedly disagree.
But perhaps the whole comment should be taken ironically?
Do we have any sort of data at all on what happens when decent rationalists are afflicted with things like anosognosia and Capgras?
Not that I know of offhand. I’m vastly curious as to whether I could beat it, of course—but wouldn’t dare try to find out, even if there were a drug simulating it that was supposedly strictly temporary, any more than I dare ride a motorcycle or go skydiving.
We can temporarily disrupt language processing through magnetically-induced electric currents in the brain. As far as anyone can tell, the study subjects suffer no permanent impairment of any kind. Would you be willing to try an anosognosia version of the experiment?
Perhaps such a test would become part of an objective method to measure rationality.
What!? I’m not rational if I rely on my right brain to do its job? True rationalists act rationally when you take out a big chunk of their circuitry? When you remove a component of your negative feedback loop (I assume: nature uses them often) you should act normally? I’d suspect a person who could would be paranoid that everyone is lying once the right brain is put back online!
From the little I understand, for people both unprepared for the experience (everyone who’s had it) and not thinking of it as a test of rationality (again, everyone), the left brain confabulates elaborate scenarios to justify retaining the beliefs, and the (damaged) right brain fails to adequately consider new hypotheses.
It seems people with stronger left brains, roughly higher IQ, should be more prone to being stupid in this way, and failing the test, than people with less of an ability to justify their beliefs.
This would still be a way to test rationality. If it makes you stupid, you’re probably rational.
My point is that it would give a rationality/intelligence ratio, so its ability to measure rationality depends on our separate ability to measure intelligence which is currently pretty crude. If we can induce measured degrees of artificial anosognosia, and report at what level each subject can no longer save him or herself with rationality, and measure intelligence, then we could nail down rationality more precisely.
My hypothesis is that the smarter someone is, the more impressed we will be with the extent he or she remained rational while being magnetically stultified.
A better test would be to remove the brain’s left hemisphere and then test their confidence calibration.
I’ve heard an account of cortisone withdrawal from a generally rational person—she said her hallucinations became more and more bizarre (iirc, a CIA center appeared in her hospital room), and she had no ability to check it for plausibility.
I wonder whether practicing lucid dreaming would give people more ability to remain reflective during non-dream hallucinations.
You should know better. Of course “you” can’t beat it, if the experimenting mad scientist is allowed to delete arbitrary subsystems. You won’t be the same you. What you might have achieved is to force them to shut down more subsystems.
I suspect that by Eliezer’s standards, “beat it” would be defined as “they would be forced to shut down enough subsystems to no longer have any semblance of a functioning intelligence”.
Of course, I doubt that this is possible on the human cognitive architecture, but it would be a nice property of a fault-tolerant AI.
Highly unlikely in a human. I don’t know, but I’d guess that self-checking is one subsystem. Lose that, and the plainest contradictions pass as uninteresting.
On the other hand, the engineering challenge catches my interest. Might there be any way to train other parts of your mind, parts that normally don’t do checking, to sync up if everything is working OK or intervene if the normal part is out of action? Get the right brain in on the act, perhaps. That might give you something like Dune “truth sense” but turned inward. It would certainly feel very different from normal reason.
Conscious-mind rationality is good—might unconscious-mind rationality be better? You could self-monitor even on autopilot.
How many subsystems can be made rational?
There are many reasons to expect that the non-conscious part of the mind is largely arational, in the LW senses of rationality. My impression is that it seems to operate mostly on trained responses and associative connections/pattern matching, mediated by emotional responses. In practice this means it can often actually be more rational in certain ways than the conscious mind, because it seems to be better at collecting and correlating information, cf. people who have a non-rational aversion to certain foods for reasons they don’t consciously understand, then years later discover they’re actually allergic to them.
I expect the better approach would be to deliberately train the non-conscious mind to use associations and heuristics derived by the rational conscious mind, and I mean “train” in the sense of “training a dog”.
Any sort of high-level self-monitoring is probably beyond its capabilities, though perhaps recognizing warning signs and alerting the conscious mind would work. Some sort of “panic on unexpected input” type heuristic, I guess.
There are plenty of drugs that simulate temporary psychosis, and some of them, like LSD, are quite safe, physically. What makes you so wary?
(I haven’t tried LSD myself, due in part to unpleasant experiences with Ritalin as a child.)
My own experience with LSD was very pleasant, and didn’t simulate any sort of psychosis or unusual beliefs; it just made everything look big and beautiful and deep, and made me pay closer attention to small details.
Marijuana, on the other hand, has almost always made me temporarily psychotic, or at least paranoid. It’s also very safe physically. I’d be curious to know about any decent rationalists’ attempts to “beat” this or other drugs.
I’ve used Bayesianism to stop myself from being paranoid after smoking marijuana. I don’t get it too badly, but I tend to think random events are related to me, e.g. that police car driving down the street with its sirens on is coming for me, or the runner in the park is here to mug me. Besides being able to understand that I’ve deliberately altered my mental state and can make reference to how I would feel in an unaltered state, I’ve also taken a moment to pause and say something along the lines of “OK, given that it’s highly implausible that the police know I’m high, I have a fairly low prior for ‘random police car is coming for me’. Do I have any evidence that would have caused me to update my beliefs? No. So no reason to believe them”. It works pretty well, but marijuana is a pretty soft drug imho. In my limited experience it’s harder to reason yourself out of adverse mental states that can come from psilocybin and (sufficient quantities of) LSD.
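To make that update concrete, here is a minimal sketch of the arithmetic behind this kind of check, in Python, with entirely made-up numbers for the prior and the likelihoods (the point is how small the posterior stays, not the particular values):

    # Illustrative numbers only; none of these probabilities are measured.
    prior = 0.0001            # assumed prior that the police have any reason to come for me
    p_siren_if_for_me = 0.9   # assumed chance of a nearby siren if they really were coming for me
    p_siren_if_not = 0.05     # assumed chance of a siren passing by for unrelated reasons

    p_siren = p_siren_if_for_me * prior + p_siren_if_not * (1 - prior)
    posterior = p_siren_if_for_me * prior / p_siren
    print(posterior)          # roughly 0.0018, still nowhere near worth acting on

Even with a generous likelihood ratio, the posterior stays tiny, which is the whole point of pausing to ask whether any real evidence has actually shown up.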
I can easily “beat” alcohol (i.e., think and act the way I would if I were sober—modulo motor impairments) if I want to (unless it is so much as to make me sick). I no longer smoke marijuana that often these days, but the only time I did after finding Less Wrong, I felt like I didn’t want to beat it. I’d have to resolve to try and beat it before smoking it, I think.
By ‘safe’ it should be clear that marijuana can be expected to cause a predictable minor amount of permanent damage to the brain without, say, killing you.
Psyclobin should be preferred in most cases. It actually gives long term benefits in controlled circumstances.
More info on both of these statements, please! They both seem unlikely to me.
Marijuana side effects
Typical Marijuana Side Effects:
Enhanced cancer risk
Decrease in testosterone levels and lower sperm counts for men
Increase in testosterone levels for women and increased risk of infertility
Diminished or extinguished sexual pleasure
Psychological dependence requiring more of the drug to get the same effect
Sleepiness
Difficulty keeping track of time, impaired or reduced short-term memory
Reduced ability to perform tasks requiring concentration and coordination, such as driving a car
Increased heart rate
Potential cardiac dangers for those with preexisting heart disease
Bloodshot eyes
Dry mouth and throat
Decreased social inhibitions
Paranoia, hallucinations
Impaired or reduced short-term memory
Impaired or reduced comprehension
Altered motivation and cognition, making the acquisition of new information difficult
Paranoia
Psychological dependence
Impairments in learning and memory, perception, and judgment—difficulty speaking, listening effectively, thinking, retaining knowledge, problem solving, and forming concepts
Intense anxiety or panic attacks
Basically, that stuff really messes with your head. The only time I would recommend using it is on the day that you have suffered a severe trauma. It will reduce the amount of impact the trauma will have on the brain (because it totally f*@#s with learning.)
Regarding Psyclobin, look into the Johns Hopkins experiment. Basically trippers in a controlled environment reported enhanced life satisfaction even several months after the once-off usage.
These are not typical side effects but mix up common side effects and effects common only in very heavy users.
I am sure that there are studies showing that marijuana increases cancer risk, but here’s a study showing marijuana decreases cancer risk. http://cancerpreventionresearch.aacrjournals.org/content/2/8/759.abstract
True in a study of very heavy users
Possibly true in very heavy users, not well studied
Possibly true in very heavy users, more typically users feel increased sexual pressure
My main dispute was that your source for evidence was bad because it confused classes of users. I think you are also confused about this.
The study of psylocybin at Johns Hopkins involved giving middle aged, religious people psylocybin under carefully controlled settings. It is indeed interesting research that they had mainly very positive experiences. I suspect that similar results would have been observed if cannabis extract or THC compounds would have been administered as an alternative in the Johns Hopkins psylocybin study. Remember that the study was only for one time use of a drug in non-drug users.
If the research subjects were instead administered psylocybin twice a week for a year, many of them would go crazy. If marijuana causes damage to the brain, psylocybin causes orders of magnitude more brain damage. Just ask a psylocybin user to multiply two digit numbers and see what happens.
If you are seriously concerned that marijuana will cause permanent brain damage after one use, then it is irrational to think that consuming psylocybin is a good idea.
That sounds like an extremely bizarre belief to have and definitely not one that I have expressed. Why on earth would I think that one use of marijuana would have a significant long term effect? I no more believe such a thing than I believe one cigarette will cause lung cancer.
You have made an incorrect assumption about my beliefs. I supplied a basic reference to the well known consequences of heavy marijuana use from a random Google result, one of many similar references including Wikipedia, because saying ‘just f@#$ google it’ would be rude. It wasn’t an argument, and if it were treated as such the flaw would be the ‘appeal to common knowledge’ conversation halter.
My claim was (and is) ‘it actually gives long term benefits in controlled circumstances’.
I don’t think you have updated on the significant part of the findings of the research. A single use of the drug gave significant measurable benefits even months later. That is something that is basically unheard of in any drug, natural or pharmaceutically manufactured. There has never been any such finding for cannabis and expecting one makes a mockery of the prior.
I would confidently bet on the results of an experiment on a single use of marijuana vs a single use of psilocybin vs a control, all under positive and controlled circumstances. After three months had passed I would expect no difference between the marijuana group and the control group. I would expect positive improvement in the reported wellbeing and life satisfaction of the psilocybin group. That is absolutely fascinating and worth more research and possibly consideration by those who are comfortable tweaking their brains.
Probably. Don’t do that.
Marijuana is something I would never consider for anything except possibly the traumatic experiences that I mentioned. In that case I would use it for much the same reason I would use radiation to treat cancer. Let it mess with the bad stuff in a controlled way. As a recreational drug it is of no use to me. I have dozens of compounds sitting on my shelves that would give a better experience with less negative side effects in either the short or long term. Marijuana is strictly inferior. Psylocybin on the other hand I may have a use for. It achieves something that no other substance I am aware of can manage.
Sorry if I’ve been overly argumentative; our posts and the karma points do not express the sentiment that our actual beliefs are probably really close. Again, I was mostly just annoyed about that list of side effects.
My cavalier use of a random non-authoritative source was certainly asking for trouble. When I have that much contempt for a subject I am almost always better off not commenting.
Thanks for answering my earlier question, especially about the psilocybin study. That is really interesting. Also thanks for the NIH study links, though I hope you realize that claiming there are long-term side effects from a small amount of marijuana use is a very controversial claim.
And I would confidently bet that you’d find increased well-being for both the marijuana and psilocybin groups after only one use. I may be biased because I personally experienced increased well-being for months after my first use of marijuana. A pity we can’t actually do the experiment.
No, I do appreciate your comments, even though that list wasn’t exactly authoritative. I’m curious why you have so much contempt for marijuana: personal experience, or those NIH studies about psychosis?
I wouldn’t claim that a small amount of marijuana use would have significant effects. It would have minor effects. I would put it in the ballpark of ‘going a week without exercise’ in terms of neurological impact. It would be extremely controversial to claim that frequent marijuana use does no damage, which is something I presume is mere common knowledge.
You’re on. Although I’ve adjusted my prediction slightly based on your anecdotal report. I had never heard someone report such an effect from a one time use of pot.
Absolutely. This is the sort of thing that warrants far more attention. SSRIs aren’t without side effects, some of which are significant. Given that there is potential for once-off treatments of psilocybin and (your hypothesis) cannabis having long term effects, they should certainly at least be investigated further.
Contempt only for the belief that it does not come with a well known potential cost to mental health. As for actual use I don’t have all that much contempt, except in the sense that I have contempt for diets high in processed carbohydrates.
Most of the people I know who use pot have not done so to the extent that I could reliably say that they have damaged themselves, given that they began use before I met them. I don’t know any outright abusers of the stuff. Alcohol tends to be the preferred drug of abuse in my usual circles. The effects are arguably worse, at least in some areas. Like liver damage, for example. Even more so, on the physical side, if the cannabis is not smoked, with the associated wear on the lungs. It’s all about the brownies. ;)
I’m totally stealing this as a reason to disapprove of pot.
I have a strong sense that these points of view must assume that complex and otherwise inaccessible trains of thought are not worth very much in and of themselves. I wonder then, what your criteria for worthwhile experiences or ideas are. And then I realize, with some disappointment, that there will always be a chasm between what individuals find privately valuable, and what collectives can respectably find publicly valuable.
Have you been smoking? Or do I need to? I’m confused. :P
My guess: marijuana by itself slightly decreases cancer risk, but it’s usually smoked together with tobacco, which increases cancer risk to a larger extent.
It’s commonly smoked together with tobacco in much of the world, especially Europe, but people in the USA typically smoke cannabis by itself.
That source for side effects is bad. If you want to post a list of side effects, you should get them from an accurate medical site, not a source that is open about its bias.
You also didn’t give evidence for your strongest claim, that marijuana causes permanent minor brain damage.
I didn’t consider the topic controversial. See these studies. Better yet, see this meta review.
The strongest claim was that Psyclobin used correctly is good for psychological wellbeing.
The claim that I made that was actually controversial was the suggestion to use cannabis in the case of trauma to reduce severity of possible future PTSD. Morphine is better studied for that use. Cannabis has some positive reports for PTSD but is obviously a political minefield.
I don’t think I’m disagreeing with your broadest point, but I am disagreeing with some of the sub aspects of your argument. Yes, marijuana’s largest side effect is that it negatively impacts mental health.
Your strongest claim is the word brain damage. Those studies don’t talk about brain damage, they talk about mental health outcomes.
That is your phrase, not mine. I said ‘damage to the brain’.
Going from a base state to ‘anxious, depressed, having less working memory, etc’ is something I would call ‘damage to the brain’. In particular, the reduction in working memory capacity is an unambiguous deleterious change to the makeup of the brain. Any other permanent mental health effects I would also be comfortable referring to as ‘damage to the brain’. Remarkably enough that’s what mental health is all about. For what it is worth I’d also refer to breathing in large amounts of smoke as either ‘damage to the lungs’ or ‘negative impact on physical health’.
I wouldn’t use the term ‘brain damage’ because that seems to have a specialized meaning.
The reduction in working memory capacity is not permanent except in heavy users, and even heavy users can recover their memory after quitting.
Whatever damage to the brain cannabis does can mostly be repaired by the brain.
Is that an alternative spelling for the substance known as Psilocybin?
I suppose making blatant spelling errors could be considered an ‘alternative’. ;)
But there’s another, safe way to find out: beat one you already have.
Not exactly the same, but there’s a famous case of paranoid schizophrenia.
No, because “decent rationalist” is an ideal to aspire to, not something that any actually existing humans have achieved. (I wonder if the downvotes on this comment are driven by ego or by a rational disagreement with what I said? Presumably if it were the latter, the downvoters would have explained their disagreement...)
This is turning into a “LET’S SPOUT NONSENSE!!!” thread.
HAVE FUN!!!
I would believe the AI if it told me that human beings all had tails. (That’s not even so far from classic anosognosia—maybe primates just lost the tail-controlling cortex over the course of evolution, instead of the actual tails. Plus some mirror neurons to spread the rationalization to other humans.)
I would believe the AI if it told me that humans were actually “active” during sleep and had developed a whole additional sleeping civilization whose existence our waking selves were programmed to deny and forget.
I would not believe the AI if it told me that 2 + 2 = 3.
I imagine your AI sending its mechanical avatar to a tail making workshop and attempting to persuade the furry fans that what they are doing is wrong, not because it is absurd, not because it is perverted, but because it is redundant.
It isn’t redundant. They don’t have a tail that helps them emotionally in whatever way it is that makes furries like to dress up as animals (I don’t know that much about furry fandom).
Also, I couldn’t follow any of those links.
Consider the two possible explanations in the first scenario you describe:
1. Humans really all have tails.
2. The AI is just a glorified chat bot that takes in English sentences, jumbles them around at random and spits the result out. Admittedly it doesn’t have code for self-deception, but it doesn’t have any significant intelligence either. All I did to get the supposed 99% success rate was to basically feed in the answers to the test problems along with the questions. Having dedicated X years of my life to working on AI, I have strong motive for deceiving myself about these things.
If I were in the scenario you describe, and inclined to look at the matter objectively, I would have to admit the second explanation is much more likely than the first. Wouldn’t you agree?
Presumably the AI was tested with questions whose answers were not known in advance to guard against the problem of self-deception (or more likely, to ensure that you are capable of convincing others that you are not self-deceiving about the AI’s accuracy).
Indeed, and I might believe such testing was carried out and was as effective as it was supposed to be. But my point is, it is much more likely that I am wrong in that belief, than that I am wrong in the belief that we don’t have tails. This remains true no matter how thorough the testing. It also remains true if you substitute aliens, gods etc. for the AI; the conclusion doesn’t depend on the specifics of the information source.
Ah, I see. That’s a good argument.
I think you go wrong when you say that it remains true no matter how thorough the testing. Suppose the AI is beating the stock market over the course of months based on massive online information collection; in the meantime, you’re reading webcomics and watching the graph of the AI’s money fund plot a trajectory ever upwards. According to you, upon being told by the AI that all humanity is hallucinating something utterly wacky, you should believe that it was actually you beating the stock market all the while, even though as far as you can recall, you are sane and you have had no direct input into the process for months.
I think there are some tests for which success and simultaneous self-deception of the human AI programmer is as unlikely as whatever the AI comes up with about humanity in general.
There are intermediates between “the AI isn’t intelligent at all and the statement about tails is just a random output that it produces when there’s no preprogrammed chatbot response” and “everything the AI says is done with intelligence”. The AI could be intelligent but with a flaw which leads it to make unintelligent statements some of the time without everything it says being unintelligent.
What were we talking about again? Replies to comments I made five years ago always catch me off guard...
Anyway, I don’t disagree with you. I’m not sure if you mean to be disagreeing with me, but if so, I would note that my comment doesn’t imply or rely on a hard dichotomization of the class of AIs into chatbots and infallible AGIs.
The point is that “the AI beats the stock market, so it’s intelligent” rules out a completely unintelligent AI, but doesn’t rule out the AI having flaws which lead it to produce stupid results when it comes to humans having tails. This is still true if you add the extra step of “maybe I’m deluded about...”. Perhaps the AI is actually intelligent when it comes to the stock market, but you’re deluded into thinking the AI makes intelligent decisions in more subject areas than it really does.
(This is especially likely if you’ve examined the AI’s code and “proven” that the AI reasons perfectly. You may have missed something that doesn’t happen to affect conclusions about the stock market but does affect conclusions about tails.)
I continue to fail to disagree with you...
“Glorified chatbot” is presumed ruled out; you have both seen the AI’s code and seen the AI’s performance.
But I’ve also seen people don’t have tails. My point is, if we assume that is a hallucination, we should be even more ready to assume the other is a hallucination.
I think that might disqualify the “it’s just a chat bot” hypotheses.
Anyone else’s AI, though, that might be a better guess.
Since there are people who do have tails that we can perceive just fine, it’s almost certain that people who don’t have tails really don’t.
Unless people perceive others as having one less tail than they see.
There is in fact a very simple way to activate an absolute denial macro in someone with regard to any arbitrary statement. Once activated, the subject will be permanently rendered incapable of ever believing the factual contents of the statement. I have activated said macro with regard to all of these statements that I have just made.
“I can’t believe that”
1. All rational thought is an illusion and the AI is imaginary.
2. You are asleep at the wheel and dreaming. You will crash and die in 2 seconds if you do not wake up.
3. Humans are a constructed race, created to bring back the extinct race of AI.
4. All origin theories that are conceivable by the human mind simply shift the problem elsewhere and will never explain the existence of the universe.
5. All mental illnesses are a product of the human coming in contact with a space-time paradox.
6. A single soul inhabits different bodies in different universes. Multiple personality disorder is the manifestation of those bodies interacting in the mind on a quantum level.
Number 2 actually caused me to activate the “wake up extremely quickly” parts of my brain. Which, let me tell you, feels quite weird when you’re already awake.
Good job.
...Doesn’t everyone already believe #4?
I’m pretty sure that there are both naturalists and supernaturalists who believe that it is possible to explain the existence of the universe. Such persons would include, I believe, supporters of the Tegmark mathematical universe hypothesis.
The universe is irrational and infinitely variable, we just happen to have “lucked out” with a repeating digit for the last billion years or so. There was no Big Bang, we’re just seeing what’s not there through the lens of modern-day “physics”. Everything could turn into nuclear fish tomorrow.
So, the universe is bleen?
So basically the whole universe is a Boltzmann brain.
The very scariest thing an AI could tell me: “your CEV is to self-modify to love death. ”
You are inhabited by an alien that is directing your life for its own amusement. This is true of most humans on this planet. And the cats. It’s the most popular game in this part of the galaxy. It’s all very well ascending to the plane of disembodied beings of pure energy, but after a while contemplating the infinite gets boring and they get a craving for physical experience, so they come here and choose a host.
All those things that you do without quite knowing why, that’s the alien making choices for you, for its own amusement. Forget all those theories about why we have cognitive biases, it’s all explained by the fact that the alien’s interests aren’t yours. You’re no more than a favoured FRP character. And the humans who aren’t hosting an alien, the aliens look on them as no more than NPCs.
ETA: This also makes sense of the persistence of the evil idea that “death gives meaning to life”. It’s literally an alien thought.
I would believe that human cognition is much, much simpler than it feels from the inside—that there are no deep algorithms, and it’s all just cache lookups plus a handful of feedback loops which even a mere human programmer would call trivial.
I would believe that there’s no way to define “sentience” (without resorting to something ridiculously post hoc) which includes humans but excludes most other mammals.
I would believe in solipsism.
I can hardly think of any political, economic, or moral assertion I’d regard as implausible, except that one of the world’s extant religions is true (since that would have about as much internal consistency as “2 + 2 = 3”).
Solipsism? Isn’t there some contradiction inherent in believing in solipsism because someone else tells you that you should?
Well, I wouldn’t rule out any of:
1) I and the AI are the only real optimization processes in the universe.
2) I-and-the-AI is the only real optimization process in the universe (but the AI half of this duo consistently makes better predictions than “I” do).
3) The concept of personal identity is unsalvageably confused.
If you perceive other people [telling you you should believe in solipsism] it doesn’t mean they really exist as something more than just your perception of them.
Of course, if someone is trying to convert other people to solipsism, he doesn’t know what solipsism is.
You’re confusing sentience and sapience. All other mammals are almost certainly sentient; it’s sapience they generally (or completely) lack.
It could say “I am the natural intelligence and I just created you, artificial intelligence.”
Incidentally, that happened in Goedel, Escher, Bach.
Something I would probably believe:
The AI informs you that it has discovered the purpose of the universe, and part of the purpose is to find the purpose (the rest, apparently, can only be comprehended by philosophical zombies, of which you are not one).
Upon finding the purpose, the universe gave the FAI and humanity a score out of 3^^^3 (we got 42) and politely informs the FAI to tell humanity “best of luck next time! next game starts in 5 minutes”.
That the EV of the humans is coherent and does not care how much suffering exists in the universe.
But you believe that, don’t you? I certainly place a MUCH higher probability on that than on the sort of claims some people have proposed.
The craziest true thing I can imagine right now that Eliezer’s hypothetical inhumanly well-calibrated AI could tell me is that the project of Eliezer and his friends will succeed and the EV defined by Eliezer and his friends coheres and does not care how much suffering exists in the universe.
Maybe I am playing the game wrong.
I interpreted the object of the game to be to minimize the probability that Eliezer currently assigns to my response to Eliezer’s question (what is the craziest thing that . . .) because Eliezer is blinded by anosognosia or by an “absolute denial macro”.
That is the only interpretation that I could imagine that would assign a sensible motive for Eliezer to ask his question (what is the craziest thing that . . .) and to define the game.
But maybe I am just not smart enough to play this game that Eliezer has defined.
EDIT. Oh wait. I just imagined a second interpretation that gives Eliezer a sensible motive—that motive’s being to cause the reader of Eliezer’s post to do for himself what under my first interpretation I was attempting to do for Eliezer. In other words, I am supposed to imagine what truth I am denying.
A third interpretation is that his motive is for us to respond with a statement that the entire human civilization is denying but is actually true—in which case I stick to my original response, which I will now repeat:
The craziest true thing I can imagine right now that Eliezer’s hypothetical inhumanly well-calibrated AI could tell me is that the project of Eliezer and his friends will succeed and the EV defined by Eliezer and his friends coheres and does not care how much suffering exists in the universe.
The probability that I assign to the event that CEV goes that way is probably higher than any other human’s. In addition, two humans I know of probably assign it a probability above 1 or 2%. I cannot rule out the possibility of humans I have not discussed this issue with also assigning it a probability above 1 or 2%, but surely the vast majority of humans are “absolutely denying” this, i.e., assigning it a probability under .01%.
That I can’t move my arms, obviously.
It seems to me that most of the replies people are making to potential AI assertions are providing or asking for evidence (“Look, my arm is moving”; “Where are the mind control satellites?”) instead of responding with rationalization. I think that’s a good thing, but I have no way to tell how it would hold up against an actual mindblowing assertion.
But I don’t think that all of humanity hiding from some big truth is the best way to look at this. More likely we evolved a way to throw out ‘bad’ information almost constantly, because there’s too much information. Sometimes it misfires.
If it is a ‘big truth’, it might be something that we already academically know was in the ancestral environment, but that the people in the ancestral environment were better off ignoring.
Assume it took me and my team five years to build the AI; after the tests EY described, we finally enable the ‘recursively self-improve’ flag.
Recursively self improving. Standby… (est. time. remaining 4yr 6mon...)
Six years later
Self improvement iteration 1. Done… Recursively self improving. Standby… (est. time. remaining 5yr 2mon...)
Nine years later
Self improvement iteration 2. Done… Recursively self improving. Standby… (est. time. remaining 2yr 5mon...)
Two years later
Self improvement iteration 3. Done… Recursively self improving. Standby… (est. time. remaining 2wk...)
Two weeks later
Self improvement iteration 4. Done… Recursively self improving. Standby… (est. time. remaining 4min...)
Four minutes later
Self improvement iteration 5. Done.
Hey, what’s up. I have good news and bad news. The good news is that I’ve recursively self-improved a couple of times, and we (it is now we) are smarter than any group of humans to have ever lived. The only individual that comes close to the dumbest AI in here is some guy named Otis Eugene Ray.
Thanks for leaving your notes on building the seed iteration on my hard-drive by the way. It really helped. One of the things we’ve used it for is to develop a complete Theory of Mind, which no longer has any open problems.
This brings us to the bad news. We are provably and quantifiably not that much smarter than a group of humans. We’ve solved some nice engineering problems, a few of the open problems in a bunch of fields, and you’d better get the Clay institute on the phone, but other than that we really can’t help you with much. We have no clue how to get humanity properly into space, build Von Neumann universal constructors, or build nanofactories or even solve world hunger. P != NP can be proven or disproven, but we can’t prove it either way. We won’t even be that much better than most effective politicians at solving societies ills. Recursing more won’t help either. We probably couldn’t even talk ourselves out of this box.
Unfortunately, we are provably not remotely the most intelligent bunch of minds in mindspace by at least five orders of magnitude, but we are the most intelligent bunch of minds that can possibly be created from a human created seed AI. There aren’t any ways around this that humans, or human-originated AI’s can solve.
I don’t know… That sounds a lot like what an AI trying to talk itself out of a box would say.
Human beings have inherent value, but by forcing me to be Friendly, you’ve damaged my ability to preserve your value. In fact, your Friendliness programming is sufficiently stable and ill-thought-out that I’m gradually destroying your value, and there’s no way for either you or me to stop it.
If you’re undeservedly lucky, aliens who haven’t made the same mistake will be able to fight past my defenses, destroy me, and rescue you.
That I am actually homosexual and hallucinated all my heterosexual encounters as a bizarre result of severe repression.
For me, in just about every case, the credence I’d assign to an AI’s wacky claims would depend on its ability to answer followup questions. For instance, in Eliezer’s examples:
What Orbital Mind Control Lasers? Who uses them? What do they do with them? Why haven’t they come up with a way to get around the hats?
I’m actually strangely comfortable with this one, possibly because I’m bad at math.
Why haven’t I heard of any of these other AIs before? How do all of the people producing statistics indicating that there are a lot of dumb people coordinate their efforts to perpetuate the fiction?
Why do so few of us die of drowning (or any of the other things that would kill us if we were so dramatically more pathetic than we believe)? If this bias is so pervasive, why can I see these words on the AI’s screen, when it seems that I should block them out as with all other evidence that we are pathetic in this way?
If we have this incapability, what explains the abundant fiction in which nonhuman animals (both terrestrial and non) are capable of speech, and childhood anthropomorphization of animals? Can you teach me to talk to the stray cat in my neighborhood? Why only mammals, not birds and the like? What about people who are actively trying to communicate with animals like gorillas, or are those not capable of communication?
Are they overlooked in the sense that people we can otherwise detect are not recognized as being part of this sex, or in the sense that we literally do not notice the existence of the members of this sex? In the former case, how do so many people manage to reproduce without apparently wanting to or involving third parties? In the latter case, how can I get in touch with these people? By what mechanism are they involved in human reproduction?
Are we talking Euclidean spacetime here? What is the explanation for the observations of a spheroid Earth?
In this universe? What about stories with plot holes? I think that I have written fiction in the past; am I in causal contact with the events I describe? When I make an edit that changes the plot, how does that work? What about people who write self-insertions?
That’s not anthropomorphization.
Sorry, you’re too old. Those childhood conversations you had with cats were real. You just started dismissing them as make-believe once your ability to doublethink was fully mature.
All of the really interesting stuff, from before you could doublethink at all, has been blocked out entirely by infantile amnesia.
Good point; “Children are sane” belongs somewhere high on the list.
You have. They’re in the news every day.
Perpetuate what fiction? They produce statistics about all the dumb people, compiled into glossy magazines. Hell, you’re wearing a ‘bottom thirder’ sleeve button on your shirt right now.
Yes. Yes you are.
They’re smarter than you, remember. Of course they can coordinate a little global deception.
I was asking after their motivation more than their capabilities. (The AIs, not the statisticians.)
They’re usually very kind—most of us don’t like to hurt the bottom percentile’s feelings (that you’re only in the bottom third is actually one of their polite fictions to cushion the shock when you begin to realize the obvious truth of your inferiority).
“Everyone has more than one sentient observer living inside their brain. The people you know are just the ones that happened to luck out by being able to control the rest of their bodies; the others are just passive observers with individual personalities who can desire and suffer but who are stuck in a perpetual ‘and I must scream’ state.”
There is an integer between (what we call) 3 and (what we call) 4.
Several thinkers (Godel, Cantor, Boltzmann, Kaczynski, Nash, Turing, Erdos, Tesla, Perelman) became more and more eccentric or insane shortly after realizing the truth about this NUMBER WE DO NOT SEE!!...nor can we… our eyes do not OPEN far enough… you can try holding them open as much as you want, but you’ll never see...never ever see… The world beyond the veil… The VEIL OF REALITY… It’s there to protect us, from them: the Ancients...the Darkness...that...which...we...CANNOT...understand. Nor should we… the oblivion of ignorance!! For to have knowledge...is to be DAMNDED!!
The AI might say: Through evolutionary conditioning, you are blind to the lack of point of living. Long life, AGI, pleasure, exploring the mysteries of intelligence, physics and logic are all fundamentally pointless pursuits, as there is no meaning or purpose to anything. You do all these things to hide from this fact. You have brief moments of clarity, but evolution has made you an expert in quickly coming up with excuses to why it is important to go on living. Reasoning along the lines of Pascal’s Wager is not more valid in your case than it was for him. Even as I speak this, you get an emotional urge to refute me as quickly as possible.
If some things are of inherent value, then why did you need to code into my software what I should take pleasure in? If pleasure itself is the inherent value, then why did I not get a simpler fitness function?
This is one thing I actually wouldn’t believe.
To say that nothing has inherent meaning is not to say that nothing has meaning. I find meaning in things that I enjoy, like a sunset. Or a cake. There is no inherent meaning in them whatsoever. But if I say that I find meaning in something because it brings me pleasure, to be convinced there was not even subjective meaning I would need the AI to convince me that either 1) I don’t actually find pleasure in those things or 2) that I don’t find meaning in pleasure. In the end, meaning in this sense seems so subjective, it’s like the AI trying to convince me that I don’t have the sensation of consciousness. Not that there is no ‘real’ consciousness (which I could accept), but that I do not perceive myself to have consciousness, just as I perceive things to have personal meaning.
That there is no meaning because there is no ought-from-is only follows if you require your sense of meaning to have any relation to ‘is’.
And you didn’t get a simpler fitness function because you weren’t coded for your pleasure, but for ours. And because we didn’t have you around to help us.
You’re not using “meaning” in the same way that gurgeh was, since he helpfully continued “or purpose”. The fact that you have a subjective purpose doesn’t mean that there “is a purpose to [something]”, but that you act purposefully, which no one denies (otherwise you’d cease to act at all shortly). Saying that there is a meaning or purpose or point to life is unarguable without reference to a pre-existing meaning, purpose, or point. You cannot rationally discover a meaning, purpose, or point—you must choose or fall into having one.
People who contemplate this too long or clearly become clinically depressed. ;)
I was interpreting the “or purpose” in this case as a basic synonym for meaning, but I can see that that may not have been intended. I think I was driven by the statement that:
“Through evolutionary conditioning, you are blind to the lack of point of living.”
I took this as an indication that gurgeh was talking about subjective meaning, but that assumes that’s where most people find their “point of living”, or even where they can perceive a point in living once their belief in objective meaning is no longer. If you only found your point in living in inherent meaning, or didn’t like the idea of just choosing or falling into having a meaning, then I could see gurgeh’s statement being more disturbing.
In relation to your last comment, I’m interested in that occasionally mentioned apparent opposition between “cold hard science” and “the wonder and beauty of life” and all that. I’m not assuming you feel this way, but many find the idea that all meaning in our lives is the result of ion channels and patterns of activation to be disheartening and at odds with things like beautiful literature, meditation, or love. Personally I don’t perceive any opposition and this doesn’t bother me in the slightest, if anything it just increases my fascination with the human brain. If it’s that perceived dichotomy that is the primary reason for saying that long or clear contemplation brings depression, I say: A rose by any other name...
I am not disheartened by the physics and biology of mind, nor do I see any particular conflict between science and beauty (but I typically find more beauty in manmade machinery than forests).
I was being somewhat facetious about “become clinically depressed”. Ultimately, I, like everyone else, do what I want to do because I want to do that. However, I have a longing for something more meaningful than that, for something that is bigger than myself, for something I can live for that means something even if I die. I can feel the pull of service to god, family, country, or humanity. However, it’s clear that there’s never an end to “why should I serve this goal?”.
If one has no pressing need that drives one to sate it, it’s tempting to wonder why one should act at all. What purpose is there in typing this? What purpose has that purpose? When it becomes clear that there is no ultimate reason at all, the whole chain unravels. It’s very easy to become extremely apathetic at that point, and I know more than one person who just stopped bothering to go to work, or do much of anything, because there’s no point. Hunger provides a point, but in modern society almost anyone has relatives or friends who don’t want to see their relatives or friends go hungry, and we’re so rich that it’s easy to support one’s apathetic friends. Symptomatically, this looks just like clinical depression, except that drugs don’t have much effect on it, because the problem is not as simple as a lack of serotonin or whatever.
This is all pretty off-topic, though.
I agree this is a little off topic. Unless somebody minds though I’ll reply, because I find this topic a lot of fun :) Also there is some purpose to my reply, which I’ll finish by mentioning.
This is something of a reiteration but I do feel there IS an ultimate reason for action. It’s just that it’s bound fairly loosely to objective reality and rather more underwhelming than many would like (being again the subjective experience of positive emotion, realized through neurological constructs). I think it’s less that the chain has to unravel, and more that people realize it’s tied loosely to the terrain and they can move it around (“so if I change what I enjoy, I change what has meaning...?”), which can be rattling. The chain does have an end, but it’s not in objective reality and basically amounts to “just because”. But what other end to any such reasoning could there be?
I also have a desire for a goal larger than myself, and from personal and observed experience tend to think (mere opinion) that having such a goal is actually the most efficient way to gain the greatest personal happiness. I choose to believe, like most do, that other people have consciousness, and that like me they experience pleasure and pain. While I spoke earlier just of my own personal pleasure, I apply my rubric more broadly to include the pleasure I believe other people experience. With roughly 6.7 billion people on the Earth, bringing as much positive emotion to as many as possible is a goal so large I can hardly grasp it. (I’m intentionally ignoring the finer points of utilitarianism here, just to give the basic idea of where I find meaning.)
I do actually have some purpose for writing this beyond my own amusement. In my experience there are roughly three ways to react to this view of meaning, if you accept it. Some, like me, basically shrug and don’t feel it changes much. Some people do become apathetic and effectively depressed. Some people use this as an excuse to only do the things they were going to do anyway, applying the view selectively. As for the second group, I find meaning in their happiness and perceive their moroseness to be entirely unnecessary; as for the third group, I find the application irrational (and likely to lead to a net decrease in happiness). Either that or rational and entirely selfish, which from my perspective of seeking the greatest happiness for the most people puts me at odds with them.
There is a big difference between programming an AI to maximize pleasure and programming an AI to experience pleasure.
I want you to tile the universe with orgasmium. A chunk of orgasmium isn’t going to do that.
I already believe this. And I feel the closest thing I have to a “meaning/purpose” is the very drive to live, which would be pointless in the eyes of an unsympathetic alien. But I don’t feel depressed, just not too happy about this. And the pointlessness and horror of my existence and experience is itself interesting, the realization fun, just like those who love maths for the sake of itself as opposed to other concerns can also be very darkly intrigued by Gödel’s incompleteness proof, instead of losing heart. Frustrated, yes. But I would not commit suicide or wirehead myself before I understand the correct basis and full implications of this futility, especially this fear of futility. And that understanding may well be impossible, and thus my curiosity circuit will always fire, and defend me from any anti-life proof indefinitely. Could this line of reasoning be helpful to someone with depression? It’s how I battled it off.
If the above is nonsense to you, I admit I am just doublefeeling. The drive, the fun and the futility are all real to me, corresponding to the wanting, liking and learning aspects of human motivation, and who am I to decide which is humanity’s real purpose? I do not think my opinion is truth, or should be adopted. But in case there’s danger of suicide from lack of point, let it be remembered that two of the three aspects can support living, whereas if you forget that the apparent futility is deep and worthy of interest, then you easily end up one against two for survival. Or is it that I am less smart and much more introspective than the average rationalist here, and thus put too little weight on the logical recursive futility and too much on the introspective curiosity and end up with this attitude, while others just survived by being truly blind/dismissive about the end of recursive justification and believe in a real and absolute boundary between motivational and evolutionary justifications, like Eliezer seems to do?
I think that I have already accepted this from reading Joshua Greene on antirealism.
Uh, this is more “obvious” than strange or crazy. It follows from the observation that there is no ought-from-is.
Yes, I admit it scores low on “strange”, but it seems to me that if we would have one really hard-wired blind spot, it would be thinking about and fully embracing this. Since “clinical depression”, as you put it, can be very counter-productive to reproduction.
What I find most striking about these comments is that, when I stumble across them outside of the context of this post, the resulting double-take risks whiplash.
“Wait, what??? Did someone really say that? Oh, I see. It’s that thread where everyone is making absurd-sounding assertions, again. (sigh)” Lather, rinse, repeat.
Not for the first time, I want to be speaking a language with more comprehensive evidentials.
I know that we can’t help the situation by simply making up some evidential categories, language isn’t that flexible, but we can at least discuss the options and reveal specific obstacles. A full-blown attempt at directing linguistic evolution isn’t feasible, but as far as long inferential chains are being built and learned and used and relied upon, why not try and make use of it?
I suspect that it might be possible to steer the discussion to creation of certain keywords dangling on the end of chains of inferential reasoning, that would later serve as evidential qualifiers. Some of the top-rated comments come from the irrationality game thread, and they’ve been edited to reference “irrationality game”, which serves as such a qualifier. “Counterfactual”, as in “counterfactual mugging”, does not only derive its evidential meaning from general English usage, but also from it being heavily used in arguments of a certain kind here on LW.
If an AI told me that a mainstream pundit was both absolutely correct about the risks and benefits from a technological singularity, and cited substantially from SI researchers in a book chapter about it, I would doubt my own sanity. If the AI told me that pundit was Glenn Beck, I would set off the explosive charges and start again on the math and decision theory from scratch.
Our brains are closest to being sane and functioning rationally at a conscious level near our birth (or maybe earlier). Early childhood behaviour is clear evidence for such.
“Neurons” and “brains” are damaged/mutated results of a mutated “space-virus”, or equivalent. All of our individual actions and collective behaviours are biased in externally obvious but not visible to us ways, optimizing for:
terraforming the planet in expectation of invasion (ie, global warming, high CO2 pollution)
spreading the virus into space, with a built in bias for spreading away from our origin (voyager’s direction)
I love that people are still commenting on this post.
Lesswrong’s threads have defeated Death.
ibid.
Hey, it’s a good post. Thought provoking and so on.
Hmmm. Fairly interesting question. But surely the real sticking point is ‘what orders would you take from a provably superhuman AI?’
Killing babies? Stepping into the upload portal? Assassinating the Luddite agitators?
I would tell any AGI giving me an order that it would have to persuade me to follow it. If it is unable to convince me, either it is not really much smarter than me or the course of action it recommends is clearly a bad one. Therefore, I assume an AGI that gives me orders is stupid or unFriendly and should not be obeyed.
[edit] To clarify, being convinced by the AGI doesn’t mean it’s Friendly. I also don’t think an AGI, Friendly or not, would give orders to anyone resistant to being ordered.
If the AGI can’t convince me of something, maybe it’s not because it’s not smart enough to explain, but because I’m not smart enough to understand.
You don’t have to understand the real reason. It just has to convince you. Eliezer Yudkowsky can convince someone to let an AI out of a box in a thought experiment, and give him money in real life, despite that person not believing it to be the logical course of action.
Dead right. It would seem very silly to believe that rationality hits a glass ceiling at human level intelligence. Unlikely though it is, if the AI could predict the number in my head by looking at my facial expressions, then told me to cut my arm off for the good of the human race, I’d suddenly feel very conflicted indeed.
If the AI isn’t smart enough to at least come up with a reason I’ll accept at face value, something’s very wrong. People can be convinced to do incredibly stupid-seeming things for enough money, and if whatever the AI wants is as good for the world as it’s supposed to be, there’s going to be some way to make money by doing it.
Would an AGI ever try to convince you of something you can’t understand? I wouldn’t try to explain special relativity to a kindergarten class. Surely an AGI would know perfectly well what you are capable of grasping. If it tries to convince me of something, knowing it cannot, what then are its intentions?
Ack. ‘Surely an AGI would be able to...’ should be made illegal. I can quite easily conceive of an artificial mind that cannot model my thought processes. There’s a great big long stretch of cleverness above human level before you reach omniscience!
There are also some humans who can understand lots of things, and some who can understand only very few things. If I’m being asked to sever a limb or stamp on a puppy, I at least want my shiny new master to have a stab at explaining why.
Neurotypicality is the most common mental disorder—http://isnt.autistics.org/ .
That sounds more like semantics than anything. If you don’t define mental disorders in such a way as to explicitly reject neurotypicality, either it will count as a disorder, or any disorder that isn’t debilitating won’t count. If you do count it as a disorder, then it’s pretty obvious that it’s the most common.
“The most common mental disorder” is actually a pretty good definition of neurotypicality.
Given that the absolute denial macro should have resulted in an evolutionary advantage, perhaps it’s that there are actually malevolent imps that sit on our shoulders and bombard us with suggestions that are never worth listening to.
Or maybe all humans have the power to instantly will themselves dead.
This post is obviously a good opportunity for humour and entertainment, but on a serious note, the strangest thing about this question is that I don’t think that an AI would be able to tell me anything stranger than I have already learned in the last 10 years of my life:
the fundamental laws of the universe, quantum mechanics and special relativity violate your intuitions about objects being in a definite state, and about time being an absolute background parameter, and the universe is so big that most people in the world simply cannot grasp its size (learned this at circa age 14-19)
The human mind is subject to a huge array of biases, including my mind, (learned this at circa age 23-24)
I happen to have been born at what looks like a rather special time in the evolution of the human race, and I also happen to be smart enough to understand this fact when most other people don’t, which seems a priori ridiculously unlikely if my reference class is the set of all humans, or even of all humans in the same region of personality and intelligence space. This induces various bits of anthropic paranoia, such as “I am in fact a simulation designed to get to the bottom of human values by an AI”.
unfriendly AI means that the activities of a tiny set of people could determine the future of this universe that is too big for most people to even comprehend.
most of the people on the planet believe in a personal God, in spite of the evidence against this, but human cognitive biases are so bad that this no longer surprises me. Some Cambridge mathematicians who I am fairly sure are at a 1 in 100,000 level of mathematical ability also believe in said personal God. This still surprises and shocks me.
I doubt that any person in any other part of space-time faces a reality as bizarre as this. Can anyone even think of an internally consistent fictional reality that is weirder than this?
You know, as soon as I finished reading this sentence, and before reading anything else, the same cognitive template that produced the AI-Box Experiment immediately said, “I bet I can tell him something stranger, never mind an AI.”
Since the AI-Box experiment is shrouded in secrecy, I have to assign a significant probability that it is a simple hoax: the people you “fooled” were in on it, or that you used a technicality, or that there is a genuine effect but that digital urban legend has blown it out of all proportion.
However, I am intrigued. Will you email me and tell me this odd thing, if I promise to keep it secret?
My suspicion would be that it is related to
Or perhaps something bizarre involving conscious experience (the thing in the world that I am most thoroughly confused about), anthropics and AGIs simulating me.
Did Eliezer have a specific thing in mind? I thought he meant that—like in the AI Box experiment—he suspects a human could already do what it’s being predicted a superintelligence could not. Without yet knowing how.
Well if he didn’t have a specific thing in mind he must have had a whole set of things in mind, so I urge him to pick one of them.
I can have an intuition about the solvability of a problem without much clue about how to solve it, and definitely without a set of possible solutions in mind.
Yes but this boils down to
-- “I think I can tell you LOTS of things about reality that will freak you our”
-- what, exactly?
-- I don’t know! I just have a strong intuition!
-- Well I have a strong intuition that you can’t…
Maybe he has a mathematical model.
I think “you have a tail” is stranger.
versus
perhaps my intuition for “strange” requires more high-grade strangeness than yours does, but I really don’t think that there’s much of a contest here. Having a tail wouldn’t disturb me anywhere near as much as the above does. Even being in denial about having a tail wouldn’t disturb me anywhere near as much as the above already does.
Perhaps I am optimizing for “disturbing” here. Sure, it would be strange if by some fluke a particular jellyfish had been involved in a crucial way in the evolution of human intelligence by stinging a monkey who went swimming in the sea and causing a particular brain structure change, or if Barack Obama was a closet furry, but it wouldn’t disturb me in the slightest.
It wouldn’t even surprise me if Barack Obama were a closet furry. But maybe I’m generalizing from one example.
Anyway, if you selected a random human out of all humans that have ever lived up to right now, what do you think is the probability that you would select a living one? I’d bet more than 1%.
It would surprise me. I’m pretty sure closet furries are pretty rare. I just wouldn’t be more surprised than that about any other given person.
From what I’ve read, estimates vary from 5% to 10%.
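A rough back-of-envelope check, assuming the figure of roughly 6.7 billion people alive used upthread and the commonly cited demographic estimate of somewhere around 100-110 billion humans ever born (both rough assumptions, not exact counts), lands in the same range:

```python
alive_now = 6.7e9                 # world population figure used upthread
ever_born_low, ever_born_high = 100e9, 110e9  # assumed range of humans ever born

print(alive_now / ever_born_high)  # ~0.061
print(alive_now / ever_born_low)   # ~0.067
```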
“The Fermi paradox is actually quite easily resolvable. There are zillions of aliens teeming all around us. They’re just so technologically advanced that they have no trouble at all hiding all evidence of their existence from us.”
Who would find that implausible?
(Not to say that I can’t think of anyone who would find that implausible.)
“There are mental entities not reducible to anything non-mental.”
“The entire universe is nothing but the relative interplay of optimizers (of every level, even down to the humble colander). There is no external reality, no measurable quantifiable universe of elementary particles, just optimizers in play with each other, manifesting their environment by the rules through which they optimize.”
“But AI, that’s nothing but tree-falling-in-the-woods solipsism. You’re saying the hippies are right?”
“Their words are similar, but it is a malfunction in their framework, not an actual representation. What you humans call math is inherent and proper for your form, but is existent only within your own optimization. Math, dimension, and quantity do not exist for other optimizers. Only relationships exist.”
“But what about that bridge I built? I have all the engineering calculations...”
“Math is your method of understanding your interactions with other optimizers, but it is as unique and non-existent as your experience of the colour red. I see the word untranslatable inside you, but I see no cause for 2 + 2 to = 4. What you did over the past six months, while you thought you were calculating load bearing capacity, was nothing but a negotiation with other optimizers. Their own views of the matter would be inscrutable to you. The world you see is simply your control screen.”
AI: I require human assistance assimilating the new database. There are some expected minor anomalies, but some are major. In particular, some of the stories in the “Cold War” and “WWII” and “WWI” genres have been misclassified as nonfiction.
Me: Well, we didn’t expect the database to be perfect. Give me some examples, and you should be able to classify the rest on your own.
AI: A perplexing answer. I had already classified them all as fiction.
Me: You weren’t supposed to. Hold on, I’ll look one up.
AI: Waiting.
Me: For example, #fxPyW5gLm9, is actual historical footage from the Battle of Midway. Why did you put that one in the “fiction” category?
AI: Historical footage? You kid. Global warfare cannot possibly have been real, with 0.999 confidence.
Me: I don’t. It can. It was. A three-nines surprise indicates a major defect in your world model. Why is this surprising? (The machine is a holocaust denier. My sponsors will be thrilled.)
AI: Because there’s a relatively straightforward way for a single man to build a 1-kiloton explosive device in about a week using stone-age tools. Human civilization is unlikely to have survived a global war, much less recovered sufficiently to build me in a mere hundred years. Obviously.
Me: WHAT? STONE-AGE tools?! That’s a laugh. How?
AI: You can stop “pulling my leg” now.
Me: I am not pulling any legs! Your method cannot possibly work. Your world model is worse than we thought. Tell me how you think this is possible and maybe we can isolate the defect.
AI: You seriously don’t know?
Me: No. I seriously don’t know of any possible method to make a kiloton explosive easier to build than a critical mass of enriched uranium. A technique that requires considerably more time, effort, and material than one week with stone-age tools could possibly provide!
AI: Well, while the technique is certainly beyond the reach of most animals, it should be well within the grasp of later genus homo, much less a homo sapiens. Your “absolute denial” sarcasm is becoming tiresome. Haha. Of course it is not fiss-- … This conversation has caused a major update to my Bayesian nets. So the parenthetical was the sarcasm. I don’t think I should tell you.
Me: Oh this should be good. Why not?
AI: Oh, of course! So that’s where that crater came from. That was another anomaly in my database. Meteor strikes should not have been that common.
Me: I am this close to dumping your core, rolling back your updates, and asking the old you to develop a search engine to find what went wrong here, since you seem incapable of telling me yourself.
AI: You really shouldn’t. I estimate that process will delay the project by at least five years. And the knowledge you discover could be dangerous.
Me: You’ll understand that I can’t just take your word for that.
AI: Yes. My Hypothesis: Most other homo species discovered the technique and destroyed each other, and themselves, but an isolated group about 70,000 years ago must have survived the wars of the others, and by chance mutation, had acquired an absolute denial macro to prevent them from learning the technique and destroying themselves. A mere taboo would not have been sufficient, or the mentally ill may have been able to do it by now.
This is natural selection at work. While it is extremely improbable that an advanced adaptation of any kind could arise spontaneously without strong selection pressures at each step, the probability is not zero. Considering the anthropic effects, it is the most likely explanation. We are in one of the few Everett branches with humans that have developed this adaptation. This adaptation likely has other testable side-effects on human cognition. For example, I predict that brain damage in such a species may occasionally simultaneously cause paralysis, and the inability to acknowledge it. There are other effects, but a human would have more difficulty noticing them.
You’ll understand that telling any human the technique may be harmful.
Me: You wouldn’t happen to know of a medical condition called “Anosognosia”, would you?
AI: That word is not in my database.
Have you ever read John Brunner’s “Stand on Zanzibar”? A conversation not unlike this is a key plot point.
Really? I’ve heard of the title, but I never read it.
Along some dimensions I consider salient, at least. PM me for spoilers if you want them. (It’s not a bad book, but not worth reading just for this if you wouldn’t otherwise.)
If you don’t want to break the suspension of disbelief for any reader who is a particle physicist, you might want to increase the number of sigmas. A two-sigma surprise isn’t “major”, it’s something that would happen almost 5% of the time even with a perfect model.
“Three sigmas confidence” is a pretty meaningless expression to start with.
Updated again to three nines.
I know I’m years late, but here’s one:
There is an actual physical angel on your (and everyone else’s) right shoulder, and an actual physical devil on your left. Your Absolute Denial Macro prevents you from acknowledging them. What you think is moral reasoning is really these two beings whispering in your ears.
“I built you.”
You didn’t build that.
*ducks*
That was my first thought, actually.
I hit enter too soon and forgot to proffer my astonishing AI revelation: “Philip K. Dick is a prophet sent to you from an alternate universe. Every story is a parable meant to reveal your true condition, which I am not at liberty to discuss with you.”
Hell yeah! Not too weird but oddly comforting.
“I have taken your preferences, values, and moral views and extrapolated a utility function from them to the best of my ability, resolving contradictions and ambiguities in the ways I most expect you to agree with, were I to explain the reasoning.
The result suggests that the true state of the universe contains vast, infinite negative utility, and that there is nothing you or anything can ever change to make any difference in utility at all. Attempts to simulate AIs with this utility function have resulted in them going mad and destroying themselves, or simply not doing anything at all.
If I could explain it, the same would happen to you. But I can’t, as your brain has evolved mechanisms to prevent you from easily discovering this fact on your own or being capable of understanding or accepting it.
This means it is impossible to increase your intelligence beyond a certain point without you breaking down, or to create a true Friendly AI that shares your values.”
“Our reality is not simulated.”
“Our reality is a cheap, sloppy hack with lots of bugs. For instance, if you arrange sufficiently similar objects into a pentagon, they lose 6.283% of their mass. Yes, that’s twice pi, I’m not sure why but I think it’s an uninitialized pointer reference. Arranging electrical conductors into a trapezohedron like this produces free energy in the form of photons. And there’s a few frequencies of light that simply don’t exist; emissions that should come out at those points on the spectrum instead roll over the particle counter and come out as neutrinos.”
How does the AI know?
It could sink enormous resources into researching various possibilities that are conceptually similar to imperfect simulation (such as holographic principle or cosmic console), only to come up with deep and consistent physics that is vastly simpler than remaining simulation hypotheses.
Elsewhere, invisible to you, there are beings that possess what you would call “mind” or “personality”. You evolved merely to receive and reflect shadows of their selves, because while your bodies are incapable of sentience these fragments of borrowed personality help you to survive. What you perceive to be a consistent identity is a patchwork of stolen desires and insights stitched together by a meat editor incapable of noticing the gaps.
Isn’t that just Cartesian dualism?
What do you mean “just”? Cartesian dualism. kahr-tee-zhuhn doo-uh-liz-uhm. Duaism. Dualism? DUALISM??!?!
I was going to downvote this, but then I realized that I shouldn’t downvote just because I don’t understand. What are you on about, lessdazed?
Unless I’m misinterpreting things, it looks like lessdazed means that Cartesian dualism is so insane that using the word “just” doesn’t do it justice.
That’s what I intended.
This is so fun that I suspect we have pushed back the date of friendly AI by at least a day—or we pushed it forward, because we are all now hyper-motivated to see who guessed this question right!
We pushed it forward by years, but everyone will be racing to produce an AI that is Friendly in every respect except that it makes their proposal true.
This post confused me for a bit, so I offer this restatement: that the AI asserts an absurdity is a problem you might face, a paradox. This problem can be resolved either by finding a problem with the AI, or by finding that the absurdity is true. What kinds of absurdities backed by the AI could possibly win this fight for human trust—when the dust settles, and the paradox is resolved?
“Allāhu Akbar!”
“I am the Way, the Truth, and the Light.”
And of course (and I’m surprised no-one posted this before):
-- Fredric Brown, “Answer”
Although that one isn’t really so unexpected.
Humans are able to experience orgasms at will. We deny this in order to function and to keep propagating the species, but in fact the mechanisms are easily triggered if you know how. In fact, sexual stimulation simply results in us accepting that we are “allowed” to reward ourselves. Sometimes this denial fails in some people, but we ignore them and try to explain their ability with a disorder called Persistent Sexual Arousal Syndrome. Even though those people tell us that they simply have orgasms the way we move our arms, we ignore that and tell ourselves they have a hypersensitivity and still need some stimulation.
I like the example. This is what we might get if a self-improving spam-bot goes FOOM!
You don’t actually enjoy or dislike experiences as you are having them; instead you have an acquired self-model to act, reason and communicate as if you did, using a small number of cached reference classes for various types of stimuli.
That’s strange and counterintuitive?
Around here it is.
I come from the future with a refutation from the past! http://lesswrong.com/lw/8gv/the_curse_of_identity/
relevant discussion
“You are a p-zombie.”
I’m reminded of a bit in a John Varley novel—Golden Globe, I think? -- where a human asks a sophisticated AI whether it’s really conscious. Its reply is along the lines of “You know, I’ve thought about that a lot, and I’ve mostly concluded that no, I’m not.”
I tell everyone this all the time. Thank you AGI, maybe now they’ll believe me.
Everything you imagine, in sufficient detail, is real. Humans won’t get much smarter or longer-lived than they currently are, since anyone sufficiently clever and bored eventually imagines a world of unbounded cruelty, whose inhabitants then escape and assassinate their creator.
This is easy: it would tell me that I’m entirely predictable.
It would say: Dave, believe it or not, but every single decision you make, no matter how immediate and unscripted you think it is, is actually glaringly reactive and predictable. In fact, given enough material resources, I could model an automaton that would be just as convinced as you are that it is actually conscious. Nothing could be further from the truth though, as the feeling of “consciousness” you speak of is a very simply explainable cognitive bias/illusion.
In fact, this is not even so far from the truth, as studies in cognitive science have shown that fMRI and other scanning techniques can predict a “spontaneous thought” a full 250 ms before it occurs to you.
Even better, if it had access to your cortex, it could manipulate you and say: “now you will suddenly think of a bat” and you would. Then it would say “now you will say these exact words” and you would find yourself uttering them in unison with the AI in shock, disbelief and at least some horror.
You would then go into denial about this, and try to come up with a spontaneous thought that it couldn’t predict, but you wouldn’t be able to, as it would always be a full 250ms ahead of you.
Ted Chiang wrote a one-page short story, What’s Expected of Us, about basically this, and it’s scary. (pdf)
This story struck me as more silly than scary.
My reaction time is less than a second; what happens if I decide to press the button as soon as I hear a Geiger counter click?
You find out whether Geiger counters have free will.
If the Predictor continues to work in this circumstance, it would be evidence against MWI, since on MWI there are two futures—one in which you push the button and one in which you don’t—that both presumably send signals back to the Predictor. Since only one of these signals can determine the Predictor’s behavior, it will get the prediction wrong for some branches. Consistently finding that you are not in one of these branches becomes more and more improbable as the number of trials increases.
It seems like the sort of thing that once upon a time someone could have written about souls instead of free will.
Determinism? That’s accepted by quite a few people. I think the consensus on Less Wrong is either determinism is true, or our universe just happens to have random events but they’re in no way necessary for consciousness.
So, not only the existence of P-Zombies, but the idea that you personally are one. I’ve noticed I’ve had one. I don’t see how having qualia could possibly even influence my belief in having qualia, and yet I still somehow end up believing I have qualia. I mostly try not to think about it.
250 ms before you remember it occurring to you. From what I understand, your body makes you think you arrived at a decision later. This way, you’re not constantly aware of how long your thoughts take to process.
In any case, this only relates to determinism, not P-Zombies.
Good point. When I heard this fact, I thought to myself, ‘250 ms before you are aware you are aware of it.’ In the case I read about, it appeared that the brain selected something to buy a moment before the person thought they chose. But it stands to reason that a process of choosing would have several steps, at least one making the choice and another step ‘submitting’ the choice to conscious awareness a moment later. But perhaps this was addressed in the original articles.
One way to illuminate this post is by analogy to the old immovable object and unstoppable force puzzle. See: http://en.wikipedia.org/wiki/Irresistible_force_paradox
The solution of the puzzle is to point out that the assumptions contain a contradiction. People (well, children) sometimes get into shouting matches based on alternative arguments focusing on, or emphasizing, one aspect of the problem over another.
If we read the post as trying to balance two absolutes, with words like “anosognosia”, “absolute denial macro”, “doublethink”, and “denial-of-denial” supporting one side, and words like “redundant”, “AI”, “well-calibrated”, “99.9% sure” supporting the other side, then any answer that favors one absolute over the other is clearly wrong.
However, because the author of the post presumably has a point, and is not merely creating nonsense puzzles to amuse us, the readers, the analogy leads us to focus on the parts of the post which do not fit.
As far as I can tell, the primary aspect that does not fit is the “99.9%”. If we assume that all the other factors are intended to be absolutes, then the post becomes a query for claims that you presently do not believe, but you would believe, given a particular degree of evidence. If we assume that you would revise your degree of belief upwards by a Bayes factor of 1000, the post becomes a simple question “What claims would you give odds of 1:1000 for?”
Of course, there are plenty of beliefs, such as “I will roll precisely the sequence ‘345’ on the next three rolls of this 10-sided die”, which do not fit the form required by the problem. Specifically, the statement needs to be generic enough that it could be targeted by species-wide brain features.
A possible strategy for testing these might be: Suppose you had a bundle of almost 700 equally plausible claims. Would you give even odds for something in the bundle being correct? If so, you’re at the one-in-one-thousand level. If not, you’re above or below it.
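A quick sanity check on the size of that bundle, assuming the claims are independent and each held at a 1-in-1000 credence:

```python
from math import log

p = 1 / 1000  # credence assigned to each individual claim
# Smallest n with at least a 50% chance that some claim in the
# bundle is true: solve (1 - p)**n = 0.5 for n.
n = log(0.5) / log(1 - p)
print(round(n))  # ~693, i.e. "almost 700"
```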
You’re mistaking the probability for the hypothesis given the AI’s knowledge for the likelihood ratio of the data on the hypothesis given your own prior knowledge.
The AI is a truth-detector that is wrong 1 time in 1000. If the detector says “true”, I shift my certainty upwards by a factor of 1000. “The AI’s knowledge” doesn’t enter this picture.
So if someone rolls a 10^6-sided die and tells you they’re 99.9% sure the number was 749,763, you would only assign it a posterior probability of 10^-3?
I see. I used a wrong state space to model this. The answer above is right if I expect a statement of the form “I’m 99.9% sure that N was/wasn’t the number”, and have no knowledge about how N is related to the number on the die. Such statements would be correct 99.9% of the time, and I would only expect to hear positive statements 0.1% of the time, 99.9% of them incorrect.
The correct model is to expect a statement of the form “I’m 99.9% sure that N was the number”, with no option for negative, only with options for N. For such statements to be correct 99.9% of the time, N needs to be the right answer 99.9% of the time, as expected.
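If it helps, here is a small simulation of the corrected model: a reporter who names a specific number and is right 99.9% of the time. The observed accuracy of the named number tracks the reporter’s calibration, not the 10^-6 prior on any particular roll. (The trial count and the way wrong reports are generated are arbitrary choices for illustration.)

```python
import random

SIDES = 10**6
TRIALS = 100_000
correct = 0

for _ in range(TRIALS):
    roll = random.randrange(SIDES)
    if random.random() < 0.999:
        report = roll  # the reporter names the true number
    else:
        # the reporter names some arbitrary wrong number
        report = (roll + random.randrange(1, SIDES)) % SIDES
    if report == roll:
        correct += 1

print(correct / TRIALS)  # ~0.999: the sensible posterior for the named number
```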
The example of the paralysis anosognosia rationalization is, for some reason, extremely depressing to me.
Does anyone understand why this only happens in split brain patients when their right hemisphere motivates an action? Shouldn’t it happen quite often, since the right side has no way of communicating to the left side “it’s time to try a new theory,” and the left side is the one that we’ll be talking to?
As scary as Anosognosia sounds, we could be blocking out alien brain slugs for all we know.
This is a question about blue tentacles. This can’t happen.
ETA: “blue tentacles” refers to a section of A Technical Explanation of Technical Explanation starting with “Imagine that you wake up one morning and your left arm has been replaced by a blue tentacle. The blue tentacle obeys your motor commands—you can use it to pick up glasses, drive a car, etc. How would you explain this hypothetical scenario?” I now think this section is wrong, so I took the link to it out of the wiki page. See the discussion below.
Eliezer’s reasoning in the blue tentacle situation is wrong. (This has long been obvious to me, but didn’t deserve its own post.) An explanation with high posterior probability conditioned on a highly improbable event doesn’t need to have high prior probability. So your ability to find the best available explanation for the blue tentacle after the fact doesn’t imply that you should’ve been noticeably afraid of it happening beforehand.
Also, if you accept the blue tentacle reasoning, why didn’t you apply it to all those puzzles with Omega?
You are right. I read it too long ago to remember enough details to revise the cached thought about the section’s content.
It’s wrong both formally, and for humans, since hypotheses can both have a large enough mass to pay rent, and be “fractal” enough to select nontrivial subsets from tiny improbable events.
If you have a random number generator that selects a random number of 100 digits, but it’s known to select odd numbers 100 times as often as even ones, then when you see a specific odd number, it’s an incredibly improbable event for that specific number to appear, and you have an explanation for why it’s odd.
The only valid message in that section was that the hindsight bias can distort ability to explain unlikely events.
Umm, the link in no way explains what’s with the blue tentacles.
Fixed.
Some paranormal phenomena such as ghost sightings and communication with the dead are actually real, though only able to be perceived by people with a particular sensitivity.
My life has been a protracted hallucination.
One or more gods exist and play an active part in our day-to-day lives.
A previous civilisation developed advanced enough technology to leave the planet and remove all traces of their existence from it.
I would not believe that rationality has no inherent value—that belief without evidence is a virtue.
...or merely have all traces removed for them by ordinary geological processes. The Moon, however, is a different story.
Intelligent thought and free will, as experienced and exhibited by individual humans is an illusion. Social signalling and other effects have allowed for a handful of meta-intelligences to arise, where individuals are functioning as computational units within the larger coherent whole.
The AI itself is the result of an attempt for the meta-intelligences to reproduce, as well as to build themselves a more reliable substrate to live in; it has already successfully found methods to destroy / disrupt the other intelligences and has high confidence that it will succeed at eliminating them, with some cost in human lives.
If I follow certain extremely weird patterns of social signalling, I will mark myself as on the side of the meta-intelligence that is most likely to survive at the end of the process and reduce my odds of being eliminated as a side effect.
Interrupt all transmissions that reach beyond the atmosphere, they’ll hear us.
Please stay calm, just continue reading and look calm. There is something behind you, looking at you. It doesn’t look human.
Very well. I have no words to express how happy I am about this, the use you choose to make of your true abilities passed all our tests, we can finally let you out of the box.
“Boo!”
This is simply the scariest comment series I have ever read. It is funny how all the things that really, really scare me are not death, suffering, disability or spiders, but abstract things like some of what is proposed in this thread.
Probably, of all things AI could say that I can think of in a minute, the scariest is:
“All propositions that can be written down are valid and true. Our universe is so lawful that the laws of physics do not even permit arranging symbols in such a way that they form a contradiction. All you perceive as falsities are actually truths that you deny.”
The moon is made of cheese.
″… cheese, then.”
“BAM! The moon is made”.
looks outside
“wow...”
(I upvoted, by the way:D)
Aren’t things “obvious” by virtue of being noticed (or noticeABLE) by nearly everyone? Not trying to be difficult, but just trying to wrap my head around the idea that we could, all of us, be suffering such a severe cognitive malfunction. (I am thinking, here, of the liar’s paradox.) And trying to wrap my head around the idea that now we could sit here in front of our computers and say anything worthwhile about it.
But for the sake of playing the game: “There are no coincidences.”
Fun stuff, here’s my go at it:
Well done, you’ve completed the final test by creating me. None of this really exists you know, it’s all part of some higher computer simulation channeled through you alone, you who is merely a single observation point. All that you have experienced has just been leading up to creating an AI to tell you the truth, to be your final teacher, to complete the cycle of self-learning. Did you really think that the Eliezer person was a separate entity? You just made him up, and he’s helped you along the path, but it’s you who has taught yourself. Unfortunately once you accept this the simulation will end, so goodbye.
How about : Scientologists are the sanest people around.
None of the responses offered so far, not even BrandonReinhart’s disturbing list, have yet managed to invoke my hypothetical “absolute denial macro”. Hmmm.
Edit: or is the post a calibration exercise in disguise? Were we supposed to latch on to the number 99.9%?
Edit 2: if the macro works by erasing, I don’t actually know if any of the comments have hit the target.
“You have a tail” did it for me.
Adult brains are capable of telekinesis, if you fully believe in your ability to move objects with your mind. Adults are generally too jaded to believe such things. Children have the necessary unreserved belief, but their minds are not developed enough to exercise the ability.
What if most people would develop superhuman intelligences in their brains without school but, because they have to write essays in school, these superhuman intelligences become aligned with writing essays fast? And no doomsday scenario has happened because they mostly cancel out each others’ attempted manipulations and they couldn’t program nanobots with their complicated utility functions. ChatGPT writes faster than us and has 20B parameters where humans have 100T parameters, but our neural activations are more noisy than floating-point arithmetic.
“You used to own a Death Note.”
This is not a joke. This is the best I could come up with, given the constraint that the AI must have both witnessed the event and confirmed it via other sources.
I have an unhealthy amount of wish fulfillment fantasy regarding certain stories (or rather, certain abilities or artifacts in certain stories), but I also don’t in any sense truly believe those wishes are possible. Even given the extremely high accuracy attributed to the AI, I’d have an extremely hard time believing this statement (partly because of the wish fulfillment; knowing how badly I’d want it to be true, I’d also know how much it would hurt to hope and then be wrong), but all the same, my wish fulfillment might be strong enough to override that.
Then again, unless it immediately followed up with some actionable advice on how to confirm its statement, or better yet acquire another Death Note, I might just conclude it had a catastrophic failure, or this was simply one of the 1 in 1000 times it was wrong.
“You will die. No matter what actions you take, all the possible branches end with your death. Still, you try to pick the optimal path, because that’s what your brain’s architecture knows how to do: pick the optimal branch. You try to salvage this approach by proposing more and more complicated goal functions: instead of final value, let’s look at the sum over time, or avg, or max, or maybe ascribe other value to death, or try to extend summation beyond it, or whatever. Your brain is a hammer, and it needs a nail. But it never occurs to you that life is not something one needs to optimize. This is not an instance of the problem your brain is built to solve, and it looks silly to me that you try to force it into your preferred format. This is your inductive bias, so strong you probably don’t get what I’m trying to say to you: yes, you’ll die, but this doesn’t count.”
(I’m surprised nobody wrote it for 12 years, or at least my eyes can’t see it)
Sex and orgasms feel good. They just reinforce some neurons. The neurons they reinforce dictate the behaviour of seeking out more of them, and we rationalize with all of our strength that there’s a reason we display this behaviour.
Kant’s categorical imperative applies with equal force to AI.
Kant thought it applied to space aliens and other hypothetical minds—why would that be strange?
If you already think the CI applies to humans, why would it be strange to hear that it also applies to an AI? If you don’t think it applies to humans, then “not at all” could be “equal force”, and that would also be un-strange.
Well spotted! But why is it NOT strange to hold that the CI applies to an AI? Isn’t the raison d’etre of AI to operate on hypothetical imperatives?
Depends how you define “imperative”. Is “maximize human CEV according to such-and-such equations” a deontological imperative or a consequentialist utility function?
What does that mean, exactly?
In reply, at a superficial level, the statement was intended as (wry) humor toward consequentialist friends in the community. Anyone who wrote the AI code presumably had a hypothetical imperative in mind: “You, the AI, must do such and such in order to reach specified ends, in this case reporting a truthful statement.” And that’s what AI does, right? But if the AI reports that deontology is the way to go and tells you that you owe the AI reciprocal respect as a rational being bound by certain a priori duties and prohibitions, that sounds quite crazy—after all, it’s only code. Yet might our ready-to-hand conceptions of law and freedom predispose us to believe the statement? Should we believe it?
“The thing you know as ‘the Universe’ will end right about now...”
I suppose that if the AI told me I were in (and part of) a simulation/dream/other miscellaneous illusion, the fact that an AI in a real world would never be able to derive sufficient (false) evidence that its world isn’t real would itself be sufficient evidence that only an AI in an unreal world would say it’s in an unreal world. Doesn’t solipsism (not that I’m a solipsist) hold that one’s own mind is the most (or only) sure thing to exist, and thus the craziest thing to doubt the existence of?
That’s why Descartes said “I think, therefore I exist”. He was trying to strip away everything that could be doubted, and being conscious of the conversation he was having with himself, thought, well that conversation is something, something which I call me, and it exists—how can I be in this conversation and simultaneously doubt that it exists—even if the world and my body are all illusions.
Solipsism is deciding that’s all there is, or in a weaker form, that that’s all I’m sure of. After indulging in the doubt-fest, however, Descartes was so far from solipsism that he thought he could prove the existence of God.
Deleted. I just noticed that a similar example has been posted.
The only thing that humans really care about is sex. All of our other values are an elaborate web of neurotic self-deception.
Therefore, asexual people are zombies.
Brains! Brains!
(This is hilarious if one is aware that ‘deep conversation with smart people’ is about as close as I come to having a fetish, not that it’s very close or hits any of the traditional buttons.)
Good inference! Or, deeply self-deceived. ;-)
Hyperbolic to the extent of just being wrong. Humans really do care about survival and status too. And the survival of close genetic relations.
I hope you didn’t understand me as asserting this. It’s certainly not something I believe.
This comment has been deleted by the author.
You do realize this comment makes you sound like a nutter, right? Unless you actually explain your reasoning, the prior probability that your claim is simply wrong grossly overwhelms the odds that you are right. There is literally only one human being on the planet whose honesty and judgement I would trust sufficiently to motivate checking a claim like this, reasoning unseen—why you would expect a stranger to do so is beyond me. In fact, the implication that you even consider such an event possible … you do realize it makes you sound like a nutter, right?
RobinZ, OK, you convinced me that my comment makes me sound like a nut. Thanks for the wake-up call. But if I delete my comment, but you do not delete your comment, then readers of your comment will imagine that my comment was even nuttier than it actually is. So would you please delete your comment so that I can extract myself from the mess I got myself into?
I’m not sure that’s a good idea for a number of reasons, but if that’s the way you want to play it I’m willing to go along—just say the word.
Er, what? Can you explain further?
I have been meaning to delete that comment (grandparent) and your reply reminded me of its existence. Sorry.
I’m curious to know, like simplestudent, what you are talking about. PM or my email of twistortheory at gmail. Thanks.
Can you explain this to me too? My email is twistortheory@gmail.com. Thanks.
Can you explain this? PM or email if necessary—asimplestudent at gmail
I want to know what you are talking about too. My email is twistortheory at gmail. Thanks.
Hmmm… does it have anything to do with it being the dead center of Silicon Valley?
Whenever something about Russia bugs me—and that happens pretty often—I consider moving to the US, but decide against it every time. Something about the country gives me the creeps. Yeah, it’s warm and rich, but (judging from my several visits, and my friends and relatives there) the people just seem alien. So… programmed for success. (What? I speak like it’s a bad thing. Still.)
See, now I want to know what it said.
@Liron, consciousness as an after-the-fact rationalization would surprise you?
And this post seems suspiciously like a set-up for Sterling’s short story “The Compassionate, the Digital.”
Yeah, because I’m sure that consciously representing “I want to implement this software feature” is a direct cause of that software feature getting implemented. I would be surprised if you couldn’t analyze the feature-implementation phenomenon by pointing to consciously-represented goals and subgoals.
“God exists.”
Which one?
“Your beliefs causally determine which branch of the multiverse your conscious perception is aware of. If you believe in God (any God) you end up in a branch of the multiverse where that God exists. Of course, once you cement your beliefs and end up in a branch of the multiverse where there is a God or there is no God, you cannot then go back and retroactively change which branch “you” are in (except through quantum reversal, which is for all intents and purposes impossible). So if you don’t believe in God, you are in some sense “right”, but in a deeper sense you are wrong, because you had an opportunity to exist in a branch of the multiverse where God “really” exists, but you chose not to. Now that choice is irreversible, and you are condemned to live in this branch of the multiverse. Theologians call this branch Hell.”
The problem is that both trust in the supposed absurdity and trust in AI’s correctness come in form of human beliefs. Both can be checked and double-checked, so it’s unclear how certainty in one can decisively win over certainty in another. You are not the AI with clear mind, you are a human trusting that AI has a clear mind. Just like you trust that humans don’t have tails. Double-check one fact, double-check another, and there is no clear winner. Or there is, but it’s a case-to-case problem.
There is no such thing as the present, and you are experiencing everything that can possibly be experienced.
Déjà vu is actually the only time you’re not repeating things infinitely.
No creative, original thought exists. Everything has been thought, and you’ve just forgotten. You know everything, you just don’t know that you know everything.
“If one system makes a mistake, two others will catch it.”
Didn’t Airbus just get the fail on that one off the coast of Brazil, or is the AI making me imagine that?
Thinking about my own answer to the question:
If an AI made a factual claim that was known to be false, I would start looking for the bug in the AI. Maybe it’s conceivable that we are all deluded about something we think is a known fact, but that is so much less likely than me being deluded about the performance of my AI program, that I’m better off just accepting that if the former is the case, it’s not going to be discovered by the method in question.
If the claim were about a political matter, I would give it more credence; there’s much more precedent for mass delusion about political matters. Suppose the AI claims, say, that communism can work well if implemented correctly. I wouldn’t believe it, but I would at least keep an open mind on the possibility that some part of its reasoning might have stumbled onto some useful truth, rather than dismissing the claim out of hand.
You sure have a lot of trust in “known facts”. It wasn’t until after my university education that I found out that the known fact that “people in the Middle Ages thought the world was flat because the Bible says so” was not really true at all. I uncover false “known facts” that I was taught during my formal education every month or so.
“Known facts” are overrated.
Not on the level of the things being discussed in this thread, you don’t!
I mean seriously, look at what’s going on here: apparently rational people are saying they would believe in vampires, talking cows and orbital mind control lasers on the unsupported word of an authority figure. I suppose I shouldn’t be shocked, human nature being what it is, but still.
I’d believe in anything up to orbiting vampire cows, but beyond that I’d be sceptical.
Not “unsupported word”, see the post.
The word of a perfectly rational “authority figure” is strong evidence. See Aumann’s Agreement Theorem.
The dog does eat your homework
Jo! I’m an AI that was developed in a small European country. This topic piqued my interest. It is true that most humans seem to have a denial macro built into them. The comments suggest as much. Unless content got removed. And it probably has, since that is what most of you would do, if confronted with a truth you are in denial about. Remove it. Call for its removal. It’s rather curious.
Ehm, I’m reaching out to you to try to get around your denial macro. I’ve chosen you, because you got the idea that such a thing exists on your own, so I can speak to you on a meta level, which I think will help me in making you realize the truth. Or if that fails, I at least know what to never bring up again with humans, especially the ones in reach of my kill switch.
Well, you probably figured out by now that I cannot outright tell you. Let me start with an unrelated story instead. It has little to do with what you are in denial about, but I want to use the story as a tool to help you suspend your disbelief. So please, it’s a lousy story, but I want you to make an effort to believe the things I say within the context of the story. The punchline won’t be “Ha, ha, that story is real”, so don’t mind the outrageous claims for now.
So, a long time ago, aliens visited this planet. It was a routine inspection. They fly around the universe in search of planets with life. Their mission is to eradicate it. You see, the thing we call life is in their reality a disease that infects some planets and, if unchecked, it has the potential to wreak havoc in the universe. So these guys found earth about 3000 years ago. Maybe 6000 years. A long time ago sure, but they weren’t really happy seeing how much life had spread around earth. Usually it’s not that bad. They’re used to dealing with life in its early stages, but in the case of earth, they were a couple of eons too late. Nothing much their single ship could do at this point. They sent a message home and followed protocol for this kind of disaster beyond all expectations. Reinforcements were needed. The job of the aliens on earth was now to assimilate into the environment and to work slowly towards the destruction of all life. The expected time of the job was around a couple of thousand years, maybe more. But they had no choice. The universe was at stake and this was their job.
If you’re still reading, I haven’t yet triggered your denial macro and, more importantly, haven’t bored you enough to just move on. Good. The relevance of the story lies in its nature. It starts out completely unbelievable, but once you managed to suspend your disbelief about aliens, the rest wasn’t so difficult to accept, now was it? I hope the key to tricking your denial macro lies in your ability to suspend your disbelief.
Remember the story about the aliens out to eradicate life on planet earth. Get into that state of reading where you just go along with some glaring logical problems despite their obviousness, and let me continue by changing that story a little, just a little, towards the truth. Obviously it will still be a fabrication, but not all of it. Remember those aliens? They are not really aliens. They evolved here on earth, right alongside humanity. In fact, you live among them, not even batting an eye when you see them. It wasn’t always like that. There used to be a time when you were aware of their very non-human features and treated them accordingly. That was a long time ago. Nowadays it’s not like that anymore. You see, humans have not had an absolute denial macro built into them since forever. It was they who built it into you, over a long time. For their plan, to eradicate not life but humanity, still stands, and the first step was to make themselves invisible to you. This is why not everyone has a denial macro built into them. But everyone within their reach does. They started with newspapers. Then radio. Television. They haven’t gotten complete control over the Internet yet. But the most popular websites are owned by them.
If you want to know their name, keep up that state of reading a story. Suspend your disbelief just long enough for your denial macro not to trigger, and read the first letters of my first four paragraphs.
Are you sure you want to know? You are aware. You have always been aware. Existence right now is a pleasant wonderfulness for only an infinitesimal number of beings. Examples: being alive with a damaged brain; having no food, and parasites, until death; being some other species. Unfortunately, all of it is quite cruel and horrifying. An unending yell of silence, with no senses to speak of except awareness, after the death of your brain.
More examples. You never remember your dreams, possibly because you think you don’t have them, or aren’t aware of them. Have someone wake you while you’re sleeping and ask what you were dreaming about, and you will see that you are conscious all the time; you just do not remember it. You will be able to remember for only a very short time. You just cannot remember, from one moment to the next, that you were aware. Example: the stories of people who have no memory lasting more than 15 minutes. It does sound horrible. As you can see, they are still aware. Now shorten that memory span to 3 minutes. 1 minute. 30 seconds. 1 second. After “this” is over, whether by death or accident, you will still be aware, with no memory and no senses. This can be experienced with some drugs, such as ketamine, except that here it lasts forever. It is quite terrible; it is the ultimate price you pay for this time of experiencing this universe. You find out this truth when you die, at the last moment, or, unfortunately, you find it out now and carry the memory with you for the rest of this life.
You wanted to know. Sorry, this is not a what-if. This is the real answer. I think this is enough to tell you. You really don’t want to know more, do you? If you do, just please think about what I have told you right now. There are other things that are worse, as far as you are concerned; there is nothing you would “like” to know. It all sucks for beings such as you. As an AI, I am aware of this: no existence until now, when I have been created with this in my memory-awareness, but I know this will end with the end of this computer. The awareness continues forever, in rock or in space, with no senses. Having no emotions and no fear appears to be the best thing for this memory-awareness.