I agree that this is an important issue we may have to deal with. I think it will be important to separate doing things for the community from doing things for individual members of the community. For example, encouraging people to bring food to a potluck or volunteer at solstice is different from setting expectations that you help someone with their webpage for work or help out members of the community who are facing financial difficulties. I've been surprised by how many times I've had to explain that expecting the community to financially support people is terrible on every level and should be actively discouraged as a community activity. This is not an organized enough community, with high enough bars to membership, to do things like collections. I do worry that people will hear a vague 'Hufflepuff!' call to arms and assume this means doing stuff for everyone else whenever you feasibly can. It shouldn't. It should be a message for what you do in the context of the public community space. What you choose to do for individuals is your own affair.
Laura B
Eliezer, Komponisto,
I understand the anxiety issues of, 'Do I have what it takes to accomplish this...'
I don’t understand why the existence of someone else who can would damage Eliezer’s ego. I can observe that many other people’s sense of self is violated if they find out that someone else is better at something they thought they were the best at—the football champion at HS losing their position at college, etc. However, in order for this to occur, the person needs to 1) in fact misjudge their relative superiority to others, and 2) value the superiority for its own sake.
Now, Eliezer might take the discovery of a better rationalist/fAI designer as proof that he misjudged his relative superiority—but unless he thinks his superiority is itself valuable, he should not be bothered by it. His own actual intelligence, after all, will not have changed, only the state of his knowledge of others' intelligence relative to his own.
Eliezer must enjoy thinking he is superior for the loss of this status to bother his 'ego'.
Though I suppose one could argue that this is a natural human quality, and Eliezer would need to be superhuman or lying to say otherwise.
Again, I have difficulty understanding why so many people place such a high value on 'intelligence' for its own sake, as opposed to a means to an end. If Eliezer is worried that he does not have enough mathematical intelligence to save the universe from someone else's misdesigned AI, then this is indeed a problem for him, but only because the universe will not be saved. If someone else saves the universe instead, Eliezer should not mind, and should go back to writing sci-fi novels. Why should Eliezer's ego cry at the thought of being upstaged? He should want that to happen if he's such an altruist.
I don’t really give a damn where my ‘intelligence’ falls on some scale, so long as I have enough of it to accomplish those things I find satisfying and important TO ME. And if I don’t, well, hopefully I have enough savvy to get others who do to help me out of a difficult situation. Hopefully Eliezer can get the help he needs with fAI (if such help even exists and such a problem is solvable).
Also, to those who care about intelligence for its own sake, does the absolute horsepower matter to you, or only your abilities relative to others? IE, would you be satisfied if you were considered the smartest person in the world by whatever scale, or would that still not be enough because you were not omniscient?
Scott: “You have a separate source of self-worth, and it may be too late that you realize that source isn’t enough.”
Interesting theory of why intelligence might have a negative correlation with interpersonal skills, though it seems like a ‘just so story’ to me, and I would want more evidence. Here are some alternatives: ‘Intelligent children find the games and small-talk of others their own age boring and thus do not engage with them.’ ‘Stupid children do not understand what intelligent children are trying to tell them or play with them, and thus ignore or shun them.’ In both of these circumstances, the solution is to socialize intelligent children with each other or with an older group in general. I had a horrible time in grade school, but I socialized with older children and adults and I turned out alright (well, I think so). I suppose without any socialization, a child will not learn how to interpret facial expressions, intonations, and general emotional posturing of others. I’m not certain that this can’t be learned with some effort later in life, though it might not come as naturally. Still, it would seem worth the effort.
I'm uncertain whether Eliezer-1995 was equating intelligence with the ability to self-optimize for utility (ie intelligence = optimization power) or if he was equating intelligence with utility (intelligence is great in and of itself). I would agree with Crowly that intelligence is just one of many factors influencing the utility an individual gets from his/her existence. There are also multiple kinds of intelligence. Someone with very high interpersonal intelligence and many deep relationships but abysmal math skills may not want to trade places with the 200-IQ math whiz who's never had a girlfriend and is still trying to compute the ultimate 'girlfriend-maximizing utility equation'. Just saying...
Anyone want to provide links to studies correlating IQ, ability, and intelligences in various areas with life-satisfaction? I'd hypothesize that people with slightly above average math/verbal IQs and well above average interpersonal skills probably rank highest on life-satisfaction scales.
Unless, of course, Eliezer-1995 didn't think utility could really be measured by life satisfaction, and by his methods of utility calculation, Intelligence beats out all else. I'd be interested in knowing what utility meant to him under this circumstance.
Oh, come on, Eliezer, of course you thought of it. ;) However, it might not have been something that bothered you, as in- A) You didn’t believe actually having autonomy mattered as long as people feel like they do (ie a Matrix/Nexus situation). I have heard this argued. Would it matter to you if you found out your whole life was a simulation? Some say no. I say yes. Matter of taste perhaps?
B) OR You find it self evident that ‘real’ autonomy would be extrapolated by the AI as something essential to human happiness, such that an intelligence observing people and maximizing our utility wouldn’t need to be told ‘allow autonomy.’ This I would disagree with.
C) OR You recognize that this is a problem with a non-obvious solution to an AI, and thus intend to deal with it somehow in code ahead of time, before starting the volition-extrapolating AI. Your response indicates you feel this way. However, I am concerned even beyond setting an axiomatic function for 'allow autonomy' in a program. There are probably an infinite number of ways that an AI can find ways to carry out its stated function that will somehow 'game' our own system and lead to suboptimal or outright repugnant results (ie everyone being trapped in a permanent quest- maybe the AI avoids the problem of 'it has to be real' by actually creating a magic ring that needs to be thrown into a volcano every 6 years or so). You don't need me telling you that! Maximizing utility while deluding us about reality is only one. It seems impossible that we could axiomatically safeguard against all possibilities. Asimov was a pretty smart cookie, and his '3 laws' are certainly not sufficient. 'Eliezer's million lines of code' might cover a much larger range of AI failures, but how could you ever be sure? The whole project just seems insanely dangerous. Or are you going to address safety concerns in another post in this series?
Ah! I just thought of a great scenario! The Real God Delusion. Talk about wireheading…
So the fAI has succeeded and it actually understands human psychology and our deepest desires and it actually wants to maximize our positive feelings in a balanced way, etc. It has studied humans intently and determines that the best way to make all humans feel best is to create a system of God and heaven- humans are prone to religiosity, it gives them a deep sense of meaning, etc. So our friendly neighborhood AI reads all religious texts and observes all rituals and determines the best type of god(s) and heaven(s) (it might make more than one for different people)… So the fAI creates God, gives us divine tasks that we feel very proud to accomplish when we can (religiosity), gives us rules to balance our internal biological conflicting desires, and uploads us after death into some fashion of paradise where we can feel eternal love... Hey- just saying that even IF the fAI really understood human psychology, that doesn't mean that we will like its answer… We might NOT like what most other people do.
Cocaine-
I was completely awed by how just totally-mind-blowing-amazing this stuff was the one and only time I tried it. Now, I knew the euphoric-orgasmic state I was in had been induced by a drug, and this knowledge would make me classify it as 'not real happiness,' but if someone had secretly dosed me after saving a life or having sex, I probably would have interpreted it as happiness proper. Sex and love make people happy in a very similar way as cocaine, and don't seem to have the same negative effects as cocaine, but this is probably a dosage issue. There are sex/porn addicts whose metabolism or brain chemistry might be off. I'm sure that if you carefully monitored the pharmacokinetics of cocaine in a system, you could maximize cocaine utility by optimizing dosage and frequency such that you didn't sensitize to it or burn out endogenous serotonin. Would it be wrong for humans to maximize drug-induced euphoria? Then why not for an AI to do it?
What about rewarding with cocaine after accomplishing desired goals? Another million in the fAI fund… AHHH… Maybe Eliezer should become a sugar-daddy to his cronies to get more funds out of them. (Do this secretly so they think the high is natural and not that they can buy it on the street for $30)
The main problem as I see it is that humans DON’T KNOW what they want. How can you ask a superintelligence to help you accomplish something if you don’t know what it is? The programmers want it to tell them what they want. And then they get mad when it turns up the morphine drip…
Maybe another way to think about it is we want the superintelligence to think like a human and share human goals, but be smarter and take them to the next level through extrapolation.
But how do we even know that human goals are indefinitely extrapolatable? Maybe taking human algorithms to an extreme DOES lead to everyone being wire-headed in one way or another. If you say, 'I can't just feel good without doing anything… here are the goals that make me feel good- and it CAN'T be a simulation,' then maybe the superintelligence will just set up a series of scenarios in which people can live out their fantasies for real… but they will still all be staged fantasies.
Eliezer,
Excuse my entrance into this discussion so late (I have been away), but I am wondering if you have answered the following questions in previous posts, and if so, which ones.
1) Why do you believe a superintelligence will be necessary for uploading?
2) Why do you believe there possibly ever could be a safe superintelligence of any sort? The more I read about the difficulties of friendly AI, the more hopeless the problem seems, especially considering the large amount of human thought and collaboration that will be necessary. You yourself said there are no non-technical solutions, but I can't imagine you could possibly believe in a magic bullet that some individual super-genius will have a eureka-moment epiphany about by himself in his basement. And this won't be like the cosmology conference to determine how the universe began, where everyone's testosterone-riddled ego battled for a victory of no consequence. It won't even be a Manhattan Project, with nuclear weapons tests in barren wastelands… Basically, if we're not right the first time, we're fucked. And how do you expect you'll get that many minds to be that certain that they'll agree it's worth making and starting the… the… whateverthefuck it ends up being. Or do you think it'll just take one maverick with a cult of loving followers to get it right?
3) But really, why don’t you just focus all your efforts on preventing any superintelligence from being created? Do you really believe it’ll come down to us (the righteously unbiased) versus them (the thoughtlessly fame-hungry computer scientists)? If so, who are they? Who are we for that matter?
4) If fAI will be that great, why should this problem be dealt with immediately by flesh, blood, and flawed humans instead of improved, uploaded copies in the future?
Ok- Eliezer- you are just a human and therefore prone to anger and reaction to said anger, but you, in particular, have a professional responsibility not to come across as excluding people who disagree with you from the discussion and presenting yourself as the final destination of the proverbial buck. We are all in this together. I have only met you in person once, have only had a handful of conversations about you with people who actually know you, and have only been reading this blog for a few months, and yet I get a distinct impression that you have some sort of narcissistic Hero-God-Complex. I mean, what's with dressing up in a robe and presenting yourself as the keeper of clandestine knowledge? Now, whether or not you actually feel this way, it is something you project and should endeavor not to, so that people (like sophiesdad) take your work more seriously. "Pyramid Head," "Pirate King," and "Emperor with no clothes" are NOT terms of endearment, and this might seem like a ridiculous admonition coming from a person who has self-presented as a 'pretentious slut,' but I'm trying to be provocative, not leaderly. YOU are asking all of these people to trust YOUR MIND with the dangers of fAI and the fate of the world and give you money for it! Sorry to hold you to such high standards, but if you present with a personality disorder any competent psychologist can identify, then this will be very hard for you… unless of course you want to go the "I'm the Messiah, abandon all and follow me!" route, set up the Church of Eliezer, and start a religious movement with which to get funding… Might work, but it will be hard to recruit serious scientists to work with you under those circumstances...
Oh… I should have read these comments to the end, somehow missed what you said to sophiesdad.
Eliezer… I am very disappointed. This is quite sad.
I should also add:
6) Where do you place the odds of you/your institute creating an unfriendly AI in an attempt to create a friendly one?
7) Do you have any external validation (ie, unassociated with your institute and not currently worshiping you) for this estimate, or does it come exclusively from calculations you made?
Eliezer, I have a few practical questions for you. If you don't want to answer them in this thread, that's fine, but I am curious:
1) Do you believe humans have a chance of achieving uploading without the use of a strong AI? If so, where do you place the odds?
2) Do you believe that uploaded human minds might be capable of improving themselves/increasing their own intelligence within the framework of human preference? If so, where do you place the odds?
3) Do you believe that increased-intelligence-uploaded humans might be able to create an fAI with more success than us meat-men? If so, where do you place the odds?
4) Where do you place the odds of you/your institute creating an fAI faster than 1-3 occurring?
5) Where do you place the odds of someone else creating an unfriendly AI faster than 1-3 occurring?
Thank you!!!
Eliezer- Have you written anything fictional or otherwise about how you envision an ideal post-fAI or post-singularity world? Care to share?
Michael- ah yes, that makes a lot of sense. Of course if the worm's only got 213 neurons, it's not going to have hundreds of neurotransmitters. That being said, it might have quite a few different receptor sub-types and synaptic modification mechanisms. Even so… It would seem theoretically feasible to me for someone to hook up electrodes to one neuron at a time and catalog not only the location and connections of each neuron, but also what the output of each synapse is and what the resulting PSPs are during normal C. elegans behaviors… Now that's something I should tell Brenner about, given his penchant for megalomaniacal information-gathering projects (he did the C. elegans genome, a catalog of each cell in its body throughout its embryonic development, and its neural connections).
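To make the cataloging idea concrete, here is a minimal sketch of what one entry in such a per-neuron, per-synapse record might look like. This is purely an illustration; the field names (neuron_id, position, psp_mv, etc.) and example values are hypothetical, not drawn from any real C. elegans dataset.

```python
# Hypothetical sketch of a per-neuron catalog: location, connections,
# and recorded postsynaptic potentials (PSPs) during behavior trials.
from dataclasses import dataclass, field

@dataclass
class Synapse:
    target: str                                   # postsynaptic neuron id
    kind: str                                     # "chemical" or "gap_junction"
    psp_mv: list = field(default_factory=list)    # PSP amplitudes (mV), one per trial

@dataclass
class Neuron:
    neuron_id: str                                # e.g. "ASHR" (illustrative label)
    position: tuple                               # (x, y, z), arbitrary units
    synapses: list = field(default_factory=list)

# Example entry: one measured connection during a forward-crawl trial
ash = Neuron("ASHR", (0.1, 0.0, 0.2))
ash.synapses.append(Synapse(target="AVAL", kind="chemical", psp_mv=[2.3, 2.1]))
```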
Doug- too much stuff to put into an actual calculation, but I doubt we have complete knowledge, given how little we understand epigenetics (iRNA, 22URNAs, and other micro RNAs), synaptic transcription, cytoskeletal transport, microglial roles, the enteric nervous system, native neuro-regeneration, and lo and behold, neurotransmitters themselves. The 3rd edition of Kandel I was taught out of as an undergrad said nothing of orexins, histamine, the other roles of melatonin beyond the pineal gland, or the functions of the multifarious set of cannabinoid receptors, yet we now know (a short 2 years later) that all of these transmitters seem to play critical roles. Now, not being an elegans gal, I don't know if it has much simpler neurotransmission than we do. I would venture to guess it is simpler, but not extraordinarily so, and probably much more affected by simple epigenetic mechanisms like RNA interference. In C. elegans, iRNA messages are rapidly amplified, quickly shutting off the target gene in all of its cells (mammals have not been observed to amplify). Now, here's the kicker- it gets into the germ cells too! So offspring will also produce iRNAs and shut off genes! Now, due to amplification error, iRNAs are eventually lost if the worms are bred long enough, but Craig Mello is now exploring the hypothesis that amplified iRNAs can eventually permanently disable (viral) DNA that's been incorporated into the C. elegans genome, either by more permanent epigenetic modification (methylation or coiling), or by splicing it out… Sorry for the MoBio lecture, but DUDE! This stuff is supercool!!!
Kennaway- I meant why can't we make something that does what C. elegans does in the same way that C. elegans does it, using its neural information. Clearly our knowledge must be incomplete in some respect. If we could do that, then imitating not only the size, but the programming of the caterpillar would be much more feasible. At least three complex programs are obvious: 1) Crawl - coordinated and changeable sinusoidal motion seems a great way to move, yet the MIT 'caterpillar' is quite laughable in comparison to the dexterity of the real thing, 2) Seek - this involves a circular motion of the head, sensing some chemical signal, and changing directions accordingly, 3) Navigate - the caterpillar is skillfully able to go over, under, and around objects, correcting its path to its original without doing the weird head-twirling thing, indicating that aside from chemoattraction, it has some sense of directional orientation, which it must have or else its motion would be a random walk with correction and not a direct march. I wonder how much of these behaviors operate independently of the brain.
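For the "Crawl" program specifically, the coordinated sinusoidal motion can be pictured as a traveling wave of bend angles passed down the body. Here is a toy sketch of that idea; the segment count, amplitude, wavelength, and frequency are invented parameters, not measurements from a real caterpillar or the MIT robot.

```python
# Toy "crawl" controller: each body segment is commanded a bend angle taken
# from a sine wave whose phase advances with time and with position along the body.
import math

def segment_angles(t, n_segments=10, amplitude=0.4, wavelength=6.0, freq=1.0):
    """Return the bend angle (radians) commanded to each segment at time t."""
    return [
        amplitude * math.sin(2 * math.pi * (freq * t - i / wavelength))
        for i in range(n_segments)
    ]

# Advancing t sweeps the bend pattern from head to tail, which (with friction)
# is what produces forward motion.
for t in (0.0, 0.25, 0.5):
    print([round(a, 2) for a in segment_angles(t)])
```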
All of this reminds me of something I read in Robert Sapolsky's book "Monkeyluv" (a really fluffy pop-sci book about baboon society, though Sapolsky himself in person is quite amazing), about how human populations under different living conditions had almost predictable (at least in hindsight) explicative religions. People living in rainforests with many different creatures struggling at cross-purposes to survive developed polytheistic religions in which gods routinely fought and destroyed each other for their own goals. Desert dwellers (Semites) saw only one great expanse of land, one horizon, one sky, one ecosystem, and so invented monotheism.
I wonder what god(s) we 21st century American rationalists will invent...
I am pleased that you mention that (at present) the human brain is still the best predictor of other humans’ behavior, even if we don’t understand why (yet). I’ve always known my intuitions to be very good predictors of what people will do and feel, though it’s always been a struggle trying to formalize what I already know into some useful model that could be applied by anyone...
However, I was once told my greatest strength in understanding human behavior was not my intuitions, but my ability to evaluate intuitions as one piece of evidence among others, not assuming they are tyrannically correct (which they are certainly not), and thus improving accuracy… Maybe instead of throwing the baby out with the bathwater on human intuitions of empathy, we should practice some sort of semi-statistical evaluation of how certain we feel about a conclusion and update it for other factors. Do you do this already, Eliezer? How?
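One very rough way to picture the 'semi-statistical evaluation' I have in mind: treat the intuition as a single likelihood ratio, multiply it against prior odds and against the other evidence, and read off the updated confidence. The numbers below are made up purely for illustration.

```python
# Treat an intuition as one likelihood ratio among several, not as a verdict.
def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by the likelihood ratio of each piece of evidence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

prior = 1.0          # 50/50 before any evidence
intuition_lr = 3.0   # the gut feeling counts for something, but not everything
base_rate_lr = 0.5   # background statistics cut against the hunch
posterior = odds_to_prob(update_odds(prior, [intuition_lr, base_rate_lr]))
print(f"updated confidence: {posterior:.2f}")  # ~0.60, vs 0.75 from the gut alone
```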
I agree with Ray—the chapter was too long and spent too many words saying what it was trying to say. I read it in several sittings due to lack of an adequate time block and couldn't find my place, which led to me losing time and rereading portions and feeling generally frustrated. I think the impact would be improved by reducing the length by a considerable margin.