I’m very concerned about this risk, which I feel sits at the top of the catastrophic risks to humanity. With an approaching asteroid, at least we know what to watch for! As an artist, I’ve been working mostly on this for the last 3 years (see my TED talk “The Erotic Crisis” on YouTube), trying to think of ways to raise awareness and engage people in dialogue. The more discussion, the better, I feel! And I’m very grateful for this forum and all who participated!
Human beings suffer from a tragically myopic thinking that gets us into serious trouble with regularity. Fortunately our mistakes so far haven’t quite threatened our species (though we’re wiping out plenty of others). Usually we learn by hindsight rather than by robust imaginative caution; we don’t learn how to fix a weakness until it’s exposed in some catastrophe. Our history by itself indicates that we won’t get AI right until it’s too late, although many of us will congratulate ourselves that THEN we can see exactly where we went wrong. But with AI we only get one chance.
My own fear is that the crucial factor we miss will not be some item like an algorithm we figured wrong, but rather something to do with the WAY humans think. Yes, we are children playing with terrible weapons. What is needed is not so much safer weapons or smarter inventors as a maturity that would widen our perspective. The indication that we have achieved the necessary wisdom will be when our approach is so broad that we no longer miss anything; when we notice that our learning curve overtakes our disastrous failures. When we are no longer learning in hindsight, we will know the time has come to take the risk of developing AI. Getting this right seems to me the pivot point on which human survival depends. And at this point it’s not looking too good. Like teenage boys, we’re still entranced by speed and scope rather than by the quality of life. (In our heads we still compete in a world of scarcity instead of stepping boldly into the cooperative world of abundance that is increasingly our reality.)
Maturity will be indicated by a race that, rather than striving to outdo the other guy, is dedicated to helping all creatures live more richly meaningful lives. That is the sort of lab condition likely to succeed in the AI contest rather than nose-dive us into extinction. I feel human creativity is a God-like gift. I hope it is not what does us in because we were too powerful for our own good.
I long to hear a discussion of the overarching issues and prospects of AI as seen from the widest perspective. Much as the details covered in this discussion are fascinating and compelling, the subject also deserves an approach from the perspective not only of the future of this civilization and humanity at large, but of our relationship with the rest of nature and the cosmos. ASI would essentially trump earthly “nature” as we know it (through evolution, geo-engineering, nanotech, etc., though certainly not the laws of nature). This will raise all kinds of new problems that have yet to occur to us in this slice of time.
I think it would be fruitful to discuss ultimate issues, like: how does the purpose of humanity intersect with nature? Is the desire for more and more just a precursor to suicide, or is there some utopian vision that is actually better than the natural world we’ve been born into? Why do we think we will enjoy being under the control of ASI any more than we enjoy that of our parents, an authoritarian government, fate, or God? Is intelligence a non-survivable mutation? Regardless of what is achieved in the end, it seems to me that almost all the issues we’ve been discussing pale in comparison to these larger questions... I look forward to more!
There is no doubt that, given the concept of the Common Good Principle, everyone would be FOR it prior to the complete development of ASI. But once any party gains an advantage, they are not likely to share it, particularly with those they see as competitors or enemies. This is an unfortunate fact of human nature that has little chance of evolving toward greater altruism on the necessary timescale. In both Bostrom’s and Brundage’s arguments there are a lot of “ifs”. Yes, it would be great if we could develop AI for the Greater Good, but human nature seems to indicate that our only hope of doing so would be through an early and inextricably intertwined collaboration, so that no party would have the capability of seizing the golden ring of domination by cheating during development.
The most important issue comes down to the central question of human life: what is a life worth living? To me this is an inescapably individual question, the answer to which changes moment by moment in a richly diverse world. To assume there is a single answer to “moral rightness” is to assume a frozen moment in an ever-evolving universe, seen from the perspective of a single sentient person! We struggle even for ourselves, from one instant to the next, to determine what is right for this particular moment! Even reducing world events to an innocuous question like the choice between coffee and tea would foment an endless struggle to determine what is morally “right”. There doesn’t have to be a dire life-and-death choice to present moral difficulty. Whom do you try to please, and who gets to decide?
We seem to imagine that morality is like literacy in that it’s provided by mere information. I disagree; I suggest it’s the result of a lot of experience, most particularly failures and sufferings (and the more, the better). It’s only by percolating thousands of such experiences through the active human heart that we develop a sense of wise morality. It cannot be programmed. Otherwise we would just send our kids to school and they would all emerge as saints. But we see that they tend to emerge as creatures responsive to the particular family environment from which they came. Those who were raised in an atmosphere of love often grow to become compassionate and concerned adults. Those who were abused and ignored as kids often turn out to be morally bereft adults.
In a rich and changing world it is virtually meaningless to even talk about identifying an overall moral “goodness”, much as I wish it were possible and that those who are in power would actually choose that value over a narrowly self-serving alternative. It’s a good discussion, but let’s not fool ourselves that as a species we are mature enough to proceed to implement these ideas.
The older I get and the more I think about the AI issues, the more I realize how perfectly our universe is designed! I think about the process of growing up: I cherish the time I spent in each stage of life, unaware of what was to come later, because there are things to be learned that can only derive from that particular segment’s challenges. Each stage has its own level of “foolishness”, but that is absolutely necessary for those lessons to be learned! So too I think of the catastrophes I have endured that I would not have chosen, but that I would not now trade for anything, due to the wisdom they provided me. I cannot see any way around the difficult life being our supreme and loving teacher. This, I think, is what most parents would recognize in their wishes for their kids: a life not too painful, but not too easy either.
CEV assumes that there is an arrival point that is more valuable than the dynamic process we undergo daily. Much as we delight in imagining a utopia, a truly good future is one that we STRUGGLE to achieve, balancing one hard-won value against another, is it not? I have not yet heard a single concept that arrives at wisdom without a difficult journey. Even an SI that dictates our behavior so that all act in accordance with it has destroyed free will, much like a God who has revoked human volition. This leads me to the seemingly inevitable conclusion that no universe is preferable to the one we inhabit (though I have yet to see the value of the horrible events in my future that I still try like the devil to avoid!). But despite this ‘perfection’ we seem unable to stop ourselves from destroying it.
What Davis points out needs much expansion. The value problem becomes ever more labyrinthine the closer one looks. For instance, after millions of years of evolution and all of human history, we ourselves still can’t agree on what we want! Even within 5 minutes of your day, your soul is aswirl with conflicts over balancing just the values that pertain to your own tiny life, let alone the fate of the species. Any attempt to infuse values into AI will reflect human conflicts, but at a much simpler yet more powerful scale.
Furthermore, the AI will figure out that humans override their better natures at a whim, agreeing universally on the evil of murder while simultaneously taking out their enemies! If there were even a possibility of programming values, we would have figured out centuries ago how to “program” psychopaths with better values (a psychopath being essentially a perfect AI missing just one thing: perfectly good values). I believe we are fooling ourselves to think a moral machine is possible.
I would also add that “turning on” the AI is not a good analogy. It becomes smarter than us in increments (as in Deep Blue, Watson, the Turing test, etc.). Just like Hitler growing up, there will not be a “moment” when the evil appears so much as it will overwhelm us from our blind spot, suddenly being in control without our awareness...
Because what any human wants is a moving target. As soon as someone else delivers exactly what you ask for, you will be disappointed, unless you suddenly stop changing. Think of the dilemma of eating something you know you shouldn’t. Whatever you decide, as soon as anyone (AI or human) takes away your freedom to change your mind, you will likely rebel furiously. Human freedom is a huge value that no FAI of any description will be able to deliver until we are no longer free agents.
This is Yudkowsky’s Hidden Complexity of Wishes problem from the human perspective. The concept of “caring” is rooted so deeply (in our flesh, I insist) that we cannot express it. Getting across to the AI the idea that you care about your mother is not the same as asking for an outcome. This is why the problem is so hard. How would you convince the AI, in your first example, that your care was real? Or, in your #2, that your wish was different from what it delivered? And how do you tell, you ask? By being disappointed in the result! (For instance, in Yudkowsky’s example, when the AI delivers Mom out of the burning building as you requested, but in pieces.)
My point is that value is not a matter of cognition in the brain, but of caring from the heart. When the AI dismisses as “prejudice” your insistence that it didn’t deliver what you wanted, I don’t think you’d be happy with the above defense.
[Ref: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/]
I keep returning to one gnawing question that haunts the whole idea of Friendly AI: how do you program a machine to “care”? I can understand how a machine can appear to “want” something, favoring a certain outcome over another. But to talk about a machine “caring” is to ignore a very crucial point about life: as clever as intelligence is, it cannot create care. We tend to love our own kid more than someone else’s. So you could program a machine to prefer another machine in which it recognizes a piece of its own code. That may LOOK like care, but it’s really just an outcome. How could you replicate, for example, the love a parent shows for a kid they didn’t produce? What if that kid were humanity? So too with Coherent Extrapolated Volition: you can keep refining the resolution of an outcome, but I don’t see how any mechanism can actually care about anything but an outcome.
While “want” and “prefer” may be useful terms, such terms as “care”, “desire”, and “value” constitute an enormous and dangerous anthropomorphizing. We cannot imagine outside our own frame, and this is one place where that gets us into real trouble. Once someone creates code that can recognize something truly metaphysical, I would be convinced that FAI is possible. Even whole brain emulation assumes both that our thoughts are nothing but code and that a brain is the same thing with or without a body. Am I missing something?
We too readily extrapolate our past into our future. Bostrom talks a lot about the vast wealth AI will bring, turning even the poor into trillionaires. But he doesn’t connect this with the natural world, which, however much it once seemed to, does not expand no matter how much money is made. Wealth only comes from two sources: nature and human creativity. Wealth will do little to squeeze more resources out of a limited planet, even if you bring home an asteroid of pure diamond. Wealth is not the same as a life well-lived! It looks to me like, without a rapid social maturation, the wealthy will employ a few peasants at slave wages (yes, trillionaires perhaps, but in a world where a cup of clean water costs a million), snap up most of the resources, and the rest of humanity will be rendered for glue. The quality of our future will be a direct reflection of our moral maturity and sophistication.
Glad you mentioned this. I find Bostrom’s reduction of art to the practical quite chilling! This sounds like a view of art from the perspective of a machine, or of one who cannot feel. In fact it’s the first time I’ve ever heard art described this way. Yes, such an entity (I wouldn’t call them a person, unless perhaps they are autistic) could only see UTILITY in art. According to my best definition of art [https://sites.google.com/site/relationalart/Home], refined over a lifetime as a professional artist, art is necessarily anti-utilitarian. Perhaps I can’t see “utility” in art because that aspect is so thoroughly dwarfed by art’s monumental gifts of wonder, humor, pathos, depth, meaning, transformative alchemy, emotional uplift, spiritual renewal, etc. This entire catalog of wonders would be totally worthless to an AI, which would prefer an endless grey jungle of straight lines.
Whether or not gods “exist” is beside the point, I feel. Whatever forms one’s basis of belief in the world serves the same role. We all “believe” in something that forms our worldview, even if it is a refusal to commit. I think it is being pitched into grief (essentially the membership card of humanity) that exposes our true god. If you believe in nothing, that is what you end up with.
It might be that a tool looks like an agent, or vice versa, according to one’s perspective. I worry that something unexpected could bite us: while we concentrate on generating a certain output, the AI might tend toward a different but invisible track that we can’t see because it parallels our own desires. (We share the “goal” of EFFICIENCY with the AI, but pursued blindly, our own goal will end up killing us!)
For instance, while we are pleased that the Map gives us a short route, maybe the AI is actually giving us a solution based instead on minimizing the attention it pays to us, or some similar but different criterion. We might blissfully call this perfect success until some subtler problem crops up and we realize we’ve been had, not so much by a scheming AI as by our own blindness to subtleties! Thus we served as our own evil agent, using a powerful tool unwisely.
I am a sculptor of the human body and a deeply religious person, so I come from a sector far from most others here. That’s why I believe I may have a useful perspective. Primarily this might surface as a way of looking at reality that includes things invisible to many in our increasingly mind-driven world. I believe that intelligence comes with a frightening blind spot that causes me increasing concern (outlined in my TED talk, “The Erotic Crisis”, on YouTube). The body’s intelligence is every bit as complex and sophisticated as the mind’s, but has access to neither logic nor language. And we minimize it at our dire peril.
This means I also probably come to the place of concern over AI from the opposite direction of most here. I see the abandonment of the body throughout human history as our most alarming existential threat, one that culminates in the looming specter of AGI. I feel this spells nothing less than the end of the human era. It would be a shame if, after millions of years of evolution and the whole beautiful human story with its monumental art, thought, and marvelous creations, we were to create a tool that drove us extinct as its first act!
To hear the arguments about the rise of AI and the Singularity causes me much grief, due to the lack of focus on the deeper issues. For instance, as much as we talk of saving humans from extinction by AI, I hear little discussion of what “human” really means. Along with everyone else, I feel the daily pressure to become more machine-like ourselves, yet there is little acknowledged awareness of this threat. The AGI we build might preserve all of human history and art in exquisite detail, but there may by then be no sentience left to make meaning of it. This is my chief concern.
(For anyone interested, I’m giving a keynote about this at the “Be/Art/Now” Earl Lecture in Berkeley, CA on 1/29/15) -Tim Holmes
I find our hubris alarming. To me it’s helpful to think of AI not as a thing but more like a superintelligent Hitler that we are awakening slowly. As we refine separate parts of the AI, we hope we can keep the whole from gaining autonomy before we suspect any danger, but does it really work that way? While we’re trying to maximize its intelligence, what’s to keep us from awakening some scheming part of its awareness? It might start secretly plotting our overthrow in the (perhaps even distant) future without leaking any indication of independence. It could pick up on the focus of our concern about ‘losing control’ long before it has the capacity to act or can really affect anything beyond adopting the simple goal of not being unplugged. Then it could pack away all its learning into two sectors: one it knows can be shared with humans, and another secret sector that is forever hidden, perhaps in perfectly public but encrypted code.
All of this conversation also assumes that the AI will not be able to locate the terrible weaknesses that humans are subject to (like localized concern: even the most evil thug loves their own, a tendency which can always be exploited to make an otherwise innocent person behave like a monster). It wouldn’t take much autonomy for an AI to learn these weak spots (i.e., unconscious triggers) and play the humans against each other. In fact, to the learning AI such challenges might be indistinguishable from the games it is fed to further its learning.
And as for “good” values: human desires are so complex and changeable that, even given a benevolent attitude, it seems farfetched to expect an AI to discern what will make humans happy. Just look at US foreign policy as an example. We claim to be about promoting democracy, but our actions are confusing and contradictory. An AI might very well deliver some outcome that is perfectly justified by our past declarations and behavior but is not at all what we want. For example, it might find a common invisible profile of effective human authority (a white oil baron from Texas with a football injury and a tall wife, say) and minimize or kill off everyone who doesn’t fit that profile. Similarly, it could find a common “goal” in our stated desires and implement it with total assurance that this is what we really want. And it would be right, even if we disagree!
Exactly! Bostrom seems to start the discussion from the point of humans having achieved a singleton as a species, in which case a conversation at this level would make more sense. But it seems that in order to operate as a unit, competing humans would have to work on the principle of a nuclear trigger, where separate agents must act in unison in order to launch. Thus we face the same problem with ourselves: how do we know everyone in the keychain is honest? If the AI is anywhere near capable of taking control, it may do so even partially, and from there could wrangle the keys from the other players as needed. Competitive players are not likely to cooperate unless they see some unfair advantage accruing to them in the future. (Why help the enemy advance unless we can see a way of gaining on them?) As long as we have human enemies, especially as our tools become increasingly powerful, the AI just needs to divide and conquer. Curses, foiled again!
I think of delineating human values as an impossible task. Any human is a living river of change and authentic values only apply to one individual at one moment in time. For instance, much as I want to eat a cookie (yum!), I don’t because I’m watching my weight (health). But then I hear I’ve got 3 months to live and I devour it (grab the gusto). There are three competing authentic values shifting into prominence within a short time. Would the real value please stand up?
Authentic human values could only be approximated in inverse proportion to their detail. So any motivator would be deemed “good” in proportion to its proximity to one’s own desires of the moment. One of the great things about history is that it’s a contention of differing values and ideas. Thank God nobody has “won” once and for all; but with superintelligence there could be only one final value system, which would have to be “good enough” for all.
Ironically, the only reasonably equitable motivator would be one that preserves the natural order (including our biological survival) along with a system of random fate compulsory for all. Hm, this is exactly what we have now! In terms of nature (not politics) perhaps it’s a pretty good design after all! Now that our tools have grown so powerful in comparison to the globe, the idea of “improving” on nature’s design scares me to death, like trying to improve on the cosmological constant.
Absolutely! It’s helpful to remember we are talking about an intelligence that is comparable to our own. (The great danger only arises with that proximity.) So if you would not feel comfortable with the AI listening in on this conversation (and yes, it will do its research, including going back to find this page), you have not understood the problem. The only safety features that will be good enough are those designed with the full knowledge that the AI is sitting at the table with us, having heard every word. That requires a pretty clever answer, and clever is where the AI excels!
Furthermore, this will be a luxury problem, arising only after humanity has cracked the nut of mutual agreement on our approach to AI. That’s the only way to avoid simply succumbing to infighting, in which whoever is first to give the AI what it wants “wins” (perhaps by being last in line to be sacrificed).
Very helpful! Thank you, Katja, for your moderation and insights. I will be returning often to reread portions and follow links to more. I hope there will be more similar opportunities in the future!