Freedom From Choice: Should we surrender our freedom to an external agent? How much?
This article explores the following topic: “When we are presented with too many choices, we can become paralyzed and do nothing at all, or follow harmful heuristics, such as the path of least difficulty or the path of least risk. Should we surrender that choice to external agents, so that among the choices that remain it is easier to determine a “best” choice? But which agents should we choose, and how much of our freedom should we surrender to them? Would a general AI be able to play this role for all of humanity? Given the inevitability of the Singularity, can this even be avoided? What possibilities does this open? Is it a desirable outcome? We might end up becoming eternal minors. Literally, if immortality is reached.”
Sometimes life can feel like a wide-open quicksand box: you have so many choices before you that calculating the optimal choice is nigh-impossible. The more options you have, the harder it is to make a decision. To employ a visual metaphor, there is no greater freedom of movement than floating in an empty void. Yet there’s nowhere to go from there, and all choices are meaningless. Drawing a floor, a horizon, allows you to move along it… but you have sacrificed a degree of freedom.
Life choices present you with a bit of a traveling salesman’s dilemma. You may use some heuristic or another, but since heuristics by definition don’t guarantee the optimal result, you still have to choose between heuristics, and then apply the one you chose consistently. However, the more restrictions you place on your journey, the easier it is to discriminate between routes and come out of it with the impression of having made the right choice, rather than the lingering doubt that plagues you every time your path becomes dangerously steep, or crowded to a crawl, where you tell yourself: “I really shouldn’t have taken that right turn at Albuquerque. Or should I have? Either way, there’s no way I could have known. But there’s no way I can climb this road. I have ruined my life. But right now there’s nowhere to go but on. There is no hope. There is no respite. There is only car.”
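To make the metaphor concrete, here is a minimal, purely illustrative Python sketch (the city coordinates are made up) contrasting brute-force enumeration, which guarantees the optimal tour at O(n!) cost, with a nearest-neighbor heuristic, which is fast but guarantees nothing:

```python
import itertools
import math

# Made-up city coordinates, purely for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6)}

def tour_length(order):
    # Length of a round trip visiting the cities in the given order.
    points = [cities[c] for c in order] + [cities[order[0]]]
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Brute force: try every permutation. Guaranteed optimal, but O(n!).
best_tour = min(itertools.permutations(cities), key=tour_length)

# Nearest-neighbor heuristic: always visit the closest unvisited city.
# Fast, but with no guarantee of finding the best route.
route, unvisited = ["A"], set(cities) - {"A"}
while unvisited:
    nearest = min(unvisited, key=lambda c: math.dist(cities[route[-1]], cities[c]))
    route.append(nearest)
    unvisited.remove(nearest)

print("optimal:", tour_length(best_tour))
print("heuristic:", tour_length(route))
```

The heuristic’s answer is often good and sometimes even optimal, but you can never be sure without doing the full search, which is exactly the lingering doubt described above.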
Hence, to make your choice at the crossroads of life, which can look less like the intersection of two curves and more like a point connecting an infinity of hyperplanes, you might be tempted to let other people, or other… things outside of yourself, make the choices for you, the same way that, when you aren’t sure of the fastest way from point A to point B, you just ask your GPS (or Google). You could:
Find a leader you’d like to follow.
Obey a dogmatic religious or philosophical creed.
Get a romantic partner in a “property of love” type of relationship.
Get into a marriage and/or have kids, and allow the ties and responsibilities to force your life into a specific direction, with all of your free time allocated to childrearing.
Or even write yourself a World of Darkness character sheet and roll the dice every time you have to choose.
You could even get some smartphone app that lets you list options, weight each with a preference coefficient, and randomly pick one for you (see the sketch after this list).
You could use this very site’s rational horoscope. (Yesterday’s advice was pretty damn useful, too!).
Worst-case scenario, get yourself a good old-fashioned enemy, and you can center your lives on a feud against each other! (I wouldn’t recommend it, but it’s a fairly popular option.)
Or you could simply do whatever society expects you to do, like most people, whether as a follower or as a leader: don’t forget that being a leader often means showing a very generic personality and being a slave to PR.
Once you’re stuck in a career, you could devote yourself to advancing through its pre-established chains of command and promotion. One can live through an entire lifetime like this.
Or you could just wander aimlessly: take odd jobs you quit as soon as they get boring, or easy jobs where you have to do little; or even live off benefits, fall in love with your couch, and sink into the depths of the internet or some MMORPG where the choices and plots have already been written for you.
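As promised above, here is a minimal sketch of how that decision app might work (the options and weights are hypothetical, of course):

```python
import random

# Hypothetical options, each with a preference weight (higher = preferred).
options = {"apply for that job": 3, "take a gap year": 1, "start a blog": 2}

# Pick one option with probability proportional to its weight.
decision = random.choices(list(options), weights=list(options.values()), k=1)[0]
print("The app says:", decision)
```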
So, the subordinates (for example: children, citizens, employees, intellectuals) support Freedom of Choice so that they can follow the strong desires they have every now and then that go against the norm set by authority (sometimes this is an end in itself, especially in the arts). The superiors (for example: parents, politicians and civil servants, bosses, censors and editors) might want to give their wards more leeway in order to escape responsibility for making hard choices for others: they know they will be blamed if a choice leads to failure, and they don’t want to deal with accusations of being oppressive, tyrannical, or heavy-handed in their use of authority.
But such a climate can lead to a paralysis and a listlessness as bad, destructive, and unhealthy as the worst dictatorship. So where to strike a balance? Which methods are the most questionable, and which the least? Surrendering your freedom to an external agent is a dangerous gamble! And this is where the biggest difficulty arises: the general, self-modifying, transhuman AI.
The actual traveling salesman problem can be brute-forced with enough processing power. Can something similar be said of every human’s life? How are we going to deal with that? Will we allow it to turn our lives into scripted events optimized to every player’s personality? Ones with actual, life-threatening danger in them, even? (As immortals, will we become reckless with our lives, or more cowardly? Or will it simply be a matter of age?) Do we give the machine an Omniscient Morality License to make us live lives of excitement, drama, love, deception, hard, productive, rewarding work, and fun, with just the right balance of exaltation and relaxation for every individual? Will we start bitching like whiny spoiled brats if the processes aren’t exactly optimal? There’s a limit to how good a scripted event you can get in Real Life, with Real People. Or will we free-
Ohmygosh. I have just found a Wild Mass Guessing for The Matrix: humans have freely abandoned Real Life, leaving the literal Deus Ex Machina (should we call the general AI D.E.M., or are the doomsday-cult connotations just too massive?) to run the continued existence of the material support of their minds. The Matrix itself, including its blue-filtered “Real World”, is the game the machine created for those individuals who showed they would enjoy their lives best as cyberpunk anti-heroes. Everything that happened in the movie was staged for their sake, and nothing is real. There are other massively multiplayer games, each catering to a specific type of individual, if not an entire universe for each individual, some of them having recursive levels of reality (“We must go deeper”). Each of them tailor-made to entertain them the most. If the DEM decides a certain individual born into the games is not fit to be told the truth (perhaps they might try something stupid, like trying to “free” those who are aware and perfectly content), they can live their whole lives without knowing the machine put a dream in their dream so they could dream while they dream.
So, fiction aside, this seems like a fairly probable hypothetical, an attractor of futures. Should we try to avoid it? Can we? Giving up a paralyzing freedom in exchange for an exciting but pre-plotted existence? We’d be stuck as children forever; we could never grow into responsible, self-reliant adults (in fact, it would be strongly inadvisable: you’d utterly lose to those DEM-advised overgrown kids, and that’s if the DEM isn’t constantly protecting you and saving your skin against your own wishes).
And all of the people of the world were told they could remain children for ever. As in, for eternity.
True fact: I don’t drive. I never have. My visual processing and reflexes are bad enough that I don’t trust myself to do so—I haven’t been specifically told that I’m not allowed to, but I estimate that there’s a significant enough chance of me causing an accident if I do that it’s not worth it. This is a source of inconvenience in my life, but it hasn’t been too hard to adjust my lifestyle to accommodate it, so that’s what I’ve done. I am, however, hoping that those spiffy new self-driving cars that Google has been working on turn into something other than a geeky novelty sometime soon. I want one. There’s a reasonable chance that they’ll revolutionize my life.
Compared to regular driving, is a self-driving car ‘surrendering freedom’? I suspect not, or at least not much—one might not be able to slow down to get a better look at some distracting thing along the side of the road, or run a red light when there’s clearly nobody else at the intersection, and one might have less control over the route that one takes to get from point A to point B, but generally speaking, other than the skills involved, there doesn’t seem to be that much of a difference between the two.
How about a self-driving car that’s able to communicate with computers at nearby stores? One could give the car a file with one’s grocery list, and have it go to the store that has all the items in stock for the best combined price. This seems like giving up a little bit of freedom—maybe I like shopping at store A rather than store B, and don’t mind doing without bananas this week—but it seems like a good thing to me, overall. The car is still a tool, helping me achieve my preferences.
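For what it’s worth, the store-picking logic could be as simple as this sketch (hypothetical store names, inventories, and prices; a real car would presumably query live data): keep only the stores that stock everything on the list, then take the cheapest total.

```python
# Hypothetical store inventories: store name -> {item: price}.
stores = {
    "store_a": {"milk": 1.20, "bread": 2.00},  # no bananas this week
    "store_b": {"milk": 1.30, "bread": 2.10, "bananas": 0.60},
}
grocery_list = ["milk", "bread", "bananas"]

def best_store(stores, wanted):
    # Keep only stores stocking every wanted item, then pick the cheapest total.
    totals = {
        name: sum(stock[item] for item in wanted)
        for name, stock in stores.items()
        if all(item in stock for item in wanted)
    }
    return min(totals, key=totals.get) if totals else None

print(best_store(stores, grocery_list))  # -> store_b
```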
How about we take that new functionality a few steps further: The resulting technology wouldn’t exactly be a car any more, but more ubiquitous, gathering data that’s important to all my decisions and telling me whether things are good ideas or not. This system wouldn’t just say ‘go to store B; store A is out of bananas’; it would say ‘go to store B; store A’s bananas come from a company that was just discovered using child labor, and A has not yet announced that they’ve switched suppliers’. This still helps me achieve my preferences, but in a much more holistic way—and by keeping track of many more things than it’s possible for me to do on my own.
It’s a bit of a conceptual leap, but not too far to be believable, between that and a system that has the potential to notice that the best way to allow me to have the experiences that I want without inconveniencing others is by uploading me into a simulated environment with a group of compatible individuals with similar preferences—that this option not only avoids the risk of supporting child labor, but potentially avoids supporting involuntary labor altogether, by making it so that the only ‘real’ things I need are hardware and electricity, both of which can be created by machines.
Where in this, exactly, do I stop being an adult, by your standards? Has it perhaps already happened, since I’m willing to trust a smart car to do something that I ‘should’ be doing for myself?
I’m not saying “not being an adult” is a bad thing, at least not by my own standards. There are many aspects of “adulthood” I repudiate.
However, I thought the whole point of libertarianism, which appears to be endorsed by some notable people here, was to maximize individual freedom and embrace the many dangers that come with it. I’m not sure that’s such a good idea, and I’m arguing that limiting our choices through agents we can’t control allows us to feel more in control of what’s left, and more satisfied with our choices. I then think of the logical extreme of such an attitude, and wonder and cower before it, feeling its “goodness” to be as ambiguous as the libertarian’s. Hence the “CONGRATULATIONS” scene, since that was a bit of an Esoteric Happy Ending for anyone familiar with the context.
In other words, it’s what they call Peter Pan Syndrome: do you want to be a child forever? People always seem to get nostalgic about their childhoods (except some outcasts for whom childhood was a terrible time, and who are quite happy to live in a world where you can sue people for attempted bullying). Yet the constant exhortations: “Grow up.” “Stop being such a child.” “You’re a big boy/man/big girl/woman now.” “Take responsibility.” But is it really worth it, adulthood? If we could give it up, should we? Allowing an AI to govern our lives seems to amount to giving up humanity’s adulthood. It wouldn’t even be a Zeroth Law Rebellion; we’d be the ones asking for it. So, should we?
A self-driving car is a robotic chauffeur. Human chauffeurs are not our bosses but our servants. There are many other examples of devices replacing servants and other underlings. I wouldn’t offhand consider any of these to be examples of “surrendering our freedom to an external agent”. I would, instead, consider becoming a servant or underling to be an example of surrendering (part of) our freedom to an external agent, who tells us what to do.
It’s a question of who is telling whom what to do. Are you telling the device to do something, or is the device telling you? We mostly tell our devices what to do.
There are, of course, devices that tell us what to do or otherwise oversee us. For example, a cash register that calculates change in effect tells us what to do, in the trivial sense of telling us how much change to return. This is fairly trivial, and we welcome the help. More ominously, a modern cash register keeps tabs on cashiers, because it keeps a perfect record of what was sold and of exactly how much money should be in the tray. This is the sort of oversight a human manager used to provide. So in this case the machine acts as a kind of immediate supervisor.
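The change calculation itself is just a greedy breakdown, something like this sketch (assuming US denominations; a real register may well differ):

```python
# Assumed US denominations, in cents, largest first.
DENOMINATIONS = [2000, 1000, 500, 100, 25, 10, 5, 1]

def change_breakdown(paid_cents, price_cents):
    # Greedy breakdown of the change due -- the register "telling"
    # the cashier exactly what to hand back.
    remaining = paid_cents - price_cents
    breakdown = {}
    for denom in DENOMINATIONS:
        count, remaining = divmod(remaining, denom)
        if count:
            breakdown[denom] = count
    return breakdown

print(change_breakdown(2000, 1337))  # $20.00 paid for a $13.37 sale
```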
I’d point out that a lot of powerful people have advisors whose job is, more or less, telling powerful people what to do. It seems less about telling and more about being able to actually compel via some form of force: when the cash register gains the ability to auto-issue disciplinary actions, then I think it’s “telling us what to do”. When it’s simply reporting information, it’s still subordinate, just not necessarily to you personally.
This was a post in qualitative storytelling mode, with no real attention to whether the extra details were likely or unlikely. The arguments used were fairly poor, because of all the argument by analogy and the outright assumption of your own claims.
To do better, try to keep arguments grounded in quantifiable situations. Remember that extra details are burdensome and be careful of fictional evidence. Also, yeah, read up on a bit of decision theory.
Okay, guys. Thank you for your comments. The truth is, I wasn’t trying for a “thesis” article, but for a light-hearted discussion with knowledgeable, intelligent people, about a subject that made me scratch my head. I hoped that you could link me to resources that would allow me to streamline my doubts, distinguish between what is possible and what isn’t, between valid paths of speculation and paths that have been just plain Jossed by the pertinent fields, with the objective of gathering enough information to synthesize it into an actual Top Level article, once we had matured the subject a little. We don’t have actual fora here, so I thought this was the standard way of doing it. Or is there no place for unprofessional, unambitious, unserious discussion here?
It was also intended to be humorous, light reading, fast-flowing, and deliberately vague: again, fodder for light conversation. The images, while illustrative for those familiar with the context, were mostly there for the sake of memetic, referential humor in the case of the Inception pic, and for the sake of a simple, synthetic, graphical metaphor in the case of the EVA image.
I don’t think I have provided any actual fictional evidence, although I have allowed The Matrix to “prime” me toward a specific type of scenario among the myriad possible ones. I don’t think I added too many details: rather, I outlined possible paths I could figure within the limits of my imagination. For example, I abstained from discussing how the fictional worlds, the matrixes, would actually be implemented: I certainly don’t expect them to rely on a single cable with a needle longer than one’s actual head, for instance. Rather, I take the fictional concept, strip it to its minimum elements, and explore from there.
See, I am a troper, a pretty committed one, and I see and codify much of reality through the prism of fiction, if only because fiction is in many ways a reflection, a portrait, and a caricature of reality, one that comes from taking the imperfect copy of the universe we have in our heads and doing interesting things to it.
Finally, I have already read through all the Sequences up to the Metaethics one (I haven’t had the time to go any further yet, and haven’t read the P-Zombie subsequence). I’m sorry, I just can’t figure out how that part affects what we’re saying here. Also, I don’t think I should be expected to write in-depth science, for now: I am an industrial engineering student, so I can write about heat transfer or electricity distribution or water pumps or building structures, but I have my hands full with that sort of stuff right now, and I’d really have trouble fitting in the advanced math I’d need to say anything relevant on the more hardcore AI-research topics.
I still intend to learn about that topic sooner or later, since it appears the Deus Ex Machina (is it really wrong if I call the Fully General Friendly AI this? I think it really fits...) would render my future profession next to irrelevant: if I want to avoid that same fate, I don’t have much of a choice, do I?
Why the bad ratings? I’m still kind of a beginner, so advice is very welcome.
I voted it up because I feel that especially for beginners negative feedback should basically be no worse than −1, and also because I’m an Eva fan.
From the title, I had expected to read about cognitive prostheses, the extended mind hypothesis, wireheading, Dewey’s “Learning What to Value” AIXI variant, etc.
And what did I get instead… So, why is this a bad article? Because it seems to consist entirely of half-baked rhetorical questions, odd examples and language, and no hard grounding of any kind, nor even a clear thesis. Basically, it reads like the stereotypical freshman college bull session.
Just a few quick points, to help:
The main problem is that the article is all over the place. Next time, try to pick a single coherent thing that you want to say, and just say that, in as few words as possible, with as much evidence as possible (in the form of links, either to LW or to outside sources, preferably scientific). You present far too many questions in the introduction, each of which is far too vague to actually be answered or discussed in a coherent way.
The pictures add nothing. I can think of no other LW post that uses pictures like this (though I could be wrong). There are also typos, and misunderstandings of some concepts.
I would suggest reading a little bit more of the site, specifically some of the hardcore articles about decision theory, etc. Then you will have a better idea about how to write a good post. I hope that was helpful!