Getting Nearer
Reply to: A Tale Of Two Tradeoffs
I’m not comfortable with compliments of the direct, personal sort, the “Oh, you’re such a nice person!” type stuff that nice people are able to say with a straight face. Even if it would make people like me more—even if it’s socially expected—I have trouble bringing myself to do it. So, when I say that I read Robin Hanson’s “Tale of Two Tradeoffs”, and then realized I would spend the rest of my mortal existence typing thought processes as “Near” or “Far”, I hope this statement is received as a due substitute for any gushing compliments that a normal person would give at this point.
Among other things, this clears up a major puzzle that’s been lingering in the back of my mind for a while now. Growing up as a rationalist, I was always telling myself to “Visualize!” or “Reason by simulation, not by analogy!” or “Use causal models, not similarity groups!” And those who ignored this principle seemed easy prey to blind enthusiasms, wherein one says that A is good because it is like B which is also good, and the like.
But later, I learned about the Outside View versus the Inside View, and that people asking “What rough class does this project fit into, and when did projects like this finish last time?” were much more accurate and much less optimistic than people who tried to visualize the when, where, and how of their projects. And this didn’t seem to fit very well with my injunction to “Visualize!”
So now I think I understand what this principle was actually doing—it was keeping me in Near-side mode and away from Far-side thinking. And it’s not that Near-side mode works so well in any absolute sense, but that Far-side mode is so much more pushed-on by ideology and wishful thinking, and so casual in accepting its conclusions (devoting less computing power before halting).
An example of this might be the balance between offensive and defensive nanotechnology, where I started out by—basically—just liking nanotechnology; until I got involved in a discussion about the particulars of nanowarfare, and noticed that people were postulating crazy things to make defense win. Which made me realize and say, “Look, the balance between offense and defense has been tilted toward offense ever since the invention of nuclear weapons, and military nanotech could use nuclear weapons, and I don’t see how you’re going to build a molecular barricade against that.”
Are the particulars of that discussion likely to be, well, correct? Maybe not. But so long as I wasn’t thinking of any particulars, my brain had free rein to just… import whatever affective valence the word “nanotechnology” had, and use that as a snap judgment of everything.
You can still be biased about particulars, of course. You can insist that nanotech couldn’t possibly be radiation-hardened enough to manipulate U-235, which someone tried as a response (fyi: this is extremely silly). But in my case, at least, something about thinking in particulars...
...just snapped me out of the trance, somehow.
When you’re thinking using very abstract categories—rough classes low on computing power—about things distant from you, then you’re also—if Robin’s hypothesis is correct—more subject to ideological bias. Together this implies you can cherry-pick those very loose categories to put X together with whatever “similar” Y is ideologically convenient, as in the old saw that “atheism is a religion” (and not playing tennis is a sport).
But the most frustrating part of all is the casualness of it—the way that ideologically convenient Far thinking is just thrown together out of whatever ingredients come to hand. The ten-second dismissal of cryonics, without any attempt to visualize how much information is preserved by vitrification and could be retrieved by a molecular-level scan. Cryonics just gets casually, perceptually classified as “not scientifically verified” and tossed out the window. Or “what if you wake up in Dystopia?” and tossed out the window. Far thinking is casual—that’s the most frustrating aspect about trying to argue with it.
This seems like an argument for writing fiction with lots of concrete details if you want people to take a subject seriously and think about it in a less biased way. This is not something I would have thought based on my previous view.
Maybe cryonics advocates really should focus on writing fiction stories that turn on the gory details of cryonics, or viscerally depict the regret of someone who didn’t persuade their mother to sign up. (Or offering prizes to professionals who do the same; writing fiction is hard, writing SF is harder.)
But I’m worried that, for whatever reason, reading concrete fiction is a special case that doesn’t work to get people to do Near-side thinking.
Or there are some people who are inspired to Near-side thinking by fiction, and only these can actually be helped by reading science fiction.
Maybe there are people who encounter big, concrete, detailed fictions and process them in a Near way—the sort of people who notice plot holes. And others who just “take it all in stride”, casually, so that however much concrete fictional “information” they encounter, they only process it using casual “Far” thinking. I wonder whether this difference has more to do with upbringing or genetics. Either way, it may lie at the core of the partial yet statistically outstanding correlation between careful futurists and science fiction fans.
I expect I shall be thinking about this for a while.
A lot of the author’s craft is specifically about encouraging far mode. Cf: “a long time ago, in a galaxy far far away”. This is labeled as “suspension of disbelief”. The reading public has been trained to switch into that mode given a few of the standard cues. The game is rigged against you.
However, I also see some sorts of fiction, particularly SF, triggering the opposite: geek mode. A geek is using “near” thinking, which is why he asks questions like: what makes the warp drives glow blue? Geeks thrive amid data amenable to theorizing.
(Corollary: now you see why people who are trying to create fiction get annoyed at geeks.)
What you would have to create is something new, not just fiction that appeals to geeks, but fiction with enough detail and interlaced facts that it tempts every reader to be a geek.
Or “what if you wake up in Dystopia?” and tossed out the window.
What is the counterargument to this? Maybe something like “waking up in Eutopia is as good as waking up in Dystopia is bad, and more probable”; but both of those statements would have to be substantiated.
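A rough way to formalize that counterargument (a minimal sketch; the symbols and the zero baseline are assumptions of mine, not anything stated above): take staying dead as utility $0$, let revival in Eutopia have probability $p_E$ and utility $U_E > 0$, and revival in Dystopia probability $p_D$ and utility $U_D < 0$. Then signing up beats not signing up whenever

$$p_E U_E + p_D U_D > 0,$$

and the quoted claim—that $U_E \ge \lvert U_D \rvert$ and $p_E > p_D$—is one sufficient condition for that inequality, which is why both halves would need substantiating.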
That seems to assume “Dystopia is likely” and “being in dystopia is significantly worse than death”.
If you think both of those things are true, though, then what about the odds of our society turning into a dystopia within the 30 or so years you’ll naturally be alive anyway? Should you kill yourself now to avoid the risk of being alive in a possible dystopia in 30 years? It seems fairly silly if you consider it in those terms.
I’d expect it to be more likely that you’d wake up in a world worth living in, especially considering they put all that effort into waking you up. Shouldn’t the idea that it isn’t be the one that needs to be substantiated?
If the people in the future put a lot of effort into waking you up, then that will be because they have a use for you (not necessarily vice versa).
Consider: if Omega were to suddenly turn up with a machine that, when you press the button, could instantly produce a perfectly healthy clone of William Shakespeare, with all his memories up to ten seconds before his death, then there would probably be a lot of English professors wanting to press the button. But would Shakespeare necessarily find our world today worth living in?
Eventually, probably yes; but on waking, it would seem a terribly confusing and dangerous place, where one cannot blithely stroll across the road and expect the carriages to avoid one, where some strange magic allows communication across vast distances and a single performance on the stage can be copied and broadcast across thousands of kilometres, repeated indefinitely and continually. Where every man has food of a quality that would surprise even a King, but few men have as many as a single servant. Where the language itself has shifted and changed, becoming something only barely recognisable. It would take quite some time to get used to the differences, and to become even comfortable in this vastly different world.
Thank you for the praise! I’ll post soon on fiction as near vs. far thinking.
The distinction between “near” and “far” thinking seems to have a connection with the old distinction between a puzzle and a mystery.
(Quick recap: A puzzle has a definite solution; a mystery does not)
Near thinking is outstanding for solving puzzles, but breaks down when examining a mystery. There is too much that is uncertain and unknowable about mysteries to allow close analysis to provide useful conclusions.
When examining a mystery, the less rigorous, more intuitive nature of far-thinking is more useful. Where there is no definite solution, one must speculate in a somewhat irrational way in order to form an action plan.
General George S. Patton said, “An imperfect plan implemented immediately and violently will always succeed better than a perfect plan.”
“it was keeping me in Near-side mode and away from Far-side thinking.”
So this is following Robin’s lead on implying that far-side thinking can be a permanent mode of operation. I don’t think you have any choice but to operate in near-side mode if you spend a significant amount of time thinking about any given subject. Far-side mode is the equivalent of a snap judgement. Most of the post is routine from that perspective. You identify weaknesses in the performance of snap judgements, and move on to spending more time thinking on the given subject, with naturally better results.
“What if you wake up in Dystopia?” What is the counterargument to this?
“That’s my problem.”?
People don’t apply near thinking to fiction, especially to technical issues presented in fiction, because most fiction is full of fake detail: words that sound like descriptions if you skim over them, but are actually complete gibberish. This is especially true of science fiction, where many authors insert “technobabble”, which is created by taking words at random from outside the reader’s expected vocabulary.
I should probably blog about it, but here’s my opinion about cryonics:
What are the chances that signing up for cryonics will work? I estimate they’re really, really tiny, a 1% or less kind of chance. Even if cryonics works some day, I might die in the wrong way, like in a car accident or from a cancer metastasis that makes me lose too much information; or be frozen in the wrong way; or not stay frozen for long enough due to hardware failure, economic crash, or whatever other reason; or the future might decide not to unfreeze me, or to modify me too much upon unfreezing, etc. If anything goes wrong it’s a fail, and things almost always go wrong on the first try of every new technology.
What’s the benefit if it works? It could be very high, like infinite youth in a utopian society, but I guess it’s most likely to be moderate to high, like a few extra decades of life for someone vaguely like me.
What’s the cost? I did a quick check and it seemed very high.
The most naively calculated expected utility of that doesn’t match the price; with reasonable levels of time discounting and risk aversion it’s really a horrible proposition. It’s too much of a Pascal’s Wager if you think a small chance of a very high win makes cost and risk irrelevant.
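To make “the most naively calculated expected utility” concrete, here is a minimal sketch of the kind of arithmetic being gestured at; every number and variable name below is an illustrative placeholder of my own, not an actual estimate of cryonics costs or probabilities:

```python
# Naive expected-value sketch for the cryonics decision.
# All numbers are illustrative placeholders, not real estimates.

p_success = 0.01          # overall chance everything works (the "1% or less" figure above)
value_if_success = 50.0   # payoff if it works, measured in "extra healthy years of life"
annual_discount = 0.03    # time discounting rate, since any payoff lies far in the future
years_until_payoff = 100  # assumed delay before revival could plausibly happen
cost = 2.0                # total price, expressed in the same "healthy year" units

# Discount the payoff back to the present, then weight by the success probability.
discounted_payoff = value_if_success / (1 + annual_discount) ** years_until_payoff
expected_value = p_success * discounted_payoff

print(f"discounted payoff if it works: {discounted_payoff:.2f}")
print(f"naive expected value:          {expected_value:.3f}")
print(f"worth the cost on these numbers? {expected_value > cost}")
```

On placeholder numbers like these the expected value lands far below the cost, which is the shape of the argument above; push value_if_success toward “infinite youth” or drop the discount rate and the comparison flips, which is exactly where the Pascal’s Wager worry comes in.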
SENS sounds like a much more likely way to achieve very long healthy lifespans. Cryonics depends on the success of SENS anyway; it’s just a bet that SENS will most likely come too late, set against the chance of cryonics failing.
There are alternative ways to increase your healthy lifespan with high expected return, low risk, and low cost—not smoking and avoiding obesity are the most obvious ones in modern Western societies. Unless you’ve done all of these, taking a high-cost, high-risk chance like cryonics seems not much different from going to church every Sunday hoping the afterlife really exists.
I wonder what makes you and Robin like cryonics so much. You most likely have a much higher estimate of its chances. You might also have a higher estimate of its utility if it works. Or you might have a lower estimate of its price; perhaps you have too much money and no idea what to do with it ;-)
The chances are tiny, but a tiny chance is preferable to no chance at all.
The benefit if it works is that you wake up as yourself, immortal in eutopia. Anything less I qualify as failure.
“What if you wake up in Dystopia?”
What is the counterargument to this?
I’m not sure if it’s possible to convincingly argue that, but a dystopia bad enough to not be worth living in probably wouldn’t care much about its citizens, and even less about its cryo-suspended ones, so if things get bad enough your chances of being revived are very low.
Michael G.R.: I’m not sure that what a possible future would do with cryo-suspended people and how much you’d like it on the utopia-dystopia scale are much correlated. I think that unless you’re revived very quickly after death you’ll most likely wake up in a weirdtopia.
“I think that unless you’re revived very quickly after death you’ll most likely wake up in a weirdtopia.”
Indeed, though a technologically advanced enough weirdtopia might have pretty good ways to help you adapt and feel at home (f.ex. by modifying your own self to keep up with all the post-humans, or by starting you out in a VR world that you can relate to and progressively introducing you to the current world).
In practice atheism has more in common with veganism than with “not doing/believing X.”
Vegans don’t just not eat animals or abjure the use of animal products like leather. The firebrands among them engage in an ideological critique of our civilization’s practice of exploiting animals, and they argue the moral and practical advantages of abandoning that exploitation.
Similarly, the high-profile atheists don’t just not believe in gods. They present philosophical and scientific critiques of god beliefs and argue that we would do better by abandoning these beliefs.
Well, I don’t think most atheists do that. (IIRC, someone (EY?) proposed to use untheist for someone who doesn’t believe in God and antitheist for what you say.)
ETA: IIUC there is a very large social stigma attached to atheism in America, so I guess that over there only people who are pretty sure of their position would self-identify as atheists; so probably in America the fraction of self-identified atheists who “present philosophical and scientific critiques of god beliefs” would be a lot greater. Where I am, theists and atheists might jokingly mock each other much like fans of different football teams would, but most of them don’t usually try to convert each other any more than fans of different football teams would—I suspect many people would even see that as rude in most situations.
I hear it’s generally seen as treasonous to switch football teams, rather than rooting for your hometown team for your entire life. If that’s true, religious conversions seem more socially acceptable.
Except that a lot of well-known Americans in the entertainment industry, which aims at the lowest common denominator of American society, have come out by now as nonbelievers, along with ones in other countries who have some name recognition in the U.S. Their skepticism of religion doesn’t seem to have hurt their ability to make a living in a competitive market. For some examples:
http://en.wikipedia.org/wiki/List_of_atheists_in_film,_radio,_television_and_theater
Here’s my position: I am not entirely sure that a God who created the universe does not exist*, but I am very sure that religions are not positively linked to the existence of God; if anything, the plurality of them, their internal contradictions, and their behaviour towards each other constitute weak evidence for the non-existence of God in general, and very strong evidence for the non-existence of God as defined by any of the religions.
Consequently, someone could call me “militant” when it comes to atheism—I see religions as evil—but I’m technically agnostic.
(*It is actually conditional, for me, on which has lower Kolmogorov complexity: our universe, which eventually self-organizes us, intelligent observers; or a universe which eventually self-organizes a single intelligence that fills all of it and proceeds to do stuff like “imagining” universes and otherwise being bored. Note that those two things may be equivalent; we don’t know how it will turn out, and the “singularity” may indeed end up with a singular intelligence.)
This depends on how you define Kolmogorov complexity of universes. The Turing machine that simulates all other Turing machines is not the best explanation for all nontrivial universes if you require the explanation to be empirical.
Yes, of course. I do think that there will be a formalization of KC that’s fairly language independent, though. Basically, KC is here as a form of Occam’s razor.
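For what it’s worth, the usual sense in which Kolmogorov complexity is “fairly language independent” is the invariance theorem: for any two universal machines $U$ and $V$ there is a constant $c_{U,V}$, depending only on the machines and not on the string being described, such that for every string $x$

$$\lvert K_U(x) - K_V(x) \rvert \le c_{U,V}.$$

So switching description languages shifts all complexities by at most a bounded amount; whether that constant is small relative to the difference between the two candidate universe-programs is exactly what would have to be checked here.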
Also, the comparison between those two universe rule-sets may directly yield the complexity difference, e.g. the removal of a rule leading to god-making. In any case I don’t think it’ll be possible to make any progress on this until superintelligence, which won’t need any of our insights. That is a kind of depressing thought.
Do you think it’s the correct form of Occam’s razor? In other words, does Occam’s razor properly take a unique form?