This description/advice is awesome, and I mostly agree, but I think it presents an overly uniform impression of what love is like. I’ve been in Mature Adult Love multiple times, and the feelings involved have been different every time. I wouldn’t necessarily reject your division into obsession, closeness, and sexual desire, but I think maybe there are different kinds (or components) of closeness, such as affection, understanding, appreciation, loyalty, etc., and any friendship or relationship will have these in differing degrees. For instance, for a lot of people, family love seems to involve a lot of loyalty but not as much understanding.
Hmm, I can see arguments for and against calling computationalism a form of dualism. I don’t think it matters much, so I’ll accept your claim that it’s not.
As for embodied cognition, most of what I know about it comes from reading Lawrence Shapiro’s book Embodied Cognition. I was much less impressed with the field after reading that book, but I do think the general idea is important, that it’s a mistake to think of the mind and body as separate things, and that in order to study cognition we have to take the body into consideration.
I agree that embodiment could be simulated. But I don’t like to make assumptions about how subjective experience works, and for all I know, it arises from some substrates of cognition but not others. Since I think of my subjective experience as an essential part of my self, this seems important.
Ah. I’m not sure I agree with you on the nature of the self. What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)
I mean, I don’t necessarily disagree with this kind of dualism; it seems possible, even given what I know about embodied cognition. I’m just not sure how it could be tested scientifically.
Hmm, I’ll have to look into the predictive power thing, and the tradeoff between predictive power and efficiency. I figured viewing society as an organism would drastically improve computational efficiency over trying to reason about and then aggregate individual people’s preferences, so that any drop in predictive power might be worth it. But I’m not sure I’ve seen evidence in either direction; I just assumed it based on analogy and priors.
As for why you should care, I don’t think you should, necessarily, if you don’t already. But I think for a lot of people, serving some kind of emergent structure or higher ideal is an important source of existential fulfillment.
What does it mean to benefit a person, apart from benefits to the individual cells in that person’s body? I don’t think it’s unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.
Thanks for this post. I basically agree with you, and it’s very nice to see this here, given how one-sided LW’s discussion on death usually is.
I agree with you that the death of individual humans is important for the societal superorganism because it keeps us from stagnating. But even if that weren’t true, I would still strongly believe in the value of accepting death, for pretty much exactly the reasons you mentioned. Like you, I also suspect that modern society’s sheltering, both of children and adults, is leading to our obsession with preventing death and our excessive risk aversion, and I think that in order to lead emotionally healthy lives, we need to accept risk of failure, pain, and even death. Based on experience and things I’ve read, I suspect we all have much deeper reserves of strength than we realize, but this strength is only called on in truly dire circumstances, because it’s so costly to use. If we never put ourselves into these dire circumstances, we will never accomplish the extraordinary. And if we’re afraid to put ourselves in such circumstances, then our personal growth will be stunted by fear.
I say this as someone who was raised by very risk-averse parents who always focused on the worst-case scenario. (For instance, when I was about seven, I was scratching a mosquito bite, and my dad said to me “you shouldn’t scratch mosquito bites, because one time your grandfather was cleaning the drain, and he scratched a mosquito bite with drain gunk on his hand, and his arm swelled up black and he had to go to the hospital”.) As a kid I was terrified of doing much of anything. It wasn’t until my late teens and early twenties that I started to learn how to accept risk, uncertainty, and even death. Learning to accept these things gave me huge emotional benefits—I felt light and free, like a weight had been lifted. Once I had accepted risk, I spent a summer traveling, and went on adventures that everyone told me were terrible ideas, but I returned from them intact and now look back on that summer as the best time in my entire life. I really like the saying that “the coward dies a thousand deaths, the brave man only one”. It fits very well with my own experience.
I’m hesitant to say that death is objectively “good” or “bad”. (I might even classify the question as meaningless.) It seems like, as technology improves, we will inevitably use it to forestall or even prevent death. Should this happen, I think it will be very important to accept the lack of death, just as now it’s important to accept death. And I’m not really opposed to all forms of anti-deathism; I occasionally hear people say things like “Life is so much fun, why wouldn’t I want to keep doing it forever?”. That doesn’t seem especially problematic to me, because it’s not driven by fear. What I object to is this idea that death is the worst thing ever, and obviously anyone who is rational would put a lot of money and effort into preventing it, so anyone who doesn’t is just failing to follow their beliefs and desires to the logical conclusion. So it’s really nice to see this post here, amidst the usual anti-deathism. Thanks again for writing it.
What would it mean to examine this issue dispassionately? From a utilitarian perspective, it seems like choosing between deathism and anti-deathism is a matter of computing the utility of each, and then choosing the one with the higher utility. I assume that a substantial portion of the negative utility surrounding death comes from the pain it causes to close family members and friends. Without having experienced such a thing oneself, it seems difficult to estimate exactly how much negative utility death brings.
(That said, I also strongly suspect that cultural views on death play a big role in determining how much negative utility there will be.)
I wish I could upvote this comment more than once. This is something I’ve struggled with a lot over the past few months: I know that my opinions/decisions/feelings are probably influenced by these physiological/psychological things more than by my beliefs/worldview/rational arguments, and the best way to gain mental stability would be to do more yoga (since in my experience, this always works). Yet I’ve had trouble shaking my attachment to philosophical justifications. There’s something rather terrifying about methods (yoga, narrative, etc.) that work on the subconscious, because it implies a frightening lack of control over our own lives (at least if one equates the self with the conscious mind). Particularly frightening to me has been the idea that doing yoga or meditation might change my goals, especially since the teachers of these techniques always seem to wrap the techniques in some worldview or other that I may dislike. Therefore, if I really believe in my goals, it is in my interest not to do these things, even though my current state of (lack of) mental health also prevents me from accomplishing my goals. But I do want to be mentally healthy, so I spent months trying to come up with some philosophical justification for doing yoga that I could defend to myself in terms of my current belief system.
Earlier this week, though, some switch flipped in me and I realized that, in my current state of mental health, I was definitely not living my life in accordance with my values (thanks, travel, for shaking me out of fixed thought-patterns!). I did some yoga and immediately felt better. Now I think I’m over this obsession with philosophical justifications, and I’m very happy about it, but damn, it took a long time to get there. The silly thing is that I’ve been through this internal debate a million times (“seek out philosophical justifications, which probably don’t exist in a form that will satisfy my extreme skepticism and ability to deconstruct everything” vs. “trust intuition because it is the only viable option in the absence of philosophical justifications; also, do more yoga”). Someday I’ll just settle on the latter and stop getting in arguments with myself.
Also, sorry if this comment is completely off-topic; it’s just something I’ve been thinking about a lot.
I was wondering this too. I haven’t looked at this A_p distribution yet (nor have I read all the comments here), but having distributions over distributions is, like, the core of Bayesian methods in machine learning. You don’t just keep a single estimate of the probability; you keep a distribution over possible probabilities, exactly like David is saying. I don’t even know how updating your probability distribution in light of new evidence (aka a “Bayesian update”) would work without this.
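For concreteness, here’s a minimal sketch of the kind of thing I mean, using a Beta distribution over a coin’s heads-probability (the specific prior and data are just made up for illustration):

```python
from scipy.stats import beta

# Keep a distribution over the probability p itself, not a point estimate.
# Beta(1, 1) is uniform on [0, 1]: complete uncertainty about p.
a, b = 1.0, 1.0

# Observe some coin flips (1 = heads, 0 = tails).
observations = [1, 1, 0, 1, 1]

# Bayesian update: a Beta prior with a Bernoulli likelihood gives a Beta
# posterior, with the heads/tails counts added to the parameters (conjugacy).
heads = sum(observations)
tails = len(observations) - heads
posterior = beta(a + heads, b + tails)

print("posterior mean of p:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```

The point is that the posterior is still a whole distribution over p, so you can read off not just a best guess but how uncertain you are about the probability itself.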
Am I missing something about David’s post? I did go through it rather quickly.
Forgive me, but the premise of this post seems unbelievably arrogant. You are interested in communicating with “intellectual elites”; these people have their own communities and channels of communication. Instead of asking what those channels are and how you can become part of them, you instead ask how you can lure those people away from their communities, so that they’ll devote their limited free time to posting on LW instead.
I’m in academia (not an “intellectual elite”, just a lowly grad student), and I’ve often felt torn between my allegiances to the academic community vs. the LessWrong community. In part, the conflict exists because LessWrong frames itself as an alternative to academia, as better than academia, a place where the true intellectuals can congregate, free from the constraints of the system of academic credibility, which unfairly penalizes autodidacts, or something. Academia has its problems, of course, and I agree with some of the LessWrong criticisms of it. But academia does have higher standards of rigor: peer review, actual empirical investigation of phenomena instead of armchair speculation based on the contents of pop science books, and so on. Real scientific investigation is hard work; the average LW commenter seems too plagued by akrasia to put in the long hours that science requires.
So an academic might look at LW and see a bunch of amateurs and slackers; he might view autodidacts as people who demand that things always be their way and refuse to cooperate productively with a larger system. (Such cooperation is necessary because the scientific problems we face are too vast for any individual to make progress on his own; collaboration is essential.) I’m not making all this up; I once heard a professor say that autodidacts often make poor grad students because they have no discipline, flitting back and forth between whatever topics catch their eye, and lacking the ability to focus on a coherent program of study.
Anyway, I just figured I’d point out what this post looks like from within academia. LessWrong has repeatedly rejected academia; now, finally, you are saying something that could be interpreted as “actually, some academics might be worth talking to”. But instead of conceding that academia might have some advantages over LW and thus trying to communicate with academics within their system, you proclaim LessWrong to be “the highest-quality relatively-general-interest forum on the web” (which, to me, is obviously false) and then you ask actual accomplished intellectuals to spend their time conversing with a bunch of intelligent-but-undereducated twenty-somethings who nonetheless think they know everything. I say that if members of LW want to communicate with intellectual elites, they should go to a university and do it there. (Though I’m not sure what to recommend for people who have graduated from college already; I’m going into academia so that I don’t have to leave the intellectually stimulating university environment.)
I realize that this comment is awfully arrogant, especially for something that’s accusing you of arrogance. And I realize that you are trying to engage with the academic system by publishing papers in real academic journals. I just think it’s unreasonable to assume that “intellectual elites” (both inside and outside of academia) would care to spend time on LW, or that it would be good for those people if they did.
Who are some of the best writers in the history of civilization?
Different writers have such different styles that I’m not sure it’s possible to measure them all on a simple linear scale from “bad writing” to “good writing”. (Or rather, of course it’s possible, but I think it reduces the dimensionality so much that the answer is no longer useful.)
If I were to construct such a linear scale, I might do so by asking “How well does this writer’s style serve his goals?” Or maybe “How well does this writer’s style match his content?” For instance, many blogs seem to be optimized for quick readability, since most people are unwilling to devote too much time to reading a blog post. On the other hand, some academic writing seems optimized for a certain kind of eloquence and formality.
I guess what I’m trying to say is that you’re asking the wrong question. Don’t ask “What makes a piece of writing good?”. Ask “How does the structure of this piece of writing lead to the effect it has on the reader?”. The closer you come to answering this question, the easier it will be to design a structure that serves your particular writing needs.
Do people here read Ribbonfarm?
The tradeoff between efficiency and accuracy. It’s essential for computational modeling, but it also comes up constantly in my daily life. It keeps me from being so much of a perfectionist that I never finish anything, for instance.
I cannot agree with this more strongly. I was burnt out for a year, and I’ve only just begun to recover over the last month or two. But one thing that sped my recovery greatly over the last few weeks was that I stopped worrying about burnout. Every time I sat down to work, I would gauge my wanting-to-work-ness. When I inevitably found it lacking, I would go off on a thought spiral asking “why don’t I like working? how can I make myself like working?”, which of course distracted me from doing the actual work. Also, the constant worry about my burnout surely contributed to depression, which then fed back into burnout…
It took me a really long time to get rid of these thoughts, not because I have trouble purging unwanted thoughts (this is something I have extensive practice in), but because they didn’t seem unwanted. They seemed quite important! Burnout was the biggest problem in my life, so it seemed only natural that I should think about it all the time. I would think to myself, “I have to fix burnout! I must constantly try to optimize everything related to this! Maybe if I rearrange the desks in my office I won’t be burnt out anymore.” I thought, for a long time, that this was “optimization” and “problem solving”. It took a depressingly long time for me to identify it for what it really was, which is just plain old stress and worry.
Once I stopped worrying about my inability to work, it became a lot easier to work.
Of course, there’s some danger here: I got rid of the worry-thoughts only after I had already started to recover from burnout. By then they weren’t necessary, and my desire to work could just take over and make me work. But if you really have no desire to work, then erasing such thoughts could just lead to utter blissful unproductivity.
There are also things which are bad to learn for epistemic rationality reasons.
Sampling bias is an obvious case of this. Suppose you want to learn about the demographics of city X. Maybe half of the Xians have black hair, and the other half have blue hair. If you are introduced to 5 blue-haired Xians but no black-haired Xians, you might infer that all or most Xians have blue hair. That is a pretty obvious case of sampling bias. I guess what I’m trying to get at is that learning a few true facts (Xian1 has blue hair, Xian2 has blue hair, … , Xian5 has blue hair) may lead you to make incorrect inferences later on (all Xians have blue hair).
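To make that concrete, here’s a toy sketch with made-up numbers (the 50/50 split and the “you only ever get introduced to blue-haired people” process are assumptions for illustration):

```python
import random

random.seed(0)

# True population of city X: half black-haired, half blue-haired.
population = ["black"] * 5000 + ["blue"] * 5000

# Unbiased sampling recovers roughly the true proportion.
fair_sample = random.sample(population, 5)

# Biased sampling: you only ever meet blue-haired Xians.
biased_sample = random.sample([x for x in population if x == "blue"], 5)

# Each observed fact is true, but the naive generalization from the
# biased sample ("most/all Xians have blue hair") is badly wrong.
print("fair sample:  ", fair_sample)
print("biased sample:", biased_sample)
print("true fraction blue:", population.count("blue") / len(population))
```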
The example you give, of debating being harmful to epistemic rationality, seems comparable to sampling bias, because you only hear good arguments for one side of the debate. So you learn a bunch of correct facts supporting position X, but no facts supporting position Y. Thus, your knowledge has increased (seemingly helpful to epistemic rationality), but leads to incorrect inferences (actually bad for epistemic rationality).
There’s also the question of what to learn. You could spend all day reading celebrity magazines, and this would give you an increase in knowledge, but reading a math textbook would probably give you a bigger increase in knowledge (not to mention an increase in skills). (Two length-n sets of facts can, of course, increase your knowledge by different amounts. Information theory!)
This kind of AI might not cause the same kinds of existential risk typically described on this website, but I certainly wouldn’t call it “safe”. These technologies have a huge potential to reshape our lives. In particular, they can have a huge influence on our perceptions.
All of our search results come filtered through Google’s algorithm, which, when tailored to the individual user, creates a filter bubble. This changes our perception of what’s on the web, and we’re scarcely even conscious that the filter bubble exists. If you don’t know about sampling bias, how can you correct for it?
With the advent of Google Glass, there is a potential for this kind of filter bubble to pervade our entire visual experience. Instead of physical advertisements painted on billboards, we’ll get customized advertisements superimposed on our surroundings. The thought of Google adding things to our visual perception scares me, but not nearly as much as the thought of Google removing things from our perception. I’m sure this will seem quite enticing. That stupid painting that your significant other insists on hanging on the wall? With advanced enough computer vision, Google+ could simply excise it from your perception. What about that ex-girlfriend with whom things ended badly? Now she walks down the streets of your town with her new boyfriend. What if you could change a setting in your Google glasses and have him removed from view? The temptations of such technology are endless. How many people in the world would rather simply block out the unpleasant stimulus than confront the cause of its unpleasantness—their own personal problems?
Google’s continuous user feedback is one of the things that scares me most about its services. Take the search engine, for example. When you’re typing something into the search bar, Google autocompletes, changing the way you construct your query. Its suggestions are often quite good, and they make the system run more smoothly, but they take away aspects of individuality and personal expression. The suggestions change the way you form queries, pushing them towards a common denominator, slowly sucking out the last drops of originality.
And sure, this matters little in search engines, but can you see how readily it could be applied to things like automatic writing helpers? Imagine you’re a high school student writing an essay. An online tool provides you with suggestions for better wordings of your sentences, based on other user preferences. It will suggest similar wordings for all people, and suddenly, all essays will become that much more canned. (Certainly, such a tool could add a bit of randomness to the rewording-choice, but one has to be careful—introduce too much randomness and the quality decreases rapidly.)
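To make the “common denominator” worry concrete, here’s a toy sketch of the kind of system I have in mind (entirely hypothetical, not a description of any real product): an autocomplete that always proposes the most frequent continuation seen in other users’ text.

```python
from collections import Counter

# Toy "autocomplete": given what other users wrote after the same prefix,
# always suggest the single most frequent continuation.
corpus = {
    "i think this is": ["a great idea", "a great idea", "a great idea",
                        "a terrible idea", "quite remarkable"],
}

def suggest(prefix):
    continuations = Counter(corpus.get(prefix, []))
    if not continuations:
        return None
    # The argmax collapses everyone onto the modal phrasing; rarer,
    # more distinctive wordings never get suggested.
    return continuations.most_common(1)[0][0]

print(suggest("i think this is"))  # -> "a great idea"
```

With an argmax like this, the most common phrasing is the only one anybody ever gets offered; sampling from the counts instead would reintroduce variety, but, as noted above, too much randomness and the quality decreases rapidly.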
I guess I’m just afraid that autocomplete systems will change the way people speak, encouraging everyone to speak in a very standardized way, the way which least confuses the autocomplete system or the natural language understanding system. As computers become more omnipresent, people might switch to this way of speaking all the time, to make it easier for everyone’s mobile devices to understand what they’re saying. Changing the way we speak changes the way we think; what will this do to our thought processes, if original wording is discouraged because it’s hard for the computer to understand?
I do realize that socializing with other humans already exerts this kind of pressure. You have to speak understandably, and this changes what words you’ll use. I find myself speaking differently with my NLP grad school colleagues than I do with non-CS friends, for instance. It’s automatic. In a CS crowd, I’ll use CS metaphors; in a non-CS crowd I won’t. So I’m not opposed to changing the way I speak based on the context. I’m just specifically worried about the sort of speaking patterns NLP systems will force us into. I’m afraid they’ll require us to (1) speak more simply (easier to process), (2) speak less creatively (because the algorithm has only been trained on a limited set of expressions), and (3) speak the way the average user speaks (because that’s what the system has gotten the most data on, and can respond best to).
Ok, I’m done ranting now. =) I realize this is probably not what you were asking about in the post. I just felt the need to bring this stuff up, because I don’t think LW is as concerned about these things as we should be. People obsess constantly about existential risk and threats to our way of life, but often seem quite gung-ho about new technological advances like Google Glass and self-driving cars.
Hmm, you’re probably right. I guess I was thinking that quick heuristics (vocabulary choice, spelling ability, etc.) form a prior when you are evaluating the actual quality of the argument based on its contents, but evidence might be a better word.
Where is the line drawn between evidence and prior? If I’m evaluating a person’s argument, and I know that he’s made bad arguments in the past, is that knowledge prior or evidence?
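One way I might try to make the question precise (my own framing, so take it with a grain of salt): let H be whatever hypothesis I’m evaluating, A the current argument, and B my knowledge of his past bad arguments. Then

$$P(H \mid A, B) \;\propto\; P(A \mid H, B)\, P(H \mid B), \qquad P(H \mid B) \;\propto\; P(B \mid H)\, P(H).$$

So B is “evidence” relative to the bare prior P(H), but the resulting P(H | B) is the “prior” I bring to the new argument A. The distinction seems to be relative to where you draw the conditioning line, not an intrinsic property of the knowledge.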
Unless the jargon perpetuates a false dichotomy, or otherwise obscures relevant content. In politics, those who think in terms of a black-and-white distinction between liberal and conservative may have a hard time understanding positions that fall in the middle (or defy the spectrum altogether). Or, on LessWrong, people often employ social-status-based explanations. We all have the jargon for that, so it’s easy to think about and communicate, but focusing on status-motivations obscures people’s other motivations.
(I was going to explain this in terms of dimensionality reduction, but then I thought better of using potentially-obscure machine learning jargon. =) )
I agree with you that it’s useful to optimize communication strategies for your audience. However, I don’t think that always results in using shared jargon. Deliberately avoiding jargon can presumably provide new perspectives, or clarify issues and definitions in much the way that a rationalist taboo would.
It might be worth noting that Bayesian models of cognition have played a big role in the “rationality wars” lately. The idea is that if humans are basically rational, their behaviors will resemble the output of a Bayesian model. Since human behavior really does match the behavior of a Bayesian model in a lot of cases, people argue that humans really are rational. (There has been plenty of criticism of this approach, for instance that there are so many different Bayesian models available that one is sure to match the data, and thus the whole Bayesian approach to showing that humans are rational is unfalsifiable and amounts to overfitting.)
If you are interested in Bayesian models of cognition I recommend the work of Josh Tenenbaum and Tom Griffiths, among others.