Open thread, 21-27 April 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Thread started before the end of the last thread to encourage Monday as first day.
If not rationality, then what?
LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims: Obtain a better model of the world by updating on the evidence of things unpredicted by your current model. Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.
Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, which enable goal-accomplishing actions. The way to have correct beliefs is to update your beliefs when their predictions fail.
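(To make the first maxim concrete, here is a minimal sketch of a Bayesian update in Python; the coin models and numbers are invented for illustration, not part of the advice itself.)

```python
# Toy Bayesian update: P(model | evidence) is proportional to
# P(evidence | model) * P(model).
prior = {"fair": 0.9, "biased": 0.1}       # initial credence in each model
likelihood = {"fair": 0.5, "biased": 0.9}  # P(heads | model)

# Observe "heads" and re-weight each model by how well it predicted that.
unnormalized = {m: prior[m] * likelihood[m] for m in prior}
total = sum(unnormalized.values())
posterior = {m: p / total for m, p in unnormalized.items()}

print(posterior)  # {'fair': 0.833..., 'biased': 0.166...} -- the model that
                  # predicted the observation better gains probability mass
```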
Stating it this baldly gets me to wonder about alternatives. What if we deny each of these premises and see what we get? Other than Bayes’ world, which other worlds might we be living in?
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra’s world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it. In the world of heroic myth, it is not oracles but rather heroes and villains who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Heroes and villains defy oracles, and come to their predicted triumphs or fates not through prediction, but in spite of it.
Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the world are relatively close to our priors, but our goals are not known to us initially, and are in fact very difficult to discover. We might consider this to be Buddha’s world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. When we choose actions that cause bad effects, we aren’t so much acting on faulty beliefs about the world as pursuing goals that are illusory or empty of satisfaction.
There are other models as well, that could be extrapolated from denying other premises (explicit or implicit) of Bayes’ world. Each of these models should relate prediction, action, and goals in different ways. We might imagine Lovecraft’s world, Qoheleth’s world, or Nietzsche’s world.
Each of these models of the world — Bayes’ world, Cassandra’s world, Buddha’s world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes’ world, what evidence might suggest that we are in Cassandra’s or Buddha’s world?
Edited lightly — In the first couple of paragraphs, I’ve clarified that I’m talking about epistemic and instrumental rationality as advice for humans, not about whether we live in a world where Bayesian math works. The latter seems obviously true.
Replace religion with this dilemma and you have NS’s Microkernel religion.
I don’t see these as alternatives, more like complements.
It’s a memorable name, but it does not need to be called anything so dramatic, given that we live in this world already. For example, most of us make a likely correct prediction that if we procrastinate less then we will be better off, yet we still waste time and regret it later.
Why this AIXIsm? We are a part of the world, and the most important part of it for many people, so updating your model of self is very Bayesian. Lacking this self-update is what leads to a “Cassandra’s world”.
I’d tell you what method I would use to evaluate the evidence to decide which world we are in, but it seems like you denied it in the premise. ;)
That’s an interesting post. Let me throw in some comments.
I am not sure about the Cassandra’s world. Here’s why:
Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not “enable goal-accomplishing actions” for him—in the Bayes’ world as well. Is the Cassandra’s world defined by being powerless?
Heroes in myth defy predictions essentially by taking a wider view—by getting out of the box (or by smashing the box altogether, or by altering the box, etc.). Almost all predictions are conditional and by messing with conditions you can affect predictions—what will come to pass and what will not. That is not a low-level world property, that’s just a function of how wide your framework is. Kobayashi Maru and all that.
As to the Buddha’s world, it seems to be mostly about goals and values—things on the subject of which the Bayes’ world is notably silent.
Powerlessness seems like a good way to conceptualize the Cassandra alternative. Perhaps power and well-being are largely random and the best-possible predictions only give you a marginal improvement over the baseline. Or else perhaps the real limit is willpower, and the ability to take decisive action based on prediction is innate and cannot be easily altered. Put in other terms, “the world is divided into players and NPCs and your beliefs are irrelevant to which of those categories you are in.”
I don’t particularly think either of these is likely but if you believed the world worked in either of those ways, it would follow that optimizing your beliefs was wasted effort for “Cassandra World” reasons.
So then the Cassandra’s world is essentially a predetermined world where fate rules and you can’t change anything. None of your choices matter.
Alternately, in such a world, it could be that improving your predictive capacity necessarily decreases your ability to achieve your goals.
Hence the classical example of Cassandra, who was given the power of foretelling the future, but with the curse that nobody would ever believe her. To paraphrase Aladdin’s genie: “Phenomenal cosmic predictive capacity … itty bitty evidential status.”
Yes, a Zelazny or Smullyan character could find ways to subvert the curse, depending on just how literal-minded Apollo’s “install prophecy” code was. If Cassandra took a lesson in lying from Epimenides, she mightn’t have had any problems.
You’re right about the prisoner. (Which also reminds me of Locke’s locked-room example regarding voluntariness.) That particular situation doesn’t distinguish those worlds.
(I should clarify that in each of these “worlds”, I’m talking about situations that occur to humans, specifically. For instance, Bayes math clearly works for abstract agents with predefined goals. What I want to ask is, to what extent does this provide humans with good advice as to how they should explicitly think about their beliefs and goals? What System-2 meta beliefs should we adopt and what System-1 habits should we cultivate?)
I think we’re thinking about different myths. I’m thinking mostly of tragic heroes and anti-heroes who intentionally attempt to avoid their fate, only to be caught by it anyway — Oedipus, Agamemnon, or Achilles, say; or Macbeth. With hints of Dr. Manhattan and maybe Morpheus from Sandman. If we think we’re in Bayes’ world, we expect to be in situations where getting better predictions gives us more control over outcomes, to drive them towards our goals. If we think we’re in Cassandra’s world, we expect to be in situations where that doesn’t work.
That’s pretty much exactly one of my concerns with the Bayes-world view. If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.
No, not really. Bayes gives you information, but doesn’t give you capabilities. A perfect Bayesian will find the optimal place/path within the constraints of his capabilities, but no more. Someone with worse predictions but better abilities might (or might not) do better.
Um, Bayes doesn’t give you any promises, never mind guarantees, about your satisfaction. It’s basically like classical logic—it tells you the correct way to manipulate certain kinds of statements. “Satisfaction” is nowhere near its vocabulary.
Exactly! That’s why I asked: “To what extent does [Bayes] provide humans with good advice as to how they should explicitly think about their beliefs and goals?”
We clearly do live in a world where Bayes math works. But that’s a different question from whether it represents good advice for human beings’ explicit, trained thinking about their goals.
Edit: I’ve updated the post above to make this more clear.
A world with causes and effects. (Bayes’ world as described is Cassandra’s world, for the usual reasons of “prediction” not being what you want for choosing actions).
[ There was something else here, having to do with how it is hard to use causal info in a Bayesian way, but I deleted it for now in order to think about it more. You can ask me about it if interested. The moral is, it’s not so easy to just be Bayesian with arbitrary types of information. ]
Hmm. I think I know what you’re referring to — aside from prediction, you also need to be able to factor out irrelevant information, consider hypotheticals, and construct causal networks. A world where cause and effect didn’t work a good deal of the time might still be predictable, but choosing actions wouldn’t work very effectively.
(I suspect that if I’d read more of Pearl’s Causality I’d be able to express this more precisely.)
Is that what you’re getting at, at all?
Well, when you use Bayes theorem, you are updating based on a conditioning event. But with causal info, it is not a conditioning event anymore. I don’t think it is literally impossible to be Bayesian with causal info, but it sounds hard. I am still thinking about it.
So I am not sure how practical this “be more Bayesian” advice really is. In practice we should be able to use information of the form “aspirin does not cause cancer”, right?
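(A concrete, entirely made-up-numbers sketch of why the two come apart: below, a hidden condition drives both aspirin use and cancer risk, so conditioning shows a correlation even though aspirin has, by construction, no causal effect at all.)

```python
import random
random.seed(0)

population = []
for _ in range(100_000):
    hidden = random.random() < 0.3                        # e.g. chronic illness
    aspirin = random.random() < (0.8 if hidden else 0.1)  # illness -> more aspirin use
    cancer = random.random() < (0.2 if hidden else 0.05)  # illness -> more cancer;
    population.append((aspirin, cancer))                  # aspirin itself does nothing

def p_cancer(group):
    return sum(c for _, c in group) / len(group)

takers = [x for x in population if x[0]]
abstainers = [x for x in population if not x[0]]
print(p_cancer(takers), p_cancer(abstainers))
# P(cancer | aspirin) comes out around 0.17 vs roughly 0.06 without aspirin,
# while the interventional P(cancer | do(aspirin)) equals
# P(cancer | do(no aspirin)) by construction. Conditioning and causal
# surgery answer different questions.
```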
[ I did not downvote the parent. ]
For one thing, we already have strong evidence rationality is a useful idea: it’s called science & technology.
Cassandra’s world: Mythical predictions seem to be unconditional, whereas Bayesian predictions are conditional on your own actions and thus can be acted upon.
Buddha’s world: Well, understanding your own values and understanding how to maximize them are two distinct tasks, neither of which is redundant. I think rationality is useful in understanding your own values as well, for example by analyzing them through evolutionary psychology or cognitive neuroscience. Moreover, empirically, our understanding of our own values also improves as we learn epistemic facts and analyze hypothetical scenarios. Without rationality it is difficult to create sufficiently precise language for formulating the values.
Pure curiosity question: What is the general status of UDT vs. TDT among y’all serious FAI research people? MIRI’s publications seem to exclusively refer to TDT; people here on LW seem to refer pretty much exclusively to UDT in serious discussion, at least since late 2010 or so; I’ve heard it reported variously that UDT is now standard because TDT is underspecified, and that UDT is just an uninteresting variant of TDT so as to hardly merit its own name. What’s the deal? Has either one been fully specified/formalized? Why is there such a discrepancy between MIRI’s official work and discussion here in terms of choice of theory?
Why do you say that? If I do a search for “UDT” or “TDT” on intelligence.org, I seem to get about an equal number of results.
This seems accurate to me. I think what has happened is that UDT has attracted a greater “mindshare” on LW, to the extent that it’s much easier to get a discussion about UDT going than about TDT. Within MIRI it’s probably more equal between the two.
As I recall, Eliezer was actually the one who named UDT. (Here’s the comment where he called it “updateless”, which everyone else then picked up. In my original post I never gave it a name but just referred to “this decision theory”.)
There has been a number of attempts to formalize UDT, which you can find by searching for variations on “formal UDT” on LW. I’m not aware of a similar attempt to formalize TDT, although this paper gives some hints about how it might be done. It’s not really possible to “fully” specify either one at this time because both need to interface with a to-be-discovered solution to the problem of logical uncertainty, and at this point we don’t even know the type signature of such a solution. In the attempts to formalize UDT, people either make a guess as to what the type signature is, or side-step the problem by assuming that all relevant logical facts can be deduced by the agent.
Thanks! This is exactly the kind of answer I was hoping for. A lot of it was what I had sort of deduced from looking at MIRI docs and stuff, but having it laid out explicitly seems to have clicked the missing elements into place and I feel like I understand it much better now.
You might also find this honors thesis by Daniel Hintze handy.
I’m not serious, but I’d say that there’s little actual use of TDT because it requires us to solve the difficult problem of finding the right causal and logical structure of the problem—this can be handwaved in by the user, but doing that feels awkward. Folk-UDT (“just execute the best strategy”) is sufficient for most purposes, both in application and in e.g. trying to understand logical uncertainty.
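(As a toy of what “just execute the best strategy” can look like, here is a folk-UDT calculation for Newcomb’s problem; the 99% predictor accuracy and the payoffs are the usual illustrative assumptions, and this is a sketch, not anyone’s official formalization.)

```python
ACCURACY = 0.99  # assumed reliability of Omega's prediction

def expected_value(policy):
    # Omega filled the $1,000,000 box iff it predicted one-boxing, and its
    # prediction matches the agent's actual policy with probability ACCURACY.
    p_big_box_full = ACCURACY if policy == "one-box" else 1 - ACCURACY
    payoff = 1_000_000 * p_big_box_full
    if policy == "two-box":
        payoff += 1_000  # the transparent $1,000 box is always available
    return payoff

# Score whole policies against the prior -- no updating on "the boxes are
# already filled" -- and execute the best one.
best = max(["one-box", "two-box"], key=expected_value)
print(best, expected_value(best))  # one-box: 990000.0 (vs. two-box: 11000.0)
```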
On the other hand, using causal structure is what lets us consider hypotheticals properly—so TDT will not have some issues that typical-UDT does with hypotheticals about its own actions. On the mutant third hand, TDT’s solution of adding logical nodes to the causal structure might just be a simplification of something deeper, so it’s not like we (us non-serious decision-theory dilettantes) should put all our eggs in one basket.
What is an example of an issue that UDT has with hypotheticals that TDT does not?
The 5 and 10 problem is basically what happens when your agent asks “what are the logical implications if 5 is chosen?” rather than “If we do causal surgery such that 5 is chosen, what’s the utility?”
There are other ways to avoid the 5 and 10 problem, but I think they’re less general than using causality.
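(A loose toy illustration of the difference, under the simplifying assumption of a deterministic agent: causal surgery can evaluate the action the agent never takes, while conditioning on that action has literally nothing to condition on.)

```python
def policy():
    return 10      # a deterministic agent that in fact takes the $10

def utility(action):
    return action  # taking $5 is worth 5, taking $10 is worth 10

# Causal surgery: force each action in turn and read off the utility.
print({a: utility(a) for a in (5, 10)})  # {5: 5, 10: 10} -> correctly prefer 10

# Conditioning: filter observed behavior on the event "agent took 5".
history = [(policy(), utility(policy())) for _ in range(1000)]
took_five = [u for (a, u) in history if a == 5]
print(took_five)  # []: a probability-zero event, so the conditional
                  # is undefined and "what if I took 5?" gets no sane answer
```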
Here’s one attempt to further formalize the different decision procedures: http://commonsenseatheism.com/wp-content/uploads/2014/04/Hintze-Problem-class-dominance-in-predictive-dilemmas.pdf (H/T linked by Luke)
All the things you’ve heard are consistent and together they answer your final question by denying that there is a discrepancy in choice of theory, just in choice of name. (Not that I’m sure that all the things you’ve heard are true.)
That would make “TDT is underspecified” a rather odd thing for someone to say, though.
Good questions!
I was feeling lethargic and unmotivated today, but as a way of not-doing-anything, I got myself to at least read a paper on the computational architecture of the brain and summarize the beginning of it. Might be of interest to people; it also briefly touches upon meditation.
EDIT: Just realized, this model explains tulpas. Also has connections to perceptual control theory, confirmation bias and people’s general tendency to see what they expect to see, embodied cognition, the extent to which the environment affects our thought… whoa.
Could you elaborate? I haven’t read the paper, but this connection doesn’t seem obvious to me.
O_O
This explains SO MUCH of things I feel from the inside! Estimating a small probability it’ll even help deal with some pretty important stuff. Wish I could upvote a million times.
How strong is the evidence in favor of psychological treatment really?
I am not happy. I suffer from social anxiety. I procrastinate. And I have a host of other issues that are all linked, I am certain. I have actually sought out treatment, with absolutely no effect. On the recommendation of my primary care physician I entered psychoanalytic counseling and was appalled by the theoretical basis and practical course of “treatment”. After several months without even the hint of a success I aborted the treatment and looked for help somewhere else.
I then read David Burns’ “Feeling Good”, browsing through, taking notes and doing the exercises for a couple of days. It did not help; of course, in hindsight, I wasn’t doing the treatment long enough to see any benefit. But the theoretical basis intrigued me. It just made so much more sense to be determined by one’s beliefs than by a fear of having one’s balls chopped off, a hatred of one’s parents, and an active seeking out of displeasure because that is what fits the narrative.
Based on the key phrase “CBT” I found “The Now Habit”, and reading it actually helped me subdue my procrastination long enough to finish my bachelor’s degree in a highly technical subject with grades in the highest quintile. Then I slipped back into a phase of relative social isolation, procrastination and so on.
We see these phenomena consistently in people. We also see them consistently in animals held in captivity not suited to their species’ specific needs. I am less and less convinced that this block of anxiety, depression and procrastination is a disease, and more convinced that it is a reaction to an environment, in the broadest sense, inherently unsuitable to humans.
The proper and accepted procedure for me would be to try counseling again, this time with a cognitive behavioral approach. But I am unwilling to commit that much time for uncertain results, especially now that I want to travel or do a year abroad or just run away from it all. (Suicide is not an option.) What lowers my odds of success even more is that, in various venues, I never feel understood by the people put in place to understand. So how could such a treatment help?
I am open to bibliotherapy. I don’t think I am open to traditional or even medical therapy.
I have suffered from social anxiety continuously and depression off and on since childhood. I’ve sought treatment that included talk therapy and medication. Currently I am doing EMDR therapy which may or may not end up being helpful, but I don’t expect it to work miracles. Everyone in my immediate family has had similar issues throughout their lives. I feel your pain. Despite not being perfect and being in therapy, I feel like my life is going pretty well. Here is what has worked for me:
Acceptance: Not everyone can be or should be the life of the party. Being quiet or reserved or shy is a perfectly acceptable way to live your life. You can still work on becoming comfortable in more social situations but you are fine right now. There are plenty of people who will like you just as you are, even if your social skills are far from perfect. Harsh self-judgement can make anxiety worse and lead to procrastination and depression. What I try to do as best I can is to just do whatever I feel like in the moment, and just let the world correct me. I try not to develop too many theories about how the world will react to me since I know from experience that those theories will be biased and pessimistic.
Decide what you want from the world: I guess this is somewhat generic life advice, but it has really worked for me. I decided fairly early on what I wanted to get from the social world. I wanted 3 things.
marriage
children
a good career
Deciding those things, I plugged away at getting them. I was completely incompetent at talking to women but with some help from e-harmony I found one who I was able to be comfortable with and who liked me. We got married 6.5 years ago and we have a 2 year old daughter and another child on the way. Professionally, I found a career that involves a minimum of politicking and no customer interaction. And yet it is both intellectually satisfying and highly remunerative. Even though neither my home life nor my professional life are perfect, achieving my basic life goals has given me a deep feeling of confidence and satisfaction that I can use to counter feelings of anxiety and depression as they come.
Each step I took along the path towards my goals gave me more confidence to move forward, but that confidence wasn’t necessarily automatic. I have to periodically brag to myself about myself because otherwise I will naturally focus on my failures and weaknesses and start to feel like a loser. You should be very proud of your accomplishments in college. Most people could not do what you have done. Remind yourself of that. Feel good about yourself.
So, can you say more about what aspect of your environment is bugging you? Captivity?? Do you want to try living somewhere more “outdoors”?
I am imagining that some issues of depression/social anxiety might be a lot more easily resolved in an ancestral environment. Especially the social anxiety part.
It was mainly a thought that occurred to me to write down as the rest of the story wrote itself. My problem is more social anxiety, which of course pertains to the social environment. Moving of course will not help this anxiety one bit; more probably it will even amplify it.
I think the evidence shows that it works for some people, doesn’t work for other people, and the spectrum of outcomes stretches all the way from “miraculously fixed everything” to “made everything worse” :-/
Oh, and “some people” and “other people” refers not just to the person being treated, but to a patient/psychotherapist pair. It is fairly common for people to have no success with a chain of therapists until they find “the one” who clicks and can effectively help with whatever the problem is.
Sorry, but there is really no answer to the question as posed.
So continue burning through therapists in the hope of being understood. Is there any shred of evidence that I should try psychoanalytic treatment again? From my impression the effect of it is similar to homeopathic treatment.
How can I restate it to get a more answerable question?
I don’t know. Note that this answer is different from “continue with what you were doing”. One of the points here is that any advice has to be highly personalized and generic recommendations are quite useless.
As an aside, are you looking for a therapist to understand you, or to effect some change in you?
I don’t think you can get a useful answer from strangers on the ’net.
What does it mean for a dog to be procrastinating?
Procrastination usually involves humans wanting to do things that are not natural.
I used to believe that procrastination was something very unique to me but today I believe that nearly everyone struggles with it to some extent. Even someone like Tim Ferriss who advises a dozen startups and writes a book at the same time still deals with it. People who are productive simply have found strategies to still be productive despite being imperfect humans.
You already read Burns. How about doing 15 minutes per day of his exercises for the next year?
Indeed I can try again. Though social cues are quite powerful in maintaining the routine.
Having options is nice. Also more varied experiences tend to stick better, like reading two different explanations of the same phenomenon.
Not at all. Procrastination is letting near and immediate incentives overcome far and remote ones.
People procrastinate by browsing the ’net instead of going running—which one is more “natural”?
Going running for the sake of doing exercise isn’t natural.
Browsing the net = being sedentary, saving energy, staying in a place you know is safe and has access to food and water. Running = wasting a shit ton of energy and putting yourself into the world and at risk for no immediate gain.
Seems obvious to me which you would be more naturally inclined to do.
I’ve heard the idea from Somatic Experiencing—unfortunately, I haven’t found anything that goes into detail about that particular angle, except that part of it seems to be about having a tribe—it’s not just about spending time out of doors.
I’ll be keeping an eye out for information on the subject, but meanwhile, you might want to look into Somatic Experiencing and Peter A. Levine.
This touches on something some popular people sometimes note: a feeling of being uprooted, of having no sense of belonging or meaning. Maybe this is a reason for the recent resurgence of religious organizations. Of course, if this vague shred of an idea has some truth to it, one should be able to create or find a tribe substitute.
I will look into it, thank you.
Consider neurofeedback administered by a professional. In the U.S. it will cost between $50 and $200 a session. You probably need at least 20 sessions for permanent results, but you might be able to feel some effects during the first session.
Source of information about effectiveness and duration?
None online. I have read several books on the topic and have undergone it myself.
If you don’t mind, what were the books, and what changes have you noticed in yourself?
Protocol Guide for Neurofeedback Clinicians (very expensive but the best); The Neurofeedback Solution: How to Treat Autism, ADHD, Anxiety, Brain Injury, Stroke, PTSD, and More; and Getting Started with Neurofeedback (Norton Professional Books).
Neurofeedback has many different targets. I have used it to become more relaxed and focused. Most of what I learned came from talking to neurofeedback professionals. I strongly suggest you not experiment on yourself, but rather do so under the care of a professional.
Existent. But psychological treatment is in its infancy. I am not a licensed mental health professional, but watch this:
https://www.youtube.com/watch?v=_V_rI2N6Fco
Now, go find a therapist who’s at least 45 years old, preferably 50-plus, is not burned out, and loves what they do. It doesn’t really matter what the therapeutic modality is. Don’t go to a thirty-something CBT-weenie.
Edit: A bunch of recent posts on my blog are about therapy. May or may not be useful:
http://meditationstuff.wordpress.com/
Some personal anecdotes/data points from someone in a similar situation (social anxiety, depression, procrastination to the point of dropping out of uni, going abroad): I was lucky with my CBT psychotherapist; they helped me unravel that big knot of connected issues. I am still suffering, but now equipped to deal with it. That said, I decided to travel for 8 months (NZ), basically as a frontal override for some of my issues. Be aware that traveling with mental issues can backfire terribly; you are on your own without your usual escape strategies. Depending on your flavour of issues, strategies to get around that vary, but expect, and be resolved to have, the same bad days as at home. Having heaps of money to get your own room/room service/fast food/return tickets helps; ensure a really solid safety net from home (someone to lend you money, do minor services for you, people to call at strange times, people to call you regularly). Do not expect the condition “abroad” to change you quickly; you’ll still find it harder to get to know people than others do. Expect lots of the grass-is-greener fallacy; I caught my brain giving exactly the same reasons for going home early that it gave months ago for going away. That said, was going away a good decision? Yes. Was it the optimal decision? I am not sure.
Making friends is hard with social anxiety but I think it’s your best bet.
Who are you, what are your physical and social environments like, and do you do the obvious things like lifting weights (or at least similar if you’re female) and eating “right”?
The only reason to pay someone for non-specific therapy is if you don’t have any friends, and even then you can’t be truly honest without risking being institutionalized.
Disagree. Frequent discussion of one’s anxieties can be a heavy burden on a friendship, and it’s vulnerable to cascading failures. If I have four friends and spread my worries evenly between them, and one finds this exhausting and decides to spend less time with me, then I have three friends I can talk to, each of whom will suddenly find me even more stressful to be around.
Friends include family.
If you have a shitty relationship with your family, then that sucks. If you’re male, suck it up and be a man. If you’re female and not ugly, you have an unlimited number of guys to dump your feelings into. If you’re female, ugly, and have a shitty relationship with your family, then you probably have similar friends and already share your feelings a lot with each other without fear of rejection so you’re good.
Unless you’re just gaming the system for pills (which is fine, if you know what you’re doing), then professional therapy for non-specific stuff is pointless.
It’s not useful to discuss whether or not anxiety, depression, or procrastination is a “disease.” It either is or isn’t a useful way to adapt to the current environment, and if it’s not useful you want to change either your reaction or your environment.
If by psychological treatment, you mean the Freudian kind, that’s mostly BS.
I get confused when people use language that talks about things like “fairness”, or whether people are “deserving” of one thing or another. What does that even mean? And who or what is to say? Is it some kind of carryover from religious memetic influence? An intuition that a cosmic judge decides what people are “supposed” to get? A confused concept people invoke to try to get what they want? My inclination is to just eliminate the whole concept from my vocabulary. Is there a sensible interpretation that makes these words meaningful to atheist/agnostic consequentialists, one that eludes me right now?
Here are some things people might describe as “unfair”:
Someone shortchanges you. You buy what’s advertised as a pound of cheese, only to find out at home that it’s only four-fifths of a pound; the storekeeper had their thumb on the scale to deliberately mis-weigh it.
Someone passes off a poor-quality item as a good one. You buy a sealed box of cookies, only to find out that half of them are broken and crumbled due to mishandling at the store.
Someone entrusted with a decision abuses that trust to their advantage. The facilities manager of a company doesn’t hire the landscaping company that makes the best offer to the company, but instead the one that offers the best kickback to the facilities manager.
Someone uses a position of power to take something that isn’t theirs; especially when the victim can’t do anything about it. A boy’s visiting grandmother gives him $50 to buy a video game for his birthday; but as soon as the grandmother has left, the boy’s mother takes the money away and uses it to buy liquor for herself.
Someone abandons a responsibility, leaving it to others to cover. Four people go out to dinner together; and the bill comes to $100. One person excuses himself “to go to the restroom,” but doesn’t come back, so the others have to pay his share of the bill as well as their own.
Someone takes advantage of a person’s weak or ignorant position. A taxi driver, knowing that a tourist doesn’t know the city, takes a deliberately circuitous route to run up the meter.
Someone uses asymmetrical information to deprive others of a stronger negotiating position. An employer tells each of her employees individually that they are poor performers, easily replaceable, and unlikely to get a raise; so that they do not realize that together they are not easily replaceable and that by collective bargaining they could negotiate for higher wages.
Someone breaks agreed-upon rules to take something of value. A poker player uses a trick to put a card into play that wasn’t dealt to him — the classic “ace up the sleeve” — in order to win money that another player would have won.
Someone entrusted to do a good job instead does a bad job in order to gain an advantage some other way. A star sports player deliberately plays poorly so his team will lose a game they are strongly favored to win, allowing people who have bet against his team to win big.
Someone gets away with breaking the rules by making outside arrangements with those responsible for enforcing them. By donating to the “police charitable fund,” you get a bumper sticker that makes it less likely the police will pull you over if you break the traffic laws.
What sorts of things do you see in common among these situations?
Your list seems a bit… biased.
Let’s throw in a couple more situations:
A homeless guy watches a millionaire drive by in a Lamborghini. “That’s not fair!” he says.
An unattractive girl watches an extremely cute girl get all the guys she wants and twirl them around her little finger. “That’s not fair!” she says.
A house owner learns that his house will be taken away from him under an eminent domain claim by the state which wants a developer to build a casino on the land. “That’s not fair!” he says.
A union contractor is undercut on price by a non-union contractor. “That’s not fair!” he says.
While people say “That’s not fair” in the above examples and in these, it seems there are two different clusters of what they mean. In the first group, the objection seems to be to self-serving deception of others, particularly violation of agreements (or what social norms dictate are implicit agreements). Your examples don’t involve deception or violation of agreements (except perhaps in the case of eminent domain), and the objection is to inequality. I find it strange that the same phrase is used to refer to such different things.
I think you could say that in both groups, people are objecting because society is not distributing resources according to some norm of what qualities the resource distribution is supposed to be based on.
In the first group of examples, people are deceiving others and violating agreements, and society says that people are supposed to be rewarded for honest behavior and keeping agreements.
For the second group of examples:
The homeless person example is a bit tricky, since there are multiple different norms that they might be appealing to, but suppose that the homeless person used to be a hard worker before he got laid off and lost his home. The homeless person may then be objecting that society is supposed to reward a willingness to put in hard work, whereas he doesn’t perceive the millionaire as having worked equally hard. Or, the homeless person may think that society should provide some minimum level of resources to everyone, and the fact that he has nothing while another person has millions demonstrates a particularly blatant violation of this rule.
There’s a social ideal saying that people should be rewarded for their “internal” characteristics (like honesty) rather than “external” ones (like appearance), so the unattractive girl is objecting to the attractive girl being rewarded for something she’s not supposed to be rewarded for.
The house owner is objecting because we usually think that people should be allowed to keep the property they have worked to have, and the eminent domain claim is violating that intuition.
The union contractor is complaining because he thinks that being unionized provides benefits for the profession as a whole, and that the non-union contractor is getting a personal benefit while defecting against the rest of the profession.
Regardless of what your ideal society looks like, creating it probably requires consistently maintaining some algorithm that rewards certain behaviors while punishing others. Fairness violations could be thought of as situations where the algorithm doesn’t work, and people are being rewarded for things that an optimal society would punish them for, or vice versa.
You could also say that in both groups, there is actually an implicit agreement going on, with people being told (via e.g. social ideals and what gets praised in public) that “if you do this, then you’ll be rewarded”. If you buy into that claim, then you will feel cheated if you do what you think you should do, but then never get the reward.
Of course, the situation is made more complicated by the fact that there is no consistent, universally agreed-upon norm of what the ideal society should be, nor of what would be the optimal algorithm for creating it. People also have an incentive to push ideals which benefit them personally, whether as a conscious strategy or as an unconscious act of motivated cognition. So it’s not surprising that people will have widely differing ideas of what “fair” behavior actually looks like.
However, looking at reality, the phrase is used in all these ways, isn’t it?
As Bart Wilson mentions here, a century ago the word “fairness” referred exclusively to the first cluster. However, due to various political developments during the past century it has drifted and now refers to a confused mix of both.
Indeed it is, which is evidence for the two different types of situations feeling similar to people.
That’s odd … I was specifically trying to choose examples that would be relatively uncontroversial — cases of cheating, betrayal of trust, abuse of power, and so on; as opposed to cases of mere inequality of outcome.
That’s a bias, isn’t it? :-)
If you’re choosing examples to construct a definition from, already having a definition in mind makes the exercise pointless.
If you choose examples of fraud and abuse of power you essentially force the definition of “unfair” be “fraud and abuse of power”.
Wow, and here I thought I’d be dinged for including such mildly politicized examples as the police one and the collective-bargaining one. Instead, I get dinged for not including a bunch of stuff likely to provoke a political foofaraw about class, gender, or eminent domain? Weird.
Okay, this is getting excessively meta. I’m done here.
Maybe you should have been more concerned with figuring out how stuff really works and less with the possibility of provoking a political foofaraw on an internet forum...
Nitpick: Your third example:
Is similar to one of fubarobfusco’s examples:
There is a subtle, but important difference. Many people (here and elsewhere) would consider the exercise of eminent domain powers by the state to be ethical and correct application of state powers for the betterment of society—a few suffer but for the greater good.
Yes, and if the example had involved a road or other public works project, as opposed to immediately selling the land to a developer, your objection would have been appropriate.
Oh, but the developer will provide jobs, and serve as an attractor for other businesses, and generally lift the area economically, and pay taxes into state coffers, and there will be gallivanting unicorns under the rainbows, and the people will look at the project and say “This is good”.
If you believe what the state will tell you.
So whether that example fits with the first set depends on whether the state’s claim that the project is good is true, and thus whether this example is perceived as fitting with them depends on whether the perceiver believes the claim. Similarly, the Lamborghini example fits if one accepts the Marxist theory about the origin of income inequality.
Now we come to your example of the two girls. It’s hard to make it an example of “fraud or abuse of power” (although it might be possible with enough SJ-style rhetoric about how beauty is an oppressive social construct). Notice that it is similar to the Lamborghini example otherwise, in particular it seems like the kind of thing that fits in the category whose archetypical member is the Lamborghini example.
So we can now reconstruct a history of the meaning of “unfair”. Originally, i.e., about a century ago, it meant basically “fraud, cheating, or abuse of power”. As Marxism became popular it expanded to include income inequalities, which fit that definition according to Marxist theory. Later as differences of income became one of the archetypical examples of “unfairness” and as the theory underlying its inclusion became less well-known, more things such as the two girls example came to be included in the category. See the history of verbs meaning “to be” in Romance Languages for another (less mind-killing) example of how semantic drift can produce these kinds of Frankencategories.
I think it’s simpler, without getting Marxism involved. The key word is “entitlement”. If you feel entitled to something, then if you don’t have it, someone is cheating you out of your right—it’s unfair! Doesn’t really matter who, too—nowadays people point at the universe and shout “Unfair!” :-/
The general principle seems to be that there’s an expectation of certain behavior, but one person acts deceptively in a way that harms the other people.
It’s not a theistic concept—if anything, it predates theology (some animals have a sense of fairness, for example). We build social structures to enforce it, because those structures make people better off. The details of fairness algorithms vary, but the idea that people shouldn’t be cheated is quite common.
I am with Stanislaw Lem—it’s hard to communicate in general, not just about fairness. I find so many communication scenarios in life resemble first contact situations.
It’s a cultural norm. If someone constantly defects in the prisoner’s dilemma, he’s violating the norm of fairness and deserves to be punished for doing so.
Except that in a lot of accusations of “unfairness” there is no obvious prisoner-dilemma-defection going on.
Not lynching rich bankers means choosing to cooperate. Having a social landscape that’s peaceful and without much violence isn’t something to take for granted.
That is not a prisoner’s dilemma.
We sort of have an informal agreement of the proletarians not making a revolution and hanging the rich capitalists in return for society as a whole working in a way that makes everyone better off.
Rich bankers not fulfilling their side of working to make everyone in society better off is defecting from that agreement.
No, we don’t have anything of that sort.
Marx was wrong. He is still wrong.
Marx argued that a revolution is the only way to create meaningful social change. That’s not what I’m saying in this instance.
Political power is justified in continental Europe through the social contract. Hobbes basically made the observation that every man can kill every other man in the state of nature, and that we need a sovereign to wield power to prevent this from happening.
Even British Parliamentary Style debate that’s not continental in nature usually doesn’t put the same value on freedom as a political value as people in the US tend to do.
As far as the US goes the American dream is a kind of informal agreement. You had policies like the New Deal to keep everyone in society benefiting from wealth generation.
Then in the last 3 decades most of the new wealth went to the upper class instead of being distributed through the whole society as it had been in the decades before that point.
Marx argued for a lot of things. The particular thing that I have in mind here is his position that the society consists of two classes—a dispossessed (“alienated”) proletariat and fat-cat capitalists, that these two classes are locked in a struggle, and that the middle class is untenable and is being washed out. This is the framework which your grandparent comment relied on.
It was wrong and is wrong.
I don’t think saying “That is not a prisoner’s dilemma” is a useful way of communicating “those players don’t exist.”
Also, the topic at hand is what do people mean by “fair,” not whether the situations they do or do not call fair are real situations.
The notion of “middle class” involves having more than two sides. People calling themselves “upper-middle class” is a very American thing to do. In the US ideal, a person of the middle class is supposed to own his own home and therefore own capital.
Workers do organize in unions and use their collective bargaining power to achieve political ends in the interests of their members. When a union makes a collective labor agreement with industry representatives you do have two clearly defined classes making an agreement with each other.
In the late 19th century a bunch of unions did support the communist ideal of revolution but most of them switched.
Groups like the US Chamber of Commerce do have political power. Money of capitalists funds a bunch of think tanks who do determine a lot of political policy. Do you think that the Chamber of Commerce isn’t representing the interest of a political class of capitalist?
Yes, individual people might opt out of being part of politics. We aren’t like the Greeks, who punished people by death for not picking political sides.
Lastly, I would point out that I speak about political ideas quite freely and without much of an attachment. It might be that you take a point I’m making overly seriously.
Ah. OK then.
How would you apply that to Lumifer’s second example?
The usual way groups of girls deal with this is to call the girl who actually twirls around a lot of guys around her little finger a slut. The punishment isn’t physical violence but it’s there.
The sense of fairness evolved to make our mental accounting of debts (that we owe and are owed) more salient by virtue of being a strong emotion, similar to how a strong emotion of lust makes the reproductive instinct so tangible. This comes in handy because humans are highly social and intelligent and engage in positive-sum economic transactions, so long as both sides play fair… according to your adapted sense of what’s fair. If you don’t have a sharp sense of fairness other people might walk all over you, which is not evolutionarily adaptive. See “The Moral Animal” or “Nonzero” by Robert Wright, or the chapter “Family Values” in Steven Pinker’s “How the Mind Works.”
This sense of fairness may have been co-opted at other levels, like a religious or political one, but it’s quite instinctual. Very young children have a strong sense of fairness before they could reason to it, just as they can acquire language before they could explicitly/consciously reason from grammar rules to produce grammatical sentences. It’s very engrained in our mental structure, so I think it would take quite an effort to “wipe the concept.”
So, as I’ve heard Mike Munger explain it, fairness is evolution’s solution to the equilibrium outcome selection problem. “Solution to the what?” you ask. This would be easy to explain if you’re familiar with the Edgeworth box.
Consider a simplified economy consisting of two people and two goods, where the two people have some combination of different tastes and different initial baskets of things. Suppose that you have 20 oranges and 5 apples, and that I have 3 oranges and 30 apples, and that we each prefer a more even mix of fruits to either extreme. We can trade apples and oranges to make each of us strictly better off, but there’s a whole continuum of possible trades that make us better off. And with your highly advanced social brain, you can tell that some of these trades are shit deals, like when I offer you 1 apple for 12 of your oranges. Even though we’d both mutually benefit, you’d be inclined to immediately counteroffer with something closer to the middle of the continuum of mutually beneficial exchanges, or a point that benefits you more as a reprimand for my being a jerk. Dealing fairly with each other skips costly repeated bargaining, and standing up to jerks who deviate from approximate fairness preserves the norm.
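(A quick numeric check of that story, with a toy utility function of my own choosing in which each person values a balanced basket; the 12-oranges-for-1-apple offer really does sit inside the mutually beneficial region, just at its lopsided edge.)

```python
def u(oranges, apples):
    return min(oranges, apples)  # toy preference for balanced baskets

you, me = (20, 5), (3, 30)       # initial (oranges, apples) endowments

deals = []
for x in range(you[0] + 1):      # oranges you give me
    for y in range(me[1] + 1):   # apples I give you
        you_after = (you[0] - x, you[1] + y)
        me_after = (me[0] + x, me[1] - y)
        if u(*you_after) > u(*you) and u(*me_after) > u(*me):
            deals.append((x, y))

print(len(deals))        # a whole lattice of strictly-improving trades
print((12, 1) in deals)  # True: the "jerk" offer still benefits both sides
```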
This is the sort of intuition that we’re trying to test for in the Ultimatum game.
“Fairness” generally means one out of two things.
Either it’s, basically, a signal of attitude—to call something “fair” is to mean “I approve of it”—or it is a rhetorical device in the sense of a weapon in an argument.
I think that people generally have gut ideas about what fairness entails, but they are fuzzy, bendable, and subject to manipulation, both by cultural norms and by specific propaganda/arguments.
According to Moral Foundations Theory, fairness is one of the innate moral instincts.
According to Scott Adams, fairness was invented so children and idiots can participate in arguments.
I think we have a fairness instinct mostly so we can tell clever stories about why our desire for more stuff is more noble than greed.
It might be that “fairness” is part of our ingrained terminal values. Of course it doesn’t mean you shouldn’t violate “fairness” when the violation is justified by positive utility elsewhere. However, beware of over-trusting your reasoning.
Tracing the memetic roots back, you could say that ‘fairness’ derives from the assumption that all humans have equal inherent worth, which I suppose you could link back to religious ideals. Natural rights follow from this same chain, but it’s not obvious to me what concepts came first and caused the others (never mind what time they were formalized).
If you want to strike it from your thinking, keep in mind that fairness is a core assumption of our social landscape, for better or worse. It can be worth keeping solely because people might hate you if you don’t.
The word “fairness” has been subject to a lot of semantic drift during the past century. Here is a blog post by Bart Wilson, describing the older definition, which frankly I think makes a lot more sense.
Humans are diverse.
I mean this not only in the sense of them coming in all kinds of shapes, colours and sizes, having different world views and upbringings attached to them, but also in the sense of them having different psychological, neurological and cultural makeup. It does not sound like something that needs to be explicitly said, but apparently it does.
Of course, the first voices have realised that the usual population for studies is WEIRD, but the problem goes deeper and further. Even if the conscientious scientist uses larger populations, more representative of the problem at hand, the conclusions drawn tend to ignore human diversity.
One of the culprits is the concept of “average” or at least a misuse of it. The average person has an ovary and a testicle. Completely meaningless to say, yet we are comfortable in hearing statements like “going to college raises your expected income by 70%” (number made up) and off to college we go. Statements like these suppress a great deal of relevant information, namely the underlying, inherent diversity in the population. Going to college may increase lifetime earnings, but the size of this effect might be highly dependent on some other factor like inherent cognitive ability and choice of major.
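(A made-up-numbers sketch of the point: if the college premium is large for one subgroup and zero for another, the headline average describes nobody in particular.)

```python
# Two equal-sized subgroups with very different "college effects" on income.
groups = [
    {"share": 0.5, "multiplier": 1.40},  # +40% expected income for subgroup A
    {"share": 0.5, "multiplier": 1.00},  # +0% for subgroup B
]

avg = sum(g["share"] * g["multiplier"] for g in groups)
print(f"headline average effect: +{(avg - 1) * 100:.0f}%")  # "+20%", true of no one
```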
Now that is obvious, you might say, but virtually all research shows that this is not the case. It was surprising to see that the camel has two humps, that is, that one part of the population seems to be incapable of learning programming while the other part is capable. And this can be determined by the answer to a single question. Research on exercise and diet is massively confounded by questions about endurance/strength and carbs/fats. Might this be because underlying biological factors are ignored?
People are touting the coming age of personalised medicine as they see massively diminishing returns on generic medicine. Ever more diseases are hypothesised to have very specific causes for each person, necessitating ever more specialised treatment. The effects of psychedelic substances are found to be dependent on the exact psychological makeup, e.g. cannabis causing psychosis only in individuals already at risk for such episodes.
There is no exact point to this rant. Just the observation that ever more statements are similar to saying “having unprotected sex with your partner has a high probability of leading to pregnancy” to a homosexual man.
The study you’re probably thinking of failed to replicate with a larger sample size. While success at learning to code can be predicted somewhat, the discrepancies are not that strong.
http://www.eis.mdx.ac.uk/research/PhDArea/saeed/
The researcher didn’t distinguish the conjectured cause (bimodal differences in students’ ability to form models of computation) from other possible causes. (Just to name one: some students are more confident; confident students respond more consistently rather than hedging their answers; and teachers of computing tend to reward confidence).
And the researcher’s advisor later described his enthusiasm for the study as “prescription-drug induced over-hyping” of the results …
Clearly further research is needed. It should probably not assume that programmers are magic special people, no matter how appealing that notion is to many programmers.
The failure to replicate was of their test, not of the initial observation. Specifically, it was considered interesting why the distribution of grades in CS (apparently typically two-humped) was different from e.g. mathematics (apparently typically one-humped). As far as I know this still remains to be explained.
See also the comments of Yvain’s What Universal Human Experiences Are You Missing Without Realizing It? for a broad selection of examples of how human minds vary.
Oh, now I realized the point of that article was the comments, not the article itself. Thanks for clarifying this!
There are three separate issues:
(a) The concept of averaging. There is nothing wrong with averages. People here like maximizing expected utility, which is an average. “Effects” are typically expressed as averages, but we can also look at distribution shapes, for instance. However, it’s important not to average garbage.
(b) The fact that population effects and subpopulation effects can differ. This is true, and not surprising. If we are careful about what effects we are talking about, Simpson’s paradox stops being a paradox (see the numeric check after this list).
(c) The fact that we should worry about confounders. Full agreement here! Confounders are a problem.
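(Here is the arithmetic behind (b), using the classic textbook kidney-stone-style counts rather than anything from this thread: the treatment wins within each severity subgroup yet loses overall, because it was given mostly to the severe cases.)

```python
#            treated: (recovered, total)    untreated: (recovered, total)
mild   = {"treated": (81, 87),   "untreated": (234, 270)}
severe = {"treated": (192, 263), "untreated": (55, 80)}

def rate(recovered_total):
    recovered, total = recovered_total
    return recovered / total

for name, g in (("mild", mild), ("severe", severe)):
    print(name, round(rate(g["treated"]), 3), ">", round(rate(g["untreated"]), 3))

overall_treated = (81 + 192, 87 + 263)    # (273, 350) -> 0.780
overall_untreated = (234 + 55, 270 + 80)  # (289, 350) -> 0.826
print("overall", rate(overall_treated), "<", rate(overall_untreated))
```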
I think one big problem is just the lack of basic awareness of causal issues on the part of the general population (bad), scientific journalists (worse!), and sometimes even folks who do data analysis (extremely super double-plus awful!). Thus much garbage advice gets generated, and much of this garbage advice gets followed, or becomes conventional wisdom somehow.
That depends. Mostly they are used as single-point summaries of distributions and in this role they can be fine but can also be misleading or downright ridiculous. The problem is that unless you have some idea of the distribution shape, you don’t know whether the mean you’re looking at is fine or ridiculous. And, of course, the mean is expressly NOT a robust measure.
The Eurythmics said it best:
I travel the world and the seven seas
Everybody’s looking for something
Some of them want to use you
Some of them want to get used by you
Some of them want to abuse you
Some of them want to be abused
I’ve been struggling with how to improve in running all last year, and now again this spring. I finally realized (after reading a lot of articles on lesswrong.com, and specifically the martial arts of rationality posts) that I’ve been rationalizing that Couch to 5k and other recommended methods aren’t for me. So I continue to train in the wrong way, with rationalizations like: “It doesn’t matter how I train as long as I get out there.”
I’ve continued to run intensely and in short bursts, with little success, because I felt embarrassed to have to walk any, but I keep finding more and more people who report success with programs where you start slowly and gradually add in more running.
Last year, I experimented with everything except that approach, and ended up hurting myself by running too far and too intensely several days in a row.
It’s time to stop rationalizing, and instead try the approach that’s overwhelmingly recommended. I just thought it would be interesting to share that recognition.
You might also want to work on eliminating embarrassment.
Any guides on how to do that?
Rejection Therapy is focused in that direction.
That game is terrifying just to think about.
Awesome, do you have more like that?
Maximize embarrassment until you’re no longer capable of feeling shame from the foibles and sensibilities of mere humans.
Psychological theories like IFS would recommend charitably interpreting the inclination to embarrassment as a friendly impulse to protect oneself by protecting one’s reputation. For example, some people are embarrassed to eat out alone; a charitable interpretation is that part of their mind wants to avoid the scenario of an acquaintance of theirs seeing the lonely diner and concluding that they have no friends, and then concluding that they are unlikable and ostracizing them. Or a minor version of the same scenario.
Then one can assess just how many assets are at stake: realistically, nothing bad will happen if one eats out alone. Or one might decide that distant restaurants are safe. The anticipation of embarrassment might respond with further concerns, and by iterating one might arrive at a more coherent mental state.
Have you considered not running as your primary exercise program? If you aren’t specifically going for the performance of running, I would shelve it and instead cut calories (assuming you have extra weight to lose) and lift heavy things at the gym. Distance running is great for distance running.
I have been in multiple running groups and they are great for achieving goals like 26.2 miles, but after that, I wanted to optimize for looks and not for long distances (any more).
Unfortunately, I live in a rural area where gyms are hard to come by. I have enjoyed running for its own sake in the past; that’s part of why I want to get back into running shape. But I will try to add in some bodyweight exercises alongside my running.
You don’t need a gym to exercise. Google “paleo fitness”; Crossfit is full of advice about how to build a basic gym in your garage; etc.
That’s great, it would be such a problem to not like running and not live near a gym. Good luck.
The best general advice I can give you is:
Be honest with yourself when determining your current abilities. There’s no shame in building slowly. It just means you get to improve even more.
Not every day is a hard day. There are huge benefits to varying your workouts. If you’re running about the same distance each day you run, you’re doing it wrong. Some days should be shorter, more intense intervals broken up by very slow jogs or walks; others should be “active recovery” days of short, slow runs; and on still others you might go for distance and a sustained pace. Just to give an idea, even elite athletes will not usually do more than 2-3 hard (interval) days each week. You will want to start with 0 or 1.
Watch your volume: Slowly increase your total miles / week over time. Make sure you start low enough not to get repetitive stress injuries.
I was once a fairly successful runner and have a lot of experience with designing training programs for both distance running and weightlifting. I’d be happy to help you design your running program or to look over your program once you do some research and put something together. Let me know!
A side question: from a joint-stress point of view, is it better to have a heavily cushioned running shoe, or to go for minimal shoes and avoid heel striking and running on hard surfaces?
That’s a tough question, and one I’ve actually struggled to answer myself.
If you ask anyone in the mainstream competitive running community, they’ll tell you to get a good, cushioned running shoe, but also to work on your form to develop a good midfoot strike. Runners often do barefoot drills and other drills to develop a proper midfoot strike, but still run in cushioned running shoes. They’ll also go running barefoot on the beach if they can, to improve foot strength and form.
Repetitive stress injuries (shin splints, stress fractures, joint and tendon problems) are the single most common category of injury in runners, and they have taken me out of the game many times, even when I was actively trying to prevent them and had proper coaching. Proper shoes and good running form are both supposed to reduce these injuries.
However, there are a lot of successful barefoot runners, and I do think there is something to learn from the ancestral health and fitness communities. There are a lot of runners who go completely barefoot and a lot who use minimalist footwear like Vibrams, and they don’t report any issues. They claim that your body mechanics are better barefoot, and I have to agree that we were built to run barefoot. However, a lifetime of wearing shoes could definitely make a difference in whether or not running barefoot is still a good idea.
I suspect that you just have to be a lot more careful with barefoot running, and that it’s probably not good for your joints or back long-term to run barefoot or minimal at high volume for years. But honestly, I don’t know if it’s any worse than doing it in cushioned running shoes. Runners in proper shoes also have joint problems when they get older.
Do you mean walk-run-walk-run in a single session? Or that you do short intense sessions with no walking?
I would just set up short runs around my apartment that were all “run”, no walk, and gradually increase my distance. But one of the problems was that I just wasn’t out there very long. It was a convenient excuse, when I was busy, to just run a 15-minute loop instead of run/walking for 30+ minutes.
Is there any specific reason why you’ve been avoiding those approaches (e.g. where you slowly increase)? You mention that you told yourself “It isn’t for me,” but haven’t told us why.
Something I’ve had trouble with now that I’m starting to run is finding a running/jogging speed that takes as little energy as possible while still not being a walk. The last time I ran I finally found it and greatly decreased the time I spent walking. It might be helpful to find that speed. I can guarantee you that it will feel very slow.
It’s mostly just the contrast between how I learned running in high school cross country and what’s actually recommended now. There were no real rest days: we ran 5 days a week, and we were supposed to run at least once on the weekends. We did hill reps two days a week and long runs on the other days. And we were all on the same training program regardless of where we started from.
What I’ve read recently is that about 4 days a week is a better way to do it, at least during your early progress, with a mixture of long slow runs, adding some interval workouts once you’ve reached a good level of fitness.
Research on mindfulness meditation
Mindfulness meditation is promoted as though it’s good for everyone and everything, and there’s evidence that it isn’t—going to sleep is the opposite of being mindful, and a mindfulness practice can make sleep more difficult. Also, mindfulness meditation can make psychological problems more apparent to the conscious mind, and more painful.
The difficulties which meditation can cause are known to Buddhists, but are not yet well known to researchers or the general public. The commercialization of meditation is part of the problem.
It’s opposite in some regards but not all of them. Both sleep and mindfulness meditation usually lead to very little beta wave activity in the brain.
I don’t have a Zeo myself but it wouldn’t surprise me if I could reach a state in meditation where I’m mindful but the Zeo labels me as sleeping.
As far as researching whether meditation improves how well you feel, I think that’s hard. Five years ago, if you had asked me how I’m feeling, then a real answer might have been “good” or “bad”, or maybe one of seven points on a Likert scale. Today a full answer might take 5 minutes, because I have awareness of a lot of stuff that goes on inside myself. If you simply compared today’s values to those of 5 years ago, I don’t think that would tell you very much. For a while I tried to keep numbers about my daily happiness level, but I eventually gave up because it didn’t seem to provide useful insight: the reference points aren’t stable.
After I started meditating mindfully, my anxiety got worse, a lot worse. I talked about this on meditation forums and they said it means that “I’m working on my problems” and I should just keep doing it more and more and I would somehow overcome it. Well, I tried to, but my anxiety only got worse. Currently I’m taking a short break from meditating.
How do you know that you are meditating mindfully? If you ask that question on a meditation forum they have no way to know whether you are doing things right.
If you want help in this venue it would help if you describe exactly what you think you are doing when you are “meditating mindfully”. It would also help to know exactly what you observed that makes you conclude that your anxiety got worse.
After I made that post I thought I should have put “tried” before “meditating mindfully”, but then I forgot about it. You’re right, I’m probably not doing it correctly.
I focus on my breath, but it’s of course really hard for me and I don’t know if I’m doing it properly. More specifically, I focus on the feeling when air goes in and out of my nose. The problem is that I can either focus on my breath and breathe forcefully, or daydream and breathe naturally. This process feels like a cat chasing its tail. In “Mindfulness in Plain English” they said that I shouldn’t control my breathing, but I don’t know how to do that. It’s really hard for me to focus on my breath without trying to control it.
What exactly did I observe? Usually I feel more tense and focused on myself after I’ve meditated. I’m not sure I can give more specific examples because I haven’t kept a diary about this.
As far as I understand, some traditional Buddhists do advocate feeling the air going in and out of your nose. I think that practice might make sense for people who aren’t present in their head. For Western intellectuals who already spend a lot of time in their head, I think it makes more sense to feel the breath in the belly.
Here on LW we also don’t meditate with the main purpose of seeking spiritual experiences. Opening the third eye isn’t the point of the exercise for us, though it might be for some Buddhists who like focusing on the breath at the nose; when I do that, I feel that part of my attention is on the chakra generally called the third eye.
To put it in slightly more New Age language: focusing on your belly instead of your nose will make you more grounded.
From a more Western perspective, good German physiotherapy holds that it’s beneficial to breathe with the belly instead of breathing higher in the body.
My first meditation book was by Aikido master Koichi Tohei. Tohei advocates a type of meditation where one is focused on the tanden as the locus of attention while meditating. The tanden is a chakra about two finger-breadths under the belly button. Tohei also calls it the center of the body, or the one-point.
After googling around a bit, the solar plexus might also be a good point, but you don’t need to focus on a single point. The belly is good enough as an area.
If you are completely unable to be in a state where you neither control your breath nor daydream, start by taking deep, long breaths while staying focused on the belly, and go for maximum length of breath.
It’s unfortunate that I have to use words like chakra while speaking on LW, but those words have some use. You don’t need to believe that chakras really exist; just take them as the crude approximations that people with meditation experience use. Unfortunately I also don’t have good scientific evidence to back up what I said.
Meditation increases self-awareness. That’s the point. The interesting thing would be whether you are also more tense by objective measures, such as increased pulse or blood pressure. If you live together with other people, you might also have them rate your level of tension.
The Feeling Good Handbook gets frequently referenced on LW. In it, Burns advocates that people who want to self-treat anxiety spend 5 minutes every week filling out a questionnaire that measures their anxiety levels.
If I were struggling with anxiety that I wanted to go away, I would make myself a Google form with Burns’s anxiety checklist and answer it every Sunday to see whether I’m improving as time goes on.
Having a free text diary is also valuable.
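To make the weekly numbers easy to act on, the tracking itself can be trivial; here is a minimal sketch with invented dates and scores, where each entry is just the total from the questionnaire:

```python
# Minimal anxiety-score tracking sketch; dates and scores are made up.
from datetime import date

log = {
    date(2014, 4, 6): 41,
    date(2014, 4, 13): 38,
    date(2014, 4, 20): 33,
}

days = sorted(log)
print("latest score:", log[days[-1]])
print("change since first entry:", log[days[-1]] - log[days[0]])  # negative = improving
```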
I tried what you suggested. I sat in one position for 50 minutes and tried to focus on the feeling of breathing in my belly (see how I tabooed my earlier use of “meditating mindfully”?). Here’s what I observed:
At first it was a bit hard to find the breathing, it’s more subtle than the feeling in my nostrils. But I was able to occasionally focus and my focus gravitated towards that region close to the belly button. It feels better to focus on my belly than on my nostrils. Focusing on nostrils feels heavy and shallow, while focusing on belly feels a bit more light and deep.
What surprised me most was that I felt like I was actually able to focus on the feeling in my stomach without trying to control my breathing as much. At least I was able to more easily convince myself that this was the case. It feels like the nostrils are so close to where the act of breathing happens, while my belly is more distanced from this thing that does the breathing. It feels more like focusing on an external object.
It was mostly fantasizing and daydreaming, and I was able to focus only for short periods of time, maybe a few seconds and just occasionally. I got obsessive-compulsive thoughts like “focus on your nostrils”, but I tried to be mindful about those and mostly succeeded. I was a bit tense, and at least at one point I noticed my heartbeat was quite fast, which made me more anxious. Part of this tenseness was due to the fact that I chose a poor posture when I started. I decided not to change this posture along the way.
I feel more relaxed than when I started and I don’t usually feel like that when I’ve meditated. So overall, a positive experience, placebo or not.
Great. Thank you for sharing your experience. It sounds like you are moving in the right direction.
The fact that your heartbeat gets fast and an emotion comes up that makes you anxious is no bad sign. If you stay present and your body processes the emotion, it’s dealt with. After processing strong emotions my body usually feels more relaxed than before. In meditation, tension can rise to uncomfortable levels; then the body recognizes the tension as unnecessary and it falls away.
I think 50 minutes is probably too much for you at your stage. Staying focused for 50 minutes is very hard and you are likely to lose your focus.
In your situation I would rather go for 10 or 15 minutes when meditating alone. Set an alarm clock. Once you reach the point where you feel like you can focus for longer periods of time, you can increase the time you meditate.
If you want to spend more time on this, writing down what you experienced, like you just did, is very useful. It allows you to make sense of the experience. That’s what diary writing is about.
I personally keep information like that in my own Evernote account and don’t have a physical diary lying around that someone could notice. You don’t need to tell the kind of people who would look down on you for having a diary that you have one.
The point of writing things down in a diary is to refine your thinking. You force yourself to bring clarity into your thoughts. For me, writing a post on LW like the one above, about why I recommend focusing on the belly instead of the nose, refines my own thinking about meditation. Using you and LW as an audience instead of simply writing down my thoughts in a private journal has advantages and disadvantages. When writing for an LW audience I have to be more careful with terms like chakra than when I’m just writing for myself. Writing emails to friends can also be useful for refining your thoughts. You probably have a bunch of different friends with different perspectives on life and different levels of trust when it comes to sharing personal experiences.
All my writing still goes into my Evernote account. Meditation can lead to perceiving a bunch of new things that you never experienced before. If you don’t want to become a mystic, putting cognitive labels on experience is important to keep your orientation and be able to navigate the world.
That was the main point of the first half of my comment. At this point in time, understanding the “why” isn’t that important.
That’s an interesting way of putting it. With time your belly won’t feel like an external object anymore but will feel internal. At that point a lot of your anxiety issues will likely solve themselves.
I’m not sure if I got anything else out of your post, but I will try to focus on my belly the next time I meditate. The chakra and third eye stuff didn’t bother me, just confused me a little, but I have a vague feeling of what they might describe. I’ve actually downloaded the Feeling Good Handbook, but reading the whole book is currently a pretty daunting task. That questionnaire seems easy, so it might be something I could actually do. A diary is also something I’ve tried to keep, but akrasia has prevented me from doing it frequently (I’m also embarrassed if someone notices I’m keeping a diary, which is of course really stupid and something I should work on).
Thanks for being kind, I expected a more hostile reply.
One piece of advice, sort of a shot in the dark but aimed at addressing a common failure mode. If you were trying to force yourself to meditate while sitting in an uncomfortable position or for excessive lengths of time, don’t do that. All you’re doing is training yourself to be pissed off and tense about meditating. Try just sitting comfortably in a chair and focusing on your breath for ten minutes, or even just five minutes at first if it’s really that arduous.
I agree. Just be sure that you sit in a stable position.
I personally can sit comfortably in lotus. It’s a learned skill, but it’s not something you need to learn to be able to meditate, and if you focus on it at the beginning you’re focusing on the wrong thing.
Thanks, this is one of the very few meditation papers that seem to be worth reading, since as they observe:
How do I decide whether to get married?
My girlfriend of four years and I are both graduating college.
I haven’t found employment yet, and she’s returning home for work.
As near as I can tell, we’re very compatible.
Pros
We are very fond of each other, get a lot of value out of each other’s time.
We’ve been able to talk about the subject sanely.
Status
We agree on religion and politics.
Married guys make more on average, but the arrow of causality could point in either direction or come from something else.
Financial benefits
Cons
Negative Status associated with marrying young?
No jobs yet, no clear home or area to live in.
She sometimes gets mad at me for things I’m “just supposed to know” to do, not do, say, or not say. I’m not sure if she’s right and I’m a jerk.
She has said that she doesn’t want to marry me if she’s just my female best friend that I sleep with. But I don’t know how to evaluate what she’s asking. There are a number of possibilities. Maybe I don’t feel the requisite feelings, and thus she wouldn’t want to be married. Maybe I do have the feelings but have no way to evaluate whether I do or not. Maybe I’m never going to feel some extra undetected thing X, and so I should just go through the motions of saying that I do, and our marriage prospects are entirely unchanged. Maybe this is just some signalling ritual we have to go through.
We are both concerned that I’ve never really had a relationship with anyone but her, so there are no points of comparison for me to make.
In your list you didn’t mention the topic of having children. If you marry someone with the intention of spending the rest of your life together, I think you should be on the same page about having children before you marry.
What exactly do you think/hope will change between the current situation (which I assume involves you two living together) and the situation if you were to marry?
Don’t get married unless there is a compelling reason to do so. There’s a base rate of 40-50% for divorce, and at least some proportion of existing marriages are unhealthy and unhappy. Divorce is one of the worst things that can happen to you, and many of the benefits of marriage to happiness are because happier people are more likely to get married in the first place.
What are her feelings about you? Are you “just” her “male best friend that she sleeps with”? Your post comes across as rather asymmetric.
Aren’t you “both concerned” that she had too many relationships and so may decide that you are not for her precisely because she has these “points of comparison”? I suspect that she is the dominant partner in this relationship, possibly because she is more mentally mature, and this is often a warning flag.
Do you get mad at her for things she is just supposed to know to do, say or not say?
Anyway. DO NOT GET MARRIED until you figure out how to be an equal in this relationship (and if you think that you already are, then you are fooling yourself).
I don’t know what significance marriage has for you beyond the symbolic. IMO the truly critical point is having kids. You probably want to have a stable income before that.
Regarding things you’re “just supposed to know”: the same thing happens to me with my wife. It hasn’t stopped us from being together for 10 years and raising a 4-year-old son. Different people see things differently and have different assumptions about what is “obvious”. The important thing is being mutually patient and forgiving (I know it’s easier said than done, but it’s doable).
Regarding the “extra feeling”. Don’t really know what to tell you. It is difficult to compare emotional experiences of different people. When our relationship started, it was mad, passionate infatuation. Now it’s something calmer but it is obvious to me we love each other.
I had few relationships apart from my wife and virtually no serious relationships. Never bothered me.
And married women make less, so even assuming the arrow of causality is entirely from marital status to income it’s not clear to me what would happen to your combined income.
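To make the ambiguity concrete with invented numbers: if each partner earns $50,000, a 10% male premium adds $5,000 while a 3% female penalty subtracts $1,500, for a net household gain of $3,500; swap the magnitudes (3% premium, 10% penalty) and the household loses $3,500 instead. The sign of the combined effect depends entirely on the relative sizes of the premium and the penalty.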
Even if your combined income decreases, your combined consumption probably increases, because many goods are non-rivalrous in a marriage situation. See here for a discussion.
I believe you meant decreases.
I think he means increases. If your consumption decreases, then your standard of living is falling and that doesn’t sound good at all.
Good point, but doesn’t that also apply to unmarried cohabitation?
EDIT: BTW, the bottom of your post says “[...] marriage makes family income go up via the large male marriage premium minus the small female marriage penalty”, which answers my question upthread.
It also applies in interesting ways to communal living.
In fact, given the magnitude of the effect, the question becomes “Why would anyone ever live alone?”. And the fact that a lot of people do this, by choice, leads into interesting directions...
Yes it does, so it’s not really an argument for the act of marriage itself, but on marriage-like behaviors.
As a starting point, run through this: http://www.justfourguys.com/female-divorce-risk-calculator/
Also, you should be the reluctant one, not her.
And, if neither of you is willing to live an at-least vaguely biblical marriage, then civilization would probably be better off with you just donating sperm to a sperm bank, keeping her from sleeping around, and encouraging her to marry someone who is stereotypically Christian and for whom she would be willing to convert.
Well, you are. Pending more life experience, find the most un-politically-correct Game blogger you can stomach and start there.
This isn’t a question, just a recommendation: I recommend everyone on this site who wants to talk about AI familiarize themselves with AI and machine learning literature, or at least the very basics. And not just stuff that comes out of MIRI. It makes me sad to say that, despite this site’s roots, there are a lot of misconceptions in this regard.
Not like I have anything against AI and machine learning literature, but can you give examples of misconceptions?
Not so much a specific misconception, but understanding the current state of AI research and understanding how mechanical most AI is (even if the mechanisms are impressive) should make you realize that being a “Friendly AI researcher” is a bit like being a unicorn tamer (and I mean that in a nice way—surely some enterprising genetic engineer will someday make unicorns).
Edit: Maybe I was being a little snarky—my meaning is simply this: Given how little we know about what actual Strong AI will look like (And we genuinely know very very little), any FAI effort will face tremendous obstacles in transforming theory into practice—both in the fact that the theory will have been developed without the guidance that real-world constraints and engineering goals provide, and the fact that there is always overhead and R&D involved in applying theoretical research. I think many people here underestimate this vast difference.
Some people might underestimate the difficulty. On the other hand, even if doing FAI research is immensely difficult, that doesn’t mean we shouldn’t do it. The stakes are too high to avoid doing the best we can.
I think that if we only start friendliness research when we’re obviously close to building an AGI, it will be too late.
I think that almost all research done before that will have to be thrown out. Maybe the little that isn’t will be worth it given the risks, but it will be a small amount.
How did you reach that conclusion? To me it seems very unlikely. For example, it seems there’s a good chance the AGI will have something called a “utility function”. So we can start thinking about what the correct utility function for a FAI is, even if we don’t know how to build an optimizer around it. We can study problems like decision theory to better understand the domain of the utility function, etc.
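To illustrate the separability I have in mind, here is a minimal sketch; everything in it is an illustrative stand-in, not a claim about how a real AGI would be built:

```python
# Sketch: the utility function is a plug-in module, separate from the
# optimizer. Names and numbers are illustrative stand-ins only.
from typing import Callable, Iterable, TypeVar

A = TypeVar("A")

def optimize(actions: Iterable[A], utility: Callable[[A], float]) -> A:
    """A maximally generic 'agent': pick the highest-scoring action."""
    return max(actions, key=utility)

# Work on "which utility function?" can proceed without the argmax:
scores = {"build_factory": 1000.0, "do_nothing": 0.0}
print(optimize(scores, scores.get))  # build_factory
```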
It’s not clear at all that AGI will have a utility function. But furthermore, bolting a complex, friendly utility function onto whatever AI architecture we come up with will probably be a very difficult feat of engineering, which can’t even begin until we actually have that AI architecture.
That’s something I’m willing to take bets on. Regardless, it is precisely the type of question we better start studying right now. It is a question with high FAI-relevance which is likely to be important for AGI regardless of friendliness.
I doubt it. IMO an AGI will be able to optimize any utility function; that’s what makes it an AGI. However, even if you’re right, we still need to start working on finding that utility function.
I question both of these premises. It could be like you or me, in the sense that it simply executes a sequence of actions with no coherent or constant driving utility function (even long-term goals are often inconsistent with each other), and even if you could demonstrate to it a utility function that met some extremely high standards, it would not be persuaded to adopt it. Attempting to build in such a utility function could be possible, but not necessarily natural at all; in fact I bet it would be unnatural and difficult.
I understand your rebuttal to “friendliness research is too premature to be useful” is “It is important enough to risk being premature”, but I hope you can agree that stronger arguments would put forward stronger evidence that the risk is not particularly large.
But let’s leave that aside. I’ll concede that it is possible that developing a strong friendliness theory before strong AI could be the only path to safe AI under some circumstances.
I still think it is mistaken to ignore intermediate scenarios and focus only on that case. I wrote about this before in a post, “How to Study AGIs Safely”, which you commented on.
I doubt the first AGI will be like this, unless you count WBE as AGI. But if it will, it’s very bad news, since it would be very difficult to make it friendly. Such an AGI is akin to an alien species which evolved under conditions vastly different from ours: it will probably have very different values.
So for example when Stuart Russell is saying that we really should get more serious about doing Friendly AI research, it’s probably because he’s a bit naive and not that familiar with the actual state of real-world AI?
I have updated my respect for MIRI significantly based on Stuart Russell signing that article. (Russell is a prominent mainstream computer scientist working on related issues; as a result, I think his opinion has substantially more credibility here than the physicists’.)
If you don’t think that MIRI’s arguments are convincing, then I don’t see how one outlier could significantly shift your perception, if this person does not provide additional arguments.
I would give up most of my skepticism regarding AI risks if a significant subset of experts agreed with MIRI, even if they did not provide further arguments (although a consensus would be desirable). But one expert clearly does not suffice to make up for a lack of convincing arguments.
Also note that Peter Norvig, who coauthored ‘Artificial Intelligence: A Modern Approach’ with Russell, does not appear to be too worried.
I mean to say that if you understand the work of Russell or other AI researchers, you understand just how large the gap is between what we know and what we could possibly apply friendliness to. Friendliness research is purely aspirational and highly speculative. It’s far more pie-in-the-sky than anti-aging research, even. Nothing wrong with Russell calling for pie-in-the-sky research, of course, but I think most people don’t understand the gulf.
When somebody says something like “Google should be careful they don’t develop Skynet” they’re demonstrating the misunderstanding that we even have the faintest notion of how to develop Skynet (and happily that means AI safety isn’t much of a problem).
I’ve read AIMA, but I’m not really up to speed on the last 20 years of cutting-edge AI research, which it addresses less. I don’t have the same intuition about AGI concerns being significantly more hypothetical than anti-aging stuff. For me that would mean something like “any major AGI development before 2050 or so is so improbable it’s not worth considering”, given that I’m not very optimistic about quick progress in anti-aging.
This would be my intuition if I could be sure the problem looks something like “engineer a system at least as complex as a complete adult brain”. The problem is that an AGI solution could also be “engineer a learning system that will learn to behave at human-level or above intelligence within a human life timespan or faster”, and I have much shakier intuitions about what the minimal required invention is for that to happen. It’s probably still a ways out, but I have nothing like the same certainty of its being a ways out as I have for the “directly engineer an adult human brain equivalent system” case.
So given how this whole thread is about knowing the literature better, what should I go read to build better intuition on how to estimate limits for the necessary initial complexity of learning systems?
What do you mean with the term “mechanical”?
I’m guessing Punoxysm’s pointing to the fact that the algorithms used for contemporary machine learning are pretty simple; few of them involve anything more complicated than repeated matrix multiplication at their core, although a lot of code can go into generating, filtering, and permuting their inputs.
I’m not sure that necessarily implies a lack of sophistication or potential, though. There’s a tendency to look at the human mind’s outputs and conclude that its architecture must involve comparable specialization and variety, but I suspect that’s a confusion of levels; the world’s awash in locally simple math with complex consequences. Not that I think an artificial neural network, say, is a particularly close representation of natural neurology; it pretty clearly isn’t.
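For what it’s worth, here is the “mostly matrix multiplication” point in miniature: a neural-net forward pass is two matmuls and a cheap nonlinearity. The weights below are random placeholders; nothing is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)         # input features
W1 = rng.standard_normal((32, 10))  # first-layer weights
W2 = rng.standard_normal((2, 32))   # second-layer weights

h = np.maximum(0.0, W1 @ x)  # matrix multiply + ReLU
y = W2 @ h                   # another matrix multiply
print(y)
```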
I agree with you on both counts, in particular that most human cognition is simpler than it appears. But some of it isn’t, and that’s probably the really critical part when we talk about strong AI.
For instance, I think that a computer could write a “Turing Novel” that would be indistinguishable from some human-made fiction with just a little bit of human editing, and that would still leave us quite far from FOOMable AI (I don’t mean this could happen today, but say in 10 years).
OK. I’ve seen a lot of people here say that Eliezer’s idea of a ‘Bayesian intelligence’ won’t work or is stupid, or is very different from how the brain works. Those familiar with the machine intelligence literature will know that, in fact, hierarchical Bayesian methods (or approximations to them) are the state of the art in machine learning, and recent research suggests they very closely model the workings of the cerebral cortex. For instance, refer to the book “Data Mining: Concepts and Techniques, 3rd edition” (by Han and Kamber) and the 2013 review “Whatever next? Predictive brains, situated agents, and the future of cognitive science” by Andy Clark: http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=8918803
The latter article has a huge number of references to relevant machine learning and cognitive science work. The field is far far larger and more active than many people here imagine.
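If anyone wants the Bayesian core of those methods in one step, it is multiply-and-normalize; hierarchical models just stack this across layers of hypotheses (toy numbers below):

```python
# One Bayesian update: posterior is proportional to likelihood * prior.
prior = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.8, "H2": 0.3}  # P(data | hypothesis), made up

unnormalized = {h: likelihood[h] * prior[h] for h in prior}
z = sum(unnormalized.values())
posterior = {h: p / z for h, p in unnormalized.items()}
print(posterior)  # {'H1': 0.727..., 'H2': 0.272...}
```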
Nonpaywalled Clark link: http://users.monash.edu/~naotsugt/Tsuchiya_Labs_Homepage/Bryan_Paton_files/Commentary%20.pdf
Do you have a recommendation for a resource that explains the basics in a decent manner?
What would you consider the “very basics”?
What are some of the most blatant? Sorry to ask a question so similar to Squark’s.
A koan:
A monk came to Master Banzen and asked, “What can be said of universal moral law?”
Master Banzen replied, “Among the Tyvari of Arlos, all know that borlitude is highly frumful. For a Human of Earth, is quambling borl forbidden, permissible, laudable or obligatory?”
The monk replied, “Mu.”
Master Banzen continued, “Among the Humans of Earth, all know that friendship is highly good. For a Tyvar of Arlos, is making friends forbidden, permissible, laudable or obligatory?”
The monk replied, “Mu,” and asked no more.
Qi’s Commentary: The monk’s failure was one of imagination. His question was not foolish, but it was parochial.
Shouldn’t Banzen’s second question be something like “For a Tyvar of Arlos, is making friends frumful, flobulent, grattic, or slupshy?”?
I don’t really know anything about the Tyvar of Arlos, so I’m pretty confused on this front, but I’m fairly sure you’re relating a Talmudic anecdote, not a Zen one ;-). “Forbidden, permissible, laudable, or obligatory” says to me that we’re contemplating halachah.
I would hope you don’t know anything about them—they were made up on the spot. ^_^
And yes, I suppose the style here might well have been influenced from more than one place.
Sounds to me like the master’s jumping to more conclusions than the student is, here. His response makes sense if he wanted to break a sufficiently specific deontology (at least at interspecies scope), but there are a lot of more general things you could say about morality that aren’t yet ruled out by the student’s question.
How is this a failure of imagination? Why is the question parochial?
Parochial because he mistook a local property of mindspace for a global one; unimaginative because he never thought of frumfulness when considering what things a mind might value. “Good” is no more to a Tyvar than “frumful” to Clippy or “clipful” to a human.
This is silly. Good is a quite useful concept that easily stretches to cover entities with different preferences, but even if it does not, it’s STILL meaningful, and your Clippy example shows us exactly why. The meaning of clipful, something like “causes there to be more paperclips” or whatever, is perfectly clear to, if not really valued by, humankind.
Is “good” what many sorts of intelligent beings strive to do? Then “good” is such things as self-improvement, rationality, survival of one’s values, anti-counterfeiting of value, personal survival, and resource acquisition. For any intelligent being that does not expend energy to survive will be washed away by entropy. And so, “good” is universal. (The sage Omohundro does not call it “good”, though; that is a novice’s word.)
Is “good” the noise that one group of one species of social creatures say when they comfort and praise their tribemates? Then “good” is such things as singing with a regular melody and rhythm, or setting up certain sorts of economic deals among tribemates and others; or leading the tribe’s warriors to dismember the others instead of being dismembered themselves; and it is parochial.
Ah, I see I was unclear. By “is no more to a Tyvar” I meant “is no more significant to a Tyvar” rather than “is no more comprehensible to a Tyvar.” Sorry; my fault.
How good is the case for taking adderall if you struggle with a lot of procrastination and have access to a doctor to give you a prescription?
It worked reasonably well for me.
For what kind of timeframe? Do the effects stay the same over time? Are there meaningful side effects?
Disclaimer: This stuff varies from person to person. I had already tried a number of similar medications before going on generic Adderall, all without success. I’ve been on it now for almost a year, and it’s had a noticeable effect on my ability to concentrate on tasks and feel motivated to complete them. Often when I am struggling to focus on something to the point of not getting anything done, I’ll suddenly realize that I didn’t take my pill that morning. As far as I can tell these effects have been pretty consistent since the first week or so after I started taking it, although it’s possible that there was a “ramping up” period that I’ve since forgotten about. In terms of side effects, I didn’t need to take caffeine for the first few weeks while I acclimated to the drug, and I was sort of jittery in an irritating way. That died down, although it remains stupendously unwise for me to take the pill around 10:30 or later, since it then keeps me up the following night.
What other drugs did you try?
Strattera and Focalin, and possibly another one that I’m forgetting.
When you say “without success” do you mean that these drugs did nothing useful, or just that they weren’t good enough? I don’t know strattera, but I think of methylphenidate (focalin) as very similar to amphetamine (adderall). Certainly methylphenidate is weaker than amphetamine, but I’d expect it to be a pretty good predictor of whether the amphetamine would work. So I am very surprised that I think you are saying that the one worked and the other didn’t, which is why I’m asking for clarification.
Strattera was actually quite a while ago (sadly I don’t remember the generic name) but I’m pretty sure it had no noticeable effect. I should probably clarify that the focalin actually did have a noticeable effect, but it was very weak and it had the same sleep-disturbing side effects as adderall, so it was not really worth it.
Thanks!
So the quantified self (QS) community has been around for a while. Just as bodybuilding groups should be excellent test beds for what kinds of exercises and chemicals yield good results, the QS community should by now have converged on a preferably small, low-cost set of measures you should track about yourself. Do these exist? They could be any blood measure, rhythm, time, psychological value, net worth …
There’s no standardized list.
Basically it turns out that it’s really hard to get people to measure specific stuff, and it’s often a lot more useful if people measure values that they themselves care about.
Agreed. QS seems most helpful for providing people tools to attack problems they are having (sleep, weight, etc.) rather than make a normal person superhuman.
I wouldn’t frame it that way. Talking of “problems” indicates that you compare yourself to the average person. There’s no reason why you have to do that. You can also track a variable where you are already above average and work on improving that variable.
On the other hand you have to care about improving on that variable.
I think that Quantified Mind provides some high-value tests. So long as you’re willing to sit down and take a test, you can get data on:
Reaction Time
Visuo-spatial memory
Executive Function
Working Memory
Verbal learning
Motor function
Also, looking at what Gwern tracks, it seems helpful to have long-run data on subjective mood and energy. I randomly sample myself on that with PACO. PACO can allow you to poll yourself on any kind of thing you can imagine, like whether you’re sitting, standing, or walking, or whether you’re in public or private.
Edit: Added detail on QM.
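The random-sampling idea is simple enough to sketch; this picks prompt times uniformly within waking hours (a toy sketch of experience sampling, not how PACO itself works):

```python
# Toy experience-sampling scheduler; not PACO's actual mechanism.
import random
from datetime import datetime, timedelta

def sample_times(n=5, start_hour=9, end_hour=22):
    """Return n random prompt times between start_hour and end_hour today."""
    midnight = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
    window = (end_hour - start_hour) * 3600  # waking window in seconds
    offsets = sorted(random.randrange(window) for _ in range(n))
    return [midnight + timedelta(hours=start_hour, seconds=s) for s in offsets]

for t in sample_times():
    print(t.strftime("%H:%M"), "- mood? energy? sitting, standing, or walking?")
```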
I’ve been reading about maximizers and satisficers, and I’m interested to see where LessWrong people fall on the scale. I predict it’ll be significantly on the maximizer side of things.
A maximizer is someone who always tries to make the best choice possible, and as a result often takes a long time to make choices and feels regret for the choice they do make (“could I have made a better one?”). However, their choices tend to be judged as better; e.g., maximizers tend to get jobs with higher incomes and better working conditions, but to be less happy with them anyway. A satisficer is someone who tries to make a “good enough” choice; they tend to make choices faster and be happier with them, despite the choices being judged (generally) as worse than those of maximizers.
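The two strategies are easy to state as search procedures; a toy sketch with invented option “scores”: the maximizer scans everything, the satisficer stops at the first option that clears a bar.

```python
# Maximizing scans every option; satisficing stops at "good enough".
options = [("diner", 6), ("thai place", 8), ("food cart", 7), ("bistro", 9)]

def maximize(opts):
    """Check every option, take the best (slower, more room for regret)."""
    return max(opts, key=lambda o: o[1])

def satisfice(opts, good_enough=7):
    """Take the first option that clears the bar (faster, usually fine)."""
    for name, score in opts:
        if score >= good_enough:
            return name, score
    return maximize(opts)  # nothing cleared the bar, so fall back

print(maximize(options))   # ('bistro', 9), after checking all four
print(satisfice(options))  # ('thai place', 8), stopped at the second
```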
If you want, take this quiz
And put your score into the poll below: [pollid:682]
I wonder what the person who submitted the number 1488 was thinking. (Maximizing their answer, perhaps.)
The quiz seems to be targeted at people who are different from me. I don’t watch TV, so it’s hard for me to give an answer about channel surfing. I don’t listen to the radio. The same goes for renting videos.
That quiz looks like it could use an update to fit modern society. It was hard to answer questions about “channel surfing” or “renting videos” in the modern era of Hulu, Netflix, and Amazon Prime. Also, thinking back to the days of actual video rental stores, it was much easier to choose a movie there than it is to choose one on Netflix. (Possibly because the Netflix selection tends towards “second-rate movies I’ve never heard of OR first-rate movies that I’ve already watched or am not interested in”.)
Anyways, I am a natural maximizer, which causes lots of stress around decisions, so I’ve trained myself towards being a satisficer. I often try to think of decisions in the framework of “it doesn’t matter that much WHAT I decide to do here, so long as I just make a decision and move forward with it”.
I think about research where they show that the hardest decisions are the least important (if it was obvious which option was significantly better, then it wouldn’t be a hard decision.) I think about research where they show that people are happier with decisions when they can’t back out of them, so don’t second-guess them. I think about cost-benefit analysis and how maximizing that particular decision probably isn’t worth the time or stress.
A specific example: I tend to have trouble deciding what to order at restaurants. Knowing that whatever they serve at a restaurant is going to be relatively good, it’s not that important what I decide. So when the waitress asks if everyone is ready to order I say “yes”, even though I’m not ready, knowing that I will have to choose SOMETHING when it gets to me, and in reality I would be happy with any of the options.
Giving neutral answers to every question is ‘maximizer tendencies’, which seems odd.
You mean alternately picking 3 and 4? I was momentarily puzzled because seven is an odd number but I assume that’s what you mean. If so, hmm, that is odd.
Neutral would mean 4 for each one. (123 4 567.)
It’s not necessarily odd for neutral answers to count as “maximizing tendencies”—perhaps most people lean distinctly towards satisficing in the situations described by the questions.
Derp derp derp. Clearly I need to review the difference between odd and even numbers.
A good point about the maximisation tendencies, too, although it strikes me as a little implausible that this was deliberate on the part of the quiz’s designer(s).
I’m an Orthodox Jew, and I’d be interested to connect with others on LW who are also Orthodox. More precisely, I’m interested in finding other LWers who (a) are Jewish, (b) are still religious in some form or fashion, and (c) are currently Orthodox or were at some point in the past.
You must have excellent compartmentalization skills.
I’m an Orthodox Jew (Modern Orthodox). Since Mr. Yudkowsky’s work is—obviously—apikorsus of the highest level, I read it l’havin ul’horos, mostly, but enjoy the thinking in it anyway.
In case anyone else is curious, it appears that:
“apikorsus” has a range of meanings including “heretic”, “damned person”, “unbeliever”; the term may or may not be derived from the name of Epicurus.
[EDITED to add: As pointed out by kind respondents below, I was sloppy and mixed up “apikores” (which has the meanings above) and “apikorsus” (which means something more like “the sort of thing an apikores says”). My apologies.]
“l’havin ul’horos” means “to understand and to teach”, as opposed to “to agree” or “to practice” or whatever. In the Bible, when the Israelites invade Canaan they are told not to learn to do as the natives do, and there’s some famous commentary that says “but you are allowed to learn in order to understand and to teach”.
[EDITED to add: I am not myself Jewish, nor do I know more than a handful of Hebrew words; if I have got the above wrong then I will be glad to learn.]
More accurately: Apikores = heretic in modern parlance; apikorsus = heretical views.
As an aside, Maimonides is the medieval Jewish authority generally associated with the view that the term apikores is not derived from the name Epicurus. Maimonides was a world-class Aristotelian philosopher and quotes Epicurus several times in his works. Since the words apikores and Epicurus have identical spellings in medieval Hebrew, the fact that Maimonides proposes a different etymological theory begs for an explanation. Maimonides’ theory is that the term is from the Aramaic “apkeirusa” (this is hard to translate, especially in the way Maimonides seems to be using it; I think it implies something like “people doing whatever they feel like instead of listening to authority figures”). I’ve long felt that this derives from the fact that the Talmud’s discussion of the term doesn’t have anything to do with dogma or heretical beliefs but rather with belittling authority figures. Maimonides himself, however, converts the term in his other works into the current usage of referring to heretical beliefs. Based on this, I strongly suspect that Maimonides thought that the original term does stem from Epicurus (who held precisely those beliefs that Maimonides identifies as heretical), but that the rabbis of the Talmud borrowed the term and used it as a sort of Aramaic-Greek pun to refer to belittling authority figures.
Also in case anybody else is curious, Modern Orthodox is as opposed mainly to Ultra-Orthodox (also known as “hareidi” or “frum”). Hassidim are their own sub-group of Ultra-Orthodox.
As an interesting intellectual challenge, try steelmanning some of the hareidi sociopolitical positions, such as their extreme opposition to the Israeli draft law. And it does need steelmanning—I personally know several very well-thought-out, very smart, very well-meaning, very knowledgeable rabbis who strongly agree with the hareidi positions.
I think that actually, if you accept a certain basic worldview, they have a rather strong case. I strongly disagree with that worldview, but that’s a different matter.
Let’s lay it out:
Axiom 1: Everything happens according to God’s will.
Axiom 2: If we behave righteously, God’s will will be favourable.
Example: Again and again in the past, this has happened. “בכל דור ודור עומדים עלינו לכלותינו והקדוש ברוך הוא מצילנו מידם” [Rough translation: “In each and every generation our foes have tried to destroy us, and each time the Holy One Blessed Be He saves us from them.]
Corollary 1: If we are righteous, we can expect this to carry on in the present and the future.
Axiom 3: The most righteous thing to be doing is to be studying the Holy Texts.
Lemma: We need to have as large a number of people as possible studying in yeshiva as their day-to-day occupation.
Proof of the lemma: Follows from Corollary 1 and Axiom 3.
Proposition: “Much as it pains us, we acknowledge that not everyone has it in them to spend all day studying Torah. We don’t want to force people who don’t want to study in yeshiva to do so (much as it aches the very bottoms of our souls), but at least you can let those who want to do so get on with it, and not waste their time on your secular ‘army’ nonsense, which has nothing to do with our defense, as our only true defense is God.”
Proof of the proposition: The lemma says that we need lots of yeshiva bochurs, so let’s provide them! If you don’t have the proper כונה we cannot effectively force you to study Torah (even if the hareidim had the political power), but at least we can take the masses of willing hareidi young men and allow them to do their job for the defense of our people, in order to protect what fragment of spiritual defense we still have.
Corollary 2: The State of Israel shouldn’t draft the yeshiva bochurs. Doing so removes our only true line of defense, and so is tantamount to the genocide of the Jewish people.
If you accept the three axioms, they lead invariably to the Proposition, and so to Corollary 2.
Q.E.D.
That would work… but the Chareidim don’t actually believe in their own defenses (they flee places getting bombed and leave the soldiers to defend people’s lives), nor are these defenses backed up in any way by halacha (they’ve misinterpreted the one text they use as a source). Also, they don’t allow anyone in their community to go into the army. Ever. And they don’t let non-Chareidim join them in their learning for defense, either.
I suspect noonehomer’s correct in part and that the chareidim don’t actually believe everything that Username says.
Also, I don’t think it’s true that they don’t let anyone go to the army (or at least it didn’t use to be true), just that it’s discouraged.
If anyone’s interested in my own thoughts, I posted them in a comment here. Just look for the comment by iarwain. Sorry, you may need to understand some Hebrew terms to understand it. But then again, you’ll need to understand Hebrew terms to read Username’s comment as well.
Yes.
What I wrote was a steelman of their positions, and must be taken as such. They themselves do not have such sophisticated mental models of the world. The answer to why they hate the IDF and the state of Israel is simply one of tribal affiliation.
[Edit: Also see point 3 in iarwain1′s linked comment. It explains the hareidi attitude to all this.]
I don’t know about “frum”. Badly educated and mistakenly chumradik is more like it.
The hardest part of reading things l’havin ul’horos is that I can’t recommend them to anyone else because it’s assur for non-learned people to read them (possibly even non-Jews, in this case). And yes, iarwain1 is correct that apikorsus is a thing and an apikores is a person. But thank you for translating.
Can you recommend such things to other people considered learned? (And: is there an important distinction between “assur” and “forbidden”? A little googling suggests that “assur” is less emphatic somehow; is that right?)
Yup, inexcusably sloppy of me. Thanks.
Almost certainly I can. But right now I’m in high school, so I don’t know that many people who qualify.
Um… assur means you can’t do it. It’s not less severe than “forbidden”, I don’t think. It literally means “bound”. It’s important to note that it doesn’t mean something’s morally wrong, but in this case, independent of the prohibition (non-literal translation of the noun form, issur) the act of reading foreign philosophy without knowledge of the corresponding arguments in one’s own can cause stupid questions, not smart ones, and is considered to be wrong, not just forbidden (in my father’s circles, anyhow).
I commend you for your self-control in not telling other people about these issues. I’d also add that for many people who aren’t the intellectual type, you’d be doing them a major disservice by exposing them to arguments that can easily cause them massive psychological stress, as I know from personal experience with people to whom that happened.
It might be worth thinking about switching to a different high school where there are more intellectual-type people around. Also, if you go to Yeshiva University for college you’ll find plenty of smart people, both staff and students, who are quite educated in foreign philosophies.
Tyler Cowen talks with Nick Beckstead about x-risk here. Basically he thinks that “people doing philosophical work to try to reduce existential risk are largely wasting their time” and that “a serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons.”
My Straussian reading of Tyler Cowen is that a “serious” MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.
A “serious” MIRI would operate in absolute secrecy, and the “public” MIRI would never even hint at the existence of such an organisation, which would be thoroughly firewalled from it. Done right, MIRI should look exactly the same whether or not the secret one exists.
Excerpts and discussion on MR: http://marginalrevolution.com/marginalrevolution/2014/04/nick-becksteads-conversation-with-tyler-cowen.html
Hackers / assassins would at best postpone the catastrophe, not avoid it.
If your idea of being serious is to train a team of hacker-assassins, that might indicate that your project is doomed from the start.
As far as I know there are still nuclear weapons in the post-collapse Soviet Union.
Pretty clear that he meant the “loose nukes” that went unaccounted for in the administrative chaos after the Soviet collapse.
How many nuclear weapons did get neutralized in that way?
Most of this information isn’t being released to the public. It is known that the entire Kazakhstan arsenal was left unguarded after the fall of the Soviet Union, and it was eventually secured by the US.
How do you know?
The official story that the Kazakhstanis tell seems to be:
US official history, as retold by the Council on Foreign Relations, seems to be:
What is he talking about? Sam Nunn?
A team of slightly more sophisticated Terminators, right?
Oh, wait… :-D
I have trouble with the statement “In the end, we’re all insignificant.” I mean I get the sentiment, which is of awe and aims to reduce pettiness. I can get behind that. But I have trouble if someone uses it in an argument, such as: “Why bother doing X; we’re all insignificant anyway.”
Because, if you look closely, “significance” is not simply a property of objects. It is, at the very least, a function of objects, agents and scales. For example you can say that we’re all insignificant on the cosmic scale; but we’re also all insignificant on the microscopic scale. We’re also insignificant for some trees in the middle of the rainforest or an alien in another galaxy. We’re almost completely insignificant to some random person in the past, present or future, but much more significant to the people around us.
To put it differently: given two actions A & B with expected utilities U & V, you should choose A over B iff U > V. Only the relative ordering of U & V is meaningful, not the absolute difference (the utility function can be rescaled arbitrarily anyway).
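Concretely, any rescaling U' = aU + b with a > 0 leaves the choice unchanged; a quick check with arbitrary numbers:

```python
U, V = 3.2, 1.7       # arbitrary expected utilities for actions A and B
a, b = 0.001, -500.0  # any positive scale and any shift

assert (U > V) == (a * U + b > a * V + b)  # the A-over-B choice survives
print("A preferred either way:", U > V, a * U + b > a * V + b)
```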
Good point. I guess you could rephrase some of the existential angst over insignificance as despairing at the tiny amounts of utility we can manipulate given a utility function scaled to the entire world/universe/whatever.
Can anyone share the story behind the Future of Life Institute?
There are a lot of famous people on their list, and presumably FLI is behind the recent article in the Huffington Post, but how much does this indicate that said famous people are on board with the claims in the article? The top non-famous person on their list studies Monte Carlo methods and volunteers for CFAR; is this an indication that they’re bringing on someone to do actual work? Or does Alan Alda being at the top of their list of advisors mean they’re going to focus on communications?
UPDATE: somervta with a large chunk of story
Sorry if this topic has been beaten to death here already. I was wondering if anyone here has seen this paper and has an opinion on it.
The abstract: “This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.”
Quite simple, really, but I found it extremely interesting.
http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf
Discussed occasionally: https://www.google.com/search?num=100&q=%22simulation%20argument%22%20site%3Alesswrong.com
The argument falls apart once you use UDT instead of naive anthropic reasoning: http://lesswrong.com/lw/jv4/open_thread_1117_march_2014/aoym
Maybe I am unfamiliar with the specifics of simulated reality. But I don’t understand how it is assumed (or even probable, given Occam’s Razor) that if we are simulated then there are copies of us. What is implausible about the possibility that I’m in a simulation and I’m the only instance of me that exists?
In the Tegmark IV multiverse all consistent possibilities exist so there is always a universe in which you are not in a simulation. The only meaningful question is what universes you should pay more attention to.
See also this.
I’ve seen it. It seemingly ignores the possibility that humanity will not go extinct [EDIT: in the near future, possibly into the tens of megayears] but will also never reach a ‘posthuman state’ capable of doing arbitrary ancestor simulations.
I think “extinct before reaching a “posthuman” stage” covers that also.
True—I guess I was reading it in the context of the usual singularitarian assumptions of quick take-off.
Honey badger intelligence
When I was a kid, our cats used a similar tactic to escape the laundry room with a closed door. One would sit on the dryer and turn the handle with both paws and the other would push against the door with their head.
Since LW is the place where I found out about App Academy… I started working through their sample problems today, and at what level of perceived difficulty / what number of stupid mistakes should I give up? Both in the sense of giving up on working toward getting into App Academy specifically [because I doubt I think fast enough / have a good enough memory to pass the challenges—the first four problems in their second problem-set took me over an hour, and I had to look a few things up despite having gone through the entire Codecademy Ruby course] and in the sense of giving up on programming as an at-least-short-term job plan?
Not sure how much of this is lack of practice (maybe implementation / avoiding stupid errors would get better with practice, but designing the algorithms takes me a while, and I’m not new to programming at all), how much is overconfidence / unrealistically high expectations wrt skill (but they say the code challenges are supposed to take 45 minutes each) and how much is that I really don’t have the talent to get into that particular program, or to not fail miserably at the job, or to develop the skills to be able to even get a programming job...
Hey, I have good news for you. I just tried those practice problems and timed myself to see if I could give you something to compare to (and for fun). I completed the first four in about an hour and 10 minutes (though I am a bit out of practice). Those practice problems are not trivial; they take some thought. I didn’t have to use any outside resources, but I did have to test quite a few things out in the terminal as I was coding it.
For background: I am self-taught, but I’ve been programming for almost 2 years. I have done freelance Rails programming. I have built multiple Rails apps from the ground up by myself. One of these is still in use by a multimillion-dollar company as part of their client onboarding program. I’ve been offered a job as a Rails developer, though I didn’t end up taking it as I had a higher-paying offer on the business end of things.
So I say don’t worry if you have a bit of trouble with it. If you felt like you were looking things up all the time, then you just need some more practice. For the algorithm design part (especially the mathy ones), look into Project Euler. It’s a great list of problems to get practice and you can use whatever language you want to find the answer, so use Ruby. Practice taking the problems apart into pieces, using helper functions, and writing the pseudocode before you actually code anything. That will make this style of thinking feel more natural.
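To illustrate that decomposition style, here is a minimal Ruby sketch of Project Euler problem 1 (sum the multiples of 3 or 5 below 1000), split into helper functions as described above:

    # One small predicate, one small aggregator -- not a single blob of code.
    def multiple_of_3_or_5?(n)
      n % 3 == 0 || n % 5 == 0
    end

    def sum_of_multiples(limit)
      (1...limit).select { |n| multiple_of_3_or_5?(n) }.sum
    end

    puts sum_of_multiples(1000)  # => 233168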
Feel free to PM me if you want to talk more.
Use the try harder, Luke.
What do you mean by “not new to programming at all”? How many hours of programming have you done? How many projects have you completed? Because unless you’ve had a job as a programmer before, or did CS as a college degree, your previous experience will be utterly swamped by App Academy. If you feel insecure about algorithms specifically, practice them specifically. If you want more practice with Ruby, maybe do Hartl’s book. The Codecademy Ruby course is not the end of the world. If programming appeals to you, prepare, apply, and let App Academy do the judging.
Edit: Remember, many people who have had jobs as programmers can’t do FizzBuzz if asked to in an interview. Retain hope.
Can anyone recommend some good sources/material (books, blogs, also advice from personal experience) for techniques of self-analysis and introspection? Basically, I’m looking for things to keep in mind while I attempt to find patterns of behavior in myself and ways of changing them. I realize that this is a very broad category, but roughly, I mean material akin to Living Luminously.
The Feeling Good Handbook. It focuses specifically on depression and anxiety, but could probably be useful for anyone.
I’d like to gauge interest in posting bulleted, section-by-section, non-fiction book summaries, with the intention of some discussion. I think that it would be of high utility to those who want knowledge but haven’t the time to read a book, and for me who wants to read a book and work through the ideas more thoroughly. The first two books I have in mind are Understanding Uncertainty which has been recommended by Lukeprog, and The Moral Animal which has been recommended by EY.
It could be chapter by chapter, perhaps in weekly open threads, or the whole book in a discussion post. The summary would mostly consist of select quotes with commentary to summarise longer passages.
The poll is just for interest; comment if you have a strong preference about which books I choose.
[pollid:683]
Something that keeps nagging at me: a young college graduate comes up to you and asks, “Where should I look for what kind of work to have the highest living standard?”
Remember, a lower nominal wage in a country where that wage has higher purchasing power could suit this individual better. Naively I might say the US or Switzerland, but something tells me I am overlooking a gigantic hole.
For someone skilled enough to choose their location and who thinks long-term enough to live very cheaply for a number of years, higher nominal wages mean higher absolute savings.
Live somewhere expensive when you’re getting started, and move somewhere cheap when you’re slowing down.
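A toy Ruby comparison (all figures invented) shows the arithmetic: what accumulates is the absolute gap between income and spending, not the purchasing-power ratio.

    # Hypothetical figures only; the point is the subtraction.
    expensive_city = { income: 100_000, cost_of_living: 60_000 }
    cheap_city     = { income:  50_000, cost_of_living: 25_000 }

    savings = ->(city) { city[:income] - city[:cost_of_living] }

    puts savings.call(expensive_city)  # => 40000 saved per year
    puts savings.call(cheap_city)      # => 25000 saved per year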
Cost of living is an overblown statistic because dumb people spend their money poorly. You can live in expensive areas on the cheap without that much effort. This isn’t to say that living in the Bay Area isn’t more expensive than many other areas, but it certainly isn’t as expensive as the cost-of-living calculations would make it seem.
Yes, provided you’re young, healthy, and childless.
What makes youth a necessary condition independent of overall health?
Mostly risk and stress tolerance.
...but also less established social ties. And less settled long-term investments (though this correlates with the risk part).
In some fields, doing freelance work for clients in a country with low purchasing power while living in one with high purchasing power is an option.
Living standard as quantified is not particularly helpful to the individual. Someone might be comparatively far better off living in Malaysia with a long-distance, high-paying freelance programming job, but I think you’ll find that being around cultural compatriots is not to be ignored.
I do not know if this is the best place, but I have lurked here and on OB for roughly a year, and have been a fellow traveler for many more. Specifically, I want to talk to any members who have ADHD about how they go about treating their disorder. On the standard anti-akrasia topics, the narrative is that if you have anxiety, depression, etc., you should treat that first, but there don’t seem to be many members here who have ADHD itself. Other forums where people talk about things like which medication is “better” are filled with a lot of bad epistemology, bad conclusions, people faking their disorder, and much more. Do any other members have it and want to talk about it? I was hoping there could be a general discussion thread for people with it, if enough of us are here. I’ve pored through studies and journals, but it is difficult to do alone.
Alright, I’m going to get enough karma and just start this myself until someone stops me. I also kind of need this, so I don’t destroy my life through some other unspecified means.
I was diagnosed with non-hyperactive ADD as a kid, though I haven’t done meds for that since middle school. It’s been suggested that it was a misdiagnosis for Asperger’s.
Does anyone have suggestions for Android self-tracking/quantified-self apps? I just got an Android phone and am hoping to begin tracking my diet, exercise, etc., as well as various outcomes, and try to find correlations.
LifeTracking
I was able to get it installed, but get a message saying “Unfortunately, LifeTracking has stopped” whenever I try to go past the first page.
The marketplace link doesn’t work. I tried searching for LifeTracking but only found LifeTrack, are they the same thing?
Probably not, though I have never had access to the Android marketplace, so I’m not sure. Have you tried installing the app directly from the downloadable .apk file?
That seems to have worked.
Sleep as Android is what I use on a tablet under my pillow to keep track of how long I actually spend trying to sleep, as well as whether my sleep cycle seems to contain coherent deep-to-not-deep cycles.
Brienne Strohl mentioned on Facebook/Twitter that she was reading “Robby’s re-sequencing of Eliezer’s Sequences”. Can anyone link me to it?
Hi, CFAR alumni here. Is there something like a prediction market run somewhere in Discussion?
Going mostly off of Gwern’s recommendation, it seems like PredictionBook is the go-to place to make and calibrate predictions, but it lacks the “flavour” that the one at CFAR did. CFAR (in 2012, at least) had a market where your scoring was based on how much you updated the previous bet towards the truth. I really enjoyed the interactional nature of it.
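For anyone tempted to prototype it, here is my guess at how that scoring could work; this is an assumption on my part, not CFAR’s actual rule, but a log score applied to successive updates has the flavour described above:

    # Each bettor replaces the standing probability. Once the outcome is
    # known, each update is scored by how far it moved the previous
    # estimate toward the truth. All names and numbers are hypothetical.
    def update_score(prev_p, new_p, outcome)
      if outcome
        Math.log(new_p / prev_p)
      else
        Math.log((1 - new_p) / (1 - prev_p))
      end
    end

    # Market opens at 0.5; Alice moves it to 0.8, Bob back to 0.6; the event occurs.
    puts update_score(0.5, 0.8, true)  # positive: Alice moved toward the truth
    puts update_score(0.8, 0.6, true)  # negative: Bob moved away from it

A nice property of scoring updates this way is that the scores telescope: the sum of everyone’s scores depends only on the opening and closing probabilities.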
What would it take to get such a thread going online? I believe one of the reasons it worked so well at minicamp was because we were all in the same area for the same period of time, so it was simple to restrict bets to relevant things we could all verify. Even if most of the posts wind up being relevant only to the local meetups, it would be nice to have them up in the same place for unofficial competition. Is that something you would use?
I haven’t been following LW discussions of Löb’s theorem etc. very much at all but this guide to the m4 macro language (a standard Unix tool) seemed to have the same character, especially this section. Dunno if this is interesting to people who are interested in Löb’s theorem.
Fairly off-topic question, but I imagine there’ll be suitable people to answer it on LW. Any recommendations for cheap and cheerful VPS hosting? Just somewhere to park a minimum CentOS install. It’s for miscellaneous low-priority personal projects that I might abandon shortly after starting, so I’m hesitant to pay top dollar for a quality product that I might end up not using. On the other hand, I want to make sure I get what little I’m paying for.
I promise I’m not a stingy unfriendly AI looking for a new home.
http://www.webhostingtalk.com/
I would like to learn drawing.
I would like to be able to have fun expressing myself via art. How long does it take to learn to draw, from zero to good enough not to be embarrassed of oneself?
What techniques are useful? Is there any sense in e.g. Drawing on the Right Side of the Brain?
Drawing from real life is especially useful for someone who is learning to draw. It teaches you that drawing is not simply about holding a pen and drawing the correct lines; it’s also about seeing and thinking correctly. We tend to think in terms of shapes, outlines and symbols, but such things don’t represent reality very well. You should be thinking in terms of form and contour.
Here’s a good video about it.
I think this post is a good start:
So draw a lot, draw from real life, draw from reference, and begin to think in 3D.
I think Drawing on the Right Side of the Brain is probably pretty effective because one of its main points is the above—that you should just draw what you see and not think in terms of symbols when you draw. The underlying idea about the brain hemispheres is pseudoscience, but that doesn’t mean the book can’t still teach useful lessons.
Drawing on the Right Side is great for this reason. The hemisphere stuff is quite tangential to the book’s utility.
If you want to see examples of “visual symbols”, look at the drawings of children. In particular, look at drawings of the human face. The prototypical symbols for something like an eye just don’t look that much like a human eye. This sounds obvious, but it’s very hard to just draw what you see, and not draw what you “think you ought” to see.
For example, imagine a face lit from one side. Visually, the illuminated side of the face will show the “expected” details: You’ll see the folds in both lids of the eye, and the fine curves of the face and ear. But the dark side of the face will look nothing like this. You’ll only see broad dark areas and broad light areas. However, most people who’d identify as “bad at drawing”, will draw the same details on both sides of the face, and will be genuinely unaware that this isn’t what they really “see”.
This isn’t to say that artists don’t make use of visual symbols, etc, but skill is the ability to take both approaches.
I’d actually advance this as an example of the fundamental analysis of one type of “talent”. The “good at drawing” people grokked the connection between seeing and drawing, and the “bad at drawing” people didn’t.
I’ve wondered for some time if something similar isn’t present in musical talent, where the basic “mindset” has to do with some connection of sound to expression, rather than a connection between sound and physical ritual.
I looked at those links JayDee posted below, namely
http://lesswrong.com/lw/8i1/drawing_less_wrong_observing_reality/
and this is what was said about Edwards’ book:
Since she recognized this, it seems my critique of the hemisphere stuff is no longer meaningful.
There’s an (unfinished) set of posts about rationality and drawing written by Raemon, Drawing LessWrong p2 p3 p4 p5, that might answer your questions (in the articles or comments).
What’s the current policy on bare downvoting, as in downvoting a comment/post without giving at least a short explanation for why one did so? I’ve had some comments downvoted recently, and without explanations it’s frustrating and a poor feedback mechanism.
There ain’t no policy. People up- and down-vote as they please.
If the alternative is no feedback at all, downvoting without explanation is a better option.
This is a common question from new participants. First, there is no policy on downvoting. There can’t be, because there is no enforcement mechanism. There are, however, recommendations, like “downvote something you would like to see less of”, which is often mixed up with “downvote everything I disagree with”, or worse, with “downvote every comment by a user I dislike, regardless of content, to force them to post less”. At least one prominent regular has been accused of this last one. Second, commenting on why you downvote tends to result in the comment being downvoted, which discourages such comments very effectively.
Yes, but only in the beginning. Once you have a few hundred karma, a downvote is just an indication that someone disliked your post, nothing more. And if all your comments are universally liked, you must be doing something wrong.
I’ve been here since the beginning of LW, off and on, actually. (This is sort of an alt account.) I just recall discussion on such a policy a while ago, but didn’t see a wikipage giving such recommendations.
It was frustrating because it was on the order of 5 or 15 downvotes, without a single reply. My initial reactions were surprise and then disappointment at the community. I’d rather not be disappointed, so I thought re-focusing on more beneficial norms would be more productive.
If the reply is thoughtful, then it’s much less discouraging (and if you, for rhetorical purposes, want to claim the downvote was from someone else, then do so), e.g. “This was probably downvoted because X and Y; what do you mean by N? Also, here are some relevant resources/links, A and B.” I guess it’s a lot more work than just downvoting, but it’s hardly discouraging if done non-patronizingly.
The goal of downvotes is to be discouraging.
I’d say that even more important than giving an explanation is not downvoting merely because you disagree. The signal transmitted by downvoting is “I don’t want to hear this”, or in simpler language, “shut up”. This should be reserved for fighting content which is offensive, spam, trolling, rampant crackpottery, blatantly off-topic, etc. Mistakes made in good faith don’t deserve a downvote. I’d say it is an extension of the “Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.” rule. The alternative is death spirals, blue-green politics, and plainly ruining the community experience for everyone.
I personally made it a rule to upvote any content with a net negative score which doesn’t deserve a downvote, even if I disagree, especially when it’s a comment by a person I’m currently arguing against. I want arguments that are discussions in which both sides are trying to arrive at the truth, not fights or two-people-showing-off-how-smart-they-are (is there a name for that?).
Not if you aim to enforce a level of discussion higher than mere absence of pathology. I like for there to be places that distance themselves from (particular kinds of) mediocrity...
...which is made more difficult by egalitarian instincts.
It’s not. Punishment is different enough from deciding who to talk with. See also Yvain on safe spaces.
Downvotes are not the way to achieve it. The way to achieve it is by positive personal example and upvoting content which is exemplary. Why are downvotes bad? Because:
We want to allow “mediocre” people (some of whom have an unrealized potential to be excellent) who want to learn from excellent people (I hope you agree). Such people can make innocent mistakes. There’s no reason to downvote them as long as they’re willing to listen and aren’t arrogant in their ignorance. Downvoting will only drive them away.
Even smart people occasionally say foolish things. Downvoting sends such a strong negative signal that it discourages even people that get much more upvotes than downvotes. By “discourages” I don’t mean “discourages from saying foolish things”, I mean discourages from participating in the community in general.
Most content is not voted upon by most of the community, so statistical variance is large. Again, since the discouragement of downvotes is not cancelled out by the encouragement of upvotes, you get much more discouragement than you want.
Downvotes transform arguments into sort of arena fights where the people in the crowd are throwing spoiled vegetables on the players they don’t like. The emotional aura this creates is very bad for rationality. It’s excellent for blue-green politics (downvote THEM!) and death spirals.
If you don’t want to talk to someone, don’t upvote her and don’t reply to her. The psychological impact of downvoting is equivalent to punishment.
This is completely different. “Safe spaces” are about banning content which might offend someone’s sensibilities. My suggestion is about “banning” less content.
I agree with enough of this. I know there are immediate downsides and hypothetical dangers. But the upsides seem indispensable. The argument needs to consider the balance of the two.
They remain in the fabric of the forum, making it less fun to read. Not upvoting doesn’t address this issue.
Things that are not fun (for certain sense of “fun”) offend my sensibilities (for certain sense of “offend”). My suggestion is to discourage them by downvoting. (This is the intended analogy, which is strong enough to carry over a lot of Yvain’s discussion, even if the concept “safe spaces” doesn’t apply in detail, although I think it does to a greater extent than I think you think it does.)
Let me rephrase. I suggest downvoting a comment only when it makes you think “I don’t want this person in this community”. Don’t downvote comments which might be reasonably attributed to an OK person making an honest mistake.
This sabotages any chance of using karma to find and sort good comments from bad in the future. I want good content to be differentiated from bad regardless of source. I upvote known trolls when they say smart shit, and I downvote Eliezer when he’s being a douchebag.
I wasn’t at all suggesting upvoting/downvoting on an ad hominem basis. When someone is being a douchebag, downvote her by all means. When someone is stating an opinion you consider to be wrong while doing it in an honest and respectful manner, don’t downvote. If you want to express your disagreement, reply and (politely) explain why you disagree.
I have no problem with that, my problem is with the opposite—people learning from mediocre (or worse) folk, because they don’t realize that their content is flawed (which downvotes signal).
IMO on the Light side you learn from something when you can tell it’s correct, not when someone tells you it’s correct, much less when someone anonymous tells you it’s correct.
To some extent, yes, but we don’t want an eternal September either. There is concern about the average IQ reported in the LW census dropping over time.
If we had fewer downvotes in general, then every single downvote would create a much stronger negative signal than it does at the moment.
Hi Christian, thx for commenting!
I’m not that concerned about average IQ. The crucial questions here are what is the purpose you see in LW and how you envision its future. If you want LW to be an elitist discussion forum for high-IQ people comfortable with a relatively aggressive / competitive environment, then it makes sense for you to use downvotes relatively liberally.
I think that the greatest potential value in LW lies elsewhere. I think LW can become a community and a cultural movement that promotes rationality and humanist values. A movement that has the power to steer history in a direction more to our liking. If you accept this vision, then you should be aiming at a much broader group (while making sure the widening circle doesn’t water down our spirit and values). I envision LW as a place where people come to connect with other people who share a similar worldview and values, not necessarily all of them being in the top IQ percentile. The “spiritual leadership” of the movement should consist predominantly of highly intelligent people that everyone can learn from, but that is not a necessary requirement for every member.
This effect is only significant for people who spend sufficient time on the forum to get used to the “downvote background”. Moreover, I think it is far from strong enough to cancel the reduction in downvotes.
The LessWrong brand is not optimized for reaching a broad public. To the extent that’s the goal, “effective altruism” is a more effective label under which to operate.
In my view the goal of LessWrong is to provide a forum for debating complex intellectual ideas. Specifically ideas about how to improve human thinking and the FAI problem. Having a good signal-to-noise ratio matters for that purpose.
Steer history?
When you said “cultural movement”, did you really mean “social and political movement” for it is those which steer history?
And what gives you the idea that LW could become massively popular, anyway? There’s nothing here particularly interesting for hoi polloi.
What do you mean by “fighting mediocrity”? Should I interpret it literally as “I don’t like mediocre people”? Or as “I want to reward excellence”? If it is the latter you are aiming at, use upvotes, not downvotes (for ideal rational agents the two might be symmetric, but for people they aren’t: the emotional signal from getting a downvote is very different from the emotional signal of not getting an upvote).
Exactly, and this is a reason why downvoting is important (and shouldn’t be systematically countered): it allows scaring away people who are not of our tribe. A forum culture that merely abstains from upvoting is worse at scaring people away than one that actively downvotes.
(Sorry, I heavily edited the grandparent since the first revision.)
Neither, it’s not about what I like (in the sense of emotional response), or about what other people experience, but about what to encourage on the forum to make it a better place.
(Right now it’s not particularly relevant, at least as an intervention on the level of social norms, because the main current issue seems to be that too little meaningful discussion is happening lately, and that doesn’t seem fixable by changing/maintaining voting attitudes.)
The same person who said that also said this, so I guess he meant something narrower by “bullet” than you think.
Upvoted for making an interesting point.
However: I was not appealing to Eliezer’s authority. I was just making a parallel with a similar (but more extreme) phenomenon.
Regarding well-kept gardens. Let me put things in perspective. If you see a comment along the lines of “jesus is our lord” or “rationality is wrong because the world is irrational” or “a machine cannot be intelligent because it has no soul”, by all means downvote. However, if you see two people debating e.g. whether there will be an AI foom or whether consequentialism is better than deontology or whether AGI will come before WBE, don’t downvote someone just because you disagree. Downvote when the argument is so moronic that you’re confident you don’t want this person in our community.
People change. People change even faster when you give them feedback. I downvote things I don’t want to see from people I like and respect the same way I would frown at a friend if they did something I didn’t want them to do.
So instead of ‘I’m confident I don’t want you in our community,’ I view a downvote more as ‘shape up or ship out.’
It depends what you mean by “feedback”. If “feedback” is a polite, respectful reply explaining the mistake, then yes, it is something the other party can learn from. If “feedback” is a downvote, chances are it is only going to hurt the other party and possibly make her even more entrenched in her position out of anger. When you argue respectfully, the other party can admit her mistake at small emotional cost. If you call her an idiot, admitting the mistake will become much more difficult for her (since it will become emotionally equivalent to admitting to being an idiot).
First, you can allow yourself more with friends because they are friends. Second, a downvote is a sort-of public humiliation, it is much worse than a frown. Imagine that a person you would like and respect makes one of her first comments on the forum and gets downvoted. She might become so upset she won’t return here again.
There are several points here that seem entangled, but I’ll try listing them separately.
First, it is a desirable quality to be able to work out what one did wrong from minimal evidence, or repeated experimentation.
Second, it seems to me that rationality is strengthened by the ability to joyfully accept contradictions and corrections. A view that sees a downvote as a sort-of public humiliation is probably too sensitive.
Third, politeness is costly, in several ways. Most relevant to the others is the time cost of writing a reply. It often takes much longer to instill clarity than it takes to display confusion.
Fourth, as the benefits mostly accrue to the corrected, and the costs mostly accrue to the corrector, it is not clear why we should expect such correction to be the norm instead of virtuous on the part of the corrector.
LWers differ in how hard they want LW to be on its new users. I tend to be softer than, say, Lumifer, but I am not certain that this is a bug instead of a feature. There are people we don’t want discussing things here on LW, and that sort of reaction may be a decent filter.
I don’t want to set up a hazing ritual to weed out the misfits from among the newbies.
What I want to avoid is LW evolving towards being victim-centric where the main concern is the possibility of giving offence.
Oh, dear. HTFU already. People who think of downvotes as hurtful and public humiliation really shouldn’t venture into the wilds of ’net forums.
Agreed, but...
Nope. Sometimes otherwise-okay people make moronic arguments because they’re mind-killed, they’re tired, etc.
THE WHOLE POINT OF DOWNVOTES IS TO HAVE LESS BAD STUFF AND MORE GOOD STUFF. This applies not just to making people leave but to making the people who stay post things of higher quality.
If you don’t downvote “otherwise-okay” people when they say dumb shit, how are they supposed to learn? Downvote the comment, not the person.
I think the point is that you shouldn’t conclude “that you’re confident you don’t want this person in our community” just because “the argument is so moronic”.
(Because there’s too much noise with individual arguments to deduce a person’s general competence.)
In other words, yes, downvote the comment—not the person.
Er… That was my point.
This is exactly why you shouldn’t downvote such comments: they hurt good people and discourage them from participating in the community. Also, consider the possibility that your own judgement is affected by tiredness or mind-murder.
I guess you are talking of conditions in which someone makes a downvoting decision. But then underconfidence is also possible, and also a pathology, making one unable to act on a correct judgement. This point might be a reason that The Sin of Underconfidence is a prerequisite for Well-Kept Gardens Die By Pacifism.
I agree that both overconfidence and underconfidence are possible, but the potential damage from downvoting is larger than the potential damage from not downvoting. Therefore, let’s err on the side of not downvoting.
This is what I disagree with.
I think you’re drawing a false equivalence here. While a downvote does carry the meaning of “I don’t want to hear this”, most of the meaning of “shut up” is connotation, not denotation, and those connotations don’t necessarily carry over.
Mere disagreement generally isn’t enough to justify a downvote, no. But we want to see well-reasoned disagreement: it signifies a chance to teach or to learn, even if it’s unpleasant in the moment. On the other hand, there are plenty of things short of Time Cube or cat memes that one might legitimately not want to see here, even if posted in good faith; restricting the option to those most extreme cases robs it of most of its power to improve discussion.
I downvoted you, because you seem to use upvotes in a way that diminishes the value of the karma system in my eyes—an undeserved downvote is as bad as an undeserved upvote.
I’ve seen a lot of low-quality posts getting some karma, and coming back to positive scores without a good reason—and now I know the behaviour that is partially responsible.
(and the above comes from someone with a mass downvoter after him, who gets a downvote on every single comment he makes)
Downvotes and upvotes are not symmetric, see my reply to Vladimir.
It shouldn’t matter why you downvote something; just give an explanation for why you did so. Ideally the same goes for upvotes, where you should explain why you upvoted (if your explanation is any more valuable than “This.”).
Trying to define what an upvote or downvote “means” or “shouldn’t mean” is futile and beside the point.
No no no no: the beauty of votes is that they give us a very quick and easy way of knowing comment quality without flooding the forum with “good post!” or countless explanations of things people already know.
Why? What is “the point”? For me, the point is creating a community that is fun, useful and lives up to its ideals of rationality and humanist virtue (whatever the latter means for you, be it utilitarianism, effective altruism etc).
The point is for commenters (and the audience, for that matter) not to have to wonder why they got downvoted/upvoted; in other words, for the meaning of that particular upvote/downvote to be made explicit by the upvoter/downvoter.
And why not? Some introspection does a body good...
…
It would do good to encourage more explaining of upvotes and downvotes. We’re not at the point where there’s “too much” of it. And if there were “just the right” amount of it, then we wouldn’t be having this discussion.
For a diverse population of people there is no such thing as “just the right amount”. Even if you set it at some kind of central measure (mean, weighted mean, median, etc.), the left tail would complain it’s too little and the right tail would complain it’s too much.
Speaking personally, most of my downvotes are because the post seemed to me either stupid or dickish. I am not sure LW will gain much if I start posting dick ASCII art as an explanation for downvotes… X-D
Well, if you’re adamant about it not being systemic, then (if you or someone reading this would be so kind) help me understand my own case, of a few of my comments before this conversation being severely downvoted. I was surprised at the responses, and without any replies I’m still in the dark. If you could show me the light, I’d be grateful.
Please provide links, as it’s hard to see comments at −5 and below. The only strongly downvoted comment of yours that I see itself says “hard downvote for stupendous arrogance”, so I’m not sure why you are surprised...
In response to someone wholesale dismissing an entire area of scientific study without having had any experience in it, “stupendous arrogance” is both accurate and tame. I guess “stupendous” kind of sounds like “stupid”, but that’s probably not why people downvoted the comment.
I thought you were interested in why people downvoted you and not in justifying your comments..?
I’m interested; that’s why I’m dissecting the post to try to find the reason it was downvoted. My conclusion is that it was downvoted because the phrase you quoted sounds unnecessarily harsh out of context, and not because of anything regarding facts or offense.
Basically, you are engaging in an ad hominem argument and not making a decent argument for your position.
Asking people on a public forum whether they have experience with illegal drugs is also a big no.
Psychonautics is entirely about the “hominem” and inner experience; it can’t not be relevant. I’m not sure what you’re getting at.
And, depending on where you live, I wouldn’t worry about revealing anything, especially if you don’t deal, especially if you can feign not currently using it. There are plenty of places on the internet where people talk about psychedelic drug usage openly, and they’ve been around for a while and not been shut down. To worry at all would be insanely paranoid.
LW is a place where people know their fallacies and pattern match to them. You will get downvotes for things like that. That’s simply the kind of place that LW happens to be.
As far as your argument goes, you haven’t made clear why someone can’t get knowledge about psychonautics by reading what other people who have the experiences write about psychonautics. On LW you do have a burden to make that argument in more depth if you want to get away with ad hominem.
If you want a security clearance in the US, then you need to answer questions about past drug use. If you say on that form that you have not used LSD in the past, but there is a record of you on the internet admitting to LSD usage, that might bring you into major trouble if someone finds out. The same goes for other jobs. Basic courtesy is to allow others the freedom to choose whether or not to reveal information like that about themselves, and therefore not to put others in a situation where they are obliged to reveal it.
God I hope not; that’s like not having heard of the Disagreement Hierarchy. The “central point” was about inner experience, so pattern matching towards “DH6” is the more “lesswrong” thing to do than pattern matching towards “ad hominem”. Pattern matching towards “ad hominem” is an example of the “standard rationality” thing that Eliezer spent the entire Sequences attempting to deconstruct and improve upon. If LW has degenerated back to that, then maybe we need another read-through of the Sequences.
If you actually use your real name for everything you say online, then it’s your own fault when you get into such a bind. Basic courtesy is to know when to use your real name and when not to, and to not let that shit happen.
In reality, rationality is about accepting that the world is the way it is and not the way you want it to be. In this case it seems like you don’t want to accept it the way it is. It is always useful to keep your audience in mind, and if you are making some far-off point about psychonautics then you have to be extra careful or accept that you will get downvoted.
Stylometry is pretty good these days. At 29C3 there was a talk that demonstrated a 72% successful author-attribution rate for some underground online forums. Underground meaning forums where illegal goods were sold, so the participants are interested in being anonymous. The idea that you can reasonably protect your anonymity by using a nickname is naive.
I think it’s not so naive as all that. The effectiveness of a security measure depends on the threat. If your worry is “employers searching for my name or email address”, then a pseudonym works fine. If your worry is “law enforcement checking whether a particular forum post was written by a particular suspect”, then it’s not so good. And if your worry is “they are wiretapping me or will search my computer”, then the pseudonym is totally unhelpful.
I think in most LW contexts—including drug discussions—the former model is a better match. My impression is that security clearance investigations in the United States involve a lot of interviews with friends and family, but, at the present time, don’t involve highly sophisticated computer analysis.
Given the way the NSA works, I would highly doubt that they don’t check information in their databases when handing out a security clearance, and that they don’t run highly sophisticated computer analyses. The actual capabilities of those programs are going to be classified. The NSA doesn’t want people to know about the capabilities they have.
In addition, the internet doesn’t forget. NSA computer programs might not be good enough at present to catch it, but they might be in five years. The whole Snowden episode especially encouraged the NSA to invest a lot more effort into gathering data about possible leakers and into computer programs that analyse the behavior of people with a security clearance.
Is that a terminal goal? Or is it an instrumental goal serving to achieve something else?
Both/neither? It’s a reasonable norm and would also help alleviate some personal frustrations. (Sidenote: invoking “terminal” anything is usually dangerous and unnecessary, cf. this.)
Well, Eliezer’s policy tends towards “replying to downvote-worthy comments tends to start flame wars and is thus discouraged”.
Right, but then we invented “Tapping out” so that wouldn’t become an issue.
“Tapping out” can be interpreted as conceding and is thus low status.
If you’re that worried, link to the wikipage which defines away that connotation, like “I’m tapping out.”.
Signaling doesn’t work that way. I’d think someone who reads Game blogs would know that.
Call it something else then, or be more direct and paraphrase the wikipage, or take it into PMs, whatever you fancy. The point is that you shouldn’t feel guilty replying to a comment just because it was downvoted.