Elevator pitches/responses for rationality / AI
I’m trying to develop a large set of elevator pitches / elevator responses for the two major topics of LW: rationality and AI.
An elevator pitch lasts 20-60 seconds, and is not necessarily prompted by anything, or at most is prompted by something very vague like “So, I heard you talking about ‘rationality’. What’s that about?”
An elevator response is a 20-60 second, highly optimized response to a commonly heard sentence or idea, for example, “Science doesn’t know everything.”
Examples (but I hope you can improve upon them):
“So, I hear you care about rationality. What’s that about?”
Well, we all have beliefs about the world, and we use those beliefs to make decisions that we think will bring us the most of what we want. What most people don’t realize is that there is a mathematically optimal way to update your beliefs in response to evidence, and a mathematically optimal way to figure out which decision is most likely to bring you the most of what you want, and these methods are defined by probability theory and decision theory. Moreover, cognitive science has discovered a long list of predictable mistakes our brains make when forming beliefs and making decisions, and there are particular things we can do to improve our beliefs and decisions. [This is the abstract version; probably better to open with a concrete and vivid example.]
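Here is a minimal sketch, with all probabilities and utilities invented purely for illustration, of the two formal tools that pitch name-checks: a Bayesian belief update and an expected-utility comparison between actions.

```python
# Toy illustration of the two formal tools mentioned in the pitch:
# Bayes' theorem for updating a belief, and expected utility for choosing an action.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after seeing the evidence."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Example: I think there's a 30% chance of rain (prior).
# Dark clouds are 80% likely if it will rain, 20% likely if it won't.
p_rain = bayes_update(prior=0.30, p_evidence_given_h=0.80, p_evidence_given_not_h=0.20)

# Decision theory: pick the action with the higher expected utility.
# Made-up utilities: carrying an umbrella costs a little, getting soaked costs a lot.
utilities = {
    ("umbrella", "rain"): -1, ("umbrella", "dry"): -1,
    ("no umbrella", "rain"): -10, ("no umbrella", "dry"): 0,
}

def expected_utility(action):
    return p_rain * utilities[(action, "rain")] + (1 - p_rain) * utilities[(action, "dry")]

best = max(["umbrella", "no umbrella"], key=expected_utility)
print(f"P(rain | clouds) = {p_rain:.2f}, best action: {best}")
```

Obviously none of this belongs in a spoken pitch; it is just the skeleton the pitch is gesturing at.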
“Science doesn’t know everything.”
As the comedian Dara O’Briain once said, science knows it doesn’t know everything, or else it’d stop. But just because science doesn’t know everything doesn’t mean you can fill in the gaps with whatever theory most appeals to you. Anybody can do that, with whatever crazy theory they want.
“But you can’t expect people to act rationally. We are emotional creatures.”
But of course. Expecting people to be rational is irrational. If you expect people to usually be rational, you’re ignoring an enormous amount of evidence about how humans work.
“But sometimes you can’t wait until you have all the information you need. Sometimes you need to act right away.”
But of course. You have to weigh the cost of new information against the expected value of that new information. Sometimes it’s best to just act on the best of what you know right now.
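To make that trade-off concrete, here is a toy sketch of the calculation; the probabilities, payoffs, and the cost of the extra information are all made up for illustration.

```python
# Toy value-of-information calculation: is it worth paying for more information
# before acting, or should we just act on what we know now?

p_success = 0.6             # current belief that the plan succeeds
payoff_success = 100.0
payoff_failure = -50.0
cost_of_information = 30.0  # e.g. price of a study that tells us the outcome in advance

# Expected value of acting right now on the best current guess.
ev_act_now = p_success * payoff_success + (1 - p_success) * payoff_failure

# Expected value with perfect information: if the study says "will fail", we walk away (0).
ev_with_info = p_success * payoff_success + (1 - p_success) * 0.0
value_of_information = ev_with_info - ev_act_now

if value_of_information > cost_of_information:
    print(f"Buy the information (worth {value_of_information:.0f} > cost {cost_of_information:.0f}).")
else:
    print(f"Just act now (information worth {value_of_information:.0f} <= cost {cost_of_information:.0f}).")
```

With these made-up numbers the information isn’t worth its price, which is exactly the “sometimes it’s best to just act” case above.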
“But we have to use intuition sometimes. And sometimes, my intuitions are pretty good!”
But of course. We even have lots of data on which situations are conducive to intuitive judgment, and which ones are not. And sometimes, it’s rational to use your intuition because it’s the best you’ve got and you don’t have time to write out a bunch of probability calculations.
“But I’m not sure an AI can ever be conscious.”
That won’t keep it from being “intelligent” in the sense of being very good at optimizing the world according to its preferences. A chess computer is great at optimizing the chess board according to its preferences, and it doesn’t need to be conscious to do so.
Please post your own elevator pitches and responses in the comments, and vote for your favorites!
You’ve said something similar in a recent video interview posted on LW, and it made me cringe then, as it does now. We don’t know of such optimal ways in the generality the context of your statement suggests, and any such optimal methods would be impractical even if known, which again is in conflict with the context. Similarly, turning to the interview, SingInst’s standard positions on many issues don’t follow from formal considerations such as logic and decision theory; there is no formal theory that represents them to any significant extent. If there is strength to the main arguments that support these positions, it doesn’t currently take that form.
Fair enough. My statement makes it sound like we know more than we do. Do you like how I said it here, when I had more words to use?
It made me cringe as well but more because it will make people hug the opposite wall of the proverbial elevator, not because such methods are conclusively shown as impractical—http://decision.stanford.edu/.
I think Ian Pollock more effectively got at what Luke is trying to communicate.
First, a general comment on your versions, sorry: you tend to use big words and scientific jargon, with too few examples.
Compare your pitch with the following (intentionally oversimplified) version:
“So, I hear you care about rationality. What’s that about?”
And doing that is going to instantly turn people off.
Makes it sound great, but what are the real world benefits? I’ve been rational for years and it hasn’t done anything for me.
15 comments and −120 karma? Okay, at this point I may begin immune response against trolling (delete further comments, possibly past comments, as and when I get around to seeing that they were made).
I also remind everyone: Please do not respond at length to trolls, attention stimulates their reward centers.
I’m not so sure he’s a troll. He very well might be, but at least he made this comment which is at 4 karma right now. His more recent comments seem better than his previous ones, too. p(troll) seems pretty high, but not so high that I would support a ban, comment deletions, etc. at this point.
Most of his comments are essentially saying “you are wrong”. Once he was right in saying that; many times he was wrong. He probably knows a lot of facts about many topics, and he expresses them with very high certainty; unfortunately the quality of his comments does not match this certainty, and he seems very immune to feedback. To him, low karma just proves he is right.
He is very negative towards others. Almost all his comments contain something like: “Your work is wrong.” “I never said anything like this” “I never flamed anyone.” “spelled wrong” “I have no such delusions.” “it hasn’t done anything for me.” “it’s definitely going to do more harm than good.” “I already explained why it’s not possible.” “There is practically no chance” “It’s a misconception” “This idea is based on a whole range of confusions and misunderstandings” “just another example of people not understanding” It’s like his only point in discussions is to show that everyone else is wrong, but it’s often him who is wrong. Did he make some useful contribution? I don’t see any.
And then the—“You are trying to submit too fast. try again in %i minutes.” and “You do not have enough karma to downvote right now. You need 1 more point.”—just make me want to scream. (Though the fact that he does not have enough karma to downvote makes me happy. I guess he was going to downvote those who disagree with him. I am happy that the LW karma system does not allow him to make a dozen sockpuppet accounts and start a downvoting war.)
Maybe the guy is not having fun, maybe that’s just what he honestly is… but anyway his comments seem optimized to create mental suffering in others, certainly in me. I have left websites where people like this became frequent. If this kind of behavior becomes tolerated on LW, I will either write some GreaseMonkey plugin that will remove all his comments from the page, or I will simply stop reading LW. In theory I am reading this site for information, not for positive emotion, but I am just a human… if this site gives me negative emotions too often, I will stop reading it.
I tried to give him the benefit of the doubt, and answered his comment seriously, but now I feel it was totally not worth doing. This is my worst experience on LW so far. Though this mostly means that I have not had bad experiences on LW so far. :) But I prefer it to stay this way.
I tend to agree with you. I think I just have a higher threshold for banning. As such, I would like to see him actively ignore our suggestions before entirely dismissing him, which I’m not sure is something he’s done yet.
Less Wrong isn’t some kind of human right that we need to go beyond reasonable doubt to withdraw from someone; it’s an online community run by an enlightened dictator, and if you want to keep your well kept garden, you have to accept some collateral damage.
I am extremely wary of this kind of thinking. Partly because using power is a slippery slope to abusing power, and each time you use the banhammer on a maybe-troll it gets a little bit easier to use it on the next maybe-troll.
Not just because of that, but also because when other people come to a community full of self-purported rationalists, and they see someone who does not obviously and immediately pattern match as a troll receiving the banhammer for presenting community-disapproved opinions in what seems superficially to be an adequately calm and reasonable manner, that sets off the ‘cult’ alarms. It makes us look intolerant and exclusionary, even if we aren’t.
It’s fine for places like the SA forums to throw the banhammer around with reckless abandon, because they exist only for fun. But we have higher goals. We have to consider not just keeping our garden tidy, but making sure we don’t look like overzealous pruners to anybody who has a potentially nice set of azaleas to contribute.
Slippery slopes work in both directions. Each time you don’t strike down injustice, it becomes a bit easier to walk by the next time. I’d sooner have Marginal Value > Marginal Cost than Marginal Value < Marginal Cost and a lower Average Value.
Bad impressions work in both directions. When other people come to a community full of self-purported rationalists, and they see someone presenting stupid, low-status, incendiary comments and being treated as worthy of respect, it makes LW look stupid, low-status and incendiary because of the Representativeness Heuristic.
Obviously there is a continuum between anarchy and banning everything, and both extremes are local minima. The issue is to judge the local gradient.
Upvoted for valid point. I agree, but I think there is enough of a difference between ‘being treated as worthy of respect’ and ‘not being banned’ that we can probably ride in the middle ground comfortably without any significant image damage.
On consideration, though… maybe I’m prejudiced against banning because of the sense of finality of it. I guess it’s not hard to make a new account.
I’m still opposed to deleting past comments though, because deleted comments make a mess of the history.
This is how trolling works.
Well he hasn’t commented recently, so I’m guessing he either took our advice and made a new account, or just left the site, neither of which I would attribute to troll behaviour. (Or Eliezer is deleting his posts as promised, which would, obviously, weaken that hypothesis.)
I say just ban him.
I wonder if downvotes have gone from a punishment to a reward at this point.
I hope you’ll treat me fairly as a person and actually read and try to understand my comments instead of jumping to conclusions based on my “score”.
Your best way to be taken seriously would be just to create a new account without making any reference to this one, and, well, not act like a troll.
Huh. Come to think of it, on the Internet there IS a second chance to make a first impression. (A good argument for always using handles.) Noted.
Are you enjoying wasting your time on this website?
You have 15 comments and a grand total of −120 karma. That is a strong indication that you are doing something wrong. To save you some time: the standard response is “I’m being censored! You’re an Eliezer-cult! All these downvotes are just because you’re scared of the Truth!”.
Please don’t use it, because it is not true: e.g. two links you’ve already seen, people call Eliezer out on mistakes, nuanced responses to “Yay for Eliezer/rationality/SI!”-type posts. Part of the reason I like LW is precisely because people do disagree, but there are almost never flame wars: the disagreement means that people actually think about what they believe and even change their minds!
What you are doing is not fitting into the community norms of discussion, like research and linking/referring to specific sources (anyone can say “I’ve done research!”, but that doesn’t mean that you have). (I’ll pre-empt another common whinge: yes, in most cases, Wikipedia is an acceptable reference to use on LW).
The parent comment might not be particularly bad; but your history (and your username) puts you very close to “troll”, and that makes the parent comment look like a pattern-matched response (rather than a genuine question) which is the reason I downvoted.
I never said anything like this and I never invoked Eliezer. I don’t understand why you’re telling me off for something I didn’t do. Look at my post history if you don’t trust me.
It only makes sense to do so when making a claim. Yet people on this site have refused to back up their own claims with citations because apparently “I’m not worth bothering with”.
I never flamed anyone. The only guy who is calling people names (“troll”, for example) is you (well, now that you’ve done it, others are following your lead too, well done..).
Not really, I didn’t expect to get rejected so harshly. I’ve read all the sequences twice and been rational for years, so I don’t know what the problem is. What’s the point of all this meta discussion? Why is everyone trying to drag me into these meta discussions and brand me as a troll after I passed 100 downvotes? We should get back onto the actual topic.
You are trying to submit too fast. try again in 6 minutes.
One of the problems is that you say things like “I’ve been rational for years”. Sorry. No, you haven’t. EY hasn’t been rational for years. You may have been an aspiring rationalist, but that’s a far cry from actually being rational. When you say things like that it is extremely off-putting because it sounds self-congratulatory. That’s something that this community struggles with a lot, and we typically heavily downvote things that are that way because they send very bad signals about what this website is. Beyond that, when it’s said by someone with the username “911truther”, it implies an element of “You’re not rational unless you’re a truther too”, which, mean it or not, is how it comes across.
Secondly, and this relates, your username. It’s inherently political, which brings up all of our opposition to politics every time you make a post. That’s not a good thing, and it will be very difficult for anyone on this site to take you seriously. If two different people wrote two articles that were of exactly equal caliber, and one was named BobSmith, and the other was named Obama2012, I would anticipate at least 2-3 times the upvoting on the former and 2-3 times the downvoting on the latter. And 9/11 is so much more of a polarizing issue. The vast, vast majority of people here disagree with you. But roland, despite being wildly downvoted every time he brings up 9/11, actually manages positive karma, because it’s not inherently brought up every time he posts. I can not recommend strongly enough that you delete your account and create a new username if you wish to continue on this site. If you’re a 911 truther, I would not suggest lying about that, but choosing that as the phrase by which you identify yourself is not a very effective strategy for being taken seriously on this site.
Thirdly, the great grandparent to this isn’t a terrible comment. I agree with you there. I likely would have upvoted it had it been made by a different username, since I didn’t think it deserved that level of downvoting (but not because I thought it was particularly wonderful in and of itself).
I found this claim difficult to believe, so I looked it up. For the record:
I do wish we could discourage the attitude displayed here by gwern. It’s pure ego to respond in this way to someone you deem a “troll”. It certainly won’t change their mind, and it will only spur them to comment more. Either ignore them completely after downvoting, or be polite in your reply. One might justify these posts as important to make sure that 911truther knows why he’s being downvoted, but the aggression in them is entirely counter-productive and, frankly, is quite rude.
For the record, I do think people are a little over-eager to accuse someone of being a “troll” (I think it is much more probable that 911truther is simply ignorant) although I think moderation is warranted in this case.
Was this before or after the other links in other conversations?
I know you didn’t invoke Eliezer, but that is a common statement by people who find themselves downvoted a lot, so I was pre-empting it (if you were not going to do that, I apologise, and that sentence should be considered removed from my quote; however, the rest still stands). The only reason I said that was because I looked at your post history and saw this one:
For the rest:
People have been providing links and citations to back up their claims. (Several of the replies in this thread)
I wasn’t implying that you flamed anyone, just that dissent is part of this website, and it is treated with respect.
Dismissing accusation of “troll” with uncheckable and irrelevant claims of rationality is not the right way to do it.
Rational compared to who?
First, thanks to lukeprog for posting this discussion post. The Ohio Less Wrong group has been discussing elevator pitches, and the comments here are sure to help us!
I often end up pitching LW stuff to people who are atheists, but not rationalists. I think this type of person is a great potential “recruit”, because they WANT a community, but often find the atheistic community a little too “patting ourselves on the back”-ish (as do I). My general pitch is that Less Wrong is like the next step: “Yeah, we’re all (mainly) atheists, but now what??”
Here’s an example from a recent facebook comment thread:
Then I point them to Methods of Rationality, and hopefully now to our meetups.
Coming up with elevator pitches/responses strikes me as a great activity to do at LW meetups.
If there is interest in some discussion logs to analyse, I’m having a lengthy FB thread with a fairly intelligent theist I knew from rabbinical seminary. I don’t think his arguments are particularly good, and I’m not great at arguing either, though I hope my content is a bit more convincing despite lack of style. I do not expect to change his mind—he holds a rabbinical position and chances of him changing his mind are near zero, but there are some observers I care about and this is an exercise in rationality for me. I can anonymize and post if people find this kind of thing interesting, I would certainly appreciate some feedback.
Well, I would find it interesting, but as a point of order: maybe you should let him know you’re doing this (even anonymizedly) so he can get help from a gang of his friends too?
I have no intention to have this turn into a public debate out of a Facebook thread. This is a chance to improve my rationality and argumentation skills.
Yes… I took “there are some observers I care about” plus “I would appreciate some feedback” to mean ‘I’d like some debate advice (which I will be applying)’. If that’s not getting help from a gang of your friends, I don’t know what is.
You’re correct, it’s a side benefit, but having a thread evolve into some kind of public debate looks silly. If public debate on such issues is desired, there are order-of-magnitude better ways of doing it than this.
I don’t think pedanterrific is planning to have a bunch of LWers start commenting on the thread in support of atheism. I think he’s expecting a bunch of LWers to give you advice in this thread, which you will then use in your own posts. And he thinks the rabbi should be given an opportunity to ask his own community for similar advice. To use a boxing metaphor, nobody else is going to start fighting, but you’re going to have more coaches and your opponent should too.
I got that, but having to tell him that there are a bunch of people helping, bring your friends, seems awkward in the context. I’d rather not have the help and just let people view the log as a post mortem, for improving my rationality. Another part of it is the fact that I’m actually doing ok in the argument (I think) and “calling for help” would look like/could be spun as a weakness.
Okay then! That makes sense. Also, I support posting the log when the argument is done; I’d enjoy reading it and would be happy to comment.
The third, compromise, option would be, if I end up using a suggestion from LW, to say “(I got this argument from talking it over with a friend)”, though I’m not sure if that goes far enough to satisfy the standards of a fair fight people want to see.
I too am a member of the Ohio Less Wrong group. I was quite surprised to see this topic come up in Discussion, but I approve wholeheartedly.
My thoughts on the subject are leaning heavily towards the current equivalent of an ‘elevator pitch’ we have already: the Welcome to Less Wrong piece on the front page.
I particularly like the portion right at the beginning, because it grabs onto the central reason for wanting to be rational in the first place. Start with the absolute basics for something like an elevator pitch, if you ask me.
I might cut out the part about ‘human brains’ though. Talk like that tends to encourage folks to peg you as a nerd right away, and ‘nerd’ has baggage you don’t want if you’re introducing an average person.
Possible absolute shite ahead (I went the folksy route):
It’s about being like Brad Pitt in Moneyball. (Oh, you didn’t see it? Here’s a brief spoiler-free synopsis.) It’s the art of seeing how others, and even yourself, are failing, and then doing better.
Oh, yeah, I completely agree. But, it does know a helluva lot. It put us on the moon, gave us amazing technology like this [pull out your cellphone], and there’s every reason to think it’s going to blow our minds in the future.
Yeah, no that’s true. We’ve recently seen all kinds of bad decisions—housing crisis and so on. But that’s all the more reason to try and get people to act more rationally.
Yeah, true… true. Still, we can prepare in advance for those situations. For example, you might have reason to believe that you’re going to start a new project at your job. That’s going to involve a lot of decisions and any poor decision at such an early stage can magnify as times goes by. That’s why you prepare the best you can for those quick decisions that you know you’ll be making.
Yeah, intuitions are just decisions based on experience. I remember reading that chess masters, y’know, like Bobby Fischer or Kasparov, don’t even deliberate on their decisions, they just know; whereas chess experts, a level below master, do deliberate. But to get to that level of mastery, you need tens of thousands of hours of practice, man. Only a few of us are lucky enough to have that kind of experience in even a very narrow area. If you’re something like an intermediate chess player in an area with a bunch of skilled chess players, your intuition is going to suck.
Maybe not, but that’s not really important. Did you hear about Watson? That machine that beat those Jeopardy players? They’re saying Watson could act as a medical diagnostician like House and do a better job at it. Not only that, but it’d be easier than playing Jeopardy… isn’t that crazy?
I like the others, but I think the problem with this one is that it doesn’t provide them with any reason why they shouldn’t just fill the gaps in whatever science knows now with whatever the hell they want.
The elevator pitch that got me most excited about rationality is from Raising the Sanity Waterline. It only deals with epistemic rationality, which is an issue, and it, admittedly, is best fit towards people who belong to a sanity-focused minority, like atheism or something political. It was phrased with regard to religion originally, so I’ll keep it this way here, but it can easily be tailored.
“What is rationality?”
“Why is rationality important? Shouldn’t we focus on religion first?”
Yes, we are emotional creatures. But being emotional is not incompatible with being rational! In fact, being emotional sometimes makes us more rational. For example, anger can inhibit some cognitive biases, and people who sustain damage to “emotional” areas of their brains do not become more rational, even when they retain memory, logical reasoning ability, and facility with language. What we want to do is make the best possible use of our available tools—including our emotional tools—in order to get the things that we really want.
Remember that your links don’t work in speech. :D
Clearly right. I had thought about carrying around hard-copies of papers in a backpack so that I could hand them out as I mention them, but … ;)
One of the most difficult arguments I’ve had to make is convincing people that they can be more rational. Sometimes people have said that they’re simply incapable of assigning numbers and probabilities to beliefs, even though they acknowledge that it’s superior for decision making.
This. I’m skeptical of almost every numerical probability estimate I hear unless the steps are outlined to me.
No joke intended, but how much more skeptical are you, percentage-wise, of numerical probability estimates than vague, natural language probability estimates? Please disguise your intuitive sense of your feelings as a form of math.
Ideally, deliver your answer in a C-3PO voice.
40 percent.
This may be one reason why people are reluctant to assign numbers to beliefs in the first place. People equate numbers with certainty and authority, whereas a probability is just a way of saying how uncertain you are about something.
When giving a number for a subjective probability, I often feel like it should be a two-dimensional quantity: probability and authority. The “authority” figure would be an estimate of “if you disagree with me now but we manage to come to an agreement in the next 5 minutes, what are the chances of me having to update my beliefs versus you?”
Techniques for probability estimates by Yvain is the best we have.
I agree that it can be difficult convincing people that they can be more rational. But I think starting new people off with the idea of assigning probabilities to their beliefs is the wrong tactic. It’s like trying to get someone who doesn’t know how to walk, to run a marathon.
What do you think about starting people off with the more accessible ideas on Less Wrong? I can think of things like: the Sunk Cost Fallacy, not arguing things “by definition”, and admitting to a certain level of uncertainty. I’m sure you can think of others.
I would bet that pointing people to a more specific idea, like those listed above, would make them more likely to feel like there are actual concepts on LW that they personally can learn and apply. It’s sort of like the “Shock Level” theory, but instead it’s “Rationality Level”:
Rationality Level 0- I don’t think being rational is at all a good thing. I believe 100% in my intuitions!
Rationality Level 1- I see how being rational could help me, but I doubt my personal ability to apply these techniques
Rationality Level 2- I am trying to be rational, but rarely succeed (this is where I would place myself).
Rationality Level 3- I am pretty good at this whole “rationality” thing!
Rationality Level 4- I Win At Life!
I bet with some thought, someone else can come up with a better set of “Rationality Levels”.
“So, I hear you care about rationality. What’s that about?”
I’m not sure if this deserves its own article, so I’m posting it here: What would be an interesting cognitive bias / debiasing technique to cover in a [Pecha Kucha](http://www.pecha-kucha.org/what) style presentation for a college writing class?
Given the format, it should be fairly easy to explain (I have less time than advertised, only 15 slides instead of 20!). So far, I’ve thought about doing the planning fallacy, the representativeness heuristic, or the disjunction fallacy. All three are ones I can already speak casually about and don’t leap out at me as empowering motivated cognition (...a topic which would empower it, huh).
I would personally like to do Bayes’ Theorem, but I can’t (1) think of a way to compress it down to five minutes, or (2) think of a way for other people to help compress it down to five minutes without also omitting the math.
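For what it’s worth, the usual compression is a single worked example in natural frequencies; here is a rough sketch using the standard textbook disease-and-test numbers (not anything from this thread):

```python
# Classic compressed Bayes example: a 1% base-rate disease, a test that catches
# 80% of real cases but false-alarms on 9.6% of healthy people.

population = 10_000
sick = population * 0.01            # 100 people actually have the disease
healthy = population - sick         # 9,900 do not

true_positives = sick * 0.80        # 80 sick people test positive
false_positives = healthy * 0.096   # ~950 healthy people also test positive

# Of everyone who tests positive, what fraction is actually sick?
p_sick_given_positive = true_positives / (true_positives + false_positives)
print(f"P(sick | positive test) = {p_sick_given_positive:.1%}")  # about 7.8%
```

Counting people instead of manipulating the formula is about the only version of this I can imagine fitting into five minutes of slides.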
Downvote if this is off topic. If not, please tell me why because I’ll just assume it’s an offtopic downvote!
It’s about figuring out the mistakes that people tend to make, so you can avoid making them. (“Like what?”) Like people aren’t good at changing their minds. They only want to think about information that supports what they already believe. But really, I should look at all the information that comes my way and decide—is my old belief really true? Or should I change my mind based on the new information I got?
This may be difficult to answer appropriately without knowing what the hypothetical speaker means with “emotions” (or “expect”, for that matter). But the phrase seems to me like a potential cached one, so ve may not know it either.
A possible elevator response below:
Rationality is not Vulcan-like behavior; you don’t have to renounce your emotions in order to act rationally. Indeed, for most people, many emotions (like affection, wonder, or love) are very valuable, and applied rationality is knowing how to obtain and protect what is truly precious to you.
What is important is to rationally understand how your emotions affect your judgment, so you can try to consciously avoid or dampen unwanted emotional reactions that would otherwise have undesirable consequences for you.
Does Sark’s recent tweet, “Intuitions are machines, not interior decoration,” work as an elevator pitch, or is it too opaque to a non-LWer? Or is it too short? Maybe it’s a fireman’s pole pitch.
I find the conscious AI response to be the most compelling. Now that I think about it, that’s more evidence for the usefulness of concrete examples.
Yes, but science is all about using whatever methods work to produce more new knowledge all the time. All the new knowledge we can produce with mechanisms that we know are actually trustworthy will eventually become part of science, and the only stuff that’s ultimately going to get left out is information we can only generate through means we know aren’t reliable at producing truth.
Biases from Wikipedia’s list of cognitive biases. Cue: example of the bias; Response: name of the bias, pattern of reasoning of the bias, normative model violated by the bias.
Edit: put this on the wrong page accidentally.