Nonmindkilling open questions
When I explain to people how beliefs should be expressed in probabilities, I would like to use an example like “Consider X. Lots of intelligent people believe X, but lots of equally intelligent people believe not-X. It would be ridiculous to say you are 100% sure either way, so even if you have a strong opinion about X, you should express your belief as a probability.”
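One way to see why 100% would be ridiculous: under a proper scoring rule like the log score, misplaced certainty costs unboundedly much. A minimal Python sketch (the function and numbers are my own illustration, not part of any argument here):

```python
import math

def log_score(p_assigned: float, happened: bool) -> float:
    """Log of the probability assigned to whichever outcome occurred.
    Closer to 0 is better; assigning probability 0 to the truth costs infinitely much."""
    p = p_assigned if happened else 1.0 - p_assigned
    return math.log(p) if p > 0 else float("-inf")

print(log_score(0.90, happened=False))  # -2.30: a real but finite penalty
print(log_score(0.99, happened=False))  # -4.61: worse, but still finite
print(log_score(1.00, happened=False))  # -inf: certainty plus wrongness = unbounded loss
```

Someone who was merely 90% sure and wrong takes a recoverable hit; someone who said 100% and was wrong can never make the loss back.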
Trouble is, I’m having a hard time thinking of an example to plug into X. For an example to work, it would need the following properties:
Factual question. So no value-laden questions like “Is abortion morally acceptable?” or counterfactual questions like “Would the US have won a war with the Soviet Union in 1960?”
Popular and important question. The average person should be aware it’s an issue and care about the answer. So no “My aunt’s middle name is Gladys” or “P = NP.”
High uncertainty. Reasonable people should be reluctant to give a probability >90% or <10%.
No opportunity to gain status by signaling overwhelming support for one side. So cryonics is out, because it’s too easy to say “That’s stupid, I’m 100% sure cryonics won’t work and no intelligent person could believe it.” I’m assuming in any debate where you can gain free status by assigning crazy high probabilities to the “responsible” position, people will do just that—so no psi effects, Kennedy assassination, or anything in that cluster. I need a question with no well-accepted “responsible” position.
Minimal mindkilling effect. My previous go-to example has been global warming, but I keep encountering people who say that global warming 100% for sure exists and the only people who could possibly doubt it are oil company shills. Or if I were to try the existence of God, I predict half the population would say it’s 100% certain God exists, and the other half would say the opposite.
So what are the important questions that average (or somewhat-above-average) people will likely agree are complicated open questions where both sides have good points? And if there aren’t many such questions, what does that say about us?
Medicine provides many important-feeling examples that aren’t politically charged.
Will she survive the operation?
Does he have the flu or food poisoning? Or West Nile Virus?
How long will this migraine last?
Medical diagnosis (i.e. flu v. food poisoning) seems like a particularly accessible example.
Is it even possible to have an open question that lots of people would understand that wouldn’t serve for signaling?
I was thinking along the same lines, then saw your comment. I suspect an issue can’t really become “popular” without some signaling or wishful thinking involved.
Probability of a major earthquake in California this year? High, if you hope those damnfool leftcoasters are finally going to get what’s coming to them. Low, if you have a lot of money tied up in property in California.
Almost everything seems to serve for signaling at least somewhat.
Even with my example, which I think has an unusually dense combination of how commonly it gets asked, how fun or interesting it seems, and how impartial or non-partisan the answer would be, you could certainly still signal pessimism by saying no, or futurism by saying yes, or whatever.
I think the point would be less to find something that wouldn’t serve for signaling at all, and more to come up with something that would be the least infected with the most mind-killing sort of signaling. Anything sufficiently interesting and common probably has at least some non-trivial signaling incentives.
How about “The stock market will rise over the next month/year/decade”, or “unemployment will go up”, or some similar economics question?
I’d actually go more specific: Will the stock market rise TOMORROW (alternately: in 3 days / next Thursday / etc.)? There’s a lot more doubt on a smaller time scale (or, at least, that’s my impression looking at the little squiggly graphs?), whereas general trends of “it will go up/down over the next month/year” are an area that plenty of people seem willing to assert unwarranted confidence in. I’d expect (or at least hope) that very few people would put >90% odds on it going up/down on a daily basis.
It also strikes me as a very CLEARLY uncertain answer—even someone who believes it will rise over the next month, year, and decade will probably concede that it’s variable for tomorrow. But, at the same time, it’s not a simple 50/50: if you believe it will rise over the next year, you’re probably going to assume a higher chance of it going up tomorrow too.
(I haven’t touched the stock market in two decades, so I might be making a stupid mistake in reasoning here)
Alternatively, a specific stock could work, possibly over a longer time-frame, and I think it would be easier to have an interesting discussion (if that is one of the goals) about it. Someone who doesn’t follow the market extremely closely probably won’t have anything to say about why they think the market has a certain probability of going up/down next Tuesday, but people should generally be able to form a coherent (even if not particularly accurate) impression of a major company’s prospects from general knowledge.
In order to avoid mind-killing, it would be best to avoid AAPL or any company in a politically charged sector (energy, banking, defense, healthcare), but that leaves plenty of room. I would be most inclined to go with GOOG, because everyone is very familiar with their products.
Asserting p(GOOG goes up over the next year) > 0.9 doesn’t seem that absurd to me. Asserting p(GOOG goes up tomorrow) > 0.9 does seem a bit absurd to me.
I’m not sure whether one is ACTUALLY more absurd than the other, but it intuitively FEELS to me that an economist OUGHT to be able to make long-term predictions about major, stable stocks with 90%+ confidence.
Even if this intuition is wrong (and I’d love to hear if it is!), I think many people will share this intuitive reaction. The goal is after all to select an example which is both factually correct AND intuitively appealing to people. So, an unintuitive but true statement would lack that “gut reaction” value.
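For what it’s worth, here is a toy check of that intuition: under a simple geometric-Brownian-motion model of a stock (all drift and volatility numbers below are invented for illustration, not estimates for GOOG), the probability of ending up over a year is nowhere near 0.9:

```python
import math

def p_up(mu: float, sigma: float, t_years: float) -> float:
    """P(price ends above its start) under geometric Brownian motion:
    log-return ~ Normal((mu - sigma^2/2) * t, sigma^2 * t)."""
    z = (mu - sigma**2 / 2) * math.sqrt(t_years) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

mu, sigma = 0.08, 0.25           # made-up annual drift and volatility
print(p_up(mu, sigma, 1 / 252))  # tomorrow: ~0.505, barely better than a coin
print(p_up(mu, sigma, 1.0))      # one year: ~0.58, well short of 0.9
print(p_up(mu, sigma, 10.0))     # a decade: ~0.73
```

So under this crude model the intuition does look wrong: a year of plausible drift buys you roughly 58%, not 90%, and even a decade only gets you to roughly 73%.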
Indeed, predictions as a general category:
“[Insert candidate] will win the next election for [insert office].”
“[Insert speculative technology] will be widely available by [insert year].”
“The next major earthquake in California will occur within the next 10 years.”
One good feature of using Intrade predictions: rather than having to count on the audience’s agreement that there exist smart people on both sides, you can simply point to a well-traded contract hovering around 50% and note that even if many of the traders are irrational, there are enough smart rich people to buy it up/down if the answer really were obvious to the intelligent.
Many of these sorts of questions (both broad economic and political outcome type questions) have obvious political implications. Thus, they are likely to be very mind-killing.
Without doubt, so one would have to choose carefully.
That said, I would expect that, even on the most politically charged topics, people would be more comfortable with uncertainty regarding predictions than regarding other beliefs. For example, it’s probably easier to have a mentally nonlethal discussion about whether Obama will be re-elected than one about whether he should be.
Easier—yes.
Actually easy to have that conversation without mind-killing—in many cases, probably not.
Agreed.
That’s exactly the kind of question the Good Judgment Project is asking, but only about events in other nations. I don’t think most Americans care much about who will be elected president of the Philippines or whether peace talks between x and y nations will resume before May 1st.
I’d think that, especially (but not only) in a presidential election year, this question would be corrupted by politics. If the current administration represents My Team, then it is certainly handling the economy well, and therefore, the stock market will rise and unemployment will fall.
If it smells like it might earn them money people will drop signaling in favor of actually instrumental behaviors FAR more readily. I think this is a winner.
I tentatively suggest there’s a pattern here.
By default, and in practice for the great majority, no factual question can be regarded as popular or important unless it provides an opportunity for status signaling or mind killing.
However, if there is something like a prediction market, a tiny minority will adapt to become specialists in making accurate and profitable predictions.
This applies to sports and stock trades. Most people will be happy to be a [LOCAL SPORTS TEAM] fan, and will happily remain biased for signaling purposes, maybe making penny-ante bets to show loyalty. Professional bookmakers in Vegas and professional coaches of sports teams have to look at reality, or else find another job. Similarly, people in finance may have strong political opinions on their own time, but if they don’t make money, they’ll be out of a job.
This doesn’t necessarily help if you’re trying to “explain to people how beliefs should be expressed in probabilities” to the vast majority of people who don’t have skin in the game. But you could appeal to their imagination.
That sounds like a massive overestimate of the percentage of atheists.
Not to mention a massive underestimate of the intermediate positions (the doubting faithful, agnostics, people with consciously chosen, reasonable epistemologies, etc.), which this sets to zero. I’ve met plenty of more liberal theists who didn’t assert 100% certainty.
It depends what you mean by ‘God’.
“What is the probability there is microbial-like life (other than from earth) in our solar system?”
I’m having difficulty giving this a good estimate, myself, actually.
This is good because it’s not a future event.
Can you elaborate on what you mean by this comment, Nisan? I don’t see why that is relevant.
The future seems less definite than the present. I can imagine someone telling Yvain “Well, sure, we can assign a probability to whether there will be a major earthquake in California in the next ten years, because it hasn’t happened yet. But a proposition about the present is either true or false; probabilities aren’t appropriate for that.” I’ve never heard anyone say that, but I think it’s something people would say.
So why does that mean that it is good to have difficulty coming up with a good estimate of whether an existing statement is true? It seems like your hypothetical argument is obviously wrong from the Bayesian point-of-view, for example see this recent article. You sound like you don’t support this hypothetical argument, so I still don’t understand your original comment.
As I understand it, the purpose of Yvain’s post is to come up with specific propositions that he can use to convince people that probabilities are appropriate for thinking about propositions in general. calef’s comment above is a pedagogically useful proposition for this purpose because it satisfies most (if not all) of the criteria Yvain listed in his post. My comment to calef points out an additional point in its favor: The proposition is not about a future event, so it sidesteps a possible pedagogical failure mode that I described in the grandparent.
I think you misidentified what the word “this” refers to in my response to calef.
Aha! It suddenly makes sense. Thanks.
The best among the ones I’ve read so far. (E.T. Jaynes used “There once was life on Mars” as an example for something similar.) Though it doesn’t fully meet condition #2, as I guess most people wouldn’t give a damn about that. I’m not sure conditions #2, #4 and #5 are compatible.
Will it rain next Thursday?
Where I am, I would feel comfortable assigning that <10% probability.
Will it rain in London next Thursday?
(11-15 rainy days per month)
http://www.weather.com/weather/tenday/UKXX0085 (see for yourself). Regardless, these aren’t particularly open questions. I would trust those people who have spent years developing fairly good models to have a better answer than I do.
Will it rain in London six months from today? (Perhaps I’m planning an outdoor wedding.)
I don’t believe anyone has a way to do better predicting six months out than just looking at historical rain/shine rates.
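As a sketch of what “looking at historical rain/shine rates” means here, using the 11-15 rainy days per month quoted above:

```python
# Six months out, the sensible forecast is roughly the historical base rate.
rainy_days_per_month = (11 + 15) / 2  # midpoint of the figure quoted above
print(rainy_days_per_month / 30.4)    # ~0.43: P(rain) to quote for any given day
```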
My point is that while this is non-mindkilling and an open question, there isn’t much productive discussion you can have about it. Forecasters can slightly beat the averages when talking 6 months out, but they do that through the use of sophisticated computer models, not rational discussion. Without spending months or years learning how exactly those models worked and developing improvements, I don’t expect to be able to come up with a better answer than the current accepted one. So this really isn’t an open question of the class we’re interested in.
I’m not sure whether you and Yvain are using “open question” the same way here. I think Yvain is just using it to mean “we don’t know either way”, not in the “not figured out yet but we want a real solution” sense of open problem.
Yes, but open problems that we won’t come up with better answers than the answers already out there are not terribly useful to discuss.
But the point is not to discuss them, it’s to show people that they should not assert 100% probabilities.
I don’t really see sides here. It’s more “the forecast says x%”. So while reasonable people will admit to probability estimates other than 0 or 100%, that’s because of the format the information is presented in.
Substitute “next week” or “next month” as appropriate.
Next week: 90%+ Next month: See next week.
So, you very much expect it to rain some time next week, but specifically not next Thursday. Is that because you have storm clouds coming in? ’cause if they’re independent, that math doesn’t work out at all.
Probability of 7 days of not-rain: 0.9^7 > 0.1.
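Spelled out (a minimal check, assuming independence across days):

```python
p_rain_day = 0.1                        # the "<10% on Thursday" claim, applied per day
p_no_rain_week = (1 - p_rain_day) ** 7  # 0.9**7 ~= 0.478
print(1 - p_no_rain_week)               # ~0.52: rain sometime next week, not 90%+
```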
Indeed, I am expecting no rain on Thursday, but it’s raining now. Specifically on Thursday, we will have a dry air mass moving in.
That’s a good one—it also allows for a very easy demonstration of updating one’s belief on new information (i.e., as the day comes closer).
Questions from DAGGRE with a current estimate between 10 and 90%:
Will Ayman al-Zawahiri still be recognized as the leader of al-Qaeda as of 31 December 2012?
Will there be an official announcement of a new sovereign debt restructuring program for Greece before June 1, 2012?
Will the forces of Alassane Ouattara defeat the forces of Laurent Gbagbo in the Ivory Coast before 1 December 2012?
Will Croatia’s GDP grow more than 0 percent in calendar 2012?
Will €1 buy less than 1.20 Australian dollars at any point before 1 January 2013?
Will there be a 50%-effective malaria vaccine available for general use before 2015?
These would all make decent examples, but I’m surprised by how many posts here have concentrated solely on future events (which are obviously uncertain). I’m curious whether there are any questions about the past or present that fit these criteria.
In order for probabilities to be logical as an approach, you need ignorance. Ignorance is in much more ample supply in the future than it is in the past.
Some suggestions:
Events subject to institutionalised gambling. (I imagine LW doesn’t have a massive sports fan or gambling element)
Popular, non-crackpot conspiracy theories (admittedly, I can’t think of many)
Historical mysteries (Amelia Earhart, crew of the Mary Celeste, etc.)
Common misconceptions
Etymology of words
A non-crackpot conspiracy theory: what’s the connection between News International and the murder of Daniel Morgan (see the 2011 News of the World “investigative journalism” scandal)?
People care about their health, and there’s a lot we don’t know about even very popular, non-technical habits.
Does a daily aspirin help prevent cancer?
Is running better for your health than other exercise?
[Will you live longer if you switch from a typical American diet to a vegan diet?][oops. Okay, mindkillers are everywhere.]
(For specific probabilities, add parameters, e.g., ‘live 3 months longer.’ Or ask what parameter value the 50% probability should be at.)
Vegetarianism/veganism vs. meat-eating is a mind-killing subject.
I think that if you changed “vegetarian” to “less meat”, it would no longer be mindkilling.
Ah. Point.
It’s still a good idea, though. If you changed it to, say, paleo vs. Okinawan diets, I think it would work fine.
[Sports team listener is known to like] will win the [upcoming sports event].
I think that there, you would be better off looking up the odds that your favorite bookie is offering.
If you can convince people to check a bookie’s odds instead of asserting “WOO! Go team go! You’re #1!” then I think you have succeeded in raising the sanity waterline...
Clearly cheering on a sports team and checking a bookie’s odds fulfill two different functions. One is about signaling, whereas the other is about improving your knowledge.
Trying to convince someone to forsake one in favor of the other makes about as much sense as telling them to buy a Prius instead of learning to juggle.
Agreed. Teaching someone to differentiate those two would still probably qualify as raising the sanity waterline, though. People do make wagers and answer research questions based on “social signaling” behavior, and I doubt that this is usually desired or worthwhile (i.e., I doubt many people make bad $100 bets because they want to signal loyalty; I think they make them because they are genuinely confused).
You’re right, unfortunately.
Maybe whether we’ll encounter intelligent life from another planet sometime within the next century?
By encounter, do you mean receive messages from? Because as far as I know the chances of us ever leaving the solar system without any sort of Superintelligence are slim.
I mean anything involving us learning of their existence. Receiving messages from, observing from far away, actually coming into physical contact with, or anything else that would mean they exist.
So I guess the question is whether we’ll find out there’s intelligent life elsewhere in the universe, and specifically whether we’ll do so within a certain time frame (such as a century).
This seems like a good example of what Yvain is looking for because there’s no common political or social signaling incentive (right?), it seems pretty up in the air, it’s sort of a fun question for most people, it’s common, etc.
You could ask: Was the Trojan War an actual historical event?
It is not actually a popular question, but it is a question about a popular subject. I wouldn’t say it’s important, but it fits all the other criteria. You could fill the listener in on the details.
E.T. Jaynes in PT:TLoS used “Achilles is a real historical person (i.e., not a myth invented by later writers)” in an example. (I don’t like it because it’s not binary: there’s a whole continuum between writers inventing him completely from scratch not based on any real individual at all, and writers having always been as truthful about him as they could have been. I don’t think either extreme is true.)
So this might be the result of huge selection effects, but most “average or somewhat-above-average” people I’ve met, e.g. people at my high school, were agnostic, and weren’t very mind-killed on the subject; in fact, they were mostly uninterested, and wary of both theistic and atheistic evangelism. I think the God question would work just fine for such people.
ETA: Ah, Vaniver already said similarly.
Reading the comments, I’m not sure we’re addressing the question you’re asking.
Prediction market proponents have put a lot of thought into how to turn mind-killing open-ended questions (far-mode) into resolvable bets (near-mode), often by use of conditional wagers (chance of Y, wager valid only if X happens). This is a good mechanism for applying percentages to such questions.
However, it completely bypasses the point I think you’re trying to make, which is that all useful beliefs are of this form. Basically, you want a simple-example summary of Making Beliefs Pay Rent.
I’m not sure there exists such a single example—humans are very good at separating the signaling and identity “beliefs” from testable, resolvable “predictions”, and each of us has different topics with different ease of analysis. This means that it takes quite a bit of thought, study, and reflection to see how the word “belief” means different things, which shouldn’t be mixed.
I think the direction to go in these discussions isn’t so much to find a topic that’s often expressed as an identity-signal “belief” and push someone toward a prediction-probability “belief”. Instead, you can ask “what choices are you making based on this belief, and what’s the cost if you’re wrong?” When you find topics on which the individual can admit that there is a concrete wager being made, then you can guide them to understanding how belief works.
And of course, if they can’t tie their belief to any decisions, or understand that the decisions would be different if the belief were incorrect, then you can point out that this kind of belief doesn’t help the holder of the belief.
It’s a bit esoteric for many people, but nobody knows whether or not the LHC will find the Higgs boson...
I don’t think that’s a good question. You’re right that, technically, nobody knows, but my impression is that many physicists would be comfortable assigning p > 0.9 that it will.
The LHC has already collected a few sigmas’ worth of evidence that the Higgs boson exists. (I’m assuming find means ‘> 5 sigma’ here.)
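For readers who don’t speak sigma, here is a small conversion from that convention to one-tailed p-values (the standard formula, nothing specific to the LHC):

```python
import math

def sigma_to_p(n_sigma: float) -> float:
    """One-tailed p-value for an n-sigma excess (the particle-physics convention)."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(sigma_to_p(3))  # "evidence":  ~1.35e-03
print(sigma_to_p(5))  # "discovery": ~2.87e-07
```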
How about nutrition related questions? The exact wording of the question is tricky, but something like
“Will a low fat diet help most people lose weight?” isn’t the kind of thing that inspires 0% or 100% responses.
Too vague; “a low fat diet” could be taken to mean replacing fats with carbs, replacing fats with proteins, or just eating less in general. Otherwise I think it’s a fine idea.
Diets are mindkilling. Someone who is or has been on a diet, or has someone important to them who has, will treat choice of and necessity of dieting as a purely affiliation/signalling game.
By that criterion, anything people can have an opinion on is “mindkilling”. Yes, people are vulnerable to becoming entrenched in any way of thinking, but some are less vulnerable to such than others. And in my experience, diets are something people tend to be open to a variety of opinions on.
Anything people can have an opinion on is potentially mindkilling, yes, but I was saying diets are actually so for very many people. Basically, my experience runs counter to yours: I’ve seen a lot of people being unreasonable and irrational about dieting, whether or not they are/were dieting themselves.
I’m happy that you can have rational conversations about diets with people. I still think my experience isn’t extraordinary and so diets would be a poor choice for the OP’s question.
You’re right here, of course. It would seem, however, that this is such a basic element of our daily existence that it may be worth simply ignoring those with mind-killing issues on the subject and bullying the subject down to one of plain good decision-making and science, at least in this context.
I agree that 1, 2, and 3 together eliminate all non-mindkilling topics. For a question to be well-known and unanswered, the world must not push back on it very hard (not important). Seemingly important unanswered questions are precisely the domain of religion, on one model (“why are we here?”). And it’s been observed that the mindkilling aspects of debates increase as their observable consequences shrink.
I’m going to go out in the weeds, and simply point at places where probabilities are used all the time: betting.
will there be more than one winner of the XYZ Jackpot Lottery on this drawing?
who is going to win the superbowl?
what are the odds of the Buffalo Sabres making it to the playoffs?
who will win such and such boxing match?
which horse will win this race?
Odds are well understood in these environments, and by picking a sport that a person is only peripherally interested in, most of the mindkilling effects should be reduced to a non-dangerous level.
But isn’t the point to find something where it’s sort of unexpected to express the belief as a probability? I may have misunderstood Yvain’s intention, but I don’t think he was looking for examples where everybody already expresses them in probabilities.
I thought he was trying to figure out how to explain the important epistemological point that an empirical belief must be a probability (range somewhere 0-100%, usually not at either extreme), because a lot of people seem to treat a large category of empirical questions as “you think for a while, and then make up your mind”.
I’m not sure whether I’m being clear enough here, but perhaps I could at least make my position seem a little bit more likely by pointing out that it seems highly unlikely that Yvain would have made a whole discussion topic about this if answering it was as easy as simply pointing to a gigantic, well-known category of beliefs that are always expressed in probabilities. It would be like asking for an example math problem that uses exponents.
Actually I might have just thought of a much better way to put it. I think his intention was to find a good example to include in an attempt to explain the classic LW assertion that probability is not in the territory. Everybody knows that certain things are usually expressed in probabilities (like whether the coin will be heads or tails), but most people don’t realize that every thought process is a probability, and you can assign a probability to every belief depending on how likely you think it is that you thought through it properly.
In fact, maybe that invalidates me qualifying my writing earlier with just empirical beliefs. Isn’t it the view on here that every belief is a probability, because a probability can also be you gauging how likely it is for your thought process to be sound, rather than something in the territory? Can even a non-empirical question have a probability, just not one that one must come to via frequentist methods (testing a bunch of times and seeing the ratio)?
I don’t know any probability theory, so maybe I’m way off here. This post turned from a random observation to a winding attempt to grapple with some (perhaps easy) problems. Anybody who has any thoughts on the matter, it would be appreciated.
Nitpick: I think that most empirical beliefs are very close to the 0 or 100% ends, so much so that we don’t even feel any uncertainty.
That said, you make an excellent point and I think you’re right about Yvain’s goal here.
I’d say that the lottery, specifically those with a variable number of winners, are the best bet. There’s not a lot of emotional investment, unless the person expects themselves to win and a single winner is likely (and even then, if they believe > 90% odds that they’ll win, they’re quite probably a lost cause to begin with)
For those which rarely have winners and tend to just grow each week, “time until next winner” or the like would probably also work.
Sports/horses/gambling, by contrast, strike me as mind-killers for a lot of people.
Replied to the wrong comment. Sorry.
The Space Program? There are reasonable people who say it’s vital for the future good of our species, and reasonable people who say it’s impractical and unnecessary, at least for the time being.
Sounds more like a value question to me.
“Is the fetus growing in [pregnant woman you both know]‘s uterus a boy or a girl?” A tremendous number of people think that order of birth, shape of the belly, pulse on the pinky, etc. can be used to predict the baby’s sex (anecdotally these methods, combined with people’s guesses, come out 50/50; go figure!), which, if enough of them agreed, could arguably be used to move marginally away from the 51/49 base rate. This probability could be updated after an ultrasound, but still not move to 100%, as there are well-documented error rates associated with sexing a baby via ultrasound. Even more fun, these error rates are result-specific (I can’t remember the numbers; they are buried in a baby book): a “boy” result is wrong less often than a “girl” result, and I seem to remember there is operator error involved (some technicians are wrong less often than others). If a blood test is done on the fetus, it could be used to update the probability further. The final test, inspection upon birth, will update the probability again, but will not quite get to 100% unless tests were done to rule out a rare abnormality (i.e., apparently a boy, but has an ovary).
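The ultrasound step is a clean Bayes update. A minimal sketch, with the error rates invented for illustration (keeping only the comment’s qualitative claim that a “girl” reading is the less reliable of the two):

```python
def posterior_girl(prior_girl: float,
                   p_reads_girl_given_girl: float,
                   p_reads_girl_given_boy: float) -> float:
    """Bayes update after an ultrasound that reads 'girl'."""
    num = p_reads_girl_given_girl * prior_girl
    den = num + p_reads_girl_given_boy * (1 - prior_girl)
    return num / den

# Base rate: ~49% of births are girls. The error rates below are invented.
print(posterior_girl(0.49, 0.95, 0.10))  # ~0.90: a big update, still not 100%
```

Swapping in real, technician-specific error rates would change the numbers but not the shape of the update: the posterior moves a long way toward certainty without reaching it.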
As an afterthought, I don’t actually care about the “popular and important” part of it—I usually ask someone for the population of Indonesia, and then to make me a confidence interval. So if he says 2 million, I ask him for a 98% confidence interval and then show him that he was wrong. If you’re interested in trying this, make your own 98% confidence interval [two numbers X,Y such that you are 98% sure that X < population of Indonesia < Y] and then Google it.
Upvote this comment if your X<Pop(indonesia)<Y. [i.e. you made a good confidence interval].
I will do so once there’s a balancing karma sink :-).
For those who think like gjm, downvote this comment once you’ve upvoted the other one.
[though as a side point, if you found the poll worth taking part in, then you found it worthwhile enough to a) read, b) do some [admittedly trivial] research, c) respond to. I think that means I’ve earned a karma point from you]
I don’t care if he gets a few meaningless internet points for making a poll.
Furthermore, I don’t care whether you care if he gets a few meaningless internet points for making a poll.
(I don’t care whether TraderJoe gets a few meaningless internet points either. I do, however, prefer a world in which the meaningless internet points have roughly the meaning they’re intended to have.)
Upvote this comment if your confidence interval was too tight and either Pop(Indonesia) < your X or Pop(Indonesia) > your Y.
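For anyone wanting to run this exercise beyond a single question, scoring your calibration is mechanical. A sketch with an illustrative answer sheet ((x, y, truth) triples; the reference values are approximate):

```python
# Did your stated 98% intervals contain the truth about 98% of the time?
answers = [
    (100e6, 400e6, 245e6),   # population of Indonesia, roughly, circa this post
    (5_000, 12_000, 8_848),  # height of Mt. Everest in metres
    (1700, 1820, 1789),      # year the French Revolution began
]
hits = sum(x <= truth <= y for x, y, truth in answers)
print(f"{hits}/{len(answers)} intervals contained the truth")
# Well-calibrated 98% intervals should miss only about 1 question in 50.
```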
“Will The Artist win the Best Picture Oscar?” would probably have fit your criteria in the run-up to the Academy Awards. Not sure what the current analogue is—probably some similarly ‘unimportant’ yet highly publicised award/reality show exit.
I think when you have a question that fits the first three criteria, it always devolves into mindkilling. (Operating systems are a good example.)
The only time this doesn’t happen is when the question is not popular/important. If you want to find an example, you’re going to have to let go of either #2 or #5.
Observably, most factual questions are indisputable among intelligent people; for example, “Is the Earth round?” is a question that anyone fortunate enough to have some basic intelligence and an elementary-school education is unlikely to argue about. But as soon as a question admits real disagreement, one opens the can of worms that is mindkilling and bias. For example, if you had two towns side by side, populated by young adults of equal intelligence and equal education, and each had a sports team that competed against the other town’s, the people living in each town would claim their team to be superior, without evidence other than “I live in this town.” Hence, bias.
Is string theory more closely correct than any other current theory of physics?
I, at least, have essentially no way to judge this question and so don’t really know what probability to assign it. Perhaps 1/n where n is the number of viable theories of physics? Plus a bit since it does seem to have more expert backing than anything else.
How/where memories are stored. It was salient for me, but even so it seems to satisfy all your conditions, except perhaps “important”, since it doesn’t bear on many decisions other than whether to sign up for cryonics.
(I would like there to be no replies to this comment saying “Memory is probably stored in X via process Y” because yes, probably. That’s the point.)
Doesn’t seem sufficiently common.
Might work in the right crowd, but I got the feeling that Yvain was looking for an example that a lot of people could relate to. I mean, I hang out on here, and don’t even have a first thought on the topic, nor would I expect to get the significance of whatever example answers and probabilities are given.
Questions in the area of health maintenance, weight control, diet & exercise.
I’d call that fairly mindkilling, actually—it’s not a very partisan issue, but there are definite identity groups associated with it.
Mind-killing for some, but not others. Find someone who hasn’t hauled it into their identity, and it might work.
Are GMO foods safe?
“Safe” for whom? This could mean several different things:
Does eating (a particular) GMO food cause health problems?
Does planting (a particular) GMO food crop cause chemical environmental problems, e.g. a chemical expressed by GMO crops causing beneficial insects to die off?
Does planting (a particular) GMO food crop cause biological environmental problems, e.g. by becoming an invasive weed?
Does planting (a particular) GMO food crop cause economic or governmental problems, e.g. by contaminating adjacent non-GMO fields with pollen from a patented gene line?
If GMO crops are very successful in the market, will this lead to crop monocultures that will experience catastrophic failures?
As with “organic food”, the halo/horns effect is strong with this one.
Unfortunately, this question has become entangled with political/personal identity issues, and thus has become quite mind-killing.
Greenpeace and the Heritage Foundation both have strong (and opposed) opinions on the matter.
Suppose you were to calculate expected value (e.g. expected change in final utility) based on your probability estimates. How certain would you be in that number? If not 100%, adjust the number accordingly to take that % into account. Now, how certain are you in that number? Rinse and repeat.
People are not terribly good at transmitting and receiving, across an adversarial relationship, what exactly they are 100% sure about. But it is clear that one has to end up with some value one trusts completely. One could, e.g., side with the consensus and trust that completely, or one could try to weight different people’s opinions using made-up numbers; but in the end you will have a number you trust, and if you don’t, you’ll have to propagate the uncertainty and produce another number for the ‘expected value’ that you do trust. If you want to do that forever, you can solve it using algebra and again have a number you’ll be acting on as if it were true. That is just how probabilistic reasoning works.
Naturally, the original AGW estimates, by scientists, are just that: expected values. I find it very dubious that you can improve on the consensus expected value (and standard deviation) by factoring in the contesting views yourself, with your own personal probability estimates.
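One toy way to formalize the “rinse and repeat” regress in the parent comment: shrink your estimate toward some fallback by your confidence in it, then apply the same doubt to the result, and so on. The model and numbers below are mine, for illustration only:

```python
def iterate_trust(estimate: float, fallback: float, c: float, steps: int) -> float:
    """Shrink an estimate toward a fallback by confidence c, repeatedly."""
    for _ in range(steps):
        estimate = c * estimate + (1 - c) * fallback
    return estimate

# Every finite pass is a definite number, and the infinite regress has a
# closed form: the fixed point of x = c*x + (1 - c)*m is simply x = m.
print(iterate_trust(10.0, 2.0, c=0.9, steps=1))   # 9.2
print(iterate_trust(10.0, 2.0, c=0.9, steps=50))  # ~2.04, approaching m = 2
```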
Probably the question of the Multiverse versus Collapse interpretation in quantum physics could suffice. It fits your needs as a factual and somewhat important question, and there is no overall agreement among the relevant experts (although the trend toward MI seems to be becoming clear...). Furthermore, I guess there aren’t any status or mindkilling issues attached to it, though (and this might rule the question out in some cases) a religious person might feel otherwise and reject the idea of “multiple creations” at first sight. Regarding “high uncertainty,” the MI/CI conflict might not fit your scheme, since (as you already know if you have read the Quantum Physics Sequence) there is a strong Bayesian argument for the Multiverse interpretation; but since you are going to talk to people who don’t know about expressing their beliefs in probability, they probably will not have read into that topic either (of course this just describes the standard person on the street, maybe not your discussion-club friends...). If so, at their state of knowledge, there is little reason to favor one over the other. The worst outcome would be for them to consider the problem plainly boring or, being high physics, out of their reach; but if you explain the multiverse beautifully enough and, maybe, ask them to imagine they have to discuss the topic in a school essay, it could become interesting enough.
But in order for them to even give a meaningful probability estimate, they’ll need to spend years actually studying the relevant physics and mathematics. It doesn’t matter how eloquently you explain MW—the Universe doesn’t run on rhetoric.
If you ask people about MW versus CI, from their perspective it’s no different from asking “does the glibbleflop spriel or does it just florl?”
Upvoted. This is why I refuse to hold a position on this (and other similar topics). I also tend to dislike it when people DO choose a side on these sorts of issues, unless they have spent a significant amount of time and effort studying the field. The best guesses that I (and the other non-experts) can come up with rely solely on appeal to authority.
If you do not have enough physics knowledge to have a grad degree in it (I don’t care whether you ACTUALLY have a degree, just the knowledge), then having a strong opinion on MW v. CI is perhaps not the wisest.
Although physics questions require a higher level of knowledge before I feel I have the “right” to form an opinion, there are many other topics for which this is true as well. (For example, there are political questions that I refuse to take sides on: the answer is non-obvious to me, and I don’t feel I have anywhere near enough knowledge to have the “right” to a strong opinion on them.)
Moreover, the CI doesn’t even include the criterion that distinguishes it from MW, so even for the experts there’s no set of observations that would decide the issue!