Query the LessWrong Hivemind
Often, there are questions you want to know the answers to. You want other people’s opinions, because knowing the answer isn’t worth the time you’d have to spend to find it, or you’re unsure whether your answer is right.
LW seems like a good place to ask these questions because the people here are pretty rational. So, in this thread: You post a top-level comment with some question. Other people reply to your comment with their answers. You upvote answers that you agree with and questions whose answers you’d like to know.
A few (mostly obvious) guidelines:
For questions:
Your question should probably be in one of the following forms:
Asking for the probability some proposition is true.
Asking for a confidence interval.
Be specific. Don’t ask when the singularity will happen unless you define ‘singularity’ to reasonable precision.
If you have several questions, post each separately, unless they’re strongly related.
For answers:
Give what the question asks for, be it a probability or a confidence interval or something else. Try to give numbers.
Give some indication of how good your map is, i.e., why is your answer what it is? If you want, give links.
If you think you know the answer to your own question, you can post it.
If you want to, give more information. For instance, if someone asks whether it’s a good idea to brush their teeth, you can include info about flossing.
If you’ve researched something well but don’t feel like typing up a long justification of your opinions, that’s fine. Better to give your opinion without detailed arguments than to give nothing at all. You can always flesh your answer out later, or never.
This thread is primarily for getting the hivemind’s opinions on things, not for debating probabilities of propositions. Debating is also okay, though, especially since it will help question-posters to make up their minds.
Don’t be too squeamish about breaking the question-answer format.
This is a followup to my comment in the open thread.
If you query Less Wrong, what is the probability that the median response is acceptably close to correct? Please provide confidence intervals. Feel free to break out any classes of propositions if you feel it would be unfair/poor form/not very fun at all to group all classes together, but explain why.
We would give you our estimates, but they’re probably wrong.
Seriously: For practical real-world questions, my wild guess is that the most-upvoted answer will be “acceptably close to correct” in about two thirds of the questions that are asked. For more nebulous philosophical stuff like many-worlds and qualia, I’d put our accuracy much lower.
Related is the calibration question in the old survey, though I think the staggering accuracy there was a fluke.
Probability: If the typical modern {person, LWer} knew all the positive and negative effects of taking {modafinil, piracetam, etc.}, they would pay present prices to take them.
Caffeine: person 0.9, LW 0.9
Nicotine (not including reasons for smoking, addictiveness of smoking, or taking nicotine products to break smoking addiction): person 0.1, LW 0.8
Piracetam: person 0.05, LW 0.25
Oxi/Ani/other ‘potent’ racetams: person 0.05, LW 0.4
Amphetamines (Adderall, dexamphetamines, Ritalin): person 0.1, LW 0.8
Modafinil: person 0.3, LW 0.99 (!!)
Source: a rationalist’s interest in nootropics and stimulants, gwern’s site, personal experience with the first four but no statistics. Typical modern person probabilities from discussions with acquaintances of various levels of openness. Summary of probabilities: nicotine and amphetamines are very useful but have negative associations that require high levels of rationality to overcome. Modafinil is very useful but seeing that and using it properly is probably a tad difficult for the average person.
I cannot speak to the probabilities of others, but I can give you an anecdote: me. I am dosing modafinil without a prescription (the deeper irony here is that I would actually qualify for one “on-label”). I studied the literature and ‘folklore’ on the issue before making my decision.
If I am at all representative of a reasonable approximation of a rational ‘typical’ modern person, then I’d say that the probability is very high. (Then again, looking at how my comment history on LW has been treated that may not be a safe assumption.)
Piracetam, on the other hand, appears to have no discernible effect. I’m not even sure that a placebo effect occurs; self-reported “diagnosis” of memory is a poor guide for comparing cognitive abilities, and no material test to my knowledge has demonstrated improved cognition as a result of dosing any of the ’racetams.
As to the “etc.”: case-by-case basis, really. I know apocryphally (“truth in journalism”) that there are many college students and postdocs who ‘abuse’ Adderall.
If you’d qualify for a prescription anyway, doesn’t that indicate that modafinil will do more good for you than others?
It would, if I were dosing for the symptoms that qualify me for the prescription. I’ve been dosing adrafinil and modafinil for years (with off-periods); I’ve only qualified for the last three months.
Given no other information, that suggests that using them has made you dependent on them.
As I noted, I regularly go through “off-periods”, two to three months once every six months or so, as a general precaution against potential liver damage as well as a test against dependency.
Right now my dosing schedule is the recommended daily dose once a day for a three- or four-day period, then “off” for the other four or three days, unless other circumstances require me to skip sleep cycles (the qualifying condition in my case being shift-work sleep disorder, as I work 12-hour overnight shifts).
Where would one go to read more about modafinil?
I have read Wikipedia and Erowid.
If you were to assign a percentage to how much all-around “better” you feel when you are on it, what would it be? For example, 10% better than off? 20%, 30%?
I am very frequently uncomfortable assigning percentages to observations that are not inherently numerical, as humans are notoriously poor judges of probability. That being said, the citation list and “external links” entry for Modafinil on Wikipedia are very extensive. It might also help to follow through with the same on Adrafinil, as the latter is less politicized at this point.
tl;dr version of the below: It’s not about feeling “all around better”: it’s about having control over my productivity cycles, and being able to adapt to alternative cycles of alertness.
The thing about modafinil is that it does not produce a euphoric sensation. It’s not that you “feel” anything in particular; if anything, the frequency of headaches (a common side effect) is greater, so there’s a real argument that it makes you “feel” worse. In contrast, however, it also prevents the onset of mental and physical fatigue. Given the 12-hour metabolic half-life, this has a more prolonged, noticeable impact than caffeine does (at least for me) in terms of whatever “pool of reserves” cognitive load drains; that is, it takes less effort to stay focused, and one experiences far less “grogginess”.
So in terms of allowing me to retain alertness over prolonged periods without experiencing fatigue, it does very well. I have been known to go as long as five days without sleep (in the longest instance to date, there were external extenuating circumstances requiring this) without significant deleterious effects. Prolonged periods do require either escalating the dosage or accepting a decline in cognitive function (similar to being drunk; I’ve noticed a high correlation between how I behave after a 48-hour period and how people with ‘a light buzz’ behave, in terms of inhibition control and reflex response, aside from the window of peak onset from dosage).
Under my regular dosage regimen I frequently sleep roughly three hours per day on-dose and then for twelve hours the day after the dosage window, followed by “normal” behavior. This allows me, as a night-shift worker, to maintain a “regular” social life and permits me to adjust my sleep cycle at will, to the point of forgoing an individual cycle on occasion as I see fit.
Not sure that is really equivalent; Adderall is an amphetamine.
Amphetamines are widely seen as nootropic.
Have you known many long-term users?
First: there’s a reason why I myself am not dosing Adderall or any other amphetamine.
Second: I haven’t known anyone who’s been a long-term user of Adderall or any dextroamphetamine, but I do know people who have been using methamphetamines for as long as a decade.
Third: the point remains that amphetamines as a class are widely seen as nootropic.
I did click on your link before, but I guess I did not really read it, or I would have noticed that the first sentence in that section states exactly my opinion on the issue.
I would guess you probably agree with regard to the side-effect profile and so maybe this just boils down to me being fussy about what they call a “classical nootropic”.
Well, according to the longitudinal studies I’ve seen, Adderall dosing’s long-term effects in adults aren’t all that severe, comparatively speaking. I wouldn’t dose it myself (I already have high blood pressure as it is, and unlike modafinil, Adderall is a genuine amphetamine). In general I’m rather leery of what I (inappropriately) refer to as “psychoactives”, that is, drugs that induce altered mental states (“highs”).
As in, a pill that raises your IQ. Yeah, no such beast exists today.
Confidence interval(s): If the typical LWer knew the extent of all effects of {cardiovascular, weight-training, other} exercise, and they were able to commit to any amount of said exercise and stick to it, how much would they do?
Assume that any time they spend doing exercise would otherwise have been spent doing other work.
If you want to be more specific, what advice would you give to healthy 25-year-olds, to healthy 40-year-olds, etc.?
I would assume this varies greatly by individual, based on biological factors. My answer would be “enough to feel fit”, where moderate levels of physical play would be considered fun rather than dreaded. For me that’s about 40 minutes a day, four days per week, of weight training. I don’t know if that’s typical, but having lived on both sides of the line, the benefits of feeling fit are very easily worth that lost time. Everything else I do benefits; it’s akin to getting enough sleep.
Going on my own anecdotal experience, I think there is substantial marginal benefit to cardiovascular exercise even at the level of 4+ hours a day of said exercise.
For a while I’ve been doing 4 or 5 hours a day of cycling and it seems to have weird cognitive effects similar to those of caffeine. While exercising I seem to sometimes go into some sort of trance state where I’m pretty excited and can do lots of work (at the same time as the cycling) pretty quickly without it feeling like actual work. I can listen to the same dance track 30 times without becoming bored of it. I think this might be due to ‘runner’s high’, or due to increased blood flow to the brain.
Probability that the universe is infinitely large.
P(infinite x,y,z) ~ .00001
Isn’t that an unknowable? We literally have no means of deriving information about the universe beyond our lightcone. And that’s not even touching on what qualifies as “part of” the universe, depending on which definition of “universe” you are using.
It’s an undefined question, I feel.
Mathematics.
Pray tell: what mathematics are applicable? (Note: “Physics” isn’t applicable.)
Furthermore: what can mathematics inform you of if I tell you that I either am or am not thinking of a number that may or may not be imaginary, negative, irrational, rational, positive, complex, whole, or real? Please illustrate.
If physics is not applicable to understanding the nature of the universe then we are all in a lot of trouble.
Tell me; are you in the habit of using English grammar as the ruleset for arranging Zen gardens?
Physics is topically contingent on “the physical”. The Laws of Physics as we know them have, further, been derived through Popperian Falsification. Even within our own lightcone we still, from time to time, see the revival of conjecture as to whether the gravitational constant or others are actually constant, or whether they vary from one region to another. Because nothing outside our lightcone interacts with us, we have no way of knowing which, if any, of the Laws of Physics we have so far are applicable there. We can assume, certainly, but this does not inform us of anything other than our assumptions. Conjecture without corroboration is not derived information about the subject matter.
And all of this is even assuming that there’s a “physical” out there at all. Which, again, because it is not observable, we have no way of knowing. It could be nothing. It could be a micron larger than our lightcone. It could be a mile. Or it could be infinite. Or, under certain even more bizarre conceptions (involving inversions of topology and strange physics), there could conceivably even be less than what we observe.
All of this without getting into philosophical “trickery” such as the simulated universe argument.
So, yes. Physics is not applicable to answering the question “what is there beyond the Earth’s lightcone?”.
I don’t know precisely how likely these three options are, but infinite seems astronomically more likely than any arbitrary amount.
Given assumptions that seem natural now. I don’t actually disagree with those assumptions. But those assumptions are, in fact, assumptions. (Recall the bizarre topology example that permits for negative space beyond our lightcone.)
Given, furthermore, what the original question was, P(Universe-is-Infinite), the question of whether there’s even a ‘something’ out beyond the lightcone remains even more relevant.
And as I believe I originally said, the low confidence necessary to properly express a Bayesian probability prediction here makes it, in my opinion, far more ‘appropriate’ to simply say, “There is as yet insufficient evidence for a meaningful reply.” (Or, short-handed: “It’s unknowable.”)
I don’t think mathematics claims that it can answer that question. It is more focused on answering questions like “what does 1 + 2 = 3 mean, and why do we think it is true?”
Then you agree with my position over wedrifid’s.
But I think “1 + 2 = 3” is true outside our lightcone.
And what information does this allow us to derive about what is outside of our lightcone?
Remember: “1 + 2 = 3” is definitionally true. It would remain true even if the universe did not exist; it is a non-contingent / non-local truth.
Fair enough. I should have referenced the Pythagorean theorem.
I don’t disagree with this statement, but folks here at LW seem to disagree when I assert that mathematics lacks empirical content.
That’s a curious notion. I’m about ready to believe just about anything of the LW commenter community nowadays, though. I’ve been thoroughly disabused of several notions regarding this site’s populace over the last two months. <_<
That being said, it might help if I explain how I parse “real” from “exists”. By my definitions, “real” covers anything which is a proscriptive restriction on the behaviors of that which exists. “Exists” is anything that directly interacts with something else (or conceivably could, or did). I categorize “numbers” in the same ‘area’ as I do ‘the laws of logic’: they are real, but do not exist. Mostly these things can be treated as “definitionally true”; we define “2” as “1+1” and we define “1” as “a single thing”.
(Side note: this neatly resolves the Transcendental Argument for God, by the way. “Resolves” in the sense I am an atheist.)
Not complex or real, most likely… I’d say 37… wait, actually, I call bluff. You have no number in mind. :P
I no longer recall. Perhaps we should attempt to derive my forgotten memory together. What steps should we take to derive that information?
It’s not completely so—it might become apparent that the universe was finite, and that would answer the question. But determining that it is infinite is different—after any finite amount of time, you can only see a finite part of any universe. So you will never know for sure.
How would this come about? The entire problem of attempting to make observations beyond our lightcone is that there are no interactions beyond that boundary.
If you started seeing, at an apparently large distance out in space, earlier versions of the same part of the universe that you are now standing in, you would pretty much know the universe is finite. Light travelling all the way around the universe and arriving back where it started would be a large clue that the universe was finite.
Unknowable? Maybe you can’t be certain but there are indirect reasons to think it is, by noticing that it appears flat with no sign of a boundary. As for multiple definitions with different answers, can you specify two definitions of ‘universe’ that have different answers? I of course do not only mean the observable universe. I don’t see how the question is undefined.
A ‘standard’ definition of “universe” is “all existing matter and space”. If we allow for the many-worlds hypothesis, then the universe is infinitely large even if a Laplacian Demon could know the entirety of the universe at a given state (i.e., simultaneously finite and infinite). If we operate under a definition of “universe” whereby the MWI creates a new universe for each “choice”, then we have no way of knowing where, or if, there is an outer bound of our universe beyond the observable lightcone.
Furthermore, if some variants of M-Theory are correct then our universe may be possessed of a specific shape and be limited in scope regardless; so again it could be finite. And again, under other variants of how we interpret M-Theory, each p-brane and membrane is not a separate universe but part of a whole. Which is presumed infinite.
So the problem is that we have no acceptably rigorous definition of what is a “universe” in order to start making assertions about its finiteness or lack thereof.
Even if we use the conventional “assumption” of what our Universe is, which existed shortly after the ‘discovery’ of the Big Bang (i.e., the collection of galaxies and matter that we can either observe or that directly and observably interacts with what we can observe, and the spacetime continuum these interactions occur within), we lack the ability to derive any information about its scope or dimension.
So no probability assertion about the universe’s scope should, rationally speaking, have anything remotely resembling a high threshold of confidence. Said confidence should, in fact, approach zero.
I am not in the habit of bothering with probability statements whose confidence is below 1%; I find them not merely a waste of time but damaging.
Isn’t it enough to simply say, “There is as yet insufficient data for a meaningful reply” to the question?
So you are against induction, in general? Nothing directly unobservable is knowable? Do you really think that the assumption that the physical laws are the same outside Earth’s light cone as they are inside is an error?
It’s not that I am against induction (in fact, I routinely refer to Popperian Falsificationism as the resolution to Hume’s Problem of Induction). Instead, I am acknowledging that induction has limits. What inductive process will allow you to derive the words written on the can in front of me as I type this?
No. All things which are entirely unobservable are unknowable. Indirect observation qualifies as a form of observation. That which is outside of our lightcone is entirely unobservable (as yet.)
We have no basis for the assumption at all. It furthermore rests on the additional assumption that there is even a “physical” there at all.
Furthermore: there is some disagreement at the “bleeding edge” of physics as to whether gravity is a constant. And that’s just what we can observe.
I recall the admonition that “The Universe is Queerer than we can suppose”. From it, I have a generalized principle: when I have no information to make assertions with, I acknowledge my ignorance. When, however, I observe that no information is available, I note this fact and move on.
Making ‘guesses’ as to the ‘probability’ of assertions when you know your priors are entirely arbitrary is … counterproductive. It can only serve to prime you.
I used to do the same thing and felt quite satisfied doing so. I thought it was settled. But then I started learning about Solomonoff Induction which I now believe is a better solution. If you are a hardcore Popperian Falsification fan, even after learning about Solomonoff Induction, I would suggest reading David Deutsch’s The Beginning of Infinity. It pushes falsification as far as I’ve ever seen and even when you find yourself disagreeing, it’s an interesting read.
We have observed that the universe is regular and that there is nothing special about Earth, as far as we know. That’s quite a good basis for the assumptions, in my opinion. Although I am not completely sure what you mean by “physical” here.
I don’t understand why, in the title of the linked article, a possible information leak from black holes is referred to as “gravity not being constant”. Nor do I understand what this has to do with induction or falsificationism.
The Copernican Principle has served us well. Ironically, it turns out it was somewhat misguided about the Earth itself. I don’t believe that, of the single-digit percentage of planets yet discovered that are categorized as “Earth-like”, any fall particularly close on the parameters relevant to “humans would be comfortable living here if they brought the right flora and fauna with them, Spore-style”. Certainly none of them have been around yellow stars, and all have had rather bizarre irradiation profiles.
As to the regularity of the universe: well, that’s what the notion of a variable gravitational constant was about. I’ve seen conjecture that ‘dark matter’/‘dark energy’ might be nothing more than our failure to recognize that the gravitational constant changes in some regions of space. The thing about the information leak from black holes has to do with a conjectured way of testing that (even Hawking radiation doesn’t retrieve information from black holes; that, according to what we now know, is a one-way trip).
Well, imagine spacetime has a definite, discrete barrier. On our side there’s still ‘physical’ stuff. On the outside of that barrier… there’s nothing. No physical anything. Not even space. (This gets headachey when we start realizing that means there’s no “outside” outside there...)
Suffice it to say that I was being ‘colorful’ in saying that we have no way of knowing that the universe doesn’t just stop at the edge of the Earth’s lightcone. (It’s actually a pretty mundane assertion; most discussions on the matter I’ve ever heard of make this the null hypothesis.)
You have a better epistemology for evaluating beliefs about the observable universe?
By special, when speaking about fundamental physics, I certainly don’t mean “is capable of maintaining carbon-based life”. Earth may be unique in this respect while the physical laws remain the same everywhere.
Even if this were true, so what? Instead of the standard Einstein equations one would get a modified set of equations with a new dynamical field in place of the constant G. This wouldn’t challenge the regularity of the universe.
The null hypothesis is what? That the universe stops just there, or that we have no way of knowing?
It seems strange. If you walk along an unknown road and are forced to return at one point, do you (without additional information) suppose that the road ends just beyond the last corner you have seen?
By the way, the relevant Earth’s lightcone is precisely your lightcone or mine?
In practice the former.
OK, you have a good point. I was not considering each branch to count as an entire new space that we need to add up with every other branch. I guess I’m talking about our current branch, right now. Also, I could easily be wrong but I think there are no branch points that create an infinite number of new branches and so there still may be an insanely vast but finite number of branches.
I think that if you take Occam’s Razor seriously, then you never have uncertainties that literally are zero. (I don’t know what approaching zero would mean in this context).
What is the minimum effective period over which one should try a new dietary plan, before reaching conclusions on its effectiveness?
(In other words, what is the time granularity for dietary self-experimentation? This question could be generalized to other health issues where self-experimentation is appropriate.)
My wild-ass guess is one month.
I base this number on marijuana testing supposedly being useless after a month. It does work very well for two weeks, though, because some byproduct is fat soluble and is absorbed by your fat cells, where it takes up to a month to completely rinse out. So, for example, if you were eating a lot of high-hormone ranch animal meat, there may well be junk in there that takes your system a month to purge completely after you stop eating it, and you would not be able to see the effects undiluted for a month. Also, this is the time granule I use for my own diet self-experiments, which I seem to never get to the end of. My diets are for fitness and health purposes only, as I have never been much over- or underweight.
Did you just base an estimate of how long to test a diet on a speculative guideline for how to circumvent a blood test that relies on the removal half-life of a specific metabolite of a certain recreational drug? Wow. Fermi would love you!
Two weeks. But don’t draw conclusions based on, for example, weight even if ‘looking like you have lost weight’ is actually your goal.
It could… but the more generalized the answer the less useful it is going to be. Specifics are kind of the important thing here!
Probability: You are living in a simulation run by some sort of intelligence.
Probability: Other people exist independently of your own mind.
Probability: You are dreaming at this very moment. (Learning to dream lucidly is largely a matter of giving this a high probability and keeping it in mind, and updating on it when you encounter, for instance, people asking whether you’re dreaming.)
Meta comment: If these questions were in separate comments, I’d upvote/downvote them differently. I’m interested in thoughts/arguments related to the probability of simulation, and I have little interest in solipsism or lucid dreaming. They don’t seem very much related as topics to me. Am I missing something?
They all seem to be asking variants on the question “how likely is apparent reality real?”. They also all seem to have weird properties as far as evidence is concerned, because the observable evidence must all come from the very source (observed reality) whose credibility we’re questioning.
Also, except for the solipsism one, they seem to be questions where, contrary to LW canon, it might be a good idea to deliberately self-delude (by which I mean, for instance, not bothering to look at the evidence in-depth). If I really felt a .5 probability in my bones that I was living in a simulation, I don’t think I’d be able to work as hard at achieving my goals; I wouldn’t have as much will to power when it could all disappear any moment.
Aside: I’m genuinely surprised at the lack of discussion of lucid dreaming on LW. Lucid dreaming seems like a big gaping loophole in reality, like one of the elements you’d need in a real-life equivalent of the infinite-wish-spell-cycle, yet nobody seems to be seriously experimenting with finding innovative uses for it.
In hindsight, though, it seems like removing the middle question might have been better.
Would that depend at all on your beliefs about the simulators?
E.g., if you felt a .5 probability that you were in a simulation being run by a real person who shared various important attributes with you, who was attempting to determine the best available strategy for achieving their goals, such that you being successful at achieving yours led directly to them being more successful at achieving theirs, would your motivations change?
I would like to be working on lucid dreaming research but am unaware of any avenues towards obtaining the very expensive MRI time to do it.
I agree that intuitions are challenging here, but I really cannot think of a reason to believe that my actions are less meaningful, or that reality is any more or less permanent, if we’re all being simulated. So maybe there is a tie there to solipsism, as I don’t think I have any problems with simulations that are faithfully executing our physics rather than making some sort of patchwork Sim in which I’m the only sentient. If I thought solipsism was .5 probable, then I’d have the problem you describe.
P(Simulation) < 0.01; there is little evidence in favor of it, and it requires that there is some other intelligence doing the simulating and that there can be the kind of fault-tolerant hardware that can (flawlessly) compute the universe. I don’t think posthuman descendants are capable of running a universe as a simulation. I think Bostrom’s simulation argument is sound.
1 - P(Solipsism) > 0.999; My mind doesn’t contain minds that are consistently smarter than I am and can out-think me on every level.
P(Dreaming) < 0.001; We don’t dream of meticulously filling out tax forms and doing the dishes.
[ Probabilities are not discounted for expecting to come into contact with additional evidence or arguments ]
Idea: play a game of chess against someone while in a lucid dream.
If you won or lost consistently, it would show that you are better at chess than you are at chess.
If anyone actually does this, I think you should alternate games sitting normally and with your opponent’s pieces on your side of the board (i.e. the board turned 180 degrees), because I’d expect your internal agents to think better when they’re seeing the board as they would in a real chess match.
My favorite moment along those lines was at work years ago, when a developer asked me to validate the strategy she was proposing to solve a particular problem.
She laid out the strategy for me, I worked through some examples, and said “OK… this looks right to me. But you should ask Mark about it, too, because Mark is way more familiar with our tax code than I am, and he might notice something I didn’t… like, for example, the fact that this piece over here will fail under this obscure use case.”
Then I blinked, listened to what I’d just said, and added “Which I, of course, would never notice. So you should go ask Mark about it.”
She, being very polite, simply nodded and smiled and backed away quickly.
On your argument: there is little need to flawlessly compute the universe. If a civilization sees that its laws are inconsistent with its observations, then it will change its laws to reflect its observations. Because there is no way to conclusively prove your laws are correct, it is impossible for the inhabitants of a simulation to state that “our laws are correct, therefore there is a flaw in the universe”. Furthermore, on the probability that our posthuman descendants have obtained the computing power to run a simulation:
An estimate for the power of a (non-quantum) planet-sized computer is 10^42 operations per second (R. J. Bradbury, “Matrioshka Brains”). It’s hard to pin down how many atoms there are in the universe, but let’s put it at around 10^80; with 128 bits needed to hold each coordinate to a precision of one picometre, and another 128 bits for its motion, that puts it at around 10^83 operations to run a simulation.
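As a sanity check on those numbers, here is a minimal Python sketch of the same Fermi estimate; every constant in it is the comment’s own rough guess, not an established figure.

```python
# Back-of-the-envelope check of the simulation-cost estimate above.
# All constants are the comment's rough guesses, not established figures.

ops_per_second = 1e42    # planet-sized classical computer (Bradbury estimate)
atoms = 1e80             # rough atom count for the universe
bits_per_atom = 128 * 2  # 128 bits for position, 128 for motion

state_bits = atoms * bits_per_atom               # ~2.6e82 bits of state
seconds_per_update = state_bits / ops_per_second

print(f"state size:      ~{state_bits:.1e} bits")
print(f"one full update: ~{seconds_per_update:.1e} s "
      f"(~{seconds_per_update / 3.15e7:.1e} years)")
```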
So at first it looks impractical to compute a universe, but there is no requirement that this computer perform its operations in real time: it can compute its values arbitrarily slowly. And so, no matter the size of the universe, a computer can simulate it. And because it can compute its values arbitrarily slowly, it can compute an arbitrarily large number of universes.
So, in conclusion, there is a very low probability that a civilization evolves to the point where it can simulate a universe, and the motives are also dubious. But because any civilization that does reach that point faces no upper bound on the number of universes it can simulate, we are almost certainly in a simulated universe: if for every unsimulated universe there are n simulated ones, the probability of us being in a simulated universe is n/(n+1), which approaches 1 as n grows without bound. So we are most likely part of a simulated universe.
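A toy restatement of that corrected counting argument (my formulation, not the commenter’s):

```python
# If for every unsimulated universe there are n simulated ones,
# P(we are simulated) = n / (n + 1), which tends to 1 as n grows.
for n in (1, 10, 1_000, 10**9):
    print(f"n = {n:>10}: P(simulated) = {n / (n + 1):.9f}")
```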
You seem to be assuming that we’d be simulated by a universe which is physically like our own.
Our simulations, at least, are of much simpler scenarios than what we’re living in.
I’m not sure what properties a universe would need to have to make simulating our universe relatively cheap and easy. I’m guessing at smaller and faster fundamental particles.
Egan has some fun thoughts about this with his Autoverse in Permutation City. The inhabitants do eventually get stuck with some contradictions that arise from the initial conditions of their universe.
You’re right, that was one of the erroneous assumptions I made. The problem with that is that there are an infinite number of permutations of possible universes. Even if only a small fraction of them are habitable, and a small fraction of those are conducive to intelligent life, we still have the multiplying-by-infinity issue. I don’t know how valid using infinity in an equation is, though, because when there are two infinities it breaks down. For example, if there are an infinite number of dogs in New York, and 10% of dogs are terriers, then there are as many terriers as there are dogs of any other kind, so technically the probability of the next dog you see being a terrier is equal to that of any other dog. That again simply doesn’t make sense to me.
You don’t? My dreams suck more than I thought.
(I also give P(muflax is dreaming) < 0.001, but because I can’t easily manipulate the mindstream right now. I can’t rewind time, shift my location or abort completely, so I’m probably awake. I can always do these things in dreams.)
Given your argument, I’m a bit confused by why you assign such a high upper bound to P(Solipsism).
Ah, you’re right. Thanks for the correction.
I edited the post above. I intended P(Solipsism) < 0.001
And now I think a bit more about it I realize the arguments I gave are probably not “my true objections”. They are mostly appeals to (my) intuition.
P(simulation) ~ .01; P(other minds) ~ .9999; P(dreaming) ~ .0001
I find this statement curious. Perhaps my memory is simply biased on the matter, but every dream I can recall, or rather every dream I recall recalling (and those are few and far between at that), has always been lucid. Even growing up this was the case. I’ve always had bouts of insomnia as well. I cannot discount the possibility that I’m simply recalling those things that conform to the patterns of my expectations, but I do know for a fact that I never had to “learn” how to dream lucidly. I recall one particularly vivid string of dreams I had as a child, or rather one particular recurring facet of said dreams, that all involved me being able to walk two inches off the ground. This is actually one of my earliest memories (I recall little about my early childhood). This “walking off the ground” was something I did because I knew it was a dream.
I have no inclination towards guessing the significance (or magnitude of that significance) of this.
Some people are naturally better at lucid dreaming than others. There is a great forum for lucid dreaming if you’re interested at dreamviews.com
Could it be a selection effect? Maybe you only remember lucid dreams.
As I said, perhaps my memory is simply biased. But that then raises the question: why would it be so uniquely biased?
Let G be a grad student with an IQ of 130 and a background in logic/math/computing.
Probability: The quality of life of G will improve substantially as a consequence of reading the sequences.
Probability: Reading the sequences is a sound investment for G (compared to other activities)
Probability: If every person on the planet were trained in rationality (as far as IQ permits) humanity would allocate resources in a sane manner.
0.3; 0.9; 0.00hahahaha001
1 & 2: Yes, 80% confidence. However I don’t think reading the sequences should be a chore. Start with the daily Seq Reruns and follow them for a week or two. If you don’t enjoy it, don’t read it. The reason I (and probably most people) read the Sequences was because they were fun to read.
3: “Sane” isn’t precise enough to answer. However I would say that the allocation would be more sane than currently practiced with 98% confidence.
P(substantial improvement) ~ .2; P(sound investment) ~ .8; P(rationaltopia) ~ .01
For 1 and 2:
I think you need to qualify ‘quality of life’ a bit. Are you asking if the sequences will make you happier? Resolve some cognitive dissonance? Make you ‘win’ more (make better decisions)? Even with that sort of clarification, however, it seems difficult to say.
For me, I could say that I feel like I’ve cleared out some epistemological and ethical cobwebs (lingering bad or inconsistent ideas) by having read them. In any event, there are too many confounding variables, and this requires too much interpretation, for me to feel comfortable assigning an estimate at this time.
For 3: I think I would need to know what it means to “train someone in rationality”. Do you mean have them complete a course, or are we instituting a grand design in which every human being on Earth is trained like Brennan?
What’s the probability that the Swiss central bank will maintain its cap on the franc vs. the euro? And what is your confidence interval for when they might give it up, if they do decide to give it up?
They are clearly able to maintain the cap for as long as they choose to, because the Swiss central bank has the ability to print Swiss francs. They can always make more, and use them to buy euros. In fact, the word ‘print’ is misleading, as no paper is needed; only electronic money is created.
By this means, the Swiss central bank is always able to lower the value of the Swiss franc: it can just carry on printing francs until everyone who wants them at the capped rate of 1.20 francs per euro has them. It is impossible for speculators to buy more Swiss francs than the bank is able to print.
The strategy seems risk free: if in the future the value of the Swiss franc ever falls against the euro, they can reverse the process, selling all the euros they bought earlier for francs, buying back their francs at the lower price, and deprinting them. After the deal is fully unwound, they should even have money left over, because they sold francs at a higher price than they bought them back at (a toy sketch of this arithmetic appears at the end of this comment).
However, it is not risk free. If the euro itself were to run into trouble and become increasingly worthless, the Swiss bank would find that its cap becomes a dead weight: as all the euros that it bought with printed money become worthless, it will still have lots of outstanding Swiss francs, which will probably be dragged down and become worth much less themselves.
To avoid this problem, they should avoid actually buying euros. This is done by loudly announcing their strategy of printing Swiss francs as necessary. Since it’s mathematically impossible for speculators to win by buying more Swiss francs than the bank can print, hopefully no speculator will try, and then hopefully the bank won’t actually have to buy all that many euros after all. This should make it much less painful to abandon the cap if the euro falls to pieces and the Swiss want the franc to be worth more than its capped rate after all.
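To make the unwind arithmetic above concrete, here is a minimal Python sketch; the amounts and the later exchange rate are illustrative assumptions, not figures from the comment.

```python
# Toy sketch of the cap defence and unwind. Illustrative numbers only.

FLOOR = 1.20  # enforced floor: 1 euro must cost at least 1.20 francs

# Defending the cap: print francs and sell them for euros at the floor.
francs_printed = 1_000_000_000.0
euros_bought = francs_printed / FLOOR         # euros now on the balance sheet

# Later the franc weakens on its own, say to 1.30 francs per euro.
later_rate = 1.30
francs_recovered = euros_bought * later_rate  # sell the euros back for francs

surplus = francs_recovered - francs_printed   # francs left over after deprinting
print(f"printed   {francs_printed:,.0f} CHF")
print(f"recovered {francs_recovered:,.0f} CHF")
print(f"surplus   {surplus:,.0f} CHF")        # ~83M CHF gain on the round trip
```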
Whoa, a piece of financial advice that I can actually see the reasoning behind.
I have a question about Pascal’s mugging. This does break the standard question-answer format, but you said not to be squeamish about that, so here goes the problem I am currently considering.
According to the wiki, the Standard Pascal’s mugging is formulated like this:
Now suppose someone comes to me and says:
Now, further suppose that someone says
Let’s call this a Meta Pascal’s Mugging, since it is a Pascal’s Mugging which is contingent on your reaction to a Standard Pascal’s Mugging. This is a fairly complicated mugging!
Now further suppose a third person says:
So we could call this a Recursive Pascal’s Mugging. Both people are making muggings which refer to mugging MORE people than the other one does, since the Meta Pascal’s Mugging applied to all other muggings, regardless of their level of recursion, although it itself did not start a recursive loop.
Now let’s say I am mugged by all THREE Pascal’s muggers simultaneously. What do I do?
Clearly, “All Pascal’s muggings are not worth worrying about, and I don’t need to give in to any of them” is an answer. But it’s also really easy to get to in answer space, so I’m curious whether there are any other answers I might not be thinking of.
My own response is that all Pascal’s muggings are not worth worrying about.
I’m curious why you only take into consideration scenarios that someone informs you of. That is, suppose a fourth person sits in their control center and decides that every time MichealOS refuses to give money to a Pascal’s Mugger, they will simulate m^^^m people and give them fantastically happy eternal lives—but they don’t inform you of that decision.
The probability of this is vanishingly small, of course, but it’s only marginally lower than the probability of your other proposed muggings. So presumably you have to take it into account along with everything else, right?
That’s a good point. Let me see if I understand the conclusion correctly:
I should consider that there is an opposing Pascal’s Anti-Mugging for any Pascal’s Mugging, and it seems reasonable that I don’t have any reason to consider an Unknown Anti-Mugging more likely than an Unknown Mugging before someone tells me which is occurring.
Once the mugger asserts that there is a mugging, I can ask “What evidence can you show me that gives you reason to believe that the mugging scenario is more likely than the anti-mugging scenario?” If this is a fake mugging (which seems likely), he won’t have any evidence he can show me, which means there is no reason to adjust the priors between the mugging and the anti-mugging so I can continue not worrying about the mugging.
If I understood you correctly, that sounds like a pretty good way of thinking about it that I hadn’t thought of. If it sounds like I haven’t gotten it, please explain in more detail.
Either way, thank you for the explanation!
So, this is correct enough, but I would recommend generalizing the principle.
The (nominally) interesting thing about Pascal’s Mugging scenarios (and also about the original Pascal’s Wager, which inspired them) is that we can posit hypothetical scenarios that involve utility shifts so vast that even if they are vanishingly unlikely, the result of multiplying the probability of the scenario by the magnitude of the utility shift should it come to pass is still substantial. This allows a decision system that operates based on the expected value of a scenario (that is, the value of the scenario times its likelihood) to be manipulated by presenting it with carefully tailored scenarios of this sort (e.g., Pascal’s Mugging). (A toy sketch at the end of this comment makes this concrete.)
It’s conceivable that a well-calibrated decision system would not be subject to such manipulation, because it would assign each scenario a probability that reflected such things… e.g., it would estimate the likelihood of there actually existing an Omega capable of creating 2N units of disutility as no more than .5 the likelihood of an Omega capable of creating only N units.
But I’ve never met any decision system that well calibrated. So, as bounded systems running on inadequate corrupted hardware, we have to come up with other tactics that keep us from driving off cliffs.
In general, one such tactic is to maintain a broader perspective than just the specific problem I’ve been invited to think about.
So when the Mugger asserts that there is a mugging, I can ask “Why should I care? What other things do I have roughly the same reason to care about, and why is my attention being directed to this particular choice within that set?”
The same thing goes when Pascal himself argues that I ought to worship the Christian God, for example, because no matter how unlikely I consider His existence, the sheer magnitude of the stakes (Heaven and Hell) dwarf that unlikelihood. If I find that compelling, I should find a vast number of competing Gods’ claims equally compelling.
The same thing goes (on a smaller scale) when someone tries to sell me insurance against some specific bad thing happening.
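Here is the promised toy Python sketch (my own illustration, with made-up numbers) of the expected-value point above: a prior whose penalty grows more slowly than the claimed utility is muggable, while a prior that at least halves for every doubling of the claim, as described above, is not.

```python
# Toy contrast between a muggable and a calibrated expected-value maximizer.
# Assumption (mine): the mugger doubles the claimed utility at each escalation.

def naive_prior(doublings: int) -> float:
    # Probability falls only slightly as the claim escalates.
    return 0.01 * 0.9 ** doublings

def calibrated_prior(doublings: int) -> float:
    # Each doubling of the claimed utility at least halves the probability,
    # so probability * utility can never blow up.
    return 0.01 * 0.5 ** doublings

for d in (1, 10, 100):
    utility = 2.0 ** d
    print(f"d={d:>3}  naive EV={naive_prior(d) * utility:10.3e}  "
          f"calibrated EV={calibrated_prior(d) * utility:10.3e}")
# Naive EV grows without bound (muggable); calibrated EV stays at 0.01.
```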
What is your 90% confidence interval for the total number of people to ever exist?
What is the probability that a person who signs up for cryonics will be revived?
(Yes, I did already ask this, but my estimate is far enough from the apparent consensus here that I’d like to see more estimates)
I read once, 15 years ago, that when a child is born with a missing limb in a modern-day forager group (e.g., in the Amazon), he or she almost always dies, because the tribe ostracizes the child, unless there are anthropologists or other such visitors to bring the child to ‘civilization’.
The OP instructs me to ask for the probability, but I am actually more interested in short descriptions of pieces of evidence that would move the probability by a factor of >3 or <.333, and in how independent each piece of evidence is from all the other pieces of evidence.
In summary, I am asking for pieces of significant evidence for or against the following proposition: the child dies before achieving adulthood, and the child would not have died if he or she were not being ostracized.
The one piece of significant evidence I already have is my memory of a post (probably by an anthropologist or anth grad student) on Usenet back 15 years ago when Usenet (and the internet as a whole) was still brainy. Likelihood ratio of this piece of evidence: 18, by which I mean the proposition above is 18 times more likely given that I noticed and remember this Usenet post than it would be if I had not. My likelihood ratio would be considerably lower if I recalled the observation being made by one faction in a polarized debate. My recollection is to the contrary: namely, that it was made by a calm person with no evidence of ideological investment in the question.
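Since the comment quantifies its evidence with a likelihood ratio, here is a minimal Python sketch of how an 18:1 ratio moves a prior; the prior values are illustrative assumptions, not the poster’s.

```python
# Posterior from a prior and a likelihood ratio: odds form of Bayes' theorem.

def update(prior: float, likelihood_ratio: float) -> float:
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

for prior in (0.1, 0.3, 0.5):
    print(f"prior {prior:.1f} -> posterior {update(prior, 18.0):.3f}")
# e.g. a 0.3 prior becomes ~0.885 after an 18:1 likelihood ratio.
```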