Irrationality Game II
I was very interested in the discussions and opinions that grew out of the last time this was played, but find digging through 800+ comments for a new game to start on the same thread annoying. I also don’t want this game ruined by a potential sock puppet (whoever it may be). So here’s a non-sockpuppeteered Irrationality Game, if there’s still interest. If there isn’t, downvote to oblivion!
The original rules:
Please read the post before voting on the comments, as this is a game where voting works differently.
Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it’s all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.
Here’s an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.
Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like ‘fairly confident’.
Example (not my true belief): “The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%).”
If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What ‘basically’ means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it’s a pretty big difference of opinion. If they’re at 99.9% and you’re at 99.5%, it could go either way. If you’re genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
That’s the spirit of the game, but some more qualifications and rules follow.
If the proposition in a comment isn’t incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.
The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.
Some poor soul is going to come along and post “I believe in God”. Don’t pick nits and say “Well, in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us...” and downvote it. That’s cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.
Try to be precise in your propositions. Saying “I believe in God. 99% sure.” isn’t informative because we don’t quite know which God you’re talking about. A deist god? The Christian God? Jewish?
Y’all know this already, but just a reminder: preferences ain’t beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word “should” are almost always imprecise: avoid them.
That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It’s a challenge!
Additional rules:
Generally, no repeating an altered version of a proposition already in the comments unless it’s different in an interesting and important way. Use your judgement.
If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post.
Don’t post propositions as comment replies to other comments. That’ll make it disorganized.
You have to actually think your degree of belief is rational. You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average. This could be good or bad. Lots of upvotes means lots of people disagree with you. That’s generally bad. Lots of downvotes means you’re probably right. That’s good, but this is a game where perceived irrationality wins you karma. The game is only fun if you’re trying to be completely honest in your stated beliefs. Don’t post something crazy and expect to get karma. Don’t exaggerate your beliefs. Play fair.
Debate and discussion is great, but keep it civil. Linking to the Sequences is barely civil—summarize arguments from specific LW posts and maybe link, but don’t tell someone to go read something. If someone says they believe in God with 100% probability and you don’t want to take the time to give a brief but substantive counterargument, don’t comment at all. We’re inviting people to share beliefs we think are irrational; don’t be mean about their responses.
No propositions that people are unlikely to have an opinion about, like “Yesterday I wore black socks. ~80%” or “Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%.” The goal is to be controversial and interesting.
Multiple propositions are fine, so long as they’re moderately interesting.
You are encouraged to reply to comments with your own probability estimates, but comment voting works normally for comment replies to other comments. That is, upvote for good discussion, not agreement or disagreement.
In general, just keep within the spirit of the game: we’re celebrating LW-contrarian beliefs for a change!
Enjoy!
IRRATIONALITY GAME
Eliezer Yudovsky has access to a basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote.
Probability: improbable (2%)
This seems like a sarcastic Eliezer Yudkowsky Fact, not a serious Irrationality Game entry.
Upvoted for enormous overconfidence that a universal basilisk exists.
Never said it was a single universal one. And a lot of that 2% is meta-uncertainty from doing the math sloppily.
The part where I think I might do better is having been on the receiving end of weaker basilisks and having some vague idea of how to construct something like it. That last part is the tricky one stopping me from sharing the evidence as it’d make it more likely a weapon like that falls into the wrong hands.
The thing about basilisks is that they have limited capacity for causing actual death. Particularly among average people who get their cues of whether something is worrying from the social context (e.g. authority figures or their social group).
Must… resist… revealing… info… that… may… get… people… killed.
Please do resist. If you must tell someone, do it through private message.
Yea. It’s not THAT big a danger, I’m just trying to make it clear why I hold a belief not based on evidence that I can share.
I’m speculating that your evidence is a written work that has driven multiple people to suicide; further, that the written work was targeted at an individual and happened to kill other susceptible people who happened to read it. I would still rate 2% as overconfident.
Specifically, the claim of universality, that “any person” can be killed by reading a short email, is overconfident. Two of your claims seem to contradict each other: the claim that “any person” can be killed “with a few clicks”, which suggests that special or in-depth knowledge of the individual is unnecessary and implies some level of universality, and the claim “Never said it was a single universal one.” My impression is that you lean towards hand-crafted basilisks targeted at individuals or groups of similar individuals, but the contradiction lowered my estimate of this being correct.
Such hand-crafted basilisks indicate the ability to correctly model people to an exceptional degree and to experiment with that model until an input can be found which causes death. I have considered other alternative explanations but found them unlikely; if you rate another as more realistic, let me know.
This ability could be used for a considerable number of tasks other than causing death: strongly influencing elections, legislation, the research directions of AI researchers or groups, and much more. If EY possessed this power, how would you expect the world to be different from one where he does not?
I don’t remember this post. Weird. I’ve updated on it though; my evidence is indeed even weaker than that, and you are absolutely correct on every point. I’ve updated to the point where my own estimate and my estimate of the community’s estimate are indistinguishable.
Interesting. I will be more likely to reply to messages that I feel end the conversation, like your last one on this post, maybe 12-24 hours later, just in case the likelihood of updating has been reduced by one or both parties having had a late-night conversation or other mind-altering effects.
Good idea, please do that.
It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don’t know exactly how much.
If such a universal basilisk exists, wouldn’t it almost by definition kill the person who discovered it?
I think it’s vaguely plausible such a basilisk exists, but I also think you are suffering from the halo effect around EY. Why would he of all people know about the basilisk? He’s just some blogger you read who says things as though they are Deep Wisdom so people will pay attention.
There are a bunch of tricks that let you immunize yourself against classes of basilisks without having access to the specific basilisk. Sort of like vaccination: you deliberately infect yourself with a non-lethal variant first.
Eliezer has demonstrated all the skills needed to construct basilisks, is very smart, and has shown that he recognizes the danger of basilisks. I don’t think that’s a very common combination, but conditional on Eliezer having basilisk weapons, most others fitting that description equally well probably do as well.
Wouldn’t the world be observably different if everyone of EY’s intellectual ability or above had access to a basilisk kill agent? And wouldn’t we expect a rash of inexplicable deaths in people who are capable of constructing a basilisk but not vaccinating themselves?
Are basilisks necessarily fatal? If the majority of basilisks caused insanity or the loss of intellectual capacity instead of death, I would expect to see a large group of people who considered themselves capable of constructing basilisks, but who on inspection turned out to be crazy or not nearly that bright after all.
...
Oh, shit.
The post specified fatal so I followed it.
For non-fatal basilisks we’d expect to see people flipping suddenly from highly intelligent and sane to stupid and/or crazy, specifically after researching basilisk-related topics.
Yes, and this can also be reversed into a good way to see which topics are, in practice, related to basilisk construction.
Yes, but you would get false positives too, such as chess (scroll down to “Real Life”—warning: TVTropes). Edited to fix link syntax—how come after all these months I still get it wrong this often?
Yup, this is entirely correct. Learned that the hard way. Vastly so: weak basilisks like that constantly arise from random noise in the memepool, while, even knowing how and having all the necessary ingredients, an Eliezer-class mind is likely needed for a lethal one.
Great practice for FAI in a way, in that as soon as you make a single misstep you’ve lost everything forever and won’t even know it. Don’t try this at home.
Not necessarily. If I did, in fact, possess such a basilisk, I cannot think offhand of any occasion where I would have actually used it. Robert Mugabe doesn’t read my emails, it’s not clear that killing him saves Zimbabwe, I have ethical inhibitions that I consider to exist for good reasons, and have you thought about what happens if somebody else glances at the computer screen afterward, and resulting events lead to many agents/groups possessing a basilisk?
It would guarantee drastic improvements in secure, trusted communication protocols and completely cure internet addiction (among the comparatively few survivors).
First off, there aren’t nearly enough people for it to be any kind of “rash”; secondly, they must be researching a narrow range of topics where basilisks occur; thirdly, they’d go insane and lose the basilisk-creation capacity way before they got to deliberately lethal ones; and finally, anyone smart enough to be able to do that is smart enough not to do it.
This seems like a clear example of “You shouldn’t adjust the probability that high just because you’re trying to avoid overconfidence; that’s privileging a complicated possibility.”
Has there been a post on this subject yet? Handling overconfidence in that sort of situation is complicated.
http://lesswrong.com/lw/u6/horrible_lhc_inconsistency/
Thanks! I recall reading that one, but it didn’t come to mind.
It still leaves me with some doubt about how to handle uncertainty around the extremes without being pumpable or sometimes catastrophically wrong. I suppose some of that is inevitable given hardware that is both bounded and corrupted but I rather suspect there is some benefit to learning more. There’s probably a book or ten out there I could read.
Reading this comment made me slightly update my probability that the parent, or a weaker version thereof, is correct.
It may or may not be an example, but it’s certainly not a clear one to me. Please explain? The entire sentence seems nonsensical; I know what the individual words mean but not how to apply them to the situation. Is this just some psychological effect because it targets a statement I personally made? It certainly doesn’t feel like it but...
Edit: Figured out what I misunderstood. I modelled it as 0.02 positive confidence, not 0.98 negative confidence.
2% is way way way WAY too high for something like that. You shouldn’t be afraid to assign a probability much closer to 0.
Upvoted for vast overconfidence.
Downvoted back to zero because I suspect you’re not following the rules of the thread.
Also, I have no idea who “Eliezer Yudovsky” is, though it doesn’t matter for either of the above.
Well, this is scary enough.
I am way too good at this game. :(
I really didn’t expect this to go this high. All the other posts get lots of helpful comments about WHY they were wrong. If I’m really wrong, which these upvotes indicate, I really need to know WHY, so I know which connected beliefs to update as well.
2% is too high a credence for belief in the existence of powers for which (as far as I know) not even anecdotal evidence exists. It’s the realm of speculative fiction, well beyond the current ability of psychological and cognitive science and, one imagines, rather difficult to control.
But ascribing such a power to a specific individual who hasn’t had any special connection to cutting edge brain science or DARPA and isn’t even especially good at using conventional psychological weapons like ‘charm’ is what sends your entry into the realm of utter and astonishing absurdity.
Not publicly, at least.
Many say exactly the same thing about cryonics. And lots of anecdotal evidence does exist, not of killing specifically, but of inducing a wide enough range of mental states that some within that range are known to be lethal.
So far, in my experience, skill at basilisks is utterly tangential to the skills you mentioned, and fits Eliezer’s skill set extremely well. Further, he has demonstrated this type of ability before, for example in the AI-box experiments or HPMoR.
Pointing to cryonics anytime someone says you believe in something that is the realm of speculative fiction and well beyond current science is a really, really, bad strategy for having true beliefs. Consider the generality of your response.
Show me three.
How is this even a thing? That you have experience with?
Your best point. But not nearly enough to bring p up to 0.02.
Point taken: it’s not a strategy for arriving at truths, it’s a snappy comeback to a failure mode I’m getting really tired of. The fact that something is in the realm of speculative fiction is not a valid argument in a world full of cyborgs, tablet computers, self-driving cars, and causality-defying decision theories. And yes, basilisks.
Um, we’re talking basilisks here. SHOWING you would be a bad idea. However, to NAME a few: there’s the famous Roko incident, several MLP gorefics have had basilisk-like effects on some readers, and then there are techniques like http://www.youtube.com/watch?v=eNBBl6goECQ .
Yes, skill at basilisks is a thing, that I have some experience with.
Finally, not in response to anything in particular but sort of related: http://cognitiveengineer.blogspot.se/2011/11/holy-shit.html
The argument isn’t that because something is found in speculative fiction it can’t be real; it’s that this thing you’re talking about isn’t found outside of speculative fiction—i.e. it’s not real. Science can’t do that yet. If you’re familiar with the state of a science you have a good sense of what is and isn’t possible yet. “A basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote” is very likely one of those things. I mention “speculative fiction” because a lot of people have a tendency to privilege hypotheses they find in such fiction.
Hypnotism is not the same as what you’re talking about. The Roko ‘basilisk’ is a joke compared to what you’re describing. None of these are anecdotal evidence for the power you are describing.
Oh, illusion of transparency. Yea, that’s at least a real argument.
There are plenty of things that individual geniuses can do that the institutions you seem to be referring to as “science” can’t yet mass-produce, especially in the reference class of things like works of fiction or political speeches, which many basilisks belong to. “Science” also believes rational agents defect in the prisoner’s dilemma.
Also, while proposing something like deliberate, successful government suppression would clearly be falling into the conspiracy-theory failure mode, it nonetheless seems that an extremely dangerous weapon which sounds absurd when described, works through badly understood psychology present only in humans, and is most likely to be discovered by the empathic extreme high elite of intellectuals would be less likely to become public knowledge as quickly as most things.
And I kept to small-scale, not-very-dangerous pseudo-basilisks on purpose, just in case someone decides to look them up. They are more relevant than you think, though.
I don’t believe you. Look, obviously if you have secret knowledge of the existence of fatal basilisks that you’re unwilling to share that’s a good reason to have a higher credence than me. But I asked you for evidence (not even good evidence, just anecdotal evidence) and you gave me hypnotism and the silly Roko thing. Hinting that you have some deep understanding of basilisks that I don’t is explained far better by the hypothesis that you’re trying to cover for the fact that you made an embarrassingly ridiculous claim than by your actually having such an understanding. It’s okay, it was the irrationality game. You can admit you were privileging the hypothesis.
Again, pointing to a failure of science as a justification for ignoring it when evaluating the probability of a hypothesis is a really bad thing to do. You actually have to learn things about the world in order to manipulate the world. The most talented writers in the world are capable of producing profound and significant—but nearly always temporary—emotional reactions in the small set of people that connect with them. Equating that with a “basilisk kill agent” that can untraceably assassinate any person who reads a short email is bizarre.
A government possessing a basilisk and keeping it a secret is several orders of magnitude more likely than what you proposed. Governments have the funds and the will to both test and create weapons that kill. Also, “empathic” doesn’t seem like a word that describes Eliezer well.
Anyway, I don’t really think this conversation is doing anyone any good, since debating absurd possibilities has the tendency to make them seem even more likely over time, as you’ll keep running your sense-making system and coming up with new and better justifications for this claim until you actually begin to think “wait, two percent seems kind of low!”.
Yea, this thread is getting WAY too adversarial for my taste, dangerously so. At least we can agree on that.
Anyway, you did admit that sometimes, rarely, a really good writer can produce permanent, profound emotional reactions, and I suspect most of the disagreement here actually resides in the lethality of emotional reactions, and in my taste for wording things to sound dramatic as long as they are still true.
Well, I should point out that if you sincerely believe your knowledge could kill someone if it got out, you likely won’t test this belief directly. You may miss all sorts of lesser opportunities for updating. We don’t have to think you’re Stupid or you Fail As A Rationalist in order to think you got this one wrong.
It’s sort of the same situation as posting any other kind of information on how to construct weapons that may or may not work on public forums. It’s not all that likely to actually give someone access to a weapon that deadly, but it’s bad form just for the possibility, and because they may still hurt themselves or others in the failed attempt.
I was also trying to scare people away from the whole thing, but after further consideration it probably wasn’t very effective anyway.
Yet another fictional story features a rather impressive “emotional basilisk” of sorts; enough both to drive people in-universe insane or suicidal AND to cause the reader (especially one prone to agonizing over morality, obsessive thoughts, etc.) potentially bad distress. I know I felt sickened and generally wrong for a few hours, and I’ve heard of people who took it worse.
SCP-231. I’m not linking directly to it, please consider carefully if you want to read it. Curiosity over something intellectually stimulating but dangerous is one thing, but this one is just emotional torment for torment’s sake. If you’ve read SCP before (I mostly dislike their stuff), you might be guessing which one I’m talking about—so no need to re-read it, dude.
Really? That’s had basilisk-like effects? I guess these things are subjective … torturing one girl to save humanity is treated like this vast and terrible thing, with the main risk being that one day they won’t be able to bring themselves to continue—but in other stories they regularly kill tons of people in horrible ways just to find out how something works. Honestly, I’m not sure why it’s so popular; there are a bunch of SCPs that could solve it (although there could be some brilliant reason why they can’t; we’ll never know due to redaction). But it’s too popular to ever be decommissioned … it makes the Foundation come across as lazy, not even trying to help the girl, too busy stewing in self-pity at the horrors they have to commit to actually stop committing them.
Wait, I’m still thinking about it after all this time? Hmm, perhaps there’s something to this basilisk thing...
The Straw Utilitarian exclaims: “Ha, easy! Our world has many tortured children; adding one more is a trivial cost to pay for continued human existence.” But yes, imagining myself being put in a position to decide on something like that caused me quite a bit of emotional distress. When I try to work out what I should do according to my ethical system (spaghetti-code virtue ethics), honourable suicide and resignation seem a potentially viable option, since my consequentialism-infected brain cells yell at me for trying hare-brained schemes to help the girl.
On a lighter note my favourite SCP.
Ah the entry is tragically incomplete!
The Catholic faith of the animals was not surprising, since through contact with SPC-3471, agent ███ █████ and other LessWrong Computational Theology division cell members have received proof of Catholicism’s consistency under CEV, as well as indications that it represents a natural Schelling point of mammalian morality. After first pausing to praise the sovereign’s taste in books, Dr. █████ █████ speculated that the existence of Protestantism means SPC-4271 (“w-force”) has become active in species besides Homo sapiens, violating the 2008 Trilateral Blogosphere Accords. He advises full military support to Eugenio the Second in stamping out the rebellion, and termination of all animals currently under the rule of Duke Baxter of the West Bay.
Adding:
Yep, suicide is probably what I’d do as well, personally, but the story itself is incoherent (as noted in the page discussion) and even without resorting to other SCPs there seem to be many, many alternatives to consider (at the very least they could have made the torture fully automated!). As I’ve said, it’s constructed purely as horror/porn and not as an ethical dilemma.
BTW simply saying that “Catholicism” is consistent under something or other is quite meaningless, as “C.” doesn’t make for a very coherent system as seen through Papal policy and decisions of any period. Will would’ve had to point to a specific eminent theologian, like Aquinas, and then carefully choose where and how to expand—for now, Will isn’t doing much with his “Catholicism” strictly speaking, just writing emotionally tinged bits of cosmogony and game theory.
I mentally iron-man such details when presented with such scenarios. Often it’s the only way for me to keep suspension of disbelief and continue to enjoy fiction. To give a trivial fix to your nitpick: the ritual requires not only the suffering of the victim to be undiminished but also the sexual pleasure of the torturer and/or rapist to be present; automating it is therefore not viable.
Do not overanalyse the technobabble; it ruins suspension of disbelief. And what is an SPC without technobabble? Can I perhaps then interest you in a web-based Marxist state?
Also who is this Will? I deny all knowledge of him!
An agent placed in similar circumstances before did just that.
I do not know with what weapons World War III will be fought, but World War IV will be fought with fairytales about talking ponies!
I love you so much right now. :D
I have a solid basilisk-handling procedure. (Details available on demand.) You or anyone is welcome to send me any basilisk in the next 24 hours, or at any point in the future with 12 hours warning. I’ll publish how many different basilisks I’ve received, how basilisky I found them, and nothing else.
Evidence: I wasn’t particularly shaken by Roko’s basilisk. I found Cupcakes a pretty funny read (thanks for the rec!). I have lots of experience blocking out obsessive/intrusive thoughts. I just watched 2girls1cup while eating. I’m good at keeping non-basilisk secrets.
Has anyone sent you any basilisk so far?
No, I’m all basilisk-less and forlorn. :( I stumbled on a (probably very personal) weak basilisk on my own. Do people just not trust me or don’t they have any basilisks handy?
How do you define basilisk? What effect is it supposed to have on you?
Death and other alterations easier to model as disorders than as thought processes. Persistent intrusive thoughts, that lead to unpleasant effects—fear, obsession, major life changes around the basilisk’s topic (e.g. quitting a promising math career to study theodicy). I’m on the fence on whether flashbacks of disgust or embarrassment count. Non-persistent but extreme such thoughts whose effects persist (e.g. developing a phobia or stress-induced conditions).
The stimulus has to be relatively short (a solid day of indoctrination is way too much), and to be some form of human communication—words, images and videos all count, and nothing where the medium rather than the meaning is damaging (e.g. loud noises, bright lights) does.
The latter. Or, if the former, they don’t trust you not to just laugh at what they provide and dismiss it.
I am amused and curious. :P Did the basilisk-sharing list ever get off the ground?
Not that I know of, and it’s much less interesting than it sounds. Just nausea and a permanent inability to enjoy the show in a small percentage of readers of Cupcakes and the like.
Also, always related to any basilisk discussion:
The Funniest Joke In The World
I was about to condescendingly explain that there’s simply no reason to posit such a thing, when it started making far too much sense for my liking. That said, untraceable? How?
Email via proxy, some incubation time, looks like normal depression followed by suicide.
Of course. I was assuming a near-instant effect for some reason.
On the plus side, he doesn’t seem to have used it to remove anyone blocking progress on FAI …
Irrationality Game
If we are in a simulation, a game, a “planetarium”, or some other form of environment controlled by transhuman powers, then 2012 may be the planned end of the game, or end of this stage of the game, foreshadowed within the game by the Mayan calendar, and having something to do with the Voyager space probe reaching the limits of the planetarium-enclosure, the galactic center lighting up as a gas cloud falls in 30,000 years ago, or the discovery of the Higgs boson.
Since we have to give probabilities, I’ll say 10%, but note well, I’m not saying there is a 10% probability that the world ends this year, I’m saying 10% conditional on us being in a transhumanly controlled environment; e.g., that if we are in a simulation, then 2012 has a good chance of being a preprogrammed date with destiny.
Upvoted solely because 1999/2000 was foreshadowed so much more heavily.
As I point out in the other comment, the real year of maximum alignment was 1998. So perhaps SubGenius is the true faith, the few true SubGenii were raptured that year, and 2012 is just when the cosmic wrecking crew come in to clean up.
It’s a coincidence of note in itself that the midpoint of the current “galactic solstice” should have occurred so extremely close to a millennial year in the dominant planetary calendar; also that the third Christian millennium begins so close in time to the start of a new Mayan cycle. It would be easier to understand all this if both Mayan and European cultures had a visible history of caring about “galactic alignment”, and there was a visible history of adjusting the calendar accordingly. We know the Mayans were eager astrologers, and the beginning of the “Christian era” was probably associated with the transition between the zodiacal Age of Aries and Age of Pisces (12 signs in the zodiac, divide up the 26000-year precession into 12 periods and you get approximately 2000-year epochs). So we can point to ways in which ancient astronomy has shaped the calendar, but not enough to definitely explain Christian 2000 and Mayan 2012 as attempts to synchronize the calendar with galactic 1998.
It’s already a stretch to posit a secret history of influential esoteric astrology shaping the western calendar. But if we then try to explain the coincidence of this period in time with general technological and scientific acceleration, basically you either have to say that it’s just a coincidence, or that it’s not a coincidence and reality is connected in ways far beyond what we currently understand. The simplest version of that hypothesis, for this community, is “we’re living in the Matrix”.
And in other communities that hypothesis class is called...?
There’s no name for the general idea. But for people who habitually think that everything reduces to computation and/or that physics is largely figured out, the Matrix is the quickest way to reintroduce fundamental uncertainty about what’s behind the appearances of the world.
Another formulation which might have some potency for an audience of materialist futurists, would be to suggest that the stars and planets are all already superintelligences, engaged in purposeful aeon-old interactions about which we know nothing, and that the minutiae of our life and history on Earth are shaped by a local superintelligence, or its agents, by means that we do not know, towards goals that we do not know. Earth is not a rare oasis of life in a cosmic desert; the sum total of our lives here is more like a day’s worth of microbes living and dying, in the dark under a small rock, in a jungle bursting with larger lives and dramas.
If you start just with the data of experience, rather than presupposing physical or computational reductionism, the possibilities are even broader. A dream presents an example of a hallucinated world and narrative which is not only unreal, but often logically incoherent and only imagined rather than experienced, to a degree that isn’t recognized while it’s taking place. Also, the events of dreams can be the product of knowledge and concerns which the dreamer does not consciously recall during the dream (but which will be remembered and understood once awake), and also just the result of external sensory stimuli, transduced into something that fits the dream context.
One might suppose that waking life is a similar phenomenon, but on a higher scale. Perhaps if one looked at all the facts of one’s circumstances with an IQ of 5000 (whatever that might mean), it would be obvious that it’s all a sham and a delirium. That line of thought could lead back to the Matrix, but there ought to be other, more mentalistic, models of real causality (causality outside the illusion), which provide an alternative conception of higher reality. For example, you could combine solipsism, metaphysical idealism, and the idea of a temporary self-induced occlusion concerning your own nature and powers, to arrive at the guess that you are Something, somehow floating in existential isolation, which has produced the illusion of a body and senses and a world, and the illusion of being a limited denizen of that world with no existence before it. Why did you do this? Maybe you went mad in eternal isolated boredom, maybe it was a mistake, who knows.
There are many variations on this sort of hypothesis. It doesn’t have to be solipsistic, for example. But what distinguishes it from the materialist paranoia of the Matrix is that it doesn’t even hold onto the idea that states of mind are “really” material processes, occurring in a physics known or unknown. There is a more direct coupling between appearances and intentions, as in a dream when analysed from the cognitive point of view.
Obviously, if reality were like that, then events might be connected in ways far removed from conventional probabilistic causal thinking. If the world of the senses were just a symbolic realization of the agenda of some governing intention, then events might be orchestrated in all sorts of unusual ways.
Another class of rogue hypothesis might be called the “big dumb spirit-force” hypothesis. Earlier I spoke of superintelligent celestial bodies, the implication being that they are actually giant nano- or pico-computers of a sort that the human race has begun to imagine, and their vast ancient computations are what governs us. A peculiar alternative would be to suppose something like astrology, in which celestial objects are big dumb objects after all, but they exert influences which act “directly” on sensibility, culture, and evolution (I mean in a way which has the directness of physics, rather than the indirectness of cosmic darwinism, whereby the cosmic environment imposes changing conditions on the biosphere).
There is also a type of transcendental hypothesis which is mostly defined negatively. It amounts just to saying that reality consists of “entities” in “relationships”, and not only are you oblivious to most of them, you can’t even conceive of most of them. And not only that, but you aren’t even properly conceiving of what’s happening right in front of you, and of who and what you yourself are. You have to imagine everything you have experienced and thought, and everything that you have ever heard of and thought you understood, as completely superficial, when it’s not outright wrong. To even conceive of the situation as “you getting reality wrong” would still be getting it wrong, in the sense of missing the essence of everything. In other words, you and your life have a meaning other than “semi-intelligent entity blundering through local corner of reality using its inadequate concepts”; your existence (in the broad sense of everything you know about, not just the actions for which you personally take responsibility) has significance, but you are completely blind to it.
Upvoted because 10% as an estimate seems too high.
I especially can’t imagine why transhuman powers would have used the end of the calendar of a long-dead civilization (one of many comparable civilizations) to foreshadow the end of their game plan.
Also, even if the transhuman powers are choosing based on current end-of-the-world predictions, there’s no reason why they would choose 2012 rather than any of the many past predictions.
It’s easy to invent scenarios. But the high probability estimate really derives from two things.
First, the special date from the Mayan calendar is astronomically determined, to a degree that hasn’t been recognized by mainstream scholarship about Mayan culture. The precession of the equinoxes takes 26000 years. Every 6000 years or so, you have a period in which a solstice sun or an equinox sun lines up close to the galactic center, as seen from Earth. We are in such a period right now; I think the point of closest approach was in 1998. Then, if you mark time by transits of Venus (Venus was important in Mayan culture, being identified with their version of the Aztecs’ Quetzalcoatl), that picks out the years 2004 and 2012. It’s the December solstice which is the “galactic solstice” at this time, and 21 December 2012 will be the first December solstice after the last transit of Venus during the current period of alignment.
OK, so one might suppose that a medieval human civilization with highly developed naked-eye astronomy might see all that coming and attach a quasi-astrological significance to it. What’s always bugged me is that this period in time, whose like comes around only every 6000 years, is historically so close to the dramatic technological developments of the present day.
Carl Sagan wrote a novel (Contact) in which, when humans speak to the ultra-advanced aliens, they discover that the aliens also struggle with impossible messages from beyond, because there are glyphs and messages encoded in the digits of pi. If you were setting up a universe in such a way that you wanted creatures to go through a singularity, and yet know that the universe they had now mastered was just a second-tier reality, one way to do it would certainly be to have that singularity occur simultaneously with some rare, predetermined astronomical configuration.
Nothing as dramatic as a singularity is happening yet in 2012, but it’s not every day that a human probe first reaches interstellar space, the black hole at the center of the galaxy visibly lights up, and we begin to measure the properties of the fundamental field that produces mass, all of this happening within a year of an ancient, astronomically timed prophecy of world-change. It sounds like an unrealistic science-fiction plot. So perhaps one should give consideration to models which treat this as more than a coincidence.
Why pick out those events?
It’s easy to see it as a coincidence when you take into account all the events that you might have counted as significant if they’d happened at the right time. How about the discovery of general relativity, the cosmic microwave background, neutrinos, the Sputnik launch, various supernovae, the Tunguska impact, etc etc?
Also all those dramatic technological developments of 6000 years ago, which seem minor now due to the passage of time and further advances in knowledge and technology. As no doubt the discovery of the Higgs boson or Voyager leaving the boundary of the solar system would seem in 8012 AD. If anybody even remembers these events then.
I agree that in themselves, the events I listed don’t much suggest that the world ends, the game reboots, or first contact occurs this year. The astronomical and historical propositions—that there’s something unlikely going on with calendars and the location of modernity within the precessional cycle—are essential to the argument.
One of the central ingredients is this stuff about a near-conjunction between the December solstice sun and “the galactic center”, during recent decades. One needs to specify whether “galactic center” means the central black hole, the galactic ecliptic, the “dark rift” in the Milky Way as seen from Earth, or something else, because these are all different objects and they may imply different answers to the question, “in which year does the solstice sun come closest to this object”. I’ve just learned some more about these details, and should shortly be able to say how they impact the argument.
You’re still cherry-picking. There have been loads of conjunctions and other astronomical events that have been taken as omens. You could argue that the conjunction with the galactic center is a “big” one, but there are bigger possible ones that you’re ignoring because they don’t match (e.g. if the sun were aligned with the CMB rest frame, that would be the one you’d use).
This begs the question: how likely do you think it is that we are in a transhumanly controlled environment?
I don’t have a stable opinion on that topic. But the question here is whether, given that hypothesis, it’s rational to attach significance to 2012-ism.
Irrationality Game
For reasons related to Gödel’s incompleteness theorems and mathematically proven minimum difficulties for certain algorithms, I believe there is an upper limit on how intelligent an agent can be. (90%)
I believe that human hardware can—in principle—be as intelligent as it is possible to be. (60%) To be clear, this doesn’t actually occur in the real world we currently live in. I consider the putatively irrational assertion roughly isomorphic to asserting that AGI won’t go FOOM.
If you voted already, you might not want to vote again.
I would vote differently on these assertions.
Me, too. It wouldn’t surprise me too much if there’s a limit on intelligence, but I’d be extremely surprised if humans are at that limit.
What’s your estimate that this value is at a level that we actually care about (i.e. not effectively infinite from our point of view)?
I intended to answer this question with my second prediction—I am 60% confident that super-human intelligence is not possible.
I really do think that the reproductive advantage of increased intelligence is great enough that the line of how intelligent it is possible for agents to be is within a reasonably small number of standard deviations of the mean of current human intelligence. My inability to make seat-of-the-pants estimates of statistical effects may make me look foolish, but maybe-maybe 8-12 standard deviations??
Is there a simple summary of why you think this is true of intelligence when it turned out not to be true of, say, durability, or flightspeed, or firepower, or the ability to efficiently convert ambient energy into usable form, or any of a thousand other evolved capabilities for which we’ve managed to far exceed our physiological limits with technological aids?
Just a nitpick but if I recall correctly, cellular respiration (aerobic metabolism) is much more efficient than any of our modern ways to produce energy.
Fair enough. Thanks.
I don’t think I understand your question. There appear to be upper limits to how easy it is to solve certain kinds of problems that an intelligent agent would want to be able to solve. It is uncertain whether we have discovered the most clever methods of solving these problems—for example, we aren’t certain whether P = NP. Apparently, many mathematicians think humanity has been basically as clever as is possible (i.e. P != NP).
If we think there are limits, faul_sname asks the obvious next question—is human-level intelligence anywhere near those limits? I don’t see why not—intelligence has consistently shown reproductive fitness—so I expect evolution would select for it. It could be that humanity is in a local optimum and the next level of intelligence cannot be reached because the intermediate steps are not viable. But I’m not aware of evidence that the shape of intelligence improvement was like that for our ancestors.
Yes, but the speed at which it would do so is quite limited. Particularly with a generational time of 15-25 years, and with the fact that evolution basically stopped working as an enhancer once humans passed the threshold of preventing most premature deaths (where premature just means before the end of the reproductive window).
What makes you think that the threshold for civilization is anywhere near the upper bound for possible intelligence?
This is way off for almost all of human history almost everywhere. See the work of Greg Clark: occupational success and wealth in pre-industrial Britain were strongly correlated with the number of surviving children, as measured by public records of birth, death, and estates. Here’s an essay by Ron Unz discussing similar patterns in China. Or look at page 12 of Greg Cochran and company’s paper on the evolutionary history of Ashkenazi intelligence. Over the last 10,000 years evolutionary selective sweeps have actually greatly accelerated in the course of adapting to agricultural and civilized life.
How did intelligence, or earnings affected by intelligence, get converted into more surviving children?
Average wages until the last couple centuries were only a little above subsistence, meaning that the average household income was just slightly more than enough to raise a new generation to replace the previous one
Workers with below-average earnings could only feed themselves, not a pregnant wife or children
Men with higher earnings were thus more likely to marry, and to be able to afford to do so earlier, as well as paying for mistresses and prostitutes
Workers with high earnings could give offspring more nutritious diets, providing increased resistance to death by infectious disease (very common, and worsened by nutrient deficiency or inadequate calories)
High earnings could be used to build fat reserves to withstand famine, and to produce or purchase enough food to sustain a family through those lean times
Intelligence is helpful in avoiding lethal accidents
Likewise for avoiding execution for criminal activities, falling prey to crime, and death in war
I stand corrected.
You make an excellent point. The evolutionary argument is not as strong as I presented it.
Given that recorded history has no record of successful Xanatos gambits (TVTropes lingo), the case is strong that the intelligence limit is not medium distance from human average (i.e. not 20-50 std. dev. from average).
That leaves the possibility that (A) the limit is far (>50 std dev.) or (B) very near (the 8-12 range I mentioned above).
It seems to me that our ability to understand and prove certain results about computational difficulty (and the power of self-reference) that would apply even if super-human intelligence was possible is evidence that (B) is more likely than (A).
A larger head makes death during childbirth more likely, so I’d expect evolution to be optimizing processing power per unit volume even today.
Unfortunately, neurons are about as efficient in most species—they’re already as optimized as you get. For that and other interesting facts, see http://www.pnas.org/content/early/2012/06/19/1201895109.abstract
Can you rephrase “this doesn’t actually occur in the real world we currently live in”?
Downvoted for the first, upvoted for the second.
Physics limit how big computers can get; I have no evidence whatsoever for humans being optimal.
One of the most direct methods for an agent to increase its computing power (does this translate to an increase in intelligence, even logarithmically?) is to increase the size of its brain. This doesn’t have an inherent upper limit, only ones caused by running out of matter and things like that, which I consider uninteresting.
I don’t think that’s so obviously true. Here are some possible arguments against that theory:
1) There is a theoretical upper limit at which information can travel (speed of light). A very large “brain” will eventually be limited by that speed.
2) Some computational problems are so hard that even an extremely powerful “brain” would take very long to solve (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).
3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann’s Limit is the maximum computational speed of a self-contained system in the material universe. According to this limit, a computer the size of the Earth would take 10^72 years to crack a 512-bit key. In other words, even an AI the size of the Earth would not manage to break modern human encryption by brute force. (A rough version of that calculation is sketched below.)
More theoretical limits here: http://en.wikipedia.org/wiki/Limits_to_computation
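A rough back-of-the-envelope check of that 512-bit figure (this sketch is mine, not part of the original comment; all constants are rounded):

```latex
% Bremermann's limit: roughly 1.36e50 bits per second per kilogram of mass.
\text{max rate} \approx 1.36\times10^{50}\,\tfrac{\text{bit}}{\text{s}\cdot\text{kg}}
  \times 6\times10^{24}\,\text{kg} \approx 8\times10^{74}\ \text{bit/s}
% Brute-forcing a 512-bit key takes on the order of 2^{512} trials:
2^{512} \approx 1.3\times10^{154}
% Time required:
t \approx \frac{1.3\times10^{154}}{8\times10^{74}\,\text{s}^{-1}}
  \approx 1.6\times10^{79}\ \text{s} \approx 5\times10^{71}\ \text{years}
```

which is consistent with the ~10^72-year figure quoted above.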
To follow up on what olalonde said, there are problems that appear to get extraordinarily difficult as the number of inputs increases. Wikipedia suggests that the best known exact solutions to the traveling salesman problem run on the order of O(2^n), where n is the number of inputs. Saying that adding computational ability resolves these issues for actual AGI implies either:
1) AGI trying to FOOM won’t need to solve problems as complicated as traveling salesman type problems, or
2) AGI trying to FOOM will be able to add processing power at a rate reasonably near O(2^n), or
3) In the process of FOOM, an AGI will be able to determine P=NP or similarly revolutionary result.
None of those seem particularly plausible to me. So for reasonably sized n, AGI will not be able to solve problems appreciably better than humans. (The sketch below illustrates how quickly that exponential term grows.)
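As an aside (this sketch is mine, not the original commenter’s): the following just tabulates the operation count of the standard exact Held-Karp TSP algorithm, which runs in O(n^2 * 2^n), to show how quickly “add more processing power” stops helping.

```python
# Rough illustration of exponential blow-up: operation counts for an exact
# O(n^2 * 2^n) TSP solver (Held-Karp) at a few problem sizes n.
# The point is the growth rate, not the constant factors.
for n in (10, 20, 30, 40, 50, 60):
    ops = n * n * 2 ** n
    print(f"n = {n:>2}: ~{ops:.1e} elementary operations")
```

Going from 30 to 60 cities already multiplies the work by a factor of a few billion, which is the sense in which merely polynomial growth in hardware cannot keep pace.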
I think 1 is the most likely scenario (although I don’t think FOOM is a very likely scenario). Some more mind-blowingly hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem
Oh. Well, if you’re just ignoring increases in processing power, then I don’t see why your confidence is as low as 90%.
(Although it’s interesting to observe that if your AGI is currently running on a laptop computer and wants to increase its processing power, then of course it could try to turn the Earth into a planet-sized computer… but if it’s solving exponentially-hard problems, then it could, at a guess, get halfway there just by taking over Google.)
I’m not ignoring increases in processing power—I’m not sure that increases in available processing power will grow substantially faster than polynomial rate of increase. And we already know that common types of problems grow exponentially—or worse.
Suppose an AGI takes over the entire internet—where’s the next exponential increase in computing power going to come from?
Turning Earth into computronium is not a realistic possibility before the AGI goes FOOM.
Moore’s law for a while, then from taking over the economy and redirecting as many resources as possible to building more hyper-efficient processors. Deconstructing Mercury and using it to build a sphere of orbiting computers around the sun. Figuring out fusion so as to make more use of the sun’s energy. Turning the sun into a black hole and using it as a heatsink. Etc. Not necessarily in that order.
Let’s be specific: Before the AGI goes FOOM and takes over human society, where will its increases in computing power come from? Why won’t achieving those gains require solving computationally hard problems?
Your examples of wonder technologies like converting Mercury into computronium and solving fusion are plausible acts for a post-FOOM AGI, not a pre-FOOM AGI. I’m asserting that the path from one to the other leads through computationally hard problems. For example, a pre-FOOM AGI is likely to want to decrypt something protected by a 512-bit key, right?
The first 3 among those are a few decades to centuries out of our own reach. We wouldn’t use Mercury to build a Dyson Sphere/ring, because we need the sunlight. But we’re actively working on building more and better processors and attempting to turn fusion into a viable technology.
Also, have you heard of lead-pipe cryptanalysis? Decrypting a 512-bit key is doing things the hard way. Putting up a million-dollar bounty for anyone who determines the content of the message is the easy way.
There are problems that can’t be solved simply by publicly throwing hundreds of millions of dollars at them. For example, an agent probably could swing the elections for Mayor of London between the two candidates running with that kind of money, but probably could not get a person of their choice chosen if they weren’t already a fairly plausible candidate. And I don’t think total control of the US nuclear arsenal is susceptible to lead-pipe cryptanalysis.
In short, world takeover is filled with hard problems that a pre-FOOM AGI probably would not be smart enough to solve. Going FOOM implies that the AGI will pass through the period of vulnerability to human institutions (like the US military) faster than those institutions will realize that there is a threat and organize to act against it. Achieving that invulnerability seems to require solving problems that an AGI without massive resources would not be smart enough to solve.
It all depends on whether an AGI can start out significantly past human intelligence. If the answer is no, then it’s really not a significant danger. If the answer is yes, then it will be able to determine alternatives we can’t.
Also, even a small group of humans could swing the election for Mayor of London. An AGI with a few million dollars at its disposal might be able to hire such a group.
It’s perhaps also worth asking whether intelligence is as linear as all that.
If an AGI is on aggregate lower than human intelligence, but is architected differently than humans such that areas of mindspace are available to it that humans are unable to exploit due to our cognitive architecture (in a sense analogous to how humans are better general-purpose movers-around than cars, but cars can nevertheless perform certain important moving-around tasks far better than humans) then that AGI may well have a significant impact on our environment (much as the invention of cars did).
Whether this is a danger or not depends a lot on specifics, but in terms of pure threat capacity… well, anything that can significantly change the environment can significantly damage those of us living in that environment.
All of that said, it seems clear that the original context was focused on a particular set of problems, and concerned with the theoretical ability of intelligences to solve problems in that set. The safety/danger/effectiveness of intelligence in a broader sense is, I think, beside the OP’s point. Maybe.
Yes, that is the key question. I suspect that AGI will be human-level intelligent for some amount of time (maybe only a few seconds). So the question of how the AGI gets smarter than that is very important in analyzing the likelihood of FOOM.
Re: Elections—hundreds of millions of dollars might affect whether Boehner or Pelosi was president of the United States in 2016. There’s essentially no chance that that amount of money could make me President in 2016.
Perhaps not make you president, but that amount of money and an absence of moral qualms could probably give you an equivalent ability to get things done. President of the US is considerably more difficult than Mayor of London (I think). However, both of those seem to be less than maximally efficient at accomplishing specific goals. For that, you’d want to become the CEO of a large company or something similar (which you could probably do with $1-500M, depending on the company). Or perhaps CIO or CFO, if that suits your interests better.
I think we basically agree, then, although I haven’t carefully thought about all possible ways to increase processing power.
If it turns out that “human hardware” is as intelligent as it is possible to be, that entails many things in addition to the assertion that AGI won’t go FOOM.
Downvoted for agreement—but I’m interpreting “be as intelligent as it is possible to be” charitably, to mean something like ‘within half a dozen orders of magnitude of the physical limits’.
If particles snap to grid once you get down far enough, then there are a finite, though very large, number of ways you could configure atoms and stuff them into a limited amount of space. Which trivially implies that the maximum amount of intelligence you could fit into a finite amount of space is bounded.
And of course you could also update perfectly on every piece of evidence, simulate every possibility, etc., in this hypothetical universe. This is the theoretical maximum bound on intelligence.
If our universe can be well approximated by a snap to grid universe, or really can be well approximated by any Turing machine at all, then your statements seem trivially true.
It’s called the Bekenstein bound and it doesn’t require discreteness.
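For reference, the bound says (assuming the standard form, with k Boltzmann’s constant) that the entropy S, and hence the information content, of a sphere of radius R containing total energy E satisfies

$$ S \le \frac{2 \pi k R E}{\hbar c} $$

so a finite region with finite energy can hold at most a finite number of bits, with or without discreteness.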
Do you mean “an upper limit” relative to available computing power or in an absolute sense?
Irrationality game
0 and 1 are probabilities. (100%)
Upvoted not for the claim, but the ridiculously high confidence in that claim.
Are you saying that you probably agree that 0 and 1 are probabilities, but my claim is not one of the things you would assign a probability of 1 to?
I believe 0 and 1 are probabilities, but there is no way to obtain that degree of certainty. (unless you have an incredibly clever method you aren’t sharing, which is mean)
An analogy would be that I believe that 3^^^3 is a number, even though I don’t think I will ever have that many dollars. Similarly, I believe that 0 and 1 are probabilities, but I wouldn’t grant any particular belief a probability of 0 or 1.
My intention was to make a stronger claim than the one you agree with, but fortunately my degree of confidence takes care of that for me.
I’d like to point out that anyone who does not share the (claimed) Infinite Certainty should be upvoting, as this confidence level is infinitely higher than any other possible confidence level. (It’s kind of like, if you agree that dividing by zero is merely an error, then any claim to infinite certainty is also an error, almost exactly the same error in fact.)
Nothing we can say will change your mind, unless you already don’t believe this.
Downvoted for agreement. Trivially, P(A|A)=1 and P(A|~A)=0.
Does this belief really affect anything, or is it only a proposition considered true without any consequences on your cognitive processes? (I’ve always regarded “0 and 1 are not probabilities” as more of a rhetorical figure than a statement of belief.)
Well, on a somewhat trivial note, I (plan to) make my living proving that certain things have probabilities distinct from 0, so if 0 and 1 weren’t probabilities to begin with I’d be out of a job.
That’s not really it, though, because I think the “0 and 1 are not probabilities” claim is really about degrees of belief in non-mathematical propositions. In its most-reasonable-to-me form, it says something like “Even if you have an argument that statement S is true with probability 1, you should believe Pr[S] < 1, because your argument could be wrong”. And there’s… really not a lot I could say in response to that. Except I would note that the value 1 isn’t really special here.
But there’s a lot of things that go together with this idea that I do disagree with. In very many senses, even non-mathematical propositions do end up having probabilities of 0 or 1. For instance:
Any time we deal with (even theoretical) infinities (this one is important because here we get events with probability 0 that can actually happen)
Tautologies (duh)
Conditional probabilities (nobody really disagrees with this, but I think lots of probabilities we think are unconditional aren’t)
Any belief that I can never be talked out of (given how the human mind works, probably most beliefs we have are like this actually)
Plus in practice accepting “0 and 1 are not probabilities” rhetorically or otherwise just means that you stop writing 1 and start writing 1-epsilon. Whose belief is it really that doesn’t affect anything?
Tautologies are true for mathematical reasons and there is little difference—as far as probability assessment goes—between “P ∨ ∽P” and “Yding Skovhøj is the highest peak of Egypt or Yding Skovhøj is not the highest peak of Egypt”. Thus, tautologies (and pseudologies, or whatever we call their false counterparts) don’t really form a category distinct from mathematical statements.
I am not sure what you mean here. Of course there are conditional probabilities of the form “if X, then X”, but they already belong to the tautology group.
Regarding mathematical statements it’s nevertheless important to notice that there are two meanings of “probability”. First, there is what I would call “idealised” or “mathematical probability”, formally defined inside a mathematical theory. One typically defines probability as a measure over some abstract space and usually is able to prove that there exist sets of probability 1 or 0. This is, more or less, the sort of probability relevant to the probabilistic method you have linked to. Second, there is the “psychological” probability which has the intuitive meaning of “degree of belief”, where 1 and 0 refer to absolute certainty. This is, more or less, the sort of probability spoken about in “1 and 0 aren’t probabilities”.
These two kinds of probabilities may correspond to each other more or less closely, but aren’t the same: having a formal proof of a proposition isn’t the same as being absolutely certain about it; people make mistakes when checking proofs.
If believing P doesn’t affect anything, then naturally believing non-P doesn’t affect anything either. So, if you agree that “1 and 0 aren’t probabilities” is an inconsequential belief, does it mean that your answer to my original question is “yes”?
Saying that my belief in P is inconsequential implies that actually I am acting as if I believed not-P, even though I profess a belief in P. I argue that, conversely, many people who profess a belief in not-P act as if they actually believe P.
The point is that “acting as if one believes P” and “acting as if one believes not-P” can sometimes be the same actions. This is what I meant by “inconsequential”. I want to know whether, in your opinion, this is such a situation; that is, whether there is some imaginable behaviour (other than professing the belief) which would make sense if one believed that “1 is not a probability” but would not make sense if one believed otherwise.
I suspect that with enough resources you could be talked out of any of your beliefs. Oh, sure, it would take a lot of time, planning, and manpower (and probably some people you approve of having the beliefs we’d want to indoctrinate you with). You’re not actually 100% certain that you’re 100% certain that 0 and 1 are probabilities.
The trouble with thinking 0 or 1 is a probability is that it is exactly equivalent to having an infinite amount of evidence, which is impossible by the laws of thermodynamics; minds exist within physics.
Furthermore, a feeling of absolute certainty isn’t even a number, much less a probability.
At some point you have to ask: who is this “me” that can have any arbitrary collection of beliefs?
(And yes, incidentally, I don’t assign 100% probability to the fact that I assign 100% probability to the statement “0 and 1 are probabilities.” I think I could be persuaded, not to have a lower confidence in the 0-1 statement, but to believe that my confidence in it is lower than it is. This is sort of hard to think about, though.)
Funny.
This doesn’t offer any anticipation about the world for me to agree or disagree with. Probability is just a formalism you use, and there’s no reason for you not to define the formalism any way you want.
So you are more confident in math than in hallucinating this entire interaction with an internet forum?
I’m not quite sure how to parse that, but I’ll do my best. I am more confident in math than I am in my belief that arbitrary parts of my life are not hallucinations.
Damn… You’re good. Anyway, 1 and 0 aren’t probabilities because Bayes’ Theorem breaks down there (in the log-odds/information base, where Bayes’ Theorem is simple addition, they are positive and negative infinity). You can, however, meaningfully construct limits of probabilities. I prefer the notation (1 -) epsilon.
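To make the log-odds point concrete, here is a minimal sketch in Python (purely illustrative, not anything from the thread): the update rule becomes addition of a log likelihood ratio, and the transform has no finite value at p = 0 or p = 1.

import math

def log_odds(p):
    # Probability -> log-odds; blows up at p = 0 and p = 1.
    return math.log(p / (1 - p))

def from_log_odds(l):
    # Inverse transform (the logistic function).
    return 1 / (1 + math.exp(-l))

# Updating on evidence E is just adding the log likelihood ratio
# log(P(E|H) / P(E|~H)): Bayes' theorem as addition.
prior = 0.5
llr = math.log(0.9 / 0.1)  # evidence favouring H at 9:1
print(from_log_odds(log_odds(prior) + llr))  # ~0.9

# log_odds(1.0) raises ZeroDivisionError and log_odds(0.0) a math domain
# error: certainty corresponds to plus or minus infinity in this basis.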
Log-odds aren’t what probability is, they’re a way to think about probability. They happen not to work so well when the probabilities are 0 and 1; they also fail rather dramatically for probability density functions. That doesn’t mean they don’t have their uses.
Similarly, Bayes’s Theorem breaks down because the proof of it assumes a nonzero probability. This isn’t fixed by defining away 0 and 1, because the theorem can still return those as output, and then you end up looking silly. In many cases, being unable to condition on an event of probability 0 is exactly the right behavior: given that a d6 comes up both odd and even, what is the probability that the result is higher than 3?
[I tried saying some things about conditioning on sets of measure 0 here, but apparently I don’t know what I’m talking about so I will retract that portion of the comment for the sake of clarity.]
Log-odds are perfectly isomorphic with probabilities and satisfy Cox’s Theorem. Saying that log-odds are not what probabilities are is as much of a non sequitur as saying 2+2 isn’t a valid representation of 4.
Bayes’ theorem assumes no such thing as non-zero probability; it assumes Real Numbered probabilities, as it is in fact a perfectly valid statement of real-number arithmetic in any other context. It just so happens that this arithmetic expression is undefined when certain variables are 0, and is an identity (equal to 1) when certain variables are 1. Neither is particularly interesting.
Bayes Theorem is interesting because it becomes propositional logic when you apply it to a limit going towards 1 or 0.
Real-life applications are not my expertise, but I know my groups, categories and types. 0 and 1 are not probabilities, just as positive and negative infinity are not Real Numbers. This is a truth derived directly from Russell’s axioms, which are the definitional basis for all modern mathematics.
When you say P(A) = 1 you are not using probabilities anymore; at best you are doing propositional logic, at worst you’ll get a type error. If you want to be as sure as you can, let credence be 1 - epsilon for arbitrarily small positive real epsilon.
1 and 0 are not probabilities by definition
Clearly log-odds aren’t perfectly isomorphic with multiplicative probabilities, since clearly one allows probabilities of 0 and 1 and the other doesn’t.
Bayes’s theorem does assume nonzero probability, as you can observe by examining its proof.
Pr[A & B] = Pr[B] Pr[A|B] = Pr[A] Pr[B|A] by definition of conditional probability.
Pr[A|B] = Pr[A] Pr[B|A] / Pr[B] if we divide by Pr[B]. This assumes Pr[B]>0 because otherwise this operation is invalid.
You can’t derive properties of probability from Russell’s axioms, because these describe set theory and not probability. One standard way of deriving properties of probability is via Dutch Book arguments. These can only show that probabilities must be in the range [0,1] (including the endpoints). In fact, no finite sequence of bets you offer me can distinguish a credence of 1 from a credence of 1-epsilon for sufficiently small epsilon. (That is, for any epsilon, there’s a bet that distinguishes 1-epsilon from 1, but for any sequence of bets, there’s a 1-epsilon that is indistinguishable from 1).
Here is an analogy. The well-known formula D = RT describes the relationship between distance traveled, average speed, and time. You can also express this as log(D) = log(R) + log(T) if you like, or D/R = T. In either of these formulas, setting R=0 will be an error. This doesn’t mean that there’s no such thing as a speed of 0, and if you think your speed is 0 you are actually traveling at a speed of epsilon for some very small value of epsilon. It just means that when you passed to these (mostly equivalent) formulations, you lost the capability to discuss speeds of 0. In fact, when we set R to 0 in the original formula, we get a more useful description of what happens: D=0 no matter the value of T. In other words, 0 is a valid speed, but you can’t travel a nonzero distance with an average speed of zero, no matter how much time you allow yourself.
What is the difference between log-odds and log-speeds, that makes the former an isomorphism and the latter an imperfect description?
Finally, do you really think that someone who thinks “0 and 1 are probabilities” is a statement LW is irrational about is unaware of the “0 and 1 are not probabilities” post?
Potholing that last sentence was mostly for fun.
By virtue of the definition of a logarithm, exp(log(x)) = x, we can derive that since the exponential function is well-defined for complex numbers, so is the logarithm. Taking the logarithm of a negative number nets you the logarithm of its absolute value plus i times pi. The real part of any logarithm is a symmetric function, and there are probably a few other interesting properties of logarithms in complex analysis that I don’t know of.
log(0) is undefined, as you note, but that does not mean the limit of log(x) as x → 0 is undefined. It is in fact a pole singularity (if I have understood my singularity analysis correctly). No matter how you approach it, you get negative infinity. So given your “logarithmic velocities,” I counter with the fact that using a limit it is still a perfect isomorphism. The limit of exp(x) as x → -inf is indeed 0, so when using limits (which is practically what Real Numbers are for) your argument that log isn’t an isomorphism from reals to reals is hereby proven invalid (if you want a step-by-step proof, just ask, it’ll be a fun exercise).
Given that logarithms are a category-theoretic isomorphism from the reals onto the reals (from the multiplication group onto the addition group), there is no reason why log-odds aren’t as valid as odds, which are as valid as ]0,1[ probabilities. Infinities are not valid Reals; 0 and 1 are not valid probabilities. QED.
As I said. Do not challenge anyone (including me) on the abstract algebra of the matter.
[I do apologize if the argument is poorly formulated, I am as of writing mildly intoxicated]
Yes, log(x) is an isomorphism from the positive reals as a multiplicative group to the real numbers as an additive group. As a result, it is only an isomorphism from multiplicative probabilities to additive log-probabilities if you assume that 0 is not a probability to begin with, which is circular logic.
So, pray tell: When are P=0 and P=1 applicable? Don’t you get paradoxes? What prior allows you to attain them?
I am really genuinely curious what sort of proofs you have access to that I do not.
To obtain paradoxes, it is you that would need access to more proofs than I do.
From an evidence-based point of view, as a contrapositive of the usual argument against P=0 and P=1, we can say that if it’s possible to convince me that a statement might be false, it must be that I already assign it probability strictly <1.
As you may have guessed, I also don’t agree with the point of view that I can be convinced of the truth of any statement, even given arbitrarily bizarre circumstances. I believe that one needs rules by which to reason. Obviously these can be changed, but you need meta-rules to describe how you change rules, and possibly meta-meta-rules as well, but there must be something basic to use.
So I assign P=1 to things that are fundamental to the way I think about the world. In my case, this includes the way I think about probability.
You can’t really do this, since the answer depends on how you take the limit. You can find a limit of conditional probabilities, but saying “the probability distribution of Y given X=x” is ambiguous. This is known as the Borel-Kolmogorov paradox.
Oops. Right, I knew there were some problems here, but I thought the way I defined it I was safe. I guess not. Thanks for keeping me honest!
I’d not seen Eliezer’s post on “0 and 1 are not probabilities” before. It was a very interesting point. The link at the end was very amusing.
However, it seems he meant “it would be more useful to define probabilities excluding 0 and 1” (which may well be true), but phrased it as if it were a statement of fact. I think this is dangerous and almost always counterproductive—if you mean “I think you are using these words wrong” you should say that, not give the impression you mean “that statement you made with those words is false according to your interpretation of those words”.
Only Sith deal in absolutes!
I am very happy that the parent is currently at 0 karma.
Upvoted in furious, happy disagreement, because I was going to post this very thing, with a confidence level of 20%, but then I reasoned out that this was unbelievably stupid and the probability of infinite Bayesian evidence being possible should be the same as probabilities for other things we have very strong reason to believe are simply impossible: 1 - epsilon.
I’m pretty sure the probability of almost certainly impossible things being possible is lower than 1-epsilon. Except for very large values of epsilon.
Indeed, for values of epsilon approaching one.
I suppose if I wanted to maximize karma I should have stated a confidence level of 0%.
You’re supposed to post things you actually believe, you know! What are you, a spirit-of-the-game violator?
I do believe the 100% thing, though. It’s just that in this case, karma is not maximized where spirit-of-the-game is maximized, and I thought I’d point that out.
Gaining utility from karma, illegitimate or fraudulent sources notwithstanding, is an ongoing problem which never ceases to amuse me. Let the humans have their fun!
Downvoted for agreement. Of course it usually isn’t rational to assign probabilities of 0 and 1, but in this case I think it is.
Computationalism is an incorrect model of cognition. Brains compute, but mind is not what the brain does. There is no self hiding inside your apesuit. You are the apesuit. Minds are embodied and extended, and a major reason why the research program to build synthetic intelligences has largely gone nowhere since its inception is the failure of many researchers to understand/agree with this idea.
70%
Just because I am an apesuit, doesn’t mean I need to dress my synthetic intelligence in one.
Have you been reading this recently?
More particularly, anything that links to this post.
Do you believe an upload with a simulated body would work? How high-fidelity?
I don’t understand why you don’t believe that computations can be “embodied and extended.”
I do believe that the fact that any kind of human emulation would have to be embedded into a digital body with sensory inputs is underdiscussed here, though I’m not even sure what constitutes scientific literature on the subject so I don’t want to make statements about that.
Computations can be embodied and extended, but computationalism regards embodiment and extension as unworthy of interest or concern. Downvoted the parent for being probably right.
Can you provide a citation for that point?
Not knowing anything really about academic cognitive psychologists, and just being someone who identifies as a computationalist, I feel like the embodiment of a computation is still very important to ANY computation.
If the OP means that researchers underestimate the plasticity of the brain in response to its inputs and outputs, and that their research doesn’t draw a circle around the right “computer” to develop a good theory of mind, then I’m extra interested to see some kind of reference to papers which attempt to isolate the brain too much.
I understand “computationalism” as referring to the philosophical Computational Theory of the Mind (wiki, Stanford Encyclopedia of Phil.). From the wiki:
From the SEP:
Because computation is about syntax not semantics, the physical context—embodiment and extension—is irrelevant to computation qua computation. That is what I mean when I say that embodiment and extension are regarded as of no interest. Of course, if a philosopher is less thorough-going about computationalism, leaving pains and depression out of it for example, then embodiment may be of interest for those mental events.
However, your last paragraph throws a monkey wrench into my reasoning, because you raise the possibility of a “computer” drawn to include more territory. All I can say is, that would be unusual, and it seems more straightforward to delineate the syntactic rules of the visual system’s edge-detection and blob-detection processes, for example, than of the whole organism+world system.
I feel like we are talking past each other in a way that I do not know how to pinpoint.
Part of the problem is that I am trying to compare three things—what I believe, the original statement, and the theory of computationalism.
To try to summarize each of these in a sentence:
I believe that the entire universe essentially “is” a computation, and so minds are necessarily PARTS of computations, but these computations involve their environments. The theory of computationalism tries to understand minds as computations, separate from the environment. The OP suggests that computationalism is likely not a very good way of figuring out minds.
1) do these summaries seem accurate to you? 2) I still can’t tell whether my beliefs agree or disagree with either of the other two statements. Is it clearer from an outside perspective?
Your summaries look good to me. As compared to your beliefs, standard Computational Theory of Mind is probably neither true nor false, because it’s defined in the context of assumptions you reject. Without those assumptions granted, it fails to state a proposition, I think.
I am constantly surprised and alarmed by how many things end up this way.
Irrationality Game
I believe that exposure to rationality (in the LW sense) at today’s state does in general more harm than good^ to someone who’s already a skeptic. 80%
^ In the sense of generating less happiness and in general less “winning”.
I realized I didn’t have a model of an average skeptic, so I am not sure what my opinion on this topic actually is.
My provisional model of an average skeptic is like this: “You guys as LW have a good point about religion being irrational; the math is kind of interesting, but boring; and the ideas about superhuman intelligence and quantum physics being more than just equations are completely crazy.”
No harm, no benefit, tomorrow everything is forgotten.
I roughly agree with this one. This is something that we would not see much evidence of, if true.
Downvoted.
Could you provide support? Have you seen http://lesswrong.com/lw/7s4/poll_results_lw_probably_doesnt_cause_akrasia/, by the way?
I predict with about 60% probability that exposure to LW rationality benefits skeptics more and is also more likely to harm non-skeptics.
I’ll bite:
The U.S. government deliberately provoked the attack on Pearl Harbour through diplomacy and/or fleet redeployment, and it was not by chance that the carriers of the U.S. Pacific Fleet weren’t at port when the attack happened.
Very confident. (90-95%)
By the way, the reason I assume I am personally more rational about this than the LW average is that there are lots of US Americans around here, and I have sufficient evidence to believe that people tend to become less rational if a topic centrally involves a country they are emotionally involved with or whose educational system they went through.
I don’t have a lot of strong reasons to disbelieve you, but what evidence makes you think this is so?
Are you referring to my belief regarding the attack on Pearl Harbor, or to my belief regarding my rationality on this topic in relation to the LW average?
Does that mean that you have some strong reasons to disbelieve me?
Downvoted the comment for being bizarrely unresponsive, and the parent for being presumably reasonable in light of evidence that you refuse to share.
I want to know which things you’ve heard or seen that made you believe the United States government provoked the attack on Pearl Harbor. My best reason for doubting you is that I don’t recall hearing anything like this before from academics nor interested amateur historians nor conspiracy theorists.
My guess is that the biasing effects of being funneled through a country’s school system and subjected to its news are much weaker on those who would find LW interesting than the typical citizen.
For what it’s worth, I came across the theory before, in a pretty respectable setting: a popularization book by a historian, where many conspiracy theories (along with “mysteries” like Easter Island) were examined, usually with skeptical conclusions. The Pearl Harbor one was one of the few with a “possible, but unproven” verdict.
Do you remember the title of that book?
I read it long ago, in a Spanish translation from French. It seems the book has not been published in English. The original title is Dossiers secrets de l’histoire, by Alain Decaux.
That reduces the value of the example, IMO. Political conspiracy stuff relies on so much contextual material and government records that it’s hard for a foreigner to make a good appraisal of what went on. It would be like a monolingual American trying to make heads or tails of that incident decades ago (whose name escapes me at the moment) where a high-level Communist Party official died in an airplane crash with his family; was it a normal accident, or was he fleeing a failed coup attempt to Russia, as the conspiracy/coverup interpretations went? If you can’t even read Chinese, I have no idea how one could make an even half-decent attempt to judge the incident.
I have never heard of the book Alejandro1 refers to, but I read a book by Togo Shigenori, the Japanese foreign minister during that time, and he makes a lot of good points about how US diplomacy wasn’t focused on securing peace, but on forcing Japan into a war that could only benefit the USA in the long run. From his perspective, the oil embargo left Japan with no other reasonable option than to try to conquer the British and Dutch oil reserves in South East Asia; and I see as little reason to believe that the U.S. government wasn’t aware of this as he does.
Togo was an outspoken opponent of the war against the USA who made efforts towards more diplomatic exchange, which met little interest on the part of the U.S. government. He was the driving force behind Japan’s declaration that it would uphold the Geneva Convention, which Japan did not sign. He was also the originator of a peace settlement with the USSR earlier. Lastly, he was also of Korean descent, originally having the surname Park. All this adds up to sufficient evidence for me to believe that he was not a nationalist warmonger, and therefore I take his analysis very seriously.
LW readers seem to be better at evaluating arguments from different sides, but not necessarily at acquiring these arguments in the first place unless they are already interested in the topic. Also, the lack of history-related threads in the discussion area leads me to believe that there is no significant correlation between being interested in LW and being interested in history in general or historical accuracy in particular.
Regarding the first part, the truth of that statement critically depends on how exactly you define “provoke.” For some reasonable definitions, the statement is almost certainly true; for others, probably not.
As for the second part (the supposed intentional dispersion of the carriers), I don’t think that’s plausible. If anything, the U.S. would have been in a similar position, i.e. at war with Japan with guaranteed victory, even if every single ship under the U.S. flag magically got sunk on December 7, 1941. So even if there was a real conspiracy involved, it would have made no sense to add this large and risky element to it just to make the eventual victory somewhat quicker.
Also, your heuristic about bias is broken. In the Western world outside of the U.S., people are on average, if anything, only more inclined to believe the official historical narrative about WW2.
This is suspect. The U.S. had greater industrial capacities and population than Japan, but that doesn’t guarantee victory. Rebuilding the navy would take a lot of time which the Japanese could use to end their war in China. Also, it was far from clear in late 1941 whether the USSR would withstand the German assault and whether the British would not seek peace.
Even in the worst possible case, I still don’t see what could prevent the U.S. from simply cranking out a new huge Pacific navy and overwhelming Japan. Yes, the production would take a few years to ramp up to full capacity, as it did in reality—but once it did, I can’t imagine what could save Japan from being overwhelmed.
Ending the war in China wouldn’t have helped the Japanese at all, even if they linked with a victorious German army in the Far East. An additional land army at their disposal could not prevent the U.S. navy steamroller from eventually reaching their home islands, whereupon they would be bombed and starved into surrender. (If not for the atom bomb ending their agony even earlier.) The Japanese islands are so exposed and vulnerable to any superior naval power that they could be lost even as the world’s mightiest army is watching helplessly from the Asian mainland.
The only theoretical chance I see is if Germany somehow conquered both the U.S.S.R. and Britain, and then threw all its resources on a crash program to build up a huge navy of its own and help the Japanese. But I’m not sure if they’d be able to outproduce the U.S. even in that case. (And note that this would require a vanishingly improbable long continuation of the Germans’ lucky streak.)
In the context of this discussion the important thing is what could be reliably predicted in 1941, so we should ignore the possible effects of the atomic bomb.
Assume that the entire U.S. navy is destroyed in January 1942. A reasonable realistic scenario, if everything went really well for Japan, may be this:
Germans capture Leningrad and encircle Moscow in summer 1942, Stalin is arrested in the ensuing chaos, and the new Soviet government signs an armistice with Germany, ceding large territories in the west.
German effort is now concentrated on expanding its naval power. Germany has half of Europe’s industrial capacity at her disposal. The production of U-boats increases, and Britain alone does not have enough destroyers to guard the convoys.
Starvation, the threat of German invasion, and heavy naval losses to German submarines, leading to an inability to supply the Indian armies, make Britain accept Hitler’s peace offer. Britain surrenders Gibraltar, Malta, the Channel Islands and all interests in the European mainland to Germany and Italy, and Singapore and Malaya to Japan, and backs out of the war.
China now obtains no help, no arms, no aircraft and surrenders in 1944, becoming divided among several Japanese puppet states.
The U.S. is alone, still having no significant navy. Hawaii is lost to the Japanese. Germany is aggressively building new ships to improve its naval power and potentially help the Japanese in the Pacific. Roosevelt dies in early 1945, as he did historically. The Japanese offer a peace that would secure them the leading position in East Asia, willing to give Hawaii back.
Now in this situation, being a U.S. general, what would be your advice given to Truman? Would it be “let’s continue in a low intensity war against both Germany and Japan until we have a strong enough navy, which may be in 1947 or 1948, and then start taking one island after another, which may take two more years, and then, from the island bases supplied through the U-boat infested Pacific start bombarding Japan, until the damned fanatics realise they have no other chance than to surrender”? Or would it rather be “let’s accept peace if it’s offered on honourable terms”?
Even in that scenario, Japanese victory is conditional on the political decision of the U.S. government to accept the peace. My comments considered only the strategic situation under the assumption that all sides were willing to fight on with determination. And I don’t think this assumption is so unrealistic: the American people were extremely unwilling to enter the war, but once they did, they would have been even less willing to accept a humiliating peace. Especially since the great Pacific naval offensive could be (and historically was) fought with very low casualties, not to mention the U.S. government’s wartime control of the media, which was in many ways even more effective than the crude and heavy-handed control in totalitarian states.
Now, in your scenario, the U.S. would presumably see immediately that its first priority was navy rebuilding. (An army is useless if you can’t get it off the mainland.) This means that by 1944, Americans would be cranking out even more ships than they did historically. I don’t think the Axis could match that output even if they were in control of the entire Eurasia.
(The U-boats would have been a complicating factor. Their effectiveness changed dramatically with unpredictable innovations in technology and tactics. In actual history, they became useless by mid-1943, although Germans were arguably on the verge of introducing dramatically superior ones at the time of their capitulation. But in any case, the U-boat factor cuts both ways: Americans could swamp the Pacific with even greater numbers of U-boats and wreck the entire Japanese logistics, as they actually did.)
Even assuming a plausible scenario in which the US couldn’t defeat Germany, that doesn’t have anything to do with whether we could have defeated Japan standing alone.
Historically, we know it wasn’t that hard for the US—despite Japan attacking first, the US adopted a “Europe First” strategy that committed approx. 2⁄3 of capacity to fighting Germany. Despite this, the US defeated Japan easily—there are no major victories for Japan against the US after Pearl Harbor, and Midway was less than a year after Pearl Harbor. If the US strategy is “Japan First” (doing things like transferring the Atlantic Fleet to the Pacific), why should we expect the Pacific war would last long enough that Germany would be able to consolidate a victory in the east into driving the UK into peace and be able to intervene in the Pacific?
Also, why do you think an invasion of Hawaii was possible? The surprise strike was at the end of Japanese logistical capacity—I think the US wins if Japan tries a land invasion.
Remember the context: we are in the hypothetical where all US ships (Atlantic fleet included) were magically annihilated at the end of 1941.
I’m a big believer in not fighting the hypothetical, but there is no historically plausible account leading to the destruction of the Atlantic fleet. At that point, we aren’t discussing facts relevant to whether FDR knew of the Pearl Harbor attack ahead of time.
The hypothetical of Pearl Harbor as the most resounding success it could possibly be (US Pacific fleet reduced to irrelevance) and Germany winning the Battle of Moscow strongly enough that it has leverage to force the UK out of the war is reasonable for discussing FDR’s decision process. That’s all he could reasonably have thought he was risking by allowing Pearl Harbor. As I stated elsewhere, I think FDR gets his political goals with Japan firing the first shot—there’s no need for him to court a military disaster.
True, but I have joined this part of discussion reacting to this Vladimir_M’s comment:
Could you spell out what you mean by different definitions of “provoke”?
Anyhow, I am more concerned about the word “deliberate.” The government is not a coherent actor; it does not have deliberate actions. For example, FDR explicitly rejected an oil embargo, yet oil exports stopped. Was this because his subordinates correctly interpreted his wishes? Or were they more belligerent? In Present at the Creation (p26) Acheson seems to say that he implemented the embargo by mistake, thinking that Japan had hidden assets that would keep the flow going. On the following page, he agrees to accept payment from a Latin American bank, but something goes awry, seemingly out of his control. Delong asks if FDR even knew of the embargo.
Provoking: presenting someone with a multitude of bad choices, one of them being to attack you.
Deliberate: proceeding with an action in the hope of achieving a specific outcome.
Deliberately provoking: presenting someone with a multitude of bad choices, hoping they will attack you because of this.
The carrier fleet being operational was decisive in preventing an expected Japanese invasion of Midway and Hawaii, and recapturing Hawaii from the American continent would have been very difficult, if not outright impossible. What if China had surrendered or made peace with Japan? What if Germany had captured Leningrad, Moscow, and Stalingrad? What if the Japanese nuclear weapon program had succeeded? What if public opinion had turned anti-war, as during the Vietnam War?
“Guaranteed victory” sounds like hindsight bias to me. Even if the US mainland could not have been invaded, that doesn’t mean the USA could not have lost the war.
The point is that the “official historical narrative” is different in different countries. For example, Japan has a strong culture of ignoring Japanese war crimes, Polish textbooks rarely mention Poland taking part in the partition of Czechoslovakia, Britons are generally unaware of the fact that GB declared war on Germany and not vice versa, many French think that the surrender to Germany was an action the government did not have the license to make, and so on.
“The government” is an abstract concept. I am talking about a circle of people within the government who together had the power to provoke Japan, and to assure that the losses at Pearl Harbor were within reasonable bounds. I am not overly familiar with the way the U.S. government was organised at that time, but it seems to me that such a circle had to include either the president or high ranking intelligence officials, most likely both.
It wouldn’t have mattered for the Pacific war, except by prolonging it somewhat. Even if Japan had conquered every single island in the Pacific and Indian oceans, as long as the U.S. government remained in control of the U.S. mainland, as it surely would have, it still would have had enough resources and industrial capacity to outproduce Japan in warships and other naval assets by orders of magnitude and eventually roll back the Japanese conquests by sheer overwhelming strength.
Germany arguably had some chance to win the European war, but Japan was doomed from day one.
Also, as someone has already noted, the greater importance of carriers over battleships in WW2 is itself known only from hindsight, and contrary to the prevailing beliefs of the time.
Well, yes, you can always conceive of some deus ex machina. But it’s implausible that fears about hypothetical Japanese superweapons would have influenced the strategic plans of FDR & Co. in 1941.
By 1941, FDR & Co. already had sufficiently strong grip on power that they comfortably knew that a war would allow them to seize complete control of the media (and all other means of propaganda) and ensure that this could never happen.
True enough, but this typically has the form of the same official narrative with some additional spin, omission, and lying with regard to the relevant local details in order to accommodate nationalist sensibilities. In contrast, sensible, intelligent, well-informed, and yet radical criticism of the official narrative can be found, to my knowledge, only within the Old Right intellectual tradition in the U.S. (Which has been driven to the fringe for many decades, but its vestiges somehow still occasionally surface in the respectable public discourse.)
American public opinion may have expected such invasions, but did any serious military experts? Earl Warren and FDR’s political pandering is not really strong evidence of a serious military expectation. Obviously, we know now that the Pearl Harbor attack was at the outermost of Japanese logistical capacity—they never planned an invasion of Hawaii, much less the West Coast.
Given the history, we know that transpacific projections of land forces were very possible for the United States (Guadalcanal, Iwo Jima). Why would an invasion of Hawaii be more difficult?
As an aside, I agree that FDR courted war because he wanted to join the European conflict. Lend-Lease and escorting convoys were not the acts of a neutral party. Likewise, the raw material embargos on Japan placed that nation in an untenable position. I upvoted you for asserting that FDR knew that Pearl Harbor would be attacked in time to make changes to defensive preparations at that base. From FDR’s perspective, a “surprise” attack that was a stalemate instead of a defeat would have served his political goal (war with Germany) just as well.
There were proponents of an invasion of Hawaii within the Japanese military cabinet; I think Genda Minoru was one of them. Plans existed, but were deemed too risky and unlikely to succeed.
I never said anything about an invasion of the US West Coast, but the Japanese invasion of the Aleutian islands was supposed to be the first stage of an invasion of Alaska. Had that plan succeeded, Japan would have been in control of naval bases within reasonable distance of the US West Coast.
Guadalcanal and Iwo Jima were within range of US forward bases. Carrying out a large-scale invasion over a distance of about 4,000 km is not something any military power was capable of during WW2, to my knowledge.
Well, “provocation” is one of those problematic words, in that nearly always, the party accused of “provocation” denies it—and the act itself is therefore nearly always done in a way that attempts for some plausible deniability. So even if there is agreement on the facts of what happened, there is usually room for debate over whether an act constituted “provocation.”
Of course. But under FDR, he and his inner circle did act in a fairly coherent way (and by extension, so did the entire pyramid of New Deal patronage that they headed). There were certainly individuals and institutions within the U.S. government outside of their control, but by 1941, they had been mostly side-stepped and pushed away into irrelevance.
I wouldn’t consider Acheson a credible source. Certainly, it’s very naive to take anything written by the political actors of the New Deal/WW2 era at face value, and disentangling the real events from the available information is a task of enormous complexity and difficulty. That rabbit hole is very, very deep.
It seems to me very different to say that it is difficult to assess whether something is a provocation than to say that there are some definitions of provocation under which it is and some under which it isn’t.
Do you think Acheson would lie about external facts, like whether he offered to let the Japanese pay with money in a Latin American bank account?
If we could read minds (including those in the past), it would probably be possible to come to agreement about which concrete acts have been provocations in all cases, by looking for the mens rea: was the given act specifically motivated by the desire to induce a hostile reaction?
But since we can’t read minds, the practical criteria for what counts as “provocation” are murky, and they are typically a mixture of attempts to evaluate indirect evidence about motives and attempts to define certain acts in certain contexts as ipso facto provocative. So there is lots of difficulty on both fronts, even if there is a general agreement on what happened: it’s hard to evaluate the evidence about motives correctly, and there is also disagreement on which acts qualify as ipso facto provocative.
In this concrete case, some people would say that the actions of the U.S. government prior to Pearl Harbor were ipso facto provocative, i.e. that they were far outside of the limits of reasonable behavior of someone who is not actively trying to provoke hostility. Others would say that it isn’t so, and they’d presumably also claim that there is no clear evidence about motives to pronounce the verdict of “provocation.”
It strikes me as wildly implausible that someone relatively low in the pecking order, like Acheson in 1941, could have been in a position to make such tremendous history-shaping decisions on his own whim and without directions from above. So I think his account presents, at best, a strong lawyerly spin on the events with plenty of important omissions, even if there is no outright lying.
Now, why the oil embargo was instituted in this particular puzzling way, I don’t know. I’ve never found the time to sit down and study all the available sources in detail. However, it seems to me that the most probable explanation is that FDR and his clique wanted to execute the embargo in a duplicitous and plausibly deniable way (which would be very much within their usual modus operandi), so they tried to make it look like an underling did the paperwork of export licensing a bit too eagerly, and then also the Japanese unreasonably failed to do the correct bureaucratic procedure, etc., etc.
The “and it was not chance” bit? That requires the conspirators be non-human.
Carrier supremacy was hardly an established doctrine, much less proved in battle; orthodox belief since Mahan was that battleships were the most important ships in a fleet. The orthodox method of preserving the US Navy’s power would have been to disperse battleships, not carriers. Even if the conspirators were all believers in the importance of carriers, even a minimum of caution would have led them to find an excuse to also save some of the battleships. To believe at 90% confidence that a group of senior naval officials, while engaging in a high-stakes conspiracy, also took a huge un-hedged gamble on an idea that directly contradicted the established naval dogma they were steeped in since they were midshipmen, is ludicrous.
Not really. It wasn’t just “a carrier fleet” and “a battleship fleet”, it was a predominantly modern carrier fleet and an outdated battleship fleet that consisted mostly of WWI designs or modifications of WWI designs. It was also consensus that if you were going to deploy carriers, the Pacific Ocean was a more promising theatre than the Atlantic ocean, due to (a) the weather and (b) the lack of strategically positioned air bases on land that were in little danger of being invaded, such as Newfoundland, Great Britain, West Africa, and so on. Also, the U.S. Navy could have commissioned more battleships instead of carriers, but they didn’t, and that means they did have plans for them; most likely in the Pacific theatre. It was clear from the start that being at war with Japan would also mean being at war with Germany, so fighting only on the Pacific front was never an option.
I didn’t say they wouldn’t try to save the carriers. I said they would have hedged their bets by also dispersing some of the battleships. Your 90% confidence in your whole conjunct opinion requires a greater-than-90% confidence in the proposition that while saving the carriers, the people involved, all steeped in battleship supremacy/prestige for decades, would deliberately leave all the battleships vulnerable, rather than disperse even one or two as a hedge.
Only in violation of the Washington and First London Naval Treaties. The US Navy could not have built more battleships at the time it started, for example, the Enterprise (1934) under those treaties.
I note that in the period 1937-to-Pearl-Harbor, which is to say subsequent to the 1936 Second London Naval Treaty that allowed it, the US Navy started no fewer than nine new battleships (and got funding authorization for a tenth), which suggests that they still seriously believed in battleships. Otherwise, why not build carriers in their place?
But they did disperse some of the battleships. That’s why all the battleships at Pearl Harbor were outdated classes. They didn’t have that many outdated carriers, and carriers retain their value more over the course of time than battleships and battlecruisers do.
The value:tonnage ratio of capital ships sunk at Pearl Harbor was significantly lower than the value:tonnage ratio of capital ships in the surviving fleets in the Pacific Ocean and elsewhere. This was never about carriers versus battleships; it was about vessels with high value versus vessels with low value.
Er? What battleships are you claiming were dispersed?
There were quite literally no newer battleships on active duty in the US Navy on December 7th, 1941 than the West Virginia, “outdated class” or no, sunk at Pearl Harbor along with her brand-new CXAM-1 radar. The only newer battleships in commission were the North Carolina and Washington, both of which were not yet on active duty because of delays caused by propeller issues.
Yes, I was referring to the North Carolina. She had already completed her sea trials, but was not yet headed for Pearl Harbor when the attack happened.
Also, of the 18 heavy cruisers the US Navy had in 1941 (all of them being post-WW1 designs), only two were present at Pearl Harbor.
Do you think that the U.S. government provoked an attack specifically on Pearl Harbor, or that they just wanted the Japanese to attack somewhere?
Where exactly do you place the boundary of deliberate provocation? That is, does not trying too hard to prevent the attack count, or had they have to be actively persuading the Japanese and moving the fleets into easily attackable positions?
I think they wanted the Japanese to attack somewhere, but they were aware of the fact that Pearl Harbor was a likely target.
I think they were actively persuading the Japanese to commit some act of war, and were not trying too hard to prevent the specific act of war that happened.
Upvoted, not for the assertion, but for the confidence level (I would give it 25-75%)
Thanks; I assumed the many upvotes came from people who considered my confidence level too high, not too low, but it’s nice to have someone actually confirm that.
I have seen a few low-status conspiracy theorists advocating a position like this, and eventually started to agree that provoking an attack from an enemy is a strategy the US has used several times this century. My probability for this particular incident is still around 75% at most, though.
Irrationality Game
Being a materialist doesn’t exclude nearly as much of the magical, religious, and anomalous as most materialists believe because matter/energy is much weirder than is currently scientifically accepted.
75% certainty.
Upvoted, as many phenomena that get labelled “magical” or “religious” have readily-identifiable materialist causes. For those phenomena to be a consequence of esoteric physics and to have a more pedestrian materialist explanation that turns out to be incorrect, and to conform to enough of a culturally-prescribed category of magical phenomena to be labelled as such in the first place seems like a staggering collection of coincidences.
I’m having trouble understanding what you are claiming. It seems that once anything is found to exist in the actual world, people won’t call it “magical” or “anomalous”. When Hermione Granger uses an invisibility cloak, it’s magic. When researchers at the University of Dallas use an invisibility cloak, it’s science.
What I meant was that there may be more to such things as auras, ghosts, precognition, free will, etc. than current skepticism allows for, while still not having anything in the universe other than matter/energy.
Taboo “matter/energy”.
Well damn. What is left? “You know… like… the stuff that there is.”
Thank you. I was about to ask the same thing.
Algebra.
Causes and effects.
Good point. But this ‘cause’ word is still a little nebulous and seems to confuse some people. Taboo ‘cause’!
My point is that what counts as matter/energy may very well not be obvious in different theories.
Upvoted for disagreement, with the quibble that there is probably room for a lot of interesting things in the realm of human experience that, while not necessarily relating one-to-one with nonhuman physical reality, have significance within the context of human thought or social interaction and contain elements that normally get lumped into the magical or religious.
Downvoted for agreement. (Retracted because I realized you were talking about in our universe, and I was thinking in principle)
Nitpick: do you really mean this? Current scientific theories are pretty damn weird. But not, in your view, weird enough?
I’m pretty sure that the current theories aren’t weird enough, but less sure that current theories need to be modified to include various things that people experience. However, it does seem to me that materialists are very quick to conclude that mental phenomena have straightforward physical explanations.
May I remind you that scientists recently created and indirectly observed the elementary particle responsible for mass?
The smallest mote of the thing that makes stuff have inertia. Has. Been. Indirectly. Observed.
What.
Do materialists still exist? In order to vote on this am I to imagine what not-necessarily-coherent model a materialist should in some sense have given their irreversible handicap in the form of a misguided metaphysic? If so I’d vote down; if not I’d vote up.
irrationality game: The universe is, due to some non-reducible (i.e. non-physical) entity, indeterministic. 95% That entity is the human mind (not brain). 90%
Irrationality Game:
These claims assume MWI is true.
Claim #1: Given that MWI is true, a sentient individual will be subjectively immortal. This is motivated by the idea that branches in which death occurs can be ignored and that there are always enough branches for some form of subjective consciousness to continue.
Claim #2: The vast majority of the long-term states a person will experience will be so radically different than the normal human experience that they are akin to perpetual torture.
P(Claim #1) = 60%
P(Claim #2 | Claim #1) = 99%
Given these beliefs, you should buy cryonics at almost any price, including prices at which I would no longer personally sign up and prices at which I would no longer advocate that other people sign up. Are you signed up? If not, then I upvote the above comment because I don’t believe you believe it. :)
Well, I agree with you that I should buy cryonics at very high prices and I plan on doing so. For the last few years I’ve spent the majority of my time in places where being signed up for cryonics wouldn’t make a difference (9 months out of the year on a submarine, and now overseas in a place where there aren’t any cryonics companies set up).
You should probably still upvote, because the < 1⁄4 of the time I’ve spent in situations where it would matter still more than justifies it. I should also never eat an ice cream Snickers again. I’ll be the first to admit I don’t behave perfectly rationally. :)
more people have died from cryocrastinating than cryonics ;)
The person may not believe that MWI is true; the beliefs were stated as being conditional.
Nevertheless, your argument does apply to me, since I have similar beliefs (or at least worries), and I also for the most part buy your arguments on MWI. I do plan to sign up for cryonics within the next year or so, but not at any price. This is because I don’t expect to die soon enough for my short-term motivational system to be affected.
Irrationality game
Money does buy happiness. In general the rich and powerful are in fact ridiculously happy to an extent we can’t imagine. The hedonic treadmill and similar theories are just a product of motivated cognition, and the wealthy and powerful have no incentive to tell us otherwise. 30%
Irrationality game comment:
Imagine that we transformed the Universe using some elegant mathematical mapping (think about a Fourier transform of the phase space) or that we were able to see the world through different quantum observables than we have today (seeing the world primarily in the momentum space, or even being able to experience “collapses” to eigenvectors not of x or p, but of a different, for us unobservable, operator, e.g. xp). Then, we would observe complex structures, perhaps with their own evolution and life and intelligence. That is, aliens can be all around us but remain as invisible as the Mona Lisa on a Fourier-transformed picture from the Louvre.
Probability : 15%.
Any blob (continuous, smooth, rapidly decreasing function) in momentum space corresponds to a blob in position space. That is, you can’t get structure in one without structure in the other.
The narrower the blob, the wider its Fourier transform. To recognise a perfectly localised blob in momentum space, one would need to measure at every place over the whole Universe.
Not every structure is recognisable as such by human eye.
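A quick numerical illustration of the width trade-off mentioned above (my own sketch, not part of the original exchange): the Fourier transform of a narrow Gaussian blob is a wide one, and vice versa.

```python
# Hedged illustration: Gaussian blobs of varying width and the width of
# their Fourier transforms (full width at half maximum in each domain).
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

def fwhm(y, axis):
    """Full width at half maximum of a peaked, positive curve."""
    above = axis[y >= y.max() / 2]
    return above.max() - above.min()

for sigma in (0.5, 2.0, 8.0):
    blob = np.exp(-x**2 / (2 * sigma**2))                 # position-space blob
    k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(blob)))  # momentum-space picture
    print(f"sigma={sigma}: width(x)={fwhm(blob, x):.2f}, width(k)={fwhm(spectrum, k):.2f}")
```

Running this shows width(x) growing while width(k) shrinks, with their product roughly constant.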
This is an interesting way to look at things. I would assert a higher probability, so I’m voting up. Even a slight tweaking (x+ε, m-ε) is enough. I’m imagining a continuous family of mappings starting with identity. These would preserve the structures we already perceive while accentuating certain features.
Upvoted for underconfidence; there are a lot of bases you can use.
Still, what you see in one basis is not independent of what you see in another, and I expect an elegant mapping between the bases. There is a difference between
“there exists a basis in the Hilbert space in which some vaguely interesting phenomena could be observed, if we were able to perceive the associated operator the same way as we perceive position”
and
“there exist simple functions of observables such as momentum, particle number or field intensities defining observables which, if we could perceive them directly, would show us a world with life and civilisations and evolution”
My 15% belief is closer to the second version.
Okay, that’s less likely. I’d still give it higher than 15% though. The holographic principle is very suggestive of this, for instance.
It’s hard to know exactly what would count in order to make an estimate, since we don’t yet know the actual laws of physics. It’s obvious that “position observables, but farther away” would encode the regular type of alien, but the boundary between regular aliens and weird quantum aliens could easily blur as we learn more physics.
Irrationality Game
It’s possible to construct a relatively simple algorithm to distinguish superstimulatory / akrasiatic media from novel, educational or insightful content. Such an algorithm need not make use of probabilistic classifiers or machine-learning techniques that rely on my own personal tastes. The distinction can be made based on testable, objective properties of the material. (~20%)
(This is a bit esoteric. I am starting to think up aggressive tactics to curb my time-wasteful internet habits, and was idly fantasising about a browser plugin that would tell me whether the link I was about to follow was entertaining glurge or potentially valuable. In wondering how that would work, I started thinking about how I classify it. My first thought would be that it’s a subjective judgement call, and a naive acid-test that distinguished the two was tantamount to magic. After thinking about it for a little longer, I’ve started to develop some modestly-weighted fuzzy intuitions that there is some objective property I use to classify them, and that this may map faithfully onto how other people classify them.)
Upvoted for this sentence.
Upvoted because I can’t think of any sense in which it’s possible to reliably separate akrastic from non-akrastic media without a pretty good model of the reader. Wikipedia’s a huge time sink, for example, yet it’s a huge time sink because it consists of lots of educational but low-salience bits; that article on orogeny might be extremely useful if I’m trying to write a terrain generation algorithm, but I’ll probably only have to do that at most once in my life.
On the other hand, it’s probably possible to come up with an algorithm that reliably distinguishes some time-wasting content. Coming up with a set of criteria for image galleries, for example, would go a long way and seems doable.
Wikipedia was one example of a challengingly messy corpus, though I do think there’s a sharp division between articles that make you know more stuff and articles that don’t. I personally wouldn’t consider the orogeny article akrasiatic.
It is possible I’m working from a quite specific definition of akrasia in this case.
(You need to put a backslash before any )’s in URLs, e.g. en.wikipedia.org/wiki/Blossom_(TV_series\)).)
(Done)
I would expect to have to do that either zero or at least two times.
I tend to be a one-evolving-draft sort of programmer. Fair point, though.
Upvoted for underconfidence.
Simple compared to what, and with what rates of false positives/negatives?
As in implementable in a couple of hundred lines of JavaScript. If I had good answers for the second question, I’d be a lot more sure than 20%.
Something that disqualifies things that don’t contain keywords from the thing you’re currently working on might function as a very crude version of this.
Come up with 20 independent algorithms like that, use Bayes’ theorem to combine the results, and you should come pretty close.
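A minimal sketch of that combination step (mine, with invented detectors and made-up likelihoods): treat each weak detector as independent evidence and add its log-likelihood ratio to the prior log-odds.

```python
# Hedged sketch: naive-Bayes combination of independent weak "time-waster"
# detectors. All detectors and numbers below are hypothetical.
import math

def combine(prior_p, signals):
    """signals: list of (fired, P(fired | time-waster), P(fired | useful))."""
    log_odds = math.log(prior_p / (1 - prior_p))
    for fired, p_tw, p_useful in signals:
        if fired:
            log_odds += math.log(p_tw / p_useful)
        else:
            log_odds += math.log((1 - p_tw) / (1 - p_useful))
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# e.g. "is an image gallery", "no keyword overlap with the current project",
# "infinite-scroll page" -- each weak on its own, stronger combined.
signals = [(True, 0.6, 0.1), (True, 0.7, 0.3), (False, 0.5, 0.2)]
print(combine(prior_p=0.5, signals=signals))  # posterior P(time-waster) ~ 0.9
```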
Irrationality Game
Aaron Swartz did not actually commit suicide. (10%)
(Hat tip to Quirinus Quirrell, whoever that actually is.)
An alien civilization within the boundaries of the current observable universe has, or will have within the next 10 billion years, created a work of art which includes something directly analogous to the structure of the “dawn motif” from the beginning of Richard Strauss’s Also sprach Zarathustra. (~90%)
I’m inclined to downvote this for agreement, but haven’t yet. Can you say more about what “directly analogous” means? How different from ASZ can this work of art be and still count?
The art form must be linear and intend to proceed without interaction from the user.
The length of the three “notes” must be in 8:8:15 ratio (in that order).
The main distinguishing factor between “notes” must be in 2:3:4 ratio (in that order).
The motif must be the overwhelmingly dominant “voice” when it occurs.
Upvoted for overconfidence, not about the directly analogous art form (I suspect that even several hundred pieces of human art have that) but about there being other civilizations within the observable universe.
Though I would still give that at least 20%.
Cool. Upvoted immediate parent for specificity and downvoted grandparent for agreement.
The probability of this would seem to depend on the resolution of the Fermi paradox. If life is relatively common, then it would seem to be true purely by statistics. If life is relatively rare, then it would require some sort of shared aesthetic standard. Are you saying aesthetics might be universal in the same way as, say, mathematics?
I would have upvoted this even if it limited itself to “intelligent aliens exist in the current observable universe”.
The case for atheistic reductionism is not a slam-dunk.
While atheistic reductionism is clearly simpler than any of the competing hypotheses, each added bit of complexity doubles the size of the hypothesis space. Some of these additional hypotheses will be ruled out due to impossibility or inconsistency with observation, but that still leaves a huge number of possible hypotheses that each take up a tiny amount of probability mass yet collectively add up.
I would give atheistic reductionism a ~30% probability of being true. (I would still assign specific human religions or a specific simulation scenario approximately zero probability.)
Assuming our MMS-prior uses a binary machine, the probability of any single hypothesis of complexity C=X is equal to the total probabilities of all hypotheses of complexity C>X.
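Reading the 2^-C weights literally, the arithmetic behind this reply is just a geometric series (a sketch of my reading, not the commenter’s own derivation):

```latex
2^{-X} \;=\; \sum_{c = X+1}^{\infty} 2^{-c} \;=\; 2^{-(X+1)} + 2^{-(X+2)} + \cdots
```

That is, one hypothesis of complexity X carries as much prior weight as one hypothesis at every complexity above X combined, so the more complex alternatives "add up" only to as much as a single simpler hypothesis.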
There is no dark matter. Gravity behaves weirdly for some other reason we haven’t discovered yet. (85%)
Many such “modified gravity” theories have been proposed. The best known is “MOND”, “Modified Newtonian Dynamics”.
Irrationality Game
Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they’re cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes). The performance problems of prediction markets are not just due to liquidity issues, but would inevitably crop up in any prediction market system due to bubbles, panics, hedging, manipulation, and either overly simple or dangerously complex derivatives. 90%
Hanson and his followers are irrationally attached to prediction markets because they flatter libertarian sensibilities. 60%
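One possible reading of the naive histocratic aggregator mentioned above (my own sketch with invented numbers; the comment says “using Bayes”, which would be more sophisticated than this): weight each forecaster by past accuracy and take a weighted average.

```python
# Hedged sketch of a performance-weighted average of probability forecasts.
# Weighting by inverse Brier score is my simplification, not the commenter's spec.
def brier(past):
    """Mean Brier score over (probability, outcome) pairs; lower is better."""
    return sum((p - o) ** 2 for p, o in past) / len(past)

def aggregate(forecasts, histories):
    """forecasts: current probabilities; histories: each forecaster's past (p, outcome) pairs."""
    weights = [1.0 / (brier(h) + 1e-6) for h in histories]
    return sum(w * p for w, p in zip(weights, forecasts)) / sum(weights)

# Hypothetical example: forecaster 2 has the better track record, so their 0.8
# pulls the aggregate well above the plain average of 0.6.
histories = [[(0.9, 0), (0.7, 1)], [(0.8, 1), (0.2, 0)]]
print(aggregate([0.4, 0.8], histories))  # ~0.77
```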
Fantastic. Please tell me which markets this applies to and link to the source of the algorithm that gives me all the free money.
Unfortunately you need access to a comparably-sized bunch of estimates in order to beat the market. You can’t quite back it out of a prediction market’s transaction history. And the amount of money to be made is small in any event because there’s just not enough participation in the markets.
Aren’t prediction markets just a special case of financial markets? (Or vice versa.) Then if your algorithm could outperform prediction markets, it could also outperform the financial ones, where there is lots of money to be made.
In prediction markets, you are betting money on your probability estimates of various things X happening. On financial markets, you are betting money on your probability estimates of the same things X, plus your estimate of the effect of X on the prices of various stocks or commodities.
The IARPA expert aggregation exercises look plausible, and have supposedly done all right predicting geopolitical events. I would not be shocked if the first to use those methods on financial markets got a bit of alpha.
Markets can incorporate any source or type of information that humans can understand. Which algorithm can do the same?
Down-voted for semi-agreement.
There are simply too many irrational people with money, and as soon as it becomes as popular to participate in prediction markets as it currently is to participate in the stock market, they will add huge amounts of noise.
The conventional reply is that noise traders improve markets by making rational prediction more profitable. This is almost certainly true for short-term noise, and my guess is that it’s false for long-term noise: if prices revert in a day, noise traders improve a market; if prices take ten years to revert, the rational money seeks shorter-term gains. Prediction markets may be expected to do better because they have a definite, known date on which the dumb money loses: you can stay solvent longer than the market stays irrational.
A new word to me. Is this what you’re referring to?
Congratulations. You have discovered a way to make a fortune. Mind you, in the course of making that fortune you will have corrected the very prices you say are wrong. That’s the point of markets: if you can beat them, you get paid to improve them.
Downvoted for agreement, but prediction markets still win because they’re possible to implement. (Will change to upvote if you explicitly deny that too.)
If you think Prediction Markets are terrible, why don’t you just do better and get rich from them?
Meta-discussion Comment
I suspect many of the upvotes in this thread are being given out of an assessment of the interestingness or well-writtenness of a comment rather than disagreement. If this weren’t the case I would expect boring and obviously untrue statements to be at the top; instead the top comments are interesting, and the more boring ones are hovering around zero.
I suspect upvoting comments you enjoy reading becomes reflexive in long time users, so overriding that instinct requires conscious system 2 effort.
Just want to make sure I’m understanding the terminology. Saying I’m 10% confident of proposition X is equivalent to saying I’m 90% confident in not-X, right?
Yes. However, since the point of the game is to display beliefs that you hold and others don’t, you should choose the phrasing that makes your confidence higher than LW’s. That is: if you think other LWers are 5% confident of X, then you should say you’re 10% confident of X; and if you think other LWers are 15% confident of X, then you should say you’re 90% confident of not-X.
Thanks Hariant!
Irrationality game:
Humanity has already received and recorded a radio message from another technological civilization. This went unconfirmed/unnoticed because it was very short and unrepeated, or was mistaken for a transient terrestrial signal, or was modulated in ways we were not looking for, or was otherwise overlooked. 25%.
What are the rules on multiple postings? I have a cluster of related (to each other, not this) ones I would love to post as a group.
MWI is unlikely because it is too unparsimonious (not very confident).
Okay? So you weakly think reality should conform to your sensibilities? I’ve got a whole lot of evidence behind a heuristic that is bad news for you… I haven’t voted either way, both because I don’t really know what you mean, and because the true QMI (explaining, among other things, the Born probabilities) might be smaller than just the “brute force” decoherence of MWI (such as Mangled Worlds).
Well, I’m sort of hypothesizing that simplicity is not just elegance, but involves a trade-off between elegance and parsimony (vaguely similar to how algorithmic ‘efficiency’ involves a trade-off between time and space). What heuristic are you referring to which is bad news for this hypothesis? Also, what’s QMI? I’m actually very much ignorant when it comes to quantum mechanics.
First of all, I don’t care much for some philosophical dictionary’s definition of simplicity. You are going to have to specify what you mean by parsimony, and you are going to have to specify it with maths.
Here’s my take:
Simplicity is the opposite of Complexity, and Complexity is the Kolmogorov kind. That is the entirety of my definition. And the universe appears to be made on very simple (as specified above) maths.
The heuristic I am referring to is: “There are many, many, many occasions where people have expected the universe to conform to their sensibilities, and have been dead wrong.” It has a lot of evidence backing it, and QM is one very counter-intuitive thing (although the maths are pretty simple); you simply aren’t built to think about it.
QMI: Quantum Mechanical Interpretation
Lastly: have you even read the QM sequence? It gives you a good grasp of what physicists are doing and also explains why everything non-MWI-like is more complex (of the Kolmogorov kind) than anything MWI-like.
No, I’m not defining a notion based on anyone’s whim/sensibilities; I fully agree that, to be meaningful, any account of ‘simplicity’ must be fully formalizable (a la K-complexity). However, I expect a full account of simplicity to include both elegance and parsimony based on the following kind of intuition:
a) There is in fact “stuff” out there
b) Everything that actually exists consists of some orderly combination of this stuff, acting in an orderly manner according to the nature of the stuff
c) All other things being equal, a theory is more simple if it posits less ‘stuff’ to account for the phenomena
d) Some full account of simplicity should include both elegance (a la K-complexity) and this sense of parsimony in a sort of trade-off relationship, such that, for example, if, all other things being equal, there’s a theory A which is 5x more elegant but 1000x less parsimonious, and a theory B which is correspondingly 5x less elegant but 1000x more parsimonious, we should favor theory B
My reason for expecting there to be some formalization of simplicity which fully accounts for both of these concepts in such a way is, admittedly, somewhat based on whim/sensibility, as I cannot at this time provide such a formalization, nor do I have any real evidence such a thing is possible (hence this discussion taking place in a thread entitled ‘Irrationality game’ and not in some more serious venue). However, whim/sensibility is not inherent to the overall notion per se, i.e. I am not suggesting this notion of an elegance/parsimony trade-off is somehow true-but-not-formalizable or any such thing.
Irrationality Game:
Time travel is physically possible.
80%
Irrationality game upvote for disagreement. This is based on the confidence rather than the claim. I would also upvote if the probability given was, say, less than 1%.
80% is hardly “confident”… but fair enough.
I perhaps could have said “the specific probability estimate given” to be clearer about the meaning I was attempting to convey.
Irrationality game comment
The importance of waste heat in the brain is generally under-appreciated. An overheated brain is a major source of mental exhaustion, akrasia, and brain fog. One easy way to increase the amount of practical intelligence we can bring to bear on complicated tasks (with or without an accompanying increase in IQ itself) is to improve cooling of the brain. This would be most effective with some kind of surgical cooling system thingy, but even simple things like being in a cold room could help.
Confidence: 30%
The nice thing about this one is that it’s really easy to test yourself. A plastic bag to put ice or hot water into, and some computerized mental exercise like dual n-back. I know if I thought this at anywhere close to 30% I’d test it...
EDIT: see Yvain’s full version: http://squid314.livejournal.com/320770.html http://squid314.livejournal.com/321233.html http://squid314.livejournal.com/321773.html
Self-experimentation seems like a really bad way to test things about mental exhaustion. It would be way too easy to placebo myself into working for a longer amount of time without a break, when testing the condition that would support my theory. Might wait until I can find a test subject.
If you got a result consistent with your theory, then yes it might just be placebo effect, but is that result entirely useless; and if you got a result inconsistent with your theory, is that useless as well?
“Conservation of expected uselessness!”
INSERT THE ROD, JOHN.
Overheating your body enough to limit athletic performance (whether due to associated dehydration or not) is probably enough to impair the brain as well. Dehydration is known to cause headaches.
I think the effect exists. But what’s the size, when you’re merely sedentary + thinking + suffering a hot+humid day?
To the pork futures warehouse!
Some indirect evidence from yawning, with a few references: http://www.epjournal.net/wp-content/uploads/ep0592101.pdf
Multiple systems are correct about their experiences. In particular, killing an N-person system is as bad as killing N singlets. (90%)
From a private exchange with woodside, published with authorization.
woodside:
MixedNuts:
I’d say I’m reasonably confident that there is something interesting going on, but I wouldn’t go as far as to say they are genuinely different people to the extent of having equal moral weight to standard human personalities.
I would guess they are closer to different patterns of accessing the same mental resources than fully different. (You could make an analogy with operating systems/programmes/user interfaces on a computer.)
The Mona Lisa currently on display at the Louvre Museum is actually a replica. (33%)
Irrationality Game:
I believe Plato (and others) were right when they said music develops some form of sensibility, some sort of compassion. I posit a link between the capacity to understand music and the capacity to understand other people, by creating accurate images in our heads of them and of how they feel. 80%
Irrationality Game:
The Occam argument against theism, in the forms typically used in LW invoking Kolmogorov complexity or equivalent notions, is a lousy argument: its premises and conclusions are not incorrect, but it is question-begging to the point that no intellectually sophisticated theist should move their credence significantly by it. 75%.
(It is difficult to attach a probability meaningfully to this kind of claim, which is not about hard facts. I guesstimated that in an ideally open-minded and reasoned philosophical discussion, there would be a 25% chance of me being persuaded of the contrary.)
To the extent that it’s begging anything, it’s begging a choice of epistemology. If no intellectually sophisticated theist should take it seriously, what epistemology should they take seriously besides faith? If the answer is ordinary informal epistemology, when I present the Occam argument I accompany it with a justification of Occam’s razor in terms of that epistemology.
Theists are usually not rational about their theism. So there are relatively few arguments that bite.
Notice that I said “should move their credence”, not “would”. It is not a prediction about the reaction of (rational or irrational) real-life theists, but an assessment of the objective merits of the argument.
Aaaaah. Upvoted for being wrong as a simple matter of maths.
grin That’s more like the reaction I was looking for!
I would be curious to see the maths you are referring to. I (think I) understand the math content of the Occam argument, and accept it as valid. Let me give an analogy for why I think the argument is useless anyway: suppose I tried the following argument against Christianity:
The argument is valid as a matter of formal logic, and we would agree it has true premises and conclusion. However, it should (not only would, should) not persuade any Christian, because their priors for the second premise are very low, and the argument gives them no reason to update them. I contend the Occam argument is mathematically valid but question-begging and futile in a similar way. (I can explain more why I think this, if anybody is interested, but just wanted to make my position clear here).
The Occam argument is basically:
Humans are made by evolution to be approximately Occamian; this implies that Occamian reasoning is at least a local maximum of reasoning ability in our universe.
When we use our Occamian brains to consider the question of why the universe appears simple, we come up with the simple hypothesis that the universe is itself simple.
Describing the universe with maths works better than heroic epics or supernatural myths, as a matter of practical applicability and prediction power.
The mathematically best method of measuring simplicity is provably the one used in Solomonoff Induction/Kolmogorov complexity.
Quantum Mechanics and Quantum Cosmology together form one of the simplest explanations ever offered for the universe as we observe it.
The argument is sound, but the people are crazy. That doesn’t make the argument unsound.
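For concreteness, the simplicity measure invoked in the argument above is usually cashed out as a length-based prior; a standard way of writing it (my addition, not the commenter’s own formula) is:

```latex
P(H) \;\propto\; 2^{-K(H)}, \qquad \text{where } K(H) \text{ is the length of the shortest program that produces } H .
```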
Irrationality game
I have a suspicion that some form of moral particularism is the most sensible moral theory. 10% confidence.
Upvoted for too low a probability.
What do you mean by the “most sensible moral theory”?
And what the hell does Dancy mean if he says that there are rules of thumb that aren’t principles?
I would weight this lower than .01% just because of my credence that it’s incoherent.
Perhaps a workable restatement would be something like:
“Any attempt to formalize and extract our moral intuitions and judgements of how we should act in various situations will just produce a hopelessly complicated and inconsistent mess, whose judgements are very different from those prescribed by any form of utilitarianism, deontology, or any other ethical theory that strives to be consistent. In most cases, any attempt at using a reflective equilibrium / extrapolated volition -type approach to clarify matters will leave things essentially unchanged, except for a small fraction of individuals whose moral intuitions are highly atypical (and who tend to be vastly overrepresented on this site).”
(I don’t actually know how well this describes the actual theories for particularism.)
I agree that your restatement is internally consistent.
I don’t see how such a theory would really be “sensible,” in terms of being helpful during moral dilemmas. If it turns out that moral intuitions are totally inconsistent, doesn’t “think it over and then trust your gut” give the same recommendations, fit the profile of being deontological, and have the advantage of being easy to remember?
I guess if you were interested in a purely descriptive theory of morality, I could conceive of this being the best way to handle things for a long time. But it still flies in the face of the idea that morality was shaped by economic pressures and should therefore have an economic shape, an idea I find lots of support for. So my upvote remains, with my credence being maybe .5%–1%, which I think is about 2 decibels lower than yours.
In the Turing machine sense, sure. In the “this is all you should know” sense, no way; have an upvote.
It is plausible that an existing species of dolphin or whale possesses symbolic language and oral culture at least on par with that of neolithic-era humanity. (75%)
Is “it is plausible” part of the statement to which you give 75% credence, or is it another way of putting said credence?
Because cetacean-language is more than 75% likely to be plausible but I think less than 75% likely to be true.
Upvoted for overconfidence.
I proposed a variation on this game, optimized for usefulness instead of novelty: the “maximal update game”. Start with a one sentence summary of your conclusion, then justify it. Vote up or down the submissions of others based on the degree to which you update on the one sentence summary of the person’s conclusion. (Hence no UFOs at the top, unless good arguments for them can be made.)
If anyone wants to try this game, feel free to do it in replies to this comment.
Downvoted for agreement: you did in fact propose the specified variation.
He didn’t state his confidence level. Since his probability estimate for this is likely much higher than mine, I upvoted.
That seems worth its own thread.
Irrationality Game
The Big Bang is not the beginning of the universe, nor is it even analogous to the beginning of the universe. (60% confident)
Nonvoted. It might just be a 0 on the Real line, or analogous. I don’t know the real laws of physics, but that seems sensible.
Time travel is physically possible, and therefore will be achieved someday.
~80%
Irrationality game comment
The correct way to handle Pascal’s Mugging and other utilitarian mathematical difficulties is to use a bounded utility function. I’m very metauncertain about this; my actual probability could be anywhere from 10% to 90%. But I guess that my probability is 70% or so.
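One sentence of arithmetic on why boundedness helps here (my gloss, not the commenter’s): if utility is capped at some bound B, then however astronomical the mugger’s claimed payoff, its contribution to expected utility is capped too,

```latex
\bigl|\, p \cdot U(\text{payoff}) \,\bigr| \;\le\; p \cdot B ,
```

so a sufficiently tiny probability p makes the term negligible, which an unbounded utility function cannot guarantee.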
Irrationality game:
Different levels of description are just that, and are all equally “real”. Talking about particles, as in statistical mechanics, or about bulk quantities, as in thermodynamics, is equally correct/real.
The same goes for the mind: talking in terms of neurochemistry or in terms of thoughts is equally correct/real.
80% confidence
How, if at all, does this differ from “reductionism is true”? There are approximations made in high-level descriptions (e.g. number of particles treated as infinitely larger than its variation); are you saying they are real, or that the high-level description is true modulo these approximations? What do you mean by “real” anyway?
Tentatively downvoted because this looks like some brand of reductionism.
Dark arts are very toxic, in the sense that you naturally and necessarily use any and all of your relevant beliefs to construct self-serving arguments on most occasions. Moreover, once you happen to successfully use some rationality technique in a self-serving manner, you become more prone to using it in such a way on future occasions. Thus, once you catch other people using dark arts and understand what’s going on, you are more likely to use the same tricks yourself. >80% sure (I don’t have an intuitive feeling for amounts of evidence, but here I would need at least 6dB of evidence to become uncertain).
I’m confused by the upvotes. (ETA: Parent was at +7 votes when I commented.) What do people disagree with?
The only controversial part for me is the use of certain words (in bold below), but these are minor disagreements that disappear under a charitable interpretation:
Otherwise, I believe the parent’s statements with high confidence (95%).
What a fun game! I notice that I’m somewhat confused, too. I see a couple of different approaches; maybe some of the upvoters would step in and explain themselves.
If getting upvotes for a comment here is something that would confuse you, then you aren’t supposed to make the comment. The point is to make comments that you predict others will disagree with and upvote, despite you actually believing what you are saying.
I was confused about getting several upvotes quickly, but without prompting debate. I began wondering if my proposition pattern-matched something not as interesting to discuss.
That makes sense.
Your confidence is much higher than marchdown’s. You should have upvoted because you think he’s underconfident. Mind you I upvoted it myself because:
is definitely not true for me, as when I learn that I am subtly and subconsciously manipulating people, I stop doing it. And when I learn some trick to make people agree with me, I make sure I don’t do it.
Even when using 80% as Marchdown’s confidence, I didn’t feel that our confidences were much different, so I just calculated: the ln of our odds ratio is 1.56, whereas in the original rules, that for
is 1.61. So at least according to the OP, I could have downvoted (and I did).
But the other reason I thought our confidences were similar is that Marchdown put >80%, which I took to mean that eir confidence is greater than 80%.
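Spelling out the calculation (the 1.61 figure is the same formula applied to the probability pair quoted from the original rules):

```latex
\ln\!\frac{0.95/0.05}{0.80/0.20} \;=\; \ln\frac{19}{4} \;\approx\; 1.56
```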
Fair enough.
Irrationality game
Moral intuitions are very simple. A general idea of what it means for somebody to be human is enough to severely restrict the variety of moral intuitions which you would expect it to be possible for them to have. Thus, conditioned on Adam’s humanity, you would need very little additional information to get a good idea of Adam’s morals, while Bob the alien would need to explain his basic preferences at length for you to model his moral judgements accurately. It follows that the tricky part of explaining moral intuitions to a machine is explaining what a human is, and it’s not possible to cheat by formalizing morals separately.
Please attach a probability.
Fairly certain (85%—98%).
That is a very wide range. Downvoted you anyway.
Michio Kaku suggested the idea that Moore’s law will be modified within the next decade to follow the deceleration of improvements in silicone, and that within 20 years computational silicone technologies will essentially flat-line as we reach the peak of silicone’s computational potential.
I believe that Moore’s law modeling silicone improvement will collapse in 20 years. (~78%)
We are already seeing deceleration in the Moore’s-law curve modeling computational power: silicone wafers are already being printed in three dimensions, and dealing with the heat of an increasing number of electrons while the size of the silicone material stays the same is becoming more of a challenge.
Shouldn’t that be silicon rather than silicone?
Sometimes people make it easy to know whether they have any domain expertise.
No, breast implants double in size every 18 months.
Downvoted on a technicality. I think that as we reach the limits of what silicon can do for us, Moore’s law will continue via some other kind of technology.
Upvoted because 20 years seems way too long. Interestingly, Kaku predicted the end of Moore’s law for silicon “in 20 years” back in 2003. 2023 seems more likely to me than 2032.
However, this is only true if you are applying “Moore’s Law” in the stricter sense, which by definition only refers to silicon. These days “Moore’s Law” is usually used to refer more broadly to increases in cost-performance for commercially available computers. My guess is that shortly after Moore’s law for silicon starts flatlining, something else (probably molecular computers built around graphene) will become available. It is far from obvious that the new substrate will improve at the same steady pace and overall speed as Moore’s Law for silicon. It could go much faster or much slower, possibly in fits and starts as opposed to the steady, gradual progress we’ve seen for decades.
Downvoted because of this: “I believe that Moore’s law modeling silicone improvement will collapse in 20 years.” I would have upvoted instead if you had said that the collapse would be in the direction of flatlining within 20 years.
Irrationality Game:
I believe there is some deeper cause than chance behind the repetition of the same mythological motifs in non-connected areas of the globe. Be it some sort of shared ancestral unconscious or something else, I don’t know; but I don’t believe it is just due to chance. 95%
Irrationality Game
Metaphysical propositions are not only meaningful, but there is a lot of low-hanging fruit left to gather in Ontology. (95%)
What kind of metaphysics? (Non-apples, man, we gotta sell non-apples)
I will delay voting until you answer.
I was speaking generally of “the study of what it is for something to exist” rather than a specific metaphysics.
Well, Tegmark 4 nicely covers that.
Irrationality Game
Tegmark 4 is true: whatever quantum field theory thingy ends up being the complete one IS the universe we see, i.e. Schroedinger’s equation and the fields it works on.
“Existing” isn’t a privileged state; it is a consequence of Russell’s axioms (or similar extremely simple axiom collections). Simply by virtue of maths, the definition of the most basic laws unfolds into the universe we see.
Similarly, to the inside view, you are indistinguishable from every mathematically definable system having your subjective experiences.
Logical tautology, probability 1 - epsilon.
Irrationality game
For the majority of people, learning techniques of rhetoric, argument and persuasion is actually beneficial to their rationality and sanity. Confidence: 90%
a) It seems I miscalibrated the LW community’s attitude to rhetoric rather severely.
b) I am confused that both this and Marchdown’s post about dark arts are on negative points, signifying agreement.
The world contains people who use those techniques. Seeing it coming is beneficial.
The use of the term “phyg” rather than “cult” when engaging in cult-related metadiscussions just makes LW look more cultish to outsiders. (~80%)
I agree—have a downvote!
This is getting a lot of downvotes, meaning lots of people agree. To those people: Although it does make us look more cultish, does that outweigh the good from the lack of connections between “cult” and “lesswrong” on google?
Edited to correct, I originally had a typo where I said it meant people disagree instead of agree.
This is the irrationality game. The downvotes mean people agree.
The second part of falenas108’s comment (“To those people: Although it does make us look more cultish”) makes me think that he/she knows this and simply mistyped ‘disagree’ instead of ‘agree’.
Yeah, sorry. That’s what I meant.
You should edit your original comment for clarity then.
Fixed.
I wasn’t actually sure what people believed about this, so I was very curious to see how this would be received. So can we say the word “cult” now?
You can if you like getting downvotes! Almost all usages of ‘cult’ in such discussions will be gross abuses of the term.
The only people who search for lesswrong and cult are the people who have already heard of the connection.
Also the people who begin to search for...
“less wrong cryonics”
“less wrong charity”
“less wrong camp”
“less wrong cached thoughts”
“less wrong com”
“less wrong change your mind”
This sounds like magical thinking about Google. Are you worried about people searching for the term “cult” finding Less Wrong? If not, what are you worried about exactly?
I’m pretty sure autocompletes are done on the basis of what people search for, not what keywords are present on a domain.
The main concern would be people already searching cult and lesswrong together, and coming up with articles like this one which would be very bad signaling.
Given the rules of this thread, most of the downvotes are either expressing agreement, or perhaps a meta-disagreement with having the topic discussed again.
American Jews are, on the whole, more privileged than oppressed. (80%)
The median American Jew will enjoy a higher quality of life than the median American non-Jew, or even the median American Christian. While American Jews do suffer some Antisemitism, the advantages gained from affiliating with other Jews outweigh the damage done by Antisemitism. In fact, Antisemitism can be an asset for Jews, because it allows them to maintain the illusion that they are an oppressed class.
Upvoted because I believe the way you’re using the word “privileged” is confused.
I used the word “privilege” because I wanted to compare Jews to other groups that liberals label as privileged (white/Christian/men/cis/upper-class/etc.). Like white/Christian/men/cis/upper-class/etc., American Jews enjoy advantages that out-group members do not, and work to exclude out-group members from these advantages (opportunity hoarding).
LW doesn’t like debates about what a word means, so instead of asking you what you think the definition of “privilege” is, I’m going to ask if you think my comparison of Jews to white/Christian/men/cis/upper-class people neglects something.
I don’t see how this corresponds to a cluster in thing space.
The cluster is more visible among the categories as such than among the persons who are members of the categories.
I was assuming that. I still don’t see how this corresponds to a cluster in category space, unless you mean literally the cluster of categories liberals label as privileged. In which case, no: liberals generally don’t label Jews as privileged.
This seems plausible, but I’m always disappointed when people spend effort trying to figure out which factions are ahead or behind.
At first I was a bit blown away by how much LW agreed with the bolded claim, but looking at the non-bolded explanation underneath it seems to be obviously true.