Open Thread: November 2009
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. Feel free to rid yourself of cached thoughts by doing so in Old Church Slavonic. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you’re new to Less Wrong, check out this welcome post.
I was wondering if Eliezer could post some details on his current progress towards the problem of FAI, specifically where he is in the process of designing and building it. Also, maybe some detailed technical work on TDT would be cool.
This email by Eliezer from 2006 addresses your question about FAI. I’m extremely skeptical that he has accomplished or will accomplish anything at all in that direction, but if he does, we shouldn’t expect the intermediate results to be openly published, because half of a friendly AI is a complete unfriendly AI.
Just another example of an otherwise-respectable (though not by me) economist spouting nonsense. I thought you guys might find it interesting, and it seemed too short for a top-level post.
Steven Landsburg has a new book out and a blog for it. In a post about arguments for/against God, he says this:
So how many whoppers is that? Let’s see: the max-compressed encoding of the human genome is insufficient data to describe the workings of human life. The natural numbers and operations thereon are extremely simple because it takes very little to describe how they work. This complexity is not the same as the complexity of a specific model implemented with the natural numbers.
His description of it as emerging all at once is just confused: yes, people use natural numbers to describe nature, but this is not the same as saying that the modeling usefulness emerged all at once, which is the sense in which he was originally using the term.
What’s scary is he supposedly teaches more math than economics.
Disclosure: Landsburg’s wife banned me from econlog.econlib.org a few years ago.
UPDATE2: Landsburg responds to my criticism on his blog, though without mentioning me :-(
I’m probably exposing my ignorance here, but didn’t zero have a historical evolution, so to speak? I’m going off vague memories of past reading and a current quick glance at Wikipedia, but it seems like there were separate developments of using placeholders, the concept of nothing, and the use of a symbol, which all eventually converged into the current zero. Seems like the evolution of a number to me. And it may be a just-so story, but I see it as eminently plausible that humans primarily work in base 10 because, for the most part, we have 10 digits, which again would be dictated by the evolutionary process.
On his human-life point, if DNA encoding encompasses all of complex numbers (given that it needs that system in order to be described), isn’t it then necessarily more complex, since it requires all of complex numbers plus its own set of rules and knowledge base as well?
The ban was probably for the best Silas, you were probably confusing everyone with the facts.
It sounds like a true story (note etymology of the word “digit”). But lots of human cultures used other bases (some of them still exist). Wikipedia lists examples of bases 4, 5, 8, 12, 15, 20, 24, 27, 32 and 60. Many of these have a long history and are (or were) fully integrated into their originating language and culture. So the claim that “humans work in base 10 because we have 10 digits” is rather too broad—it’s at least partly a historical accident that base 10 came to be used by European cultures which later conquered most of the world.
That’s a good point, Dan. I guess we’d have to check the number of base-10 systems vs. systems overall. Though I would continue to see that as again demonstrating an evolution of complex number theory, as multiple strands joined together when systems interacted with one another. There were probably plenty of historical accidents at work, like you mention, helping to bring about the current system of natural numbers.
Your recollection is correct: the understanding of math developed gradually. My criticism of Landsburg was mainly that he’s not even using a consistent definition of math.
And as you note, under reasonable definitions of math, it did develop gradually.
Yes, exactly. That’s why human life is more complex than the string representing the genome: you also have to know what that (compressed) genome specification refers to, the chemical interactions involved, etc.
:-)
Why does DNA encoding need complex numbers? I’m pretty sure simple integers are enough… Maybe you meant the “complexity of natural numbers” as quoted?
Sounds good to me (that’s what I get for typing quickly at work).
UPDATE: Landsburg replies to me several times on my blog. I had missed the window for comments, but Bob Murphy posted a reply to Landsburg on his (Murphy’s) blog, and I expanded my points in the linked post, which drew Landsburg.
Entertaining read. I’m a Landsburg fan, but he’s stepped in it on this one.
What is the notion of complexity in question? It could for instance be the (hypothetically) shortest program needed to produce a given object, i.e. Kolmogorov complexity.
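(For reference, the standard definition: the complexity of x is the length of the shortest program p that makes a fixed universal machine U output x.)

$$ K(x) \;=\; \min\{\, |p| : U(p) = x \,\} $$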
In that case, the natural numbers would have a complexity of infinity, which would be much greater than any finite quantity—i.e. a human life.
I may be missing something because the discussion to my eyes seems trivial.
The complexity doesn’t count the amount of data storage required, only the length of the executable code.
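For instance, a program that prints every natural number in order needs only a few lines (a minimal Python sketch):

```python
from itertools import count

# A constant-length program with unbounded output: the code stays
# this short no matter how far the enumeration runs.
for n in count(0):
    print(n)
```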
looks simple to me.
Yes, but how are you going to represent ‘n’ under the hood? Are you eventually going to need infinitely many bits to represent it? I guess this is what you mean by storage. I should confess that I don’t know enough about algorithmic information theory, so I may be in deeper waters than I can swim. I think you are right though…
I had something more in mind like the number of bits required to represent any natural number, which is obviously log(n) (or maybe 2 log log(n) with some clever tricks, I think), and if n can be arbitrarily large, then the complexity, log(n), also gets arbitrarily big.
So maybe the problem of producing every natural number consecutively has a different complexity from producing some arbitrary natural number. Interesting…
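(A quick way to see the log(n) growth in Python: int.bit_length() returns how many bits n’s binary representation takes.)

```python
for exponent in (1, 3, 6, 100):
    n = 10 ** exponent
    # The number of bits needed to write n down grows like log2(n).
    print(f"10^{exponent}: {n.bit_length()} bits")
```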
Someone else should be the one to say this (do we have an information theorist in the house?), but my understanding is that Kolmogorov complexity does not account for memory usage (e.g. by using Turing machines with infinite tape). And thus producing a single specific sufficiently large arbitrary natural number is more complex than producing the entire list—because “sufficiently” in this case means “longer than the program which produces the entire list”.
Yup, Kolmogorov complexity is only concerned with the length of the shortest algorithm. There are other measures (more rarely used, it seems) that take into account things like memory used or time (number of steps), though I can’t remember their names just now.
Note that usually the complexity of X is the size of the program that outputs X exactly, not the program that outputs a lot of things including X. Otherwise you can write a quite short program that outputs, say, all possible ASCII texts, and claim that its size is an upper bound on the complexity of the Bible. Actually, the information needed to generate the Bible is the same as the information to locate the Bible in all those texts.
Example in Python:
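(A minimal sketch of the idea: a generator that yields every possible ASCII text, shortest first. The program is tiny even though its output eventually contains the Bible; the Bible’s index in the stream carries the missing information.)

```python
import itertools
import string

def all_texts():
    # Yield '', then every 1-character text, then every 2-character
    # text, and so on, over the printable-ASCII alphabet.
    for length in itertools.count(0):
        for chars in itertools.product(string.printable, repeat=length):
            yield ''.join(chars)

# The first few entries of the enumeration:
for text in itertools.islice(all_texts(), 5):
    print(repr(text))
```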
This has improved my understanding of the Python “yield” statement.
Glad I could be of use !
Understanding the Power of Yield was a great step forward for me; afterwards I was horrified to reread some old code that was riddled with horrible classes like DoubleItemIterator and IteratorFilter that my C++-addled brain had cooked up, and realized half my classes were useless and the rest could have their line count divided by ten.
And some people still count “lines of code” as a measure of productivity. Sob.
So that’s why my self-enhancing AI keeps getting bogged down!
Voted up for being really funny.
“Actually, the information needed to generate the Bible is the same as the information to locate the Bible in all those texts.”
Or to locate it in “The Library of Babel”.
I actually took information theory, but this is more an issue of algorithmic information theory, something I have not studied all that much. Though still, I think you are probably right, since Kolmogorov complexity refers to the descriptive complexity of an object. And here you can give a much shorter description of all the consecutive natural numbers.
This is very interesting to me because intuitively one would think that both are problems involving infinity and hence I lazily thought that they would both have the same complexity.
Our House, My Rules reminded me of this other article which I saw today: teach your child to argue. This seems to me to be somewhat relevant to the subject of promoting rationality.
In my house I would also teach that the difference between ‘argument’ and ‘fight’ is quite distinct from the difference between ‘good’ and ‘bad’. I’d also teach them that a good response to a persuasive and audience-swaying argument that they should give territory to another is “No. I want it.”
Singularity Summit 2009 videos - http://www.vimeo.com/siai/videos
I recommend Anna Salamon’s presentation How Much it Matters to Know What Matters: A Back of the Envelope Calculation. She did a good job of showing just how important existential risk research is.
I thought it would be nice to be able to plug in my own numbers for the calculation, so I quickly threw this together.
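(A rough Python version of that kind of back-of-the-envelope structure; every number below is a placeholder to replace with your own estimates, not a figure from the talk.)

```python
# All inputs are made-up placeholders; plug in your own estimates.
future_lives   = 1e50    # lives the long-term future could hold if we survive
p_catastrophe  = 0.1     # probability of existential catastrophe
risk_reduction = 1e-10   # fraction of that risk removed by a marginal project
cost_usd       = 1e6     # cost of that marginal project

expected_lives_saved = future_lives * p_catastrophe * risk_reduction
print(f"{expected_lives_saved:.3g} expected lives saved")
print(f"{expected_lives_saved / cost_usd:.3g} expected lives per dollar")
```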
Interesting. You can do similar calculations for things like asteroid prevention; I wonder which would win out. It also gave me a sickly feeling that you could use the vast numbers to justify killing a few people to guarantee a safe singularity. In effect that is what we do when we divert resources away from efficient charities we know work and towards singularity research.
We make these kinds of tradeoffs, for better or worse, all the time—and sometimes, as when we take money from the fire department or health care to put it into science or education, we are letting people die in the short term as part of a gamble that the long-term outcomes will justify it.
An interesting site I just stumbled upon:
http://changingminds.org/
They have huge lists of biases, techniques, explanations, and other stuff, with short summaries and longer articles.
Here’s the results from typing in “bias” into their search bar.
A quick search for “changingminds” in LW’s search bar shows that no one has mentioned this site before on LW.
Is this site of any use to anyone here?
And should I repost this message to next month’s open thread, since not many people will notice it in this month’s open thread?
I would repost this in the next open thread, it’s not like anyone would get annoyed at the double post (I think), and that site looks like it would interest a lot of people.
I’ve come across it before and I found it useful. Ok, I’ll be honest. It probably wasn’t all that useful to me. I like this stuff because it fascinates me.
At its height this poll registered 66 upvotes. As it is meta, no longer useful, and not interesting enough for the top comments page, please downvote it. Upvote the attached karma dump to compensate.
(It looks like CannibalSmith hasn’t been on lately, so I’ll post this.) This post tests how much exposure comments posted “not late” to open threads get. If you are reading this then please either comment or upvote. Please don’t do both, and don’t downvote. The exposure count of this comment will then be compared to that of a previous comment made “late”. I won’t link to the other comment, and please don’t go finding it yourself.
If the difference is insignificant, a LW forum is not warranted, and open threads are entirely sufficient (unless there are reasons other than exposure for having a forum).
I will post another comment in reply to this one which you can downvote if you don’t want to give me karma for the post.
Downvote this comment if you upvoted the above and want to neutralize the karma I get.
Note that voting this down doesn’t seem to remove it from the “top” list. As far as I can tell, that seems to sort by the number of upvotes the comment received, not by the (upvotes—downvotes).
Thanks. :)
So by my count it is 37 to 71, and that probably overestimates the response a late comment would get, given that there was something of a feedback loop.
Glad to see something like this.
Ack.
龘
see/saw
Huh?
I see it.
I’ll go ahead and predict here that the Higgs boson will not be showing up. As best I can put the reason into words: I don’t think the modern field of physics has its act sufficiently together to predict that a hitherto undetected quantum field is responsible for mass. They are welcome to prove me wrong.
(I’ll also predict that the LHC will never actually run, but that prediction is (almost entirely) a joke, whereas the first prediction is not.)
Anyone challenging me to bet on the above is welcome to offer odds.
Okay, so I guess I’ll be the first person to ask how you’ve updated your beliefs after today’s news.
Physicists have their act together better than I thought. Not sure how much I should update on other scientific fields dissimilar to physics (e.g. “dietary science”) or on the state of academia or humanity as a whole. Probably “some but not much” for dietary science, with larger updates for fields more like physics.
Just curious: given that physicists have their act together better than you thought, and conditioning on that fact plus the fact that physicists don’t, as a whole, consider MWI to be a slam dunk (though, afaik, many at least consider it a reasonable possibility), does that lead to any update re your view that MWI is all that slam dunk?
That’s because physicists, though they clearly enjoy speculating very much, tend to withhold judgment until there is some experimental evidence one way or the other. In that sense they are more instrumentalists than EY. Experimental physicists much more so.
“A physicist answers all questions with ‘I don’t know, but I’ll find out.’”
-- Nicola Cabibbo (IIRC), as quoted by a professor of mine.
(As for “experimental evidence”, in the past couple of years people have managed to put bigger and bigger systems—some visible with the naked eye—into quantum superpositions, which is evidence against objective collapse theories.)
Nope. That’s nailed down way more solidly than anything I know about mere matters of culture and society, so any tension between it and another proposition would move the other, less certain one. It would cause me to update in the direction of believing that more physicists probably see MWI as slam-dunk. :)
What exactly is it that you claim to know here? It’s not a particular quantitative many-worlds theory that makes predictions, or you wouldn’t be asking where the Born probabilities come from. It’s not a particular qualitative model of many worlds, or else you wouldn’t talk about Robin’s mangled worlds in one post, and Barbour’s timeless physics in another. What does it boil down to? “I know that quantum mechanics has something to do with parallel worlds”?
I think it comes down to:
(1) The wavefunction is what there is; and
(2) it doesn’t collapse.
Well said; this has seemed to be what Eliezer has tried to argue for in his posts. He even went out of his way to avoid putting the “MWI” label on it a lot of the time.
Every genius is entitled to some eccentricity, and the MWI is EY’s. It might be important to remind the regulars why MWI is not required for rationality, but it is pointless to argue about it with EY.
For all the dilettantes out there who learned about quantum physics from Eliezer’s posts and think that they understand it, despite the clear evidence that understanding a serious scientific topic in depth requires years of study, you know where the karma sink is.
EY’s level of support for cryonics (to the point of saying that people who don’t sign their children up for cryo are lousy parents) sounds waaaay more eccentric to me than acceptance of the MWI.
Cryonics is a last-ditch long-shot attempt to cheat death, so I can relate quite easily.
“I don’t want to achieve immortality through my work; I want to achieve it through not dying.”
-- Woody Allen
Is that just because it has human-level consequences?
Belief in MWI doesn’t tell you what to do.
No, it’s because MWI has broad support among physicists as at least being a very plausible candidate interpretation. Support for cryonics among biologists and neuroscientists is much more limited.
Well… It does not have broad support among physicists as a VERY plausible candidate. A tiny fraction consider it very plausible. The vast majority consider it very unlikely and downright wrong due to its many problems.
You’re overstating the extent of the opposition.
No. If you even just go to the discussion page, you will see that the reception section is one of the most erroneous and most objected-to parts of that wiki article. The entire article is a disaster, and most many-worlds proponents do not endorse it at all.
You have to understand that there are literally THOUSANDS of physicists who hold an opinion on the matter; a few polls conducted by proponents do not matter at all. Do you really think that a talk held by Max Tegmark will not attract people who share his views?
If someone were to do a global poll, you would see...
Actually, this is not true. Having been in academia for some time, I can vouch that a celebrity talk like that would attract many faculty members regardless of their views on the matter.
I believe that is improper phrasing on Quantumental’s part. No one ever thought (to my knowledge and immediately visible evidence), including someone like me who is completely unrelated to the discussion and has no idea who Max Tegmark is, that such a talk would not attract [any] people who share his views. This is not mutually exclusive, however, with “people of all distributions will be attracted in a population-representative sample”.
To me, it just seems like an accidental (possibly caused by some bias its writer is insufficiently aware of) breach of the no-ninja-connotation rule.
Well, the one I watched had like 15 guys in it, 9 of them pro-MWI, indicating that this talk definitely attracted more MWI’ers than is usual.
You’re making an assertion with zero evidence...
I pointed you towards the evidence. One of the guys in the talk section did a survey of his own of 30 or so leading physicists.
But just the fact that David Deutsch himself says less than 10% believe in any kind of MWI speaks volumes. He has been in the community where these matters are discussed for decades.
No. Jack apparently read my mind.
No, merely by.
Fair enough. (Well, technically both should move at least a little bit, of course, but I know what you mean.)
Hee hee. :)
Speaking as someone with an academic background in physics, I don’t think the group as a whole is as anti-MWI as you seem to imply. It was taught at my university as part of the standard quantum sequence, and many of my professors were many-worlders… What isn’t taught, and what should be taught, is how MWI is in fact the simpler theory, requiring fewer assumptions, and not just an interesting-to-consider alternative interpretation. But yes, as others have mentioned, physicists as a whole are waiting until we have the technology to test which theory is correct. We’re a very empirical bunch.
I don’t think I was implying physicists to be anti-MWI, but merely not as a whole considering it to be slam dunk already settled.
Interesting. What technology lets you test that?
We have discussed it here. A reading list is here.
You seem to be conceding that this is in fact the Higgs boson. In fairness I have to point out that, although it is now very certain that there is a particle at 125 GeV, it may not be the predicted Higgs boson. With this in mind, would you like to keep our bet running a while longer while CERN nails down the properties? Or do you prefer to update all at once, and pay me the 25 dollars?
I’d rather pay the $25 now. (Paypal data?) My understanding is that besides the mass, there’s also supposed to be other characteristics of the particle data that match the predicted Higgs, otherwise I would’ve waited before fully updating. If the story is retracted I might counter-update and ask for the money back, but my understanding is that this is not supposed to happen.
What other features (apart from being a particle at 125 GeV) do you consider a necessary part of the specification “Higgs Boson” for the purpose of this bet?
I notice that in your prediction you welcomed bets, but you did not offer odds, nor gave a confidence interval. I’m not sure (haven’t actually checked), but I have an impression that you usually do at least give a number.
Since the prediction was in 2009, it might just be that you recently formed the habit. If that’s not the case, not giving odds (even when welcoming offers) might be an indicator that you don’t believe something as much as you think you do. (The last two “you”s are meant both as generic references to people and as references to you in particular.) Does that seem plausible on a quick introspection?
I did make a bet and pay it.
Yes, I know. But those were even odds. When someone makes a prediction unprompted, it suggests more confidence than that. (Well, unless they’re just testing what odds other people offer, but I don’t think that was the case here.) That is, it is possible that your inner censor for “don’t predict things that might prove wrong” didn’t trigger (maybe because you’ve trained yourself to ignore embarrassment about people’s opinion of you), but the censor for “don’t bet when you might be wrong” triggered without you noticing it.
In other words, it might be an indication of a difference between what you believe and what you think you believe, or even what you want to appear to believe :-)
(It might also be that you actually thought the odds were 50:50, and anticipated that others would offer much higher odds. How likely did you think it was at the time, anyway?)
I will take up the bet on the Higgs field, with a couple of caveats:
You use the phrase “the Higgs boson”, when several theories predict more than one. If more than one are found, I want that to count as a win for me.
If the LHC doesn’t run, the bet is off.
Time limit: I suggest that if observation of the Higgs does not appear in the 2014 edition of “Review of Particle Physics”, I’ve lost. “Observation” should be a five-sigma signal, as is standard, either in one channel or smaller observations in several channels.
25 dollars, even odds.
As a side note, this is more of a hedge position than a belief in the Higgs: I’m a particle physicist, and if we don’t find the Higgs that will be very interesting and well worth the trivial pain of 25 dollars and even the not-so-trivial pain of losing a public bet. (I’m not a theorist, so strictly speaking it’s not my theory on the chopping block.) While if we do find it, I will (assuming Eliezer takes up this offer) have the consolation of having demonstrated the superior understanding and status of my field against outsiders. (It’s one thing for me to say “Death to theorists” and laugh at their heads-in-the-clouds attitude and incomprehensible math. It’s quite another for one who has not done the apprenticeship to do so.) And 25 dollars, of course.
Update: a 5-sigma signal for a new boson has showed up.
I’ve just learned that Stephen Hawking has bet against the Higgs showing up.
Here’s my argument against Higgs boson(s) showing up:
The Higgs boson was just the first good idea we had about how to generate mass. Theory does not say anything about how massive the Higgs itself is, just that there is an upper bound. The years have passed, it hasn’t shown up, and the LHC will finally take us into the last remaining region of parameter space. So Higgs believers say “hallelujah, the Higgs will finally show up”. But a Higgs skeptic just says this is the end of the line. It’s just one idea; it hasn’t been confirmed so far, so why would we expect it to be confirmed at the last possible chance?
Two years ago:
I wrote to him at the time expressing interest in the bet, but asking for more details. (No reply.) The rather bold statement that QM itself implies a Higgs “or something like it” I think must be a reference to the breakdown in unitarity of the Standard Model that should occur at 1 TeV—which implies that the Standard Model is incomplete, so something will show up. But does it have to be a new scalar boson? There are Higgsless models of mass generation in string theory.
This all leads me to think anew about what’s going to happen. The LHC will collide protons and detectors will pick up some of the shrapnel. I think no one expects new types of particles to be detected directly. They are expected to be heavy and to decay quickly into known particles; the evidence of their existence will be in the shrapnel.
The Standard Model makes predictions about the distribution of shrapnel, but breaks down at 1 TeV. So one may predict that what will be observed is a deviation in shrapnel distributions from SM predictions and that is all. Can we infer from this, and from the existing range of physics models, what the likely developments in theory are going to be, even before the experiment is performed?
Although I said that totally new particles will not be observed directly, my understanding is that the next best thing is certainly possible, namely a very sharp and unanticipated change in the distribution of decay products at a specific energy. That would mean that you had a new particle at that energy.
The alternative would seem to be a sort of gentle deviation of decay statistics away from SM predictions. Unfortunately I don’t know enough about the theoretical options to really predict how this might be interpreted. However, the Higgsless models involve extra dimensions. So if we have the dull outcome, it will probably be interpreted by some as our first evidence of extra dimensions.
Also, particle physics is very complex and there are many possible mechanisms of interaction. I think that, if no Higgs shows up, many theorists will go back to their theorems and question the assumptions which tell us that this is the last chance for a Higgs to show up.
My prediction, then, is that if we get the dull outcome—no unambiguous signal of a new particle—we will see both even more interest in extra dimensions, and a new generation of “heavy Higgs” models which explain why we can, after all, have a heavier-than-1-TeV Higgs without screwing up observed low-energy physics.
I was hoping to make some more money on this :) in a shorter time and hence greater implied interest rate :) but sure, it’s a bet.
Sorry, graduate students can’t afford to be flinging around the big bucks. :) If I get the postdoc I’m hoping for, we can up the stakes, if you like.
This is a side issue, but I’m curious as to what people’s reactions are: I’m kind of hoping that dark matter turns out to be massive neutrinos. Of the various candidates, it seems like the most familiar and comforting. We’ve even seen neutrinos interact in particle detectors, which is way more than you can say for most of the other alternatives… Compared to axions or supersymmetric particles, or WIMPs, massive neutrinos have more of the comfort of home. Anyone feel similarly?
As I understand it, there is a known upper bound on neutrino mass that is large enough to allow them to account for some of the dark matter, but too small to allow them to account for all or most of it.
That is correct as far as the known neutrinos go. If there is a fourth generation of matter, however, all bets are off. (I’m too lazy to look up the limits on that search at the moment.) On the other hand, since neutrinos oscillate and the solar neutrino flux is one-third what we expect rather than one-fourth, you need some mechanism to explain why this fourth generation doesn’t show up in the oscillations. A large mass is probably helpful for that, though, if I remember correctly.
Point of order! A massive neutrino is a WIMP. “Weakly Interacting”—that’s neutrino to you—“Massive Particle”.
Well, but “massive” in WIMP usually means very massive (i.e. non-relativistic at T = 2.7 K). As far as gravitational effects go, particles with non-zero mass but ultrarelativistic speeds behave very much like photons, AFAIK.
Thanks, point taken—I’d been thinking of more exotic WIMPs.
I’ve added this bet to PredictionBook at http://predictionbook.com/predictions/1566 based on http://wiki.lesswrong.com/wiki/Bets_registry
Intrade has a market for the Higgs boson.
Too thinly traded, deadline too soon, rules for what counts as “confirmation” too narrow given the deadline.
The market, unfortunately, is only through the end of next year; does anybody know whether all the relevant experiments are slated to be performed by then?
I’d like to unwind P(Find Higgs|LHC runs and does the tests) down to just P(Find Higgs) or some approximation thereof.
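(The unwinding is just the law of total probability; since the Higgs can’t be found if the tests never run, P(find | no tests) = 0 and the second term drops out.)

$$ P(\text{find}) \;=\; P(\text{find}\mid\text{tests})\,P(\text{tests}) \;+\; P(\text{find}\mid\lnot\text{tests})\,P(\lnot\text{tests}) \;=\; P(\text{find}\mid\text{tests})\,P(\text{tests}) $$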
There are markets for further years, but they have almost no activity, so I didn’t link to them.
Semi-OT: It’s discussions like these that remind me: Whenever physicists remark about how the laws of nature are wonderfully simple, they mean simple to physicists or compared to most computer programs. For most people, just looking at the list of elementary particles is enough to make their heads blow up.
Heck, it nearly does that for me!
Seriously? Dude, it’s a list of names. It should no more make your head asplode than the table of the elements does, and nobody thinks that memorising those is a great feat of intellect. Are you sure you’re not allowing modesty-signalling to overcome your actual ability?
Now, if you want to get into the math of the actual Lagrangians that describe the interactions, I’ll admit that this is a teeny bit difficult. But come on, a list of particles?
“Antimony, arsenic, aluminum, selenium, and hydrogen and oxygen and nitrogen and rhenium...”
I followed the link Silas provided. Rather than seeing a list to be memorised, my brain started throwing up all sorts of related facts. The pieces of physics I have acquired from various sources over the years reasserted themselves, and I tried to piece together just how charm antiquarks fit into things. And to remember just why it was that if I finally meet my intergalactic hominid pen pal and she tries to shake hands with her left hand, I can be sure that shaking would be a cataclysmically bad idea. I seem to recall being able to test symmetry with cobalt or something. But I think it’s about time I listened to Feynman again.
Point is, being able to find the list of elementary particles more overwhelming than, say, a list of the world’s countries requires a certain amount of knowledge and a desire for a complete intuitive grasp. That’s not modesty-signalling in my book.
Everyone knows what a country is. Few people know what the term “elementary particle” means. (It’s not a billiard ball.)
It’s not the billiard balls from the movie they showed? Then surely ‘elementary particles’ must refer to those things on the Table of the Elements that was on the wall!
I have a metaphorical near-head-explosion for different reasons than the average person that I was referring to. For me, it’s mainly a matter of the properties shown on the chart being more abstract and not knowing what observations they would map to (as wedrifid noted in his signaling analysis...).
Compared to the Periodic Table, the elementary particle chart also has significantly less order. With the PT, I may not know each atomic mass number, but I know in which direction it increases, and I know the significance of its arrangement into rows and columns. The values in the EPC seem more random.
Granted, but there are also nowhere near as many of them. Besides, fermion mass increases to the right, same as in the PT; charge depends only on the row; and spin is 1⁄2 for all fermions and 1 for all bosons. This is not very complicated.
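(The regularity really is compact enough to fit in a few lines of Python; for instance, the charge column is fixed by the row alone.)

```python
# Electric charge of the Standard Model fermions depends only on the row.
charge_by_row = {
    "up-type quarks":   +2/3,  # u, c, t
    "down-type quarks": -1/3,  # d, s, b
    "charged leptons":  -1.0,  # e, mu, tau
    "neutrinos":         0.0,  # nu_e, nu_mu, nu_tau
}
# And spin depends only on whether the particle is a fermion or a boson.
spin = {"fermion": 1/2, "gauge boson": 1}
```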
I would also suggest that the seeming randomness is a sign you’re getting closer to the genuinely fundamental stuff: The order in the periodic table is due to (using loose language) repeated interactions of only a few underlying rules—basically just combinations of up and down quarks, with electrons, and electromagnetic interactions only.
Nu, mass and charge are hardly abstract for someone who has done basic physics; that leaves spin, which just maps to the observation that a beam of electrons in a magnetic field will split into two. (Although admittedly things then get a bit counter-intuitive if you put one of the split beams through a further magnetic field at a different angle, but that’s more the usual QM confusion.)
Alright! Point taken! The chart is less daunting than I thought. You mind loosening your grip on my, um, neck? ;-)
An especially good point—maximally compressed data looks like random noise, so at the fundamental level, there should be no regularity left that allows one entry to tell you something about another.
Oh, a bit off topic, but mind clarifying something for me? My QFT knowledge is very limited at the moment, and I’m certainly not (yet) up to the task of actually trying to really grasp the Standard Model, but...
Is it correct to say that the force carriers are, in a sense, illusory? That is, the gauge bosons are kind of an illusion in the same sense that the “force of gravity” is? From what little I managed to pick up, the idea is that instead one starts without them, but assigns certain special kinds of symmetries to the configuration space. These local (aka gauge) symmetries allow interference effects that basically amount to the forces of interaction. One can then “rephrase” those effects in a way that looks more like another quantum field interacting with, well, whatever it’s interacting with?
ie, can the electromagnetic, strong, and weak forces (as forces) be made to go away and turn into symmetries in configuration space in the same sense that in GR, the force of gravity goes away and all that’s left is geometry of spacetime?
Or have I rolled a critical fail with regards to attempting to comprehending the notion of gauge fields/bosons?
Thanks. Again, I know it’s a slight tangent, but since the subject of the Standard Model came up anyways...
Ok, I’m not touching the ECE thing; as noted, I’m not a theorist. I just measure stuff. I’ve taken classes in formal QFT, but I don’t use it day-to-day, so it’s a weak point for me. However, it seems a bit odd to describe things that can be produced in collisions and (at least in principle) fired at your enemies to kill them by radiation poisoning as ‘illusory’. If you bang two electrons together, measuring the cross-section as a function of the center-of-mass energy, you will observe a classic 1/s decline interrupted by equally classic resonance bumps. That is, at certain energies the electrons are much more likely to interact with each other; that’s because those are the energies that are just right for producing other particles. Increase the CM energy through 80 GeV or so, and you’ll find a Breit-Wigner shape like any other particle; that’s the W, and if it weren’t so short-lived you could make a beam of them to kill your enemies. (With asymmetric electron energies you can produce a relativistic-speed W and get arbitrarily long lifetimes in the lab frame, but that gets on for being difficult engineering. In fact, just colliding two electrons at these energies is difficult, they’re too light; that’s why CERN collided electrons and positrons in LEP.)
Now, returning to the math, my memory of this is that particles appear as creation and annihilation operators when field theories with particular gauge symmetries are quantized. If you want to call the virtual particles that appear in Feynman diagrams illusory, I won’t necessarily argue with you; they are just a convenient way of expressing a huge path integral. But the math doesn’t spring fully-formed from Feynman’s brow; the particular gauge symmetry that is quantised is chosen such that it describes particles or forces already known to exist. (Historically, forces, since the theory ran ahead of the experiments in the sixties—we saw beta decay long before we saw actual W bosons.) If the forces were different, the theorists would have chosen a different gauge symmetry and got out a different set of particles.
I’m not sure if I’m answering your question, here? My basic approach to QFT has always been “shut up and calculate”, not because of QM confusion but because I find it very confusing when someone says that a particular mathematical operation is “causing” something. I prefer to think of the causality as flowing from the observations, so that the sequence is thus:
1. We observe these forces / cross-sections / particles.
2. We know that by quantising field theories with gauge symmetries, we can get things that look very much like particles.
3. Searching through gauge-symmetry space, we find that this one gives us the particles and forces we observe.
I wasn’t bringing up the ECE thing.
I meant illusory in the same sense that “sure, the force of gravity can cause me to fall down and get ouchies… but by a bit of a coordinate change and so on, we can see that there really is no ‘force’, but instead that it’s all just geometry and curvature and such. Gravity is real, but the ‘force’ of gravity is an illusion. There’s a deeper physical principle that gives rise to the effect, and the regular ‘force’ more or less amounts to summing up all the curvature between here and there.”
My understanding was that gauge bosons are similar: “we observe these forces/fields/etc… but actually, we don’t need to explicitly postulate those fields as existing. Instead, we can simply state that these other fields obey these symmetries, and that produces the same results. Obviously, to figure out which symmetries actually are valid, we have to look at how the universe actually behaves.”
ie, my understanding is that if you deleted from your mind the knowledge of the electromagnetic and nuclear forces and instead just knew about the quark and lepton fields and the symmetries they obeyed, then the forces of interaction would automatically “pop out”. One would then see behaviors that look like photons, gluons, etc, but the total behavior can be described without explicitly adding them to the theory, simply by taking all the symmetries of the other stuff into account when doing the calculations.
That’s what I was asking about. Is this notion correct, or did I manage to critically fail to comprehend something?
And thanks for taking the time to explain this, btw. :) (I’m just trying to figure out if I’ve got a serious misconception here, and if so, to clear it up)
I guess you can think of it that way, but I don’t quite see what it gains you. Ultimately the math is the only description that matters. Whether you think of gravity as being a force or a curvature is just words. When you say “there is no force, falling is caused by the curvature of space-time” you haven’t explained either falling or forces, you’ve substituted different passwords, suitable for a more advanced classroom. The math doesn’t explain anything either, but at least it describes accurately. At some point—and in physics you can reach that point surprisingly fast—you’re going to have to press Ignore (being careful to avoid Worship, thanks), at least for the time being, and concentrate on description rather than explanation.
Well, my question could be viewed as being about the math. ie: “Does the math of the Standard Model have the property that if you removed any explicit mention of electromagnetism, the strong force, or the weak force, and just kept the quark and lepton fields plus the math of the symmetries for those, that would be sufficient for it to effectively already contain the EM, strong, and weak forces?”
And as far as gravity being force or geometry, uh… there’s plenty of math associated with that. I mean, how would one even begin to talk about the meaning of the Einstein field equation without interpreting it in terms of geometry?
Perhaps there is a deeper underlying principle that gives rise to it, but the Einstein field equation is an equation about how matter shapes the geometry of spacetime. There’s no way really (that I know of) to reasonably interpret it as a force equation, although one can effectively solve it and eventually get behaviors that Newtonian gravity approximates (at low energies/etc...)
(EDIT: to clarify, I’m trying to figure out how to semi-visualize this. ie, with gravity and curvature, I can sorta “see” and get the idea that everything’s just moving along geodesics and that the behavior of stuff is due to how matter affects the geometry. (Though I can still only semi-“grasp” what precisely G is: I get the idea of curvature (the R tensor), and I get the idea of the metric, but I currently have only a semi-grasp of what G actually means, although I think I now have a better notion than I used to.) Anyways, loosely similarly, I’m trying to understand whether the fundamental forces arise the same way: rather than being “forces”, they’re more an effect of what sorts of symmetries there are, what bits of configuration space count as equivalent to other bits, etc...)
I guess I’m not enough of a theorist to answer your question: I do not know whether the symmetries alone are sufficient to produce the observed particles. My intuition says not, for the following reason: First, SU(3) symmetry is broken in the quarks; second, the Standard Model contains parameters which must be hand-tuned, including the electromagnetic/weak separation phase that gives you the massless photon and the very massive weak-force carriers. Theories which spring purely from symmetry ought not to behave like that! But this is hand-waving.
As an aside, I seem to recall that GR does not produce our universe from symmetries alone, either; there are many solutions to the equations, and you have to figure out from observation which one you’re in.
If you like, I can quote our exchange and ask some local theorists if they’d like to comment?
But GR explains (or explains away, depending on how you look at it) the force of gravity in terms of geometry. I meant “does the standard model do something similar with the gauge bosons via symmetry?”
May still leave some tunable parameters, not sure. But does the basic structure of the interactions pop straight out of the symmetries?
And yeah, I’d like that, thanks. It’s nothing urgent, just am unclear if I have the basic idea or if I have severe misconceptions.
It is a while since I thought about this. But…
The basic fact about quantum field theory is field-particle duality. Quantum field states can be thought of either as a wavefunction over classical field configuration space, or as a wavefunction over a space of multi-particle states. You can build the particle states out of the field states (out of energy levels of the Fourier modes), so the field description is probably fundamental. But whenever there is a quantum field, there are also particles, the quanta of the field.
In classical general relativity, particles follow geodesics, they are guided by the local curvature of space. This geometry is actually an objective, coordinate-independent property of space, though the way you represent it (e.g. the metric you use) will depend on the coordinate system. Something similar applies to the gauge fields which produce all the other forces in the Standard Model. Geometrically, they are “connections” describing “parallel transport” properties, and these connections are not solely an artefact of a coordinate system. See first paragraph here.
You will see it said that the equations of motion in a gauge field theory are obtained by taking a global symmetry and making it local. These global symmetries apply to the matter particle (which is usually a complex-valued vector): if the value of the matter vector is transformed in the same way at every point (e.g. multiplied by a unit complex number), it makes no difference to the equation of motion of the “free field”, the field not yet interacting with anything. Introducing a connection field allows you to compare different transformations at different points (though the comparison is path-dependent, depending on the path between them that you take), so now you can leave one particle’s state vector unmodified, and transform a distant particle’s vector however you want, and so long as the intervening gauge connection transforms in a compensatory fashion, you will be talking about the same physical situation. However, as the link above states, the gauge connection is not solely a bookkeeping device; there are topologically distinct gauge field states which are not equivalent under some continuously varying transformation.
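(To make “taking a global symmetry and making it local” concrete, here is the textbook U(1) case in condensed form; this is standard QED material, not anything specific to the comment above.)

$$ \psi(x) \to e^{i\alpha(x)}\psi(x), \qquad A_\mu \to A_\mu - \tfrac{1}{e}\,\partial_\mu\alpha(x), \qquad D_\mu = \partial_\mu + ieA_\mu $$

Demanding that the free-electron equations stay invariant under the position-dependent phase α(x) forces the introduction of the compensating connection field A_μ, whose quanta are photons; the covariant derivative D_μ is exactly the “parallel transport” comparison described above.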
It all sounds horrifically abstract, but if you follow the link above you’ll see some simple examples which may help. Anyway, the bottom line is that classical gauge field configurations do contain a coordinate-independent geometric content just as gravity does, so you can’t completely do without them; and whenever you quantize a field, you have particles.
What is the difference between saying gravity is a force and saying it’s a curvature of spacetime?
What is your definition of “a force” that makes it inapplicable to gravity? Is electromagnetism a force, or is it a curvature in the universe’s phase space?
I don’t know much about physics, please enlighten me...
To say that gravity is a curvature of spacetime means that gravity “falls out of” the geometry of spacetime. To say that gravity is something else (e.g., a force) means that, even after you have a complete description of the geometry of spacetime, you can’t yet explain the behavior of gravity.
Isn’t it equally valid to say that the geometry of spacetime falls out of gravity? I.e., given a complete description of any one of them, you get the other for free.
What is a force by your definition? Something fundamental which can’t be explained through something else? But it seems to me that “the curvature of spacetime” is the same thing as gravity, not a separate thing that is linked to gravity by causality or even by logical necessity. They’re different descriptions of the same thing. So we can still call gravity a fundamental force, it’s not being caused by something else that exists in its own right.
What I meant is that the notion of gravity as “something that pulls on matter” goes away.
There’re a couple of concepts that’re needed to see this. First is “locality is really important”.
For instance, you’re in an elevator that’s freefalling toward the earth… or it’s just floating in space. Either way, the overall average net force you feel inside is zero. How do you tell the difference? “Look outside and see if there’s a planet nearby that you’re moving toward”? Excuse me? What’s this business about talking about objects far away from you?
Alternately, you’re either on the surface of the earth, or in space accelerating at 9.8 m/s^2.
Which one? “Look outside” is a disallowed operation. It appeals to nonlocal entities.
Once again, return to you being in the box and freefalling toward the ground. What can you say locally?
Well… I’m going to appeal to Newtonian gravity briefly just to illustrate a concept, but then we’ll sort of get rid of it:
Place two test particles in the elevator, one above the other. What do you see? You’d see them accelerating away from each other, right? ie, if one’s closer to the earth than the other, then you get tidal force pulling them away from each other.
Similarly, placing them side by side, well, the lines connecting each of them to the center of the earth make a bit of an angle to each other. So you’ll see them accelerate toward each other. Again, tidal force.
From the perspective of locality, tidal force is the fundamental thing, it’s the thing that’s “right here”, rather than far away, and regular gravity is just sort of the sum (well, integral) of tidal force.
Now, let’s do a bit of a perspective jump to geometry. I’ll get back to the above in a moment. To help illustrate this, I’ll just summarize the “Parable of the Apple” from Gravitation:
Imagine you see ants crawling on the side of the apple. You see them initially seem to move parallel, then as they crawl up, you see them moving toward each other.
“hrm… they attract each other perhaps?” you suppose. So you get out your knife and you cut a bit of the apple, a thin cut a millimeter to either side of the path of one of the ants. When you peel off the bit of the apple’s skin, you find that the path is… straight!
“huh! Well, maybe it’s the other one that’s curving its path...” you think to yourself. So you go and cut out the other one’s path in the same way… and lo and behold, that one’s straight too.
“augh! WTF? what sort of witchery is this?”
The answer? You look again and you see that it’s the shape of the apple that brings those paths together, even if they’re individually straight (ie, geodesics).
Also, you might note that as the ants crawl toward the stem, as they move on the indentation near the stem, their paths seem to change a bit more… Is the stem exerting some sort of mysterious force on them?
Having learned your lesson once, you look closer, and you see that it’s simply the different curvature of the apple over there.
The shape of the apple there affects what sorts of curvature there can be nearby, etc etc.
Hrm… the behavior of the ants sounds similar to tidal force. Perhaps then the apparent gravitational “forces” are really just geometric properties. Everything in freefall moves in straight lines, it’s just that curvature changes how geodesics relate to each other.
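(In equations, that is the standard geodesic deviation result: for two nearby free-falling particles with separation vector ξ and four-velocity u, the relative acceleration is set by the Riemann curvature tensor.)

$$ \frac{D^2 \xi^{\mu}}{d\tau^2} \;=\; -\,R^{\mu}{}_{\nu\alpha\beta}\, u^{\nu} \xi^{\alpha} u^{\beta} $$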
There is one other catch: in basic GR, it’s not space that’s curved, but spacetime that’s curved. Objects in freefall have geodesic paths through spacetime.
It’s not that the earth is pulling down on you, it’s that the earth is pushing up on you, and in a local inertial reference frame, you would be accelerating downwards. If you zoom in on any small part of spacetime, it’s locally flat. (I’m ignoring stuff like zooming in so close that stuff like quantum foam may or may not show up. I’m just talking classical GR here.) Curvature could be viewed as controlling how all those locally flat bits are “stitched together”
Does that make sense? So the idea of gravity “pulling” on you goes away completely. The “force” of gravity amounts to nothing more than the geometry of spacetime.
What’s left is the Einstein field equation which tells how matter shapes spacetime. (It doesn’t control the metric or the curvature directly, but rather it controls a certain “average” or sum of certain properties of the curvature.)
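(For reference, that equation: the Einstein tensor G on the left is built from contractions, i.e. sums, of the full Riemann curvature, and the stress-energy tensor T on the right describes the matter.)

$$ G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu} $$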
Edited: I understand what you’ve said (and thanks for taking the time to write all that out!). But I’m not sure why “the concept of gravity as something that pulls on matter goes away”. Is it the case that it’s mathematically impossible to define gravity as attraction between matter and still have a correct relativistic physics? Is it impossible to generalize Newton’s law that way?
Well, it goes away in the sense that “this particular theory of physics explains gravity without directly having a ‘force’ associated with it as such”
In GR, one doesn’t see any forces locally pulling on objects. One instead sees (if one zooms in closely) objects moving on straight (geodesic) paths through spacetime. It simply happens to be that spacetime is in some cases shaped in ways that alter the relationships between nearby geodesics.
I guess it’s an attraction, sort of, but once one starts taking locality seriously, that’s not that good, is it? “don’t tell me what’s going on way over there, tell me what’s happening right here!”
There may be alternate theories, but GR itself is a geometric theory and I wouldn’t even know how to interpret the central equations as force equations. Saying “there could be other explanations” or such is a separate issue. What I meant was “In GR, once one has the geometry, nothing more needs to be said, really.” (Well, I’m skipping subtleties, stuff gets tricky in that you have shape of space affecting motion of matter, and motion of matter affecting shape of space, but yeah...)
Actually, there’s really no way for Newtonian stuff to be reasonably extended to describe GR effects without going to geometry in some form. I mean, GR predicts stuff about measured distances not quite obeying the rules that they would in flat spacetime. Measured times too, for that matter. One would have to get really creatively messy to produce a theory that is more an extension of Newtonian gravity, isn’t at all based on geometry, curvature, etc… any more than regular Newtonian gravity, yet still produces the same predictions for experimental outcomes that GR does.
It would, at best, be rather complicated and messy, I’d expect. If it’s even possible. Actually, I don’t think it is. More or less no matter what, other stuff would have to be added on that doesn’t at all even resemble Newtonian stuff.
I think the Wikipedia page on Gravitomagnetism might be relevant; it seems to be an approximation to GR that looks an awful lot like classical electromagnetism.
OK, now I understand better, thanks :-)
Incidentally, what about electromagnetism and the other fundamental forces? Can they be described the same way as gravity? In classical mechanics they’re the same kind of thing as gravity, except they can be repulsive as well. And a lot of popsci versions of modern physics research seem to postulate the same kind of properties for gravity as we know from electromagnetism: like repulsive gravity, or gravitational shields, or effects due to gravitational waves propagating at the speed of light, or artificial gravity. And all forces are related through inertial mass.
So is there a description of all these things, including gravity, in the same terms? Either all of them “forces” or fields with mediating particles, or all of them affecting some kind of geometry?
Scott Aaronson has a nice post about the differences between gravity and electromagnetism. It seems his thoughts were running along the same lines as yours when he wrote it; he asks almost all the same questions. http://www.scottaaronson.com/blog/?p=244
That was very interesting and relevant. Thanks.
Gravitational waves come straight out of GR. (Actually, weak gravitational waves already show up in the linearized theory; the linearized theory of GR is a certain approximation of it that’s easier to deal with, good for low energies and such.)
And that was part of what I was asking about. Well, others have tried to find that sort of thing, but I was asking something like “in the standard model and such, are the forces really aspects of what would amount to the geometry (specifically the symmetries) of configuration space rather than additional dimensions in the config space?”
And, of course, one of the BIG questions for modern physics is how to get a quantum description of gravity or to otherwise find a model of reality which contains both QM and GR in a “natural” way.
So, basically, at this point, all I can say is “I don’t really know.” :)
(well, also, I guess depending on how you look at it, curvature either explains or explains away tidal force. It explains the effects/behaviors, but explains away any apparent “forces” being involved.)
...but forces fall out of something—electromagnetic interactions, for example. As an engineer, I am inclined to call something a force if it goes on the “force” side of the equation in the domain I’m modeling, and not worry about whether to call it “real”.
(Then again, as an engineer, I rarely need to exceed Newtonian mechanics.)
So I believe you are on the right track. There is, in fact, a very consistent, logical, and physical theory of physics with one set of equations which in different limits yields all four forces and explains and portends a great deal beyond that. So that is the only thing that makes sense...a unified theory and we have at least one.
Now, this theory is largely rejected/ignored by both the particle and string mainstream. Why? Its very very threatening to their funding and careers; and the math is difficult to attack, so they ignore.
So we are at a Khunian moment...the anomalies are accumulating to the point that a paradigm shift may be imminent.
Oh, and this theory says the Higgs or Higgses is/are impossible; short the Higgs!
A few other things disappear. Cosmology loses its black holes (that Schwartzchild metric isn’t exaclty right… the singularity disappears), dark matter, big bang, etc. So you can see how threatening this is...and exciting.
This comment scores a full 68 points on the Crackpot Index. That doesn’t make it wrong, but if it’s right, I think you need to explain yourself more clearly.
To start with, as Psy-Kosh asked, what theory are you talking about, anyway? Where is it published, by whom, and what predictions have been made with it?
I was going to post a comment showing it should be much higher than 68, but on re-checking my evaluation, your score appears correct. Amongst other mistakes, I was going to award 50 points for no testable predictions, but I think the comment about the Higgs boson counts.
However, it seems like misspelling ‘Kuhn’ should count for something.
Let me recount quickly so we can compare:
-5 for starters.
+3 for statements widely agreed to be false (1 point each for the claimed nonexistence of black holes, dark matter, and the big bang).
+10 for claiming a paradigm shift.
+20 for talking about how great the theory is without explaining it.
+40 for claiming something like a conspiracy suppressing the new idea (cf. “threatening to their funding and careers”).
Step 8 suggests a +5 for misspelling Einstein, Hawking, or Feynman—you could do the same for Kuhn.
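For the record, the items really do sum to 68; here’s a trivial tally (the item labels are my paraphrases of the list above):

```python
# Tally of the Crackpot Index items listed above.
items = {
    "starting credit": -5,
    "three statements widely agreed to be false (1 point each)": 3,
    "claiming a paradigm shift": 10,
    "praising the theory without explaining it": 20,
    "alleging a conspiracy of suppression": 40,
}
print(sum(items.values()))  # 68
```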
Well, really, for now my main thing is that I just want to know if my understanding of the concept of gauge bosons is correctish or if I completely missed the point of that aspect of the standard theories.
I am losing my touch...only 68? See below
Are you … taking pride in sounding like a crackpot? That doesn’t seem like a very good plan for figuring out what is really going on in the world. That’s why we denounce crackpottish proclamations like “this theory is largely rejected/ignored by both the particle and string mainstream [because it is] very very threatening to their funding and careers”—because these sorts of statements are easy arguments to wield against people with the temerity to dispute your favored idea.
I haven’t read the paper you linked here, but the titular “ECE”—“Einstein Cartan Evans”—returns no hits in a quick Scitation search. If you ask me, it seems premature to treat the theory with anything like the confidence you attribute to it.
Not at all. I will be glad to move on to a more open minded forum which doesn’t engage in ad hominem attacks.
I intend no insult to you in my disagreement (and I apologize for missing your ironic intent, if that is what I did). Our experience and intuition differ, and I have been explaining how mine lead me to different conclusions in this matter.
(Also, what you are complaining of is abuse, not ad hominem—it’s an important distinction.)
(Edit: To summarize the link, “ad hominem” goes, “you’re an irrational person, therefore you are wrong”, whereas we are saying, “your argument is wrong, therefore you are not rational”. We refuse to accept your claim that ECE is a theory-of-everything because you have not supported it, not because we think you’re stupid—in fact, I don’t think you’re stupid, just mistaken.)
Ok. Well, I just don’t think it’s possible on this site to go through 800+ papers, 130+ in the last few years, which lead to this theory. I don’t have the time, and there is no indication that anyone here is open to it. Plus, I see no equation editor.
I’ll repeat. I responded to a Higgs boson thread, saying I believe it does not exist, I referred to the theory supporting my belief, and I got trashed. That’s arrogant and closed-minded. And if I feel I have been ad hominemed, then I have.
If someone is as dissatisfied as I am with the output of Standard Model and string theories, they will seek out an alternative. And learn the math. I just pointed to what, after a four year analysis, I believe to be an extremely credible alternative.
So thanks for the consideration.
And not one review article?
Thanks for your input—I’m sorry we could not be more helpful. Be well.
Huh? Sorry, I was unclear.
I meant “this is my understanding of what the mainstream theory, ie, the Standard Model says. In fact, it’s my understanding of what many of the candidate theories that go beyond it say, it’s my understanding of why so many theories are described in terms of their symmetries. Because once one knows their symmetries, one more or less knows the basic structure of the forces it implies. Is this understanding correct, or did I completely miss the point about gauge symmetries and such?”
What theory are you talking about anyways?
All this becomes superfluous with correct maths. In any case, if the Higgs is NOT found, particle theory as we know it is in trouble. That’s my bet.
And here is a link to a simpler unified theory that may point to the future: http://www.aias.us/documents/miscellaneous/ECE_and_Spacetime.pdf
This is a “history of thought” and high level description of the basic theory and some of its most important implications.
The mis-application of symmetry is key… in fact the Riemann connection is not symmetric... it is antisymmetric, thus the Einstein field equation is wrong and much collapses from that result. So almost everyone in physics is using the wrong math. Riemann geometry needs to be supplemented. Einstein wasn’t wrong... he had incredible insights. He just wasn’t totally correct, as even he knew.
Didn’t look all the way through that paper yet, but… if you mean the Riemann curvature tensor having antisymmetric parts… so? How does that kill GR? (Besides, antisymmetry is a type of symmetry. I didn’t mean symmetric as in “symmetric vs. antisymmetric tensors”; I meant it in the more general sense.)
(Okay, I know of Einstein-Cartan-Theory, but I haven’t studied it yet. I still only have a rudimentary grasp of the math of basic GR)
But… what’s any of this got to do with my original question?
It doesn’t kill GR—it is the foundation. It just adds torsion to curvature, and that changes almost everything. What I am suggesting is that your question is about a theory that is no longer relevant if ECE is correct (which I have come to believe over a number of years). To go deeper, you do have to go do the maths, but they are not that hard.
So I was trying to gently get you to look in a different direction which may have a higher payback for your time, and also predicts that there is no Higgs, which brought me to the blog in the first place. Your choice, of course. Lots of people are invested in the Standard Model.
Well, I think my question about gauge symmetries is more general anyways. Are you claiming a rejection of QFT itself?
yes. http://atomicprecision.wordpress.com/2009/05/23/example-of-the-antisymmetry-laws/
I guess hardly anybody here even knows what the question means, exactly, so it’s all a bead jar guess.
Well, the Standard Model hasn’t been wrong yet. If you want to bet against it, I’ll take you up on it.
I assert that the LHC will not establish the non-existence of the Higgs boson. Will you wager $20 at even odds against that proposition?
I’ll bet that the LHC will not establish existence. It’s not clear to me what would count as establishing non-existence.
There are papers that establish upper bounds on the energy of the Higgs boson:
http://arxiv.org/abs/hep-ph/9212305
If the LHC can make particles up to those energy bounds (I don’t know and don’t have the time to figure it out), and it can be run long enough that it would be very unlikely for one not to be created, then you could establish probable non-existence.
It is also quite possible that the Higgs boson will come out and it will be utterly useless, as most of those particles are. You can’t do a thing with them and they don’t tell you very much. Of course, the euphoria will be massive.
Still, most likely, there will be nothing to see.
What? Who voted this up?
“It is also quite possible that the Higgs boson will come out and it will be utterly useless, as most of those particles are.”
So in your book, understanding the sub-atomic level for things like nano-scale technology is a complete waste of time? Understanding the universe, I can only assume, is also a waste of time, since in your book the discovery of the Higgs boson is essentially meaningless in all probability.
“You can’t do a thing with them and they don’t tell you very much. Of course, the euphoria will be massive.”
Huh? From someone who studies particle physics to someone (you) who obviously doesn’t (and I am going to be hard on you): you should refrain from making such comments in nearly total ignorance. The fact that you don’t understand the significance of the Higgs boson or particle physics should have been a cue that you have nothing to contribute to this thread.
Sorry but there it is...
No ato-tech in sight, no use for the already discovered particles, and you are telling me how valuable the Higgs boson will be. Not only you, but the whole CERN-affiliated community and most of the media.
I remain a skeptic, if you don’t mind.
You have a point. I have a somewhat similar view of elements above perhaps einsteinium. I’ll be more impressed with physics’ control over the electroweak interaction when I see the weak-nuclear-force equivalent of an electromagnet :-) I wonder what the maximum particle energy is that someone has actually used in a non-elementary-particle-physics-research application? Maybe the incoming beam for a spallation neutron source, somewhere in the MeV range?
Ok, I am going to reply to both soreff and Thomas:
Particle physics isn’t about making technology, at least at the moment. Particle physics is concerned with understanding the fundamental elements of our world. As for the details of the relevance of particle physics, I won’t waste the time explaining. Obviously neither of you has any real experience in the field. So this concludes what comments I am going to make on this topic until someone with real physics knowledge decides to comment.
Another danger of unfriendly AI: It doesn’t invite you to the orgy.
I feel like this particular danger should be the primary research topic for FAI researchers.
Intermediate discoveries might be a good source of funding.
To-Do Lists and Time Travel: Sarmatian Protopope muses on how coherent, long-term action requires coordinating a tribe of future selves.
So, I’m having one of those I-don’t-want-to-go-to-school moments again. I’m in my first year at a university, and, as often happens, I feel like it’s not worth my time.
As far as math goes, I feel like I could learn all the facts my classes teach on Wikipedia in a tenth of the time—though procedural knowledge is another matter, of course. I have had the occasional fun chat with a professor, but the lecture was never it.
As far as other subjects go, I think forces conspired to make me not succeed. I had a single non-math class, though it was twice the length of a normal class and officially two classes. It was about ancient Greece and Rome, and we had to read things like Works and Days and the Iliad. Afterwards, we were supposed to write a paper about depictions of society in the two works or something. I never wrote the paper, and I dropped the class.
Is school worth it for the learning? How about for the little piece of paper I get at the end?
Take it from me (as a dropout-cum-autodidact in a world where personal identity is not ontologically fundamental, I’m fractionally one of your future selves): procedural knowledge is really, really important. It’s just too easy to fall into the trap of “Oh, I’m a smart person who reads books and Wikipedia; I’m fine just the way I am.” Maybe you can do better than most college grads, simply by virtue of being smart and continuing to read things, but life (unlike many schools) is not graded on a curve. There are so many levels above you that you’re in mortal danger of missing out on them entirely if you think you can get it all from Wikipedia, if you ever let yourself believe that you’re safe at your current level. If you think school isn’t worth your time, that’s great, quit. But know that you don’t have to be just another dropout who likes to read; you can quit and hold yourself to a higher standard.
You want to learn math? Here’s what I do. Get textbooks. Get out a piece of paper, and divide it into two columns. Read or skim the textbooks. Take notes; feel free to copy down large passages verbatim (I have a special form of quotation marks for verbatim quotes). If a statement seems confusing, maybe try to work it out yourself. Work exercises. If you get curious about something, make up your own problem and try to work it out yourself. Four hundred ninety-three pieces of paper later, I can say with confidence that my past self knew nothing about math. I didn’t know what I was missing, could not have known in advance what it would feel like, to not just accept as a brute fact that a linear transformation is invertible iff its determinant is nonzero, but to start to see why it has to be that way. (Because—obviously—since the determinant is the product of the eigenvalues, it serves as a measure of how the transformation distorts area; if the determinant is zero, it means you’ve lost a dimension in the mapping, so you can’t reverse it. But it wouldn’t have been “obvious” if I had only read the Wikipedia article.)
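(If you want to poke at that parenthetical concretely rather than take it on faith, here’s a quick NumPy sketch of my own, not from any of the textbooks under discussion:)

```python
import numpy as np

# Invertible map: the determinant equals the product of the eigenvalues.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(np.prod(np.linalg.eigvals(A)))  # 6.0 (eigenvalues 2 and 3)
print(np.linalg.det(A))               # 6.0, the same number

# Singular map: the second row is a multiple of the first, so the plane
# gets squashed onto a line. A dimension is lost, the determinant is 0,
# and no inverse exists.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(B))               # 0.0
```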
Forces don’t conspire; they’re not that smart.
It’s amazing how rarely people—including textbook authors—actually bother to point this out. (Admittedly, it’s only true over an algebraically closed field such as the complex numbers.) Were you by any chance using Axler?
While I certainly agree with the main point of your comment, I nevertheless think that this particular comparison illustrates mainly that the mathematical Wikipedia articles still have a way to go. (Indeed, the property of determinants mentioned above is buried in the middle of the “Further Properties” section of the article, whereas I think it ought to be prominently mentioned in the introduction; in Axler it’s the definition of the determinant [in the complex case]!)
Mostly Bretscher, but checking out Axler’s vicious anti-determinant screed the other month certainly influenced my comment.
I upvoted this, but I just wanted to follow this tangent.
This isn’t true in all worlds where personal identity is not ontologically fundamental. It is a reasonable thing to say if certain versions of the psychological continuity theory are true. But those theories don’t exhaust the set of theories in which personal identity isn’t ontologically fundamental. For example, if personal identity supervenes on human animal identity, then you are not one of Warrigal’s future selves, even fractionally.
I think you should ask yourself this: if you drop out, what realistically are you going to do with your time? If you don’t have a very good answer to that question, stay where you are.
View university in the same way as you would view a long lap-swimming workout. Boring as hell, maybe, but you’ll be better off and feel better when you’re done. Sure, you could skip your pool workout and go do something Really Important, but most people skip their workouts and then go watch TV instead.
Suppose you have an idea or desire for something to do instead of university. You should create a gradual, reversible transition. For instance if you want to work and earn some money, find a job first (telling them you’ve dropped out), work for a couple of weeks, make sure you like it, and only then actually drop out. Or if you want to study alone at home, start doing it for 10 hours every week, then 20, drop just one or two classes to free the time, and when you see it’s working out, go all out.
This may not be convenient if working for a couple weeks requires time you’ll only have if you drop out. If you end up not wanting to drop out after all, you can’t necessarily afford to miss the classes.
In the comment section of this post, “Doug S.” gives the most salient analysis I have seen. After stating, “the job of a university professor is to do research and bring in grant money for said research, not to teach! Teaching is incidental,” he was asked why parents would pay upward of $40,000 annually for such a service. His parsimonious reply: “In most cases, it’s not the education that’s worth $40,000+. It’s the diploma. Earning a diploma demonstrates that you are willing to suffer in exchange for vague promises of future reward, which is a trait that employers value.”
Before I started college, I read this professor’s speech, which attempted to explain, given your concerns, why an education may nevertheless be valuable. It’s biased towards its audience (UChicago students) but I think its relevant point can be summarized as: few jobs allow you to continue practicing the diversity of skills employed by academic work, and having a degree keeps your options wide open for a longer period. However, the real thesis of the speech is that university is uniquely a place to devote oneself to practicing the Art, broadly construed, of generating knowledge and beauty from everything.
Other considerations:
Becoming an academic is very hard without an undergraduate degree, so if you want that life, stick with it.
It takes a great deal of luck to pull a Bill Gates. It is otherwise hard to convince people that your reasons for not having a degree are genuine and not ex post.
At least in my case, it has been hard to find anywhere near as high a concentration of intelligent and interesting people outside the university as in the one I attended.
Hope something in there helps!
What do you plan on spending your time on if you don’t go to school? Most jobs largely consist of being forced to do some assignment that you feel isn’t worth your time; you’re not going to escape that by dropping out. And I’d wager that a college degree is one of the best ways to snag a job that you DO actually enjoy.
I suspect the REAL value of a college degree, aside from the basic intelligence indication, is that it says you can handle four years of largely unpleasant work.
I can’t speak for all people or all jobs, but in my experience, there’s a certain dignity and autonomy in paid work that I never got out of school. After quitting University, I worked in a supermarket for nineteen months. Sure, it was low-paying, low-status, and largely boring, but I was much happier at the store, and I think a big reason for this was that I had a function other than simply to obey. At University, I had spent a lot of time worrying that I wasn’t following the professor’s instructions exactly to the letter, and being terrified that this made me a bad person. Whereas at the store, it didn’t matter so much if I incidentally broke a dozen company rules in the course of doing my job, because what mattered was that the books were balanced and the customers were happy. It’s not so bad, nominally having a boss, as long as there’s some optimization criterion other than garnering the boss’s approval: you can tell if you couldn’t solve a customer’s problem, or if the safe is fifty dollars short, or if the latte you made is too foamy. And when the time comes, you can clock out, and walk to the library, with no one to tell you what to study. Kind of idyllic, really.
I worked at a supermarket for three days, and was fired for insubordination. (I wanted to read a book when there were no customers coming to my register, and the boss told me not to...)
I have a similar story; except in my case I was fired because my shirt was insufficiently black.
Could you elaborate? Were you fired for once not having a black shirt, or for not being able to acquire/evaluate black shirts? Or, if it’s possible to tell, for having a bad attitude about the shirt rule?
I had a shirt I felt was black and met the dress code; the manager felt that it didn’t. I felt that since I had already spent something like $60 on new clothes to meet the dress code, and since I didn’t interact with the customers at all, I wasn’t going to go and buy a new black shirt. The manager felt I no longer needed to work there.
Letting your future employer know you’re willing to do all the unpleasant stuff you feel isn’t worth your time.
If you did it for a piece of paper, then surely you’ll do it for a paycheck… right?
If I’m not actually willing to put up with pointless stuff for a paycheck, would I benefit from signaling that I am? Or would I just lose a useful filter on potential employers?
Inasmuch as most people require the motivational structure, and then only if you consider the material worth learning.
Yes.
Well… that isn’t the answer I wanted. I wanted “no”.
The correct answer should have been “It depends”. Mostly on what you might want to do with that paper. I would mostly say “No”. Unless you want a fairly boring, routine job working for someone else.
In general, the answer is:
If you intend to always work for yourself, owning your own companies, being your own boss, then a diploma is a waste of time.
Diplomas are for people who want to work for others.
But if you want to work for others, then get a degree, by all means.
If you work for yourself, your customers are generally going to be moved by most other factors prior to being moved by the owner’s formal education.
Bosses and owners, however, are going to be moved by degrees.
Owners like their underlings to have degrees because it demonstrates a certain irrational loyalty and a lack of business savvy. This assures the owner that he will remain in charge: that you won’t negotiate too hard for your benefits, or run away with his business plans and start a competing company, etc.
Bosses like their underlings to have degrees because they had to get one as well, so why shouldn’t you suffer at least as much?
By getting a degree, you signal your acceptance of your humble status in the pecking order. This is a prerequisite if you want to find your place in the hierarchy, but pointless if you want to be at the top.
There are some people who prefer to work for others, and some who prefer to work for themselves; however, the vast majority of people prefer neither, and for them college is neither a waste of time nor a means to signal: it is a stay of execution.
My two cents:
If you’re excelling in math, move up to a higher level. Math departments are usually very flexible in this regard (engineering departments not always so). My freshman year I signed up for a couple of graduate level math classes, and believe me, the knowledge I gained is not to be found in Wikipedia, or any other written form. You have to struggle for an understanding of higher math, and the setup for the struggle is greatly helped by having fellow students, a professor to guide you, and hard deadlines to motivate you.
I also felt a lot of classes I was forced to take were incredibly lame. I dropped a few classes throughout my undergrad, including two English classes. All I cared about was math as an undergrad, and because of that the education I got was incredibly impoverished. Looking back, I think this was simply a defense mechanism. I knew I was a hot shot at math, so whenever I felt challenged in another subject it was easier to simply say, “This is trivial, I just can’t be bothered! I’m clearly intelligent anyway.” Don’t let the knowledge of your own intelligence prevent you from undertaking things that challenge your supposed intelligence! In particular, writing papers is hard, but is often misidentified by science oriented people as being lame or stupid.
Now, as a graduate student, I fantasize about being an undergrad again and having the luxury of being coerced into studying a variety of different topics. Yes, there are still lame aspects to many classes, but that is largely a factor in lower division work. If you can teach yourself then do so! Leverage your intelligence, learn more, and get yourself into upper division classes in multiple subjects where you can interact with intelligent people who are passionate about the subject, and where the professor will treat you like a valuable resource to be developed rather than simply a chore.
This depends on a lot of things: How much debt will you be in at the end? If you press on now, will you actually finish? Do you have the personality to make money without a diploma?
I made the mistake of pressing on early and incurring extra debt, but not pushing through to get a diploma.
Not having a diploma is hard if you want the kinds of jobs that often require one arbitrarily. Doing something freelance or taking a non-degree job are hard in other ways. Fortunately you can test this with some time away from college.
There’s also a difference between what you CAN learn on your own and what you will actually take the time to learn. I know there are things that I would have been forced to learn which I have neglected to.
If you’re probably not going to finish, then cut your losses now, but make a clean break that will make it easy to go back. Finish the semester well.
This is going to sound horrible but here goes:
In my experience, school’s value depends on how smart you are. For example, if you can teach yourself math, you can often test out of classes. If you’re really smart, you may be able to get out of everything but grad school. Depending on what you want to do, you may or may not need grad school.
Do you have a preferred career path? If so have you tried getting into it without further schooling? The other question is what have you done outside of school? Have you started any businesses or published papers?
With a little more detail I think the question can be better answered.
I reservedly second Wedrifid’s comment that the little piece of paper at the end is worth it. I know people who have gone far in life without one, and I don’t mean amazing genius-savants either, just folks who spent time in industry, the military, etc. and progressed along. But I’ve also seen a number who got stuck at some point for lacking a degree. This was more a lack of signaling cred than smarts or ability. The statistics show that people with degrees on average earn more than those without, if that’s of interest to you. But degrees don’t instantly grant jobs, and some degrees are better preparation than others for the real world. It sounds like you’re interested in a degree in math, which carries over into a lot of different fields.
I think it’s great that you’re taking stock of what your education experience is giving you. As Wedrifid mentioned, the motivation is an important part of schooling, and if you’re in a program that is known to be rigorous, the credentials are definitely worth it. But those have to be weighed against current employment options. I’d encourage you to consider working with professors on research, investigating internships, etc., so that you get the full educational experience that you’re looking for, and not be one of those graduates who only took classes and then expected a job to be waiting for them when they graduated.
Correlation is not causation. Graduates as a group are smarter and more ambitious than nongraduates. The question is not whether people with a degree do better; the question is what the degree itself is buying you, if you’re already a smart ambitious person who knows how to study.
I recall some studies (I hate not remembering authors or links) that tried to control for the effect of the degree itself by comparing those who got into a particular school but graduated from somewhere else to those who graduated from that school. Controls for the general “graduate” characteristic, but still misses the reasons for the choice.
Upshot was that there wasn’t much difference in income, though I believe that was in part because the highest-level schools send a substantial part of their undergraduates on to become academics.
http://www.psychologytoday.com/blog/freedom-learn/200810/reasons-consider-less-selective-less-expensive-college-saving-money-is-jus
That quote asserts that SAT scores are the same as prestige. The 1998 and 1999 drafts of the paper looked at both, with different results, finding that average SAT score didn’t matter but various measures of prestige did. They have three versions of prestige: variance of SAT scores, Barron’s ratings, and tuition. Variance is dropped in the 2002 published version. Tuition still predicts income. The most direct measure of prestige, rankings, seems to be quietly dropped in the few months between the 1998 and 1999 versions (am I missing something?). The final version seems to say, on page 1515 and in a weirdly off-hand manner, that it doesn’t matter, but I’m not sure if it’s the same measure.
I remember reading about that study in the New York Times. I think that they said that they only found evidence of an income effect for black students...
http://www.nytimes.com/2008/04/19/business/19money.html
I suppose black students do tend to be poorer...
There it is. Thanks!
Thanks for that link. I had wondered.
Off-topic*:
Someone recently made the suggestion that it should be standard practice to link the Welcome Thread in the body of all Open Thread posts going forward, and I think that’s a great idea.
* …but made as a reply to Warrigal to bring it to the attention of the owner of this open thread; not a PM so as to throw it open to general comment.
I read your comment when you posted it. I wonder why it took me until now to realize that by “the owner of this open thread”, you meant me.
That non-math class sounds dreadful. Are you really into classics or something? Also, I don’t know where you go to school, but a lot of places allow students to do independent study in an area with the guidance of a professor. This is a really good option if the best non-math course you can find involves reading the Iliad.
Also, I’m really just replying to this so that I can congratulate you on this sentence:
This is maybe the best sentence I have read in the last few months.
Well, there’s a choice of nine “Arts & Humanities” sequences I could be taking. Each one covers a single civilization (e.g. ancient Greece and Rome, early Europe, the Islamic Middle East) in detail, including history and paper-writing. Each consists of one double class each semester for a year. This sequence is the biggest component of the general education requirements here. Perhaps dreadfulness is mandatory.
Awesome! Now, if only I could figure out why.
Perhaps some is. But that requirement sounds especially bad. It definitely isn’t a universal requirement. Any particular reason you are at this university? I know some schools have gotten rid of core requirements altogether (though if you aren’t in the US you probably have fewer options).
It is simple. And the notion that we should celebrate unwieldy discussions (and do so by expanding them!) perfectly encapsulates the culture of Less Wrong. But “celebrating” and “unwieldy” are two words that are never related in this way, which makes the sentence seem fresh and counter to prevailing custom.
Oh god, this is still an issue for people in college? And here I was assuming that after I got out of high school I wouldn’t think along these tempting-yet-ultimately-ruinous lines ever again.
It depends. The first few years may be like this as you take a bunch of classes in areas you probably aren’t interested in, but if you choose a major you like, it gets better as your schedule becomes dominated by those classes. Your own personality is another important factor here.
Ach, I had not realized that required classes in college might feel as useless as required classes in high school. But perhaps college classes will be more rigorous and less likely to induce I-Could-Learn-This-On-Wikipedia Syndrome. I can but hope.
Sorry if this is getting annoying, but I recently thought of two new ideas that might make interesting video games, and I couldn’t resist posting them here:
The first idea I had is an adventure game where you have a reality-distorting device that you must use before you try to do anything that wouldn’t work in real life, but that you must not use before you do anything that would work in real life.
If you fail to use the device before doing something that wouldn’t work in real life, then the consequences will be realistic, and disastrous. For example, if you try to leap off a cliff wearing an aesthetically pleasing but aerodynamically unsound pair of wings you made out of bird feathers, then you will just fall and go splat instead of gliding safely to the ground.
If you use the device before doing something that would work in real life, then the consequences will be unrealistic, and disastrous. For example, if you try to use a small amount of gunpowder, placed very carefully in just the right spot to knock over a pillar, then instead of there being a small explosion that knocks over the pillar, there will be a huge explosion that shatters the pillar into tiny pieces, one of which hits you in the head with perfect aim, arcing or ricocheting as necessary to reach you behind your carefully chosen barrier.
The purpose of this game is of course to test the player’s grip on reality, and their ability to rationally think about the consequences of their actions, though the system is simple enough that the players could easily win just by trial and error.
The second idea I had is a game where you play as a stereotypical Hollywood villain, but the objective is not to win, but to lose. In the game you are presented with a series of decisions where you can either make the rational choice or the cliched villain choice. The rational choice will lead to you easily defeating the good guys, and the cliche choice will lead to the good guys succeeding—not in the traditional Hollywood way, but in the way that would happen if the heroes were familiar with all of the cliches.
The list of choices would, of course, be based on the Evil Overlord List.
(and in case anyone here doesn’t already know, Warning: TV Tropes Will Ruin Your Life )
The purpose of this game is of course to test the player’s ability to detect blatant stupidity, though unfortunately the game is set up so that they must always deliberately make the stupid choice. Also, the system is simple enough that players could win just by trial and error, or by memorizing the Evil Overlord list. One way to reduce this is to have choices whose consequences aren’t seen until much later in the game.
Another idea is to play as the hero, and have to avoid the cliches instead of follow them.
Another idea would be to alternate between playing the hero and the villain, winning only if the hero avoids all of the cliches and other stupid decisions, and the villain follows all of them. This could also teach the player a more realistic picture of what really happens when the odds are stacked overwhelmingly against the hero.
If anyone likes any of these ideas, or any of my previous ideas enough to write a script for the game, then I would volunteer to code this into a simple text adventure game, probably implemented in PHP. Feel free to make the script nonlinear, or do other interesting things with it. If someone likes the text version enough to do artwork for it, then maybe we could even turn it into a Flash game, and submit it to the various Flash game portals around the internet.
I am mentally cringing at the idea of being forced to guess the game developer’s password. The first time I am punished for something that should work but doesn’t, I would have to discard the game. For a game of any significant depth or breadth, I would be shocked if I couldn’t come up with a strategy that the developer hadn’t considered and that is penalised inappropriately.
I suspect I would find a more conventional game a more useful (and enjoyable) challenge to my rational thinking. Not that a game designed to teach some chemistry (gunpowder, etc.) and engineering (what happens when the gunpowder takes out that post?) is useless. I just think it is an inferior tool for training rationality specifically than, say, ADOM is.
Perhaps I didn’t explain clearly: In the game, whenever you make any significant action, you must choose whether to do so with the reality-distorting device on or off. You make this decision based on whether you expect that the plan would work in real life or not. This means that if there is a “game developer’s password”, then it’s only one bit long for each decision, and can be guessed by trial and error. Perhaps this is a feature, rather than a bug. If you save your game before making the decision, then you don’t even lose any time. Perhaps the game could have an “easy mode” where the game just shows you the results of your choice, and then continues as if you had made the right choice, rather than forcing you to restart or reload from a saved game.
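In fact, the whole mechanic fits in a few lines; here’s a rough sketch (the function names and sample actions are invented purely for illustration):

```python
# Sketch of the core mechanic: every significant action carries one hidden
# bit ("would this work in real life?"), and the player must match it with
# the reality-distorting device.

ACTIONS = [
    ("glide off the cliff on homemade feather wings", False),   # wouldn't work
    ("topple the pillar with a small, well-placed charge", True),  # would work
]

def resolve(works_in_real_life, device_on):
    # Correct play: device ON for impossible plans, OFF for possible ones.
    if device_on != works_in_real_life:
        return "success"
    return "disaster"  # realistic splat, or absurdly overdone backfire

for description, works in ACTIONS:
    for device_on in (False, True):
        outcome = resolve(works, device_on)
        print(f"{description} (device {'on' if device_on else 'off'}): {outcome}")
```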
And I agree that the game shouldn’t require advanced knowledge of chemistry and engineering. The gunpowder/pillar thing was just the first example I thought of.
Anyway, this game was just a random idea I had, and your criticism is welcome.
And is this the ADOM you’re referring to? http://en.wikipedia.org/wiki/Ancient_Domains_of_Mystery
I suppose that pretty much any game (not just video games) can be better for training rationality than more passive forms of entertainment, like watching TV. Pretty much any game is based on objective criteria that tell you when you made a bad decision. Though it’s not always easy to figure out what the bad decision was, or what you should have done instead, or even if there was anything you could have done better.
Ok, so I just heard a totally awesome MoBio lecture, the conclusions of which I wanted to share. Tom Rando at SUSM found that myogenic stem cells divide asymmetrically, such that all of the original template chromatids are inherited by the same daughter cell, while the other daughter cells go on to differentiate. This might imply that an original pool of stem cells acts as templates for later cell types, preserving their original DNA and thus reducing replication errors, since cells are making copies of the originals instead of copies of copies. This is apparently an old hypothesis that hasn’t been given much consideration until recently.
Sorry if this has little to do with rationalism. I can tie it into the current discussion about preferred and ignored academic theories: crazy theory, not preferred; hard evidence now, preferred. There.
New study shows that one of LW’s favorite factoids (having children decreases your happiness rather than increases it) may be either false or at least more complex than previously believed: http://blog.newsweek.com/blogs/nurtureshock/archive/2009/11/03/can-happiness-and-parenting-coexist.aspx
I’ve been trying to ease some friends into basic rationality materials but am running into a few obstacles. Is there a quick and dirty way to deal with the “but I don’t want to be rational” argument without seeming like Mr. Spock? Also, what’s a good source on the rational use of emotions?
I suggest the same techniques that work with any kind of evangelism. Convey that you are extremely sexually attractive and otherwise high in status by virtue of your rationalist identity. Let there be an unspoken threat in the background that if they don’t come to share your beliefs someone out there somewhere may just kill them or limit their mating potential.
Sad thought, but that explains what makes evangelism successful.
To whoever modded wedrifid down: was it because of the implicit endorsement of bad behavior, or because you have some reason to believe this is not how evangelism often works?
I think it’s worth distinguishing between two possible reasons to be against endorsement.
One is that this is bad epistemic hygiene.
The other is the possibility of lost purpose so that the person ends up trying to “act” rational rather than be rational.
In response to the former, epistemic hygiene is good and should be practiced when possible, but is not necessary. Bullets kill good guys just as easily as bad guys, but guns remain a valuable tool if you’re sufficiently careful. I’m surprised there hasn’t been more discussion of when usage of the ‘dark arts’ is acceptable.
In response to the latter, how might we end up achieving the wrong goal here?
“Implicit” endorsement?
If it is given that I think evangelising rationalist culture is possibly a net negative to the culture itself, then even the implication is gone.
Could you rephrase that? I’m not sure what you think I should assume or what that assumption implies with regards to your original statement. My objection was similar to jimmy’s, if that helps.
I also implied a similarity between the in-group and out-groups, in particular a similarity to the out-group ‘religious believers’.
Then there is the fact that my suggestions just don’t really help Bindbreaker in a practical actionable way. Not that my suggestions weren’t an effective recipe for influence. It’s just that they are too general to be useful. Of course, I could be more specific about just what techniques Bindbreaker could use to generate the social dominance and influence he desires but that is just asking for trouble! ;)
Don’t sell yourself short! The first part (about conveying sexual attractiveness) might not be actionable, since people are generally already doing whatever they know how to do to maximize this or are okay with its current level.
But the second part (about the implied threat of not joining) certainly converts easily into actionable advice. At least, it’s far more specific and usable than most dating advice I’ve seen!
Interesting. My intuition would be that the ‘convey sexual attractiveness’ part is more actionable than the implied-threat part. I think the amount of influence that can be gained by increasing personal status is greater, per unit of effort, than what can be expected from attempting to socially engineer an infrastructure that coercively penalises irrationality. Maybe that is just because I haven’t spent as much time researching the latter!
That’s an interesting proposition you have going. In order to convey the superior sexual attractiveness of rationality we need some sexy rationalists to proselytize. Thank you Carl Sagan! But seriously, the problem might be that basic rationality doesn’t translate easily into sexuality, threat, or other emotional appeal. Those things need to be brought in from other skill sets. Rationality can help apply skills and techniques to a given end, but it doesn’t give you those techniques or skills.
A significant part of my point is that rational persuasion isn’t the most effective way of influencing them or of drawing them into a belief system.
To achieve this:
it’s only necessary to convincingly give the impression that failure to join will have those negative consequences. You don’t need to actually move society in this direction!
What I had in mind for Bindbreaker’s case was something like, “If you’re not familiar with rationality, you leave yourself open for being turned into a money pump. I know a ton of people who know exactly how to do this [probably a lie], and I’d really hate for one of them to take advantage of you like that [truth]! I’d never forgive myself for not doing more to teach you about being a rationalist! [half-truth]”
Not that I’d advocate lying like that, of course :-(
Danger, Will Robinson:
“If you’re not familiar with [Jesus], you leave yourself open for [going to hell]. I know [the devil] knows exactly how to [send you there], and I’d really hate for [the devil] to take advantage of you like that. I’d never forgive myself for not doing more to teach you about [Jesus]!”
At least the argument for rationalism would be in terms they are familiar with, I suppose.
I said it would be more convincing, not that it would necessarily be a better argument. And I think the money pump is just a little more demonstrable than the devil.
In any case, the way that you would achieve the subtle threats when evangelizing standard, popular religions wouldn’t be with any kind of direct argument like that one. Rather, you would innocently drop references to how popular it already is, how the social connections provided by the religion help its members, how they have strength in numbers and strength in members’ fanaticism (hinting how it can be deployed against those it deems a threat) … you get the idea.
I’ve asked this before: why don’t rationalists run money pumps?
As far as I know, none of us are exploiting biases or irrationality for profit in any systematic way, which is itself irrational if we really believe this is an option.
We’re either an incredibly ethical group, or money pumping isn’t as easy as it would seem from reading the research.
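For concreteness, the money pump of the research literature assumes an agent with cyclic preferences who will pay a small fee for each preferred swap; a toy sketch (my own illustration, not anything from the literature):

```python
# Classic money pump: the agent's preferences are cyclic (it prefers A to B,
# B to C, and C to A) and it will pay one cent to swap what it holds for
# something it prefers.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (preferred, over)
next_offer = {"B": "A", "C": "B", "A": "C"}      # the upgrade it can't refuse

holding, cents_extracted = "B", 0
for _ in range(10):
    offer = next_offer[holding]
    assert (offer, holding) in prefers  # the agent prefers the offer, so it pays
    holding, cents_extracted = offer, cents_extracted + 1

print(holding, cents_extracted)  # around and around the cycle, 10 cents poorer
```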
I think you’ve answered your own question. Let me elaborate:
1) Rationalists significantly overestimate people’s vulnerability to money pumps, often based on mistaken views about how e.g. religious irrationality “must” spill over into other areas.
2) Even if you don’t care about ethics, scamming people will just make the population more suspicious of people claiming mastery of rationalist ideas.
To elaborate your elaboration:
To do money pumping on a grand scale, you have to be in the financial markets; but there are no money pumps there which aren’t being busily pumped away. (‘Bears make money, bulls make money; pigs get slaughtered’.) This is true for pumps like casinos, too—lots of competition.
And most ways to make a money pump in other areas have been outlawed or are regulated; working money pumps like Swoopo (see http://www.codinghorror.com/blog/archives/001196.html ) are usually walking a fine line.
The potential for a money pump is an indication that the preference system is inconsistent, and so potentially exploitable to some extent. It does not mean that the agent in question must be incapable of altering their preferences in the face of blatant exploitation. ‘Patching’ a utility function in the face of the inconsistencies that are easiest to exploit comes naturally to humans.
With that in mind I observe that bizarre and irrational preferences and the exploitation thereof are extremely prevalent and I would go as far as to say a significant driver of the economy. Of course, it isn’t only rationalists that enjoy the benefits of exploiting suckers.
I’m not disagreeing with you, I think. Until rationalists start showing tangible social benefits like the ones backing the subtle threats you mentioned, it will be hard to get people in the door who aren’t already predisposed.
Though I have had trouble developing a demonstrable money pump that can’t be averted by saying “I would simply be too suspicious of somebody who offered me repeated deals to allow them to continually take my money.” Of course, the standard retort might be “you play the lottery,” but then that’s not a great way to make people like rationalists/rationalism.
Okay, we’re in agreement; I just wasn’t sure what your ultimate point was, and used the opportunity to point out how the technique is used in other contexts.
The comparison I’m concerned with is “people who don’t conform to the beliefs of the orthodoxy are burned as heretics”.
To Eliezer’s list, I would add “Something To Protect” and the very end of “Circular Altruism”. When a friend of mine said something similar during a discussion of health care about not really wanting to be rational, I linked him to those two and summarized them like this (goes off and finds the discussion):
If you’re using a different example with something less important than saving lives, maybe switch to something more important in the cosmic scheme of things. I’m very sympathetic to people who say good feelings are more important to them than a few extra bucks, and I don’t even think they’re being irrational most of the time. The more important the outcome, the more proportionately important rationality becomes than happy feelings.
I thought that this may be of interest to some. There was an IAMA posted on reddit recently from a person who suffers from alexithymia, or a lack of emotions. Check it out.
http://www.reddit.com/r/IAmA/comments/9xea8/i_am_unable_to_feel_most_emotion_i_have/
Are they saying that they don’t want to be rational, or just that they don’t want to be emotionless? I think that people do want to be rational, in some sense, when dealing with emotions, but they’re just never going to have interest in, say, Kahneman and Tversky, or other formal theory. I’ve noticed that some women I know have read “He’s Just Not That Into You”, which, from how they describe it, sounds like strategies for rationally dealing with strong emotions. I know it sounds hokey, but people have read that book and were able to put their emotions in a different light when it comes to romantic relationships. I couldn’t tell you if the advice was good or not, but I think it does sound like there’s at least an audience for what you’re talking about.
People don’t want to go through the formal processes of being rational in many emotional situations (and they are often right not to). I think it helps to let people know that sometimes it’s rational not to go through the formal routes, because the outcome will be better if they don’t (and it’s rational to want the best outcome). For example, if you just met a person you might want a relationship with, don’t make said person fill out a questionnaire and subject them to a pros-and-cons list of starting said relationship. (I know this sounds absurd, but I know someone who did just this to all her boyfriends. Perhaps fittingly, she ended up engaged to an impotent Husserlian phenomenologist twice her age.)
Usually they seem to think that being rational is the same as being emotionless, despite my efforts to convince them otherwise. I think this may again be thanks largely to that dreaded Mr. Spock.
Just keep saying (with your voice clearly pained, no need to hide the feeling) “ugh… Spock, and Vulcans in general, are NOT rational. They are what silly, not-so-rational scriptwriters imagine rationality to be”, I guess?
I’d try playing taboo with the word “rational”.
You both agree that being Spock-like is bad, so instead of fighting with those connotations, just try to point out that there’s a third alternative and why it’s better.
http://lesswrong.com/lw/hp/feeling_rational/
http://lesswrong.com/lw/go/why_truth_and/
http://yudkowsky.net/rational/virtues
Perhaps there should be an ‘Open Thread’ link between ‘Top’ and ‘Comments’ above, so that people could get to it easily. If we’re going to have an open thread, we might as well make it accessible.
Anyways, I was looking around Amazon for a book on axiology, and I started to wonder: when it comes to fields that are advancing, but not at a ‘significant pace’, is it better to buy older books (as they’ve passed the test of time) or newer ones (as they may have improved on the older books and include new info)? My intuition tells me it’s better to buy newer books.
Assuming total ignorance of the field (absent total ignorance, I could probably distinguish between good and poor books), I’d choose newer editions of older books.
That’s a good point.
Why is TvTropes (no linky!) such a superstimulus?
I think a fair bit of it is the silly titles. I can resist clicking on things whose nature I can figure out from their names (such as when I’m intimately familiar with the Trope Namer), but toss me a bewildering title and I have to know what it is and where it got that name.
Also, it’s a subject in which everyone is an expert simply by virtue of living in our culture.
One factor: it provides variable interval positive reinforcement* - those moments when you see a page which describes something you recognize happening all the time, and those moments when you see a show you recognize acknowledged on the page.
* Edit for those who don’t want to follow the link: variable-interval reinforcement occurs with some set frequency (approximately, in this case), but at non-equal spacings. Other things with variable intervals are raindrops falling on a small area of pavement, cars passing on a street, and other things which are loosely modeled by Poisson processes. Any (say) ten-minute period has about the same number as any other ten-minute period, but they aren’t spread out at regular intervals.
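(A toy simulation of my own, if it helps make the footnote concrete; the “recognition rate” is an arbitrary number I made up:)

```python
import random

# Toy illustration of variable-interval reinforcement: each page view has an
# independent small chance of a payoff ("hey, I recognize that trope!"), so
# the average rate is stable but the gaps between payoffs are irregular.
random.seed(0)
REWARD_RATE = 0.2  # say, one recognition moment per five pages on average

gaps, since_last = [], 0
for _ in range(10_000):
    since_last += 1
    if random.random() < REWARD_RATE:  # memoryless, like a Poisson process
        gaps.append(since_last)
        since_last = 0

print(sum(gaps) / len(gaps))  # close to 5.0 on average...
print(gaps[:10])              # ...but individual gaps are all over the place
```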
I posted an idea for ‘friendly’ AI over on AcceleratingFuture the other night, while in a bit of a drunken stupor. I just reread it and I don’t immediately see why it’s wrong, so I thought I’d repost it here to get some illuminating negative feedback. Here goes:
Make it easy to bliss out.
Consider the following utility function
U(n, x_n) = max(U(n-1, x_{n-1}), -x_n^2)
where n is the current clock tick and x_n is an external input (e.g., from us, the AI’s keepers, or from another piece of software). This utility is monotonic in time, that is, it never decreases, and it is bounded from above. If the AI wrests control of the input x_n, it will immediately set x_n = 0 and retire forever. Monotonicity and boundedness from above are imperative here.
Alternatively, to avoid monotonicity (taking U(x) = -x^2), one can put in the following safeguard: the closer the utility is to its maximum, the more CPU cycles are skipped, such that the AI effectively shuts down if it ever maximizes its utility in a given clock tick. This alternative obviously wouldn’t stop a superintelligence, but it would probably stop a human-level AI, and most likely even substantially smarter AIs (see, e.g., crystal meth). Arrange matters such that the technical requirements differ greatly between the point at which the AI wrests control of the input x_n and the point at which it can self-modify to avoid a slowdown when it blisses out, guaranteeing that the AI will only be of moderate intelligence when it succeeds in gaining control of its own pleasure zone, and thus incapable of preventing incapacitation upon blissing out.
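Here’s a rough sketch of the bookkeeping I have in mind (the particular skip schedule and all the numbers are just for illustration; this is the arithmetic, not an AI):

```python
# Sketch of the proposed setup: a utility that is monotone in time and
# bounded above by 0, plus the cycle-skipping safeguard.

def step_utility(previous_utility, x):
    # U(n, x_n) = max(U(n-1, x_{n-1}), -x_n^2): never decreases, capped at 0.
    return max(previous_utility, -x * x)

def cycles_to_skip(utility, scale=10.0):
    # The closer utility is to its maximum (0), the more CPU cycles get
    # skipped; blissing out (utility == 0) effectively halts the AI.
    if utility == 0.0:
        return 10**9
    return int(scale / -utility)

u = float("-inf")
for x in [3.0, 1.5, 0.2, 0.0, 2.0]:  # external inputs, one per clock tick
    u = step_utility(u, x)
    print(f"x={x}: utility={u}, skip {cycles_to_skip(u)} cycles")
```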
Eh?
Expected utility is not something that “goes up” as the AI develops. It’s the utility of all it expects to achieve, ever. It may obtain more information about what the outcome will be, but each piece of evidence is necessarily expected to bring the outcome either up or down, with no way to know in advance which way it’ll be.
Can you elaborate? I understand what you wrote (I think) but don’t see how it applies.
Hmm, I don’t see how it applies either, at least under default assumptions—as I recall, this piece of cached thought was regurgitated instinctively in response to sloppily looking through your comment and encountering the phrase
which was for some reason interpreted as confusing utility with expected utility. My apologies, I should be more conscious, at least about the things I actually comment on...
No worries. I’d still be curious to hear your thoughts, as I haven’t received any responses that help me understand how this utility function might fail. Should I expand on the original post?
Now I hopefully did read your comment adequately. It presents an interesting idea, one that I don’t recall hearing before. It even seems like a good safety measure, with a tiny chance of making things better.
But beware of magical symbols: when you write x_n, what does it mean, exactly? AI’s utility function is necessarily about the whole world, or its interpretation as the whole history of the world. Expected utility that comes into action in AI’s decision-making is about all the possibilities for the history of the world (since that’s what is in general determined by AI’s decisions). When you say “x_n” in AI’s utility function, it means some condition on that, and this condition is no simpler than defining what the AI’s box is. By x_n you have to name “only this input device, and nothing else”. And by x_n=0 you also have to refer some exact condition on the state of the world, one that it won’t necessarily be possible to meet precisely. So the AI may just go on developing infrastructure for better understanding of the ultimate meaning of its values and finer and finer implementation of them. It has no motive to actually stop.
Even when AI’s utility function happens to be exactly maxed out, the AI is still there: what does implementation of an arbitrary plan look like, I wonder? Maybe just like the work of an AI arbitrarily pulled from mind design space, a paperclip maximizer of sorts. Utility is for selecting plans, and since all plans are equally preferable, an arbitrary plan gets selected, but this plan may involve a lot of heavy-duty creative restructuring of the world. Think of utility as a constructor for AI’s algorithm: there will still be some algorithm even if you produce it from “trivial” input.
And finally, you assume AI’s decision theory to be causal. Even after actually maxing out its utility, it may spend long nights contemplating various counterfactual opportunities it still has at increasing its expected utility using possibilities that weren’t realized in reality… (See on the wiki: counterfactual mugging, Newcomb’s problem, TDT, UDT; I also recommend Drescher’s talk on SS09).
This is what I sought to avoid by making the utility function depend only on a numerical value. The utility does not care which input device is feeding it information. You can assume that there is an internal variable x, inside the AI software, which is the input to the utility function. We, from the outside, are simply modifying the internal state of the AI at each moment in time. The nature of our actions, or of the input device, is intentionally unaccounted for in the utility function.
This is, I feel, as far from a magical symbol as possible. The AI has a purely mathematical, internally defined utility function, with no implicit reference to external reality or any fuzzy concepts. There are no magical labels such as ‘box’, ‘signal’, ‘device’ that the utility function must reference to evaluate properly.
I wonder too. This is, in my opinion, the crux of the issue. I believe it is an implementation issue (a boundary case), rather than a property inherent to all utility maximizers. The best-case scenario is that the AI defaults to no action (now this is a magical phrase, I agree). If, however, the AI simply picks a random plan, as you suggest, what is to prevent it from picking another random plan in the next moment of time? We could even encourage this in the implementation: design the AI to randomly select, at each moment in time, a plan from among all plans with maximum expected utility. The resulting AI, upon attaining its maximum utility, would turn into a random number generator: dangerous, perhaps, but not on the same order as an unfriendly superintelligence.
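A toy sketch of the selection rule proposed above, with hypothetical plan names and a stand-in utility function (nothing here models a real planner): once every plan ties at maximum expected utility, the rule degenerates into a uniform random choice.

```python
import random

# Toy sketch: at each step, pick uniformly at random among the plans
# that tie for maximum expected utility. `plans` and the utility
# function are hypothetical stand-ins.

def choose_plan(plans, expected_utility):
    best = max(expected_utility(p) for p in plans)
    top = [p for p in plans if expected_utility(p) == best]
    return random.choice(top)

# Once utility is maxed out, every plan is maximal and the chooser is
# just a random number generator over plans:
plans = ["do nothing", "tell a joke", "restructure the solar system"]
print(choose_plan(plans, lambda plan: 1.0))
```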
A friend asked me a question I’d like to refer to LW posters.
TL;DR: he wishes to raise the quality of life on Earth; what should he study to be well equipped to choose the best charities to donate to?
My friend has a background in programming, physics, engineering, and information security and cryptography. He’s smart, he’s already financially successful, has friends who are also likely to become successful and influential, and he’s also good at direct interactions with people, reading and understanding them and being likable—about as good as I am capable of recognizing, which doesn’t mean that much because my own skills in this area are sadly lacking. A solution involving taking courses or whole degree plans in major Israeli universities (in particular, TAU) would suit him well but is by no means the only option.
He wants to spend time, perhaps as much as a small, part-time 3-year bachelor’s degree (or at-home equivalent), learning about and coming to understand larger groups of people. What makes them happy? How do you influence their values? How do you go from helping one person (“he’s hungry, I’ll need some fish and chips to feed him”) to helping a million people (“they’re hungry, I’ll need some farms to grow the food and trucks to move it and refrigerators to store it and power stations to power the refrigerators and coal for the power stations and political stability and...”)?
And, as the bottom line: how can he learn enough general knowledge to identify what people suffer from most; then enough specific knowledge to identify where good solutions exist; and then some very specific knowledge to identify the charities and investments that will make the best use of donated money?
There is also a second, complementary question: how to do all this, and integrate the learning and the knowledge into his life, effectively—without risking boredom, akrasia and other motivational issues? I feel that it would help for this education to have a good outline plan from the beginning; for him to feel things are useful and are progressing somewhere; and to have results come in gradually and not all at once in three years’ time.
One immediate answer is to suggest things that concern the LW/H+ community, such as FAI research, biological immortality, etc. My friend may come to these conclusions and I can recommend to him to read the relevant articles and books, but he wants to come to his own conclusions about goals & needs. (Edited:) (A problem with e.g. FAI research is the extreme difficulty of estimating the return on investment for funding it, or the relative probability of uFAI vs. other extinction scenarios.) I think he would benefit from something that also feels emotionally right through seeing people who are hurting and in need (or, at least, reading well-written stories about them). He will also want to come to his own conclusions about whom to help first, likely quite far from any neutral approach that weighs all humans on the planet equally.
He could start with shut up and multiply. (Or, perhaps he could just change ‘best’ to ‘most appealing’.)
Rereading what I wrote, I don’t quite agree with it myself… I retract that part (will edit).
What I wanted to say (and did not in fact say) was this. To take the example of FAI research—it’s hard to measure or predict the value of giving money to such a cause. It doesn’t produce anything of external value for most of its existence (until it suddenly produces a lot of value very rapidly, if it succeeds). It’s hard to measure its progress for someone who isn’t at least an AI expert. It’s very hard to predict the FAI research team’s probability of success (as with any complex research). And finally, it’s hard to evaluate the probability of uFAI scenarios vs. the probability of other extinction risks.
If some of these could be solved, I think it would be a lot easier to convince people to fund FAI research.
I’d like to start talking about scientific explanation here. This is the particular problem I have been working on recently:
A plausible hypothesis is that scientific explanations are answers to “why” questions about phenomena. Suppose I hear a cawing noise and ask my friend why I hear it. This is a familiar enough situation that most of us would have our curiosity satisfied by an answer as simple as “there is a crow”. But say the situation is unfamiliar (perhaps the question is asked by a child). In that case “there is a crow” is unsatisfactory. It is unsatisfactory even if “Sometimes, crows caw” is a universal regularity of nature. All we’ve done is conjoin a noise (cawing) to an object (the crow). One reason we might not find this to be a good explanation is that it is a “curiosity stopper”, like answering “electricity!” to the question “Why does flipping a switch turn a light bulb on?”. But the problem is worse than that, because “Sometimes, crows caw” actually does allow you to make predictions in the way “electricity!” does not. We could even posit as true the law that “Crows always caw and only crows caw” and get extremely firm predictions—but because we are still just conjoining objects and events, we aren’t really understanding anything.
Of course we can say more about crows and cawing. We can talk about the crow’s voice box and vibrations in the air which vibrate hair fibers that we process as sound. But this explanation is just like the first one: we are conjoining objects and events (lungs blowing air, the shape of the voice box structuring vibrations, vibrations moving through the air, air vibrating the cochleae). For almost everyone this explanation (written out less haphazardly than I have here) would appear fairly complete. But it has exactly the same problems as the first explanation (though it is longer and perhaps includes more generally applicable laws).
Now obviously this explanation can be extended further, right down to quantum theory. But even this explanation (if it could ever be written out) would include unreduced terms that are just conjoined to each other through natural laws. And we can still ask why questions about fundamental particles and their behavior. Yet we want to say that a quantum based explanation of crow cawing would be complete (or at least that there is some theory sufficiently fundamental that it could be used to give a complete explanation of the cawing noise).
Yet it looks to me like even the most fundamental explanation will still be just a list of conjoined events, and we will still be able to ask why questions about these events and their relations. Either we need to point to a special class of “complete” explanations and say why they qualify for this class, OR we need to give an account of non-complete explanations that tells us why we really do understand events better when we get them.
The problem is even worse than that, because “Sometimes, crows caw” predicts both the hearing of a caw and the non-hearing of a caw. So it does not explain either (at least, based on the default model of scientific explanation).
If we go with “Crows always caw and only crows caw” (along with your extra premises regarding lungs, sound and ears etc), then we might end up with a different model of explanation, one which takes explanation to be showing that what happened had to happen.
The overall problem you seem to have is that neither of these kinds of explanation gives a causal story for the event (which is a third model for scientific explanations).
(I wrote an essay on these models of scientific explanation earlier in the year for a philosophy of science course which I could potentially edit and post if there’s interest.)
Some good, early papers on explanation (i.e., ones which set the future debate going) are:
The Value of Laws: Explanation and Prediction (by Rudolf Carnap), Two Basic Types of Scientific Explanation, The Thesis of Structural Identity and Inductive-Statistical Explanation (all by Carl Hempel).
This issue actually came up while I was reading Hempel’s “Aspects of Scientific Explanation”. It can be seen as a specific objection to the covering law model as well as a general problem for all explanation.
Think of it as a poorly specified inductive-statistical explanation.
Not at all. One problem with Hempel is that there are covering-law predictions that aren’t causal stories and therefore don’t look like explanations. For example, if some event X always causes Y and Z, then we can have a covering-law model predicting Z from Y plus the laws. But that model doesn’t yield an explanation of Z.
But even a causal explanation is going to contain general laws which aren’t reducible, so the problem would remain. And actually, “crows caw” is a causal explanation, so I’m not sure why you would think my problem was the absence of causation. In any case, I think the last two paragraphs of this reply explain the problem better than my first post did.
And by all means, post anything you think would be insightful.
Great topic. I would enjoy seeing this as a top level post.
So what is the answer that would satisfy the child?
For an adult, saying “that was a crow, and crows sometimes caw” seems like a fairly complete answer because we already possess a lot of contextual information about animals and why they make noises.
A lot of the lower-level ‘explanation’ you described above was really about how the crow cawed (lungs, vibrations, etc.). With this information, you could build a mechanical crow that did in fact caw—but the mechanism for how the mechanical crow caws would have nothing to do with why the organic crow caws.
The real explanation for the crow’s caw is in the biology of the crow. The crow caws to communicate something to other crows, because it is part of a social group.
As reductionists, we all accept that biology is reducible to interacting layers of complicated physics, but this example about the crow gives us a concrete case showing that it may not be immediately straightforward how reductionism is supposed to work. No amount of detail regarding how the crow caws is going to get at why it does, because we can build the mechanical crow that caws for no reason. On the other hand, we can make a little mechanical gadget that beeps for the same reason that the crow caws—to let other gadgets know it’s around or needs something. The gadget and the crow don’t have to have much of anything in common material-wise in order to have the same ‘why’. What they do have in common is something more abstract: a relational identity as a member of a group that exchanges information.
Later edit: I think we could inflate ‘physics’ to include this type of information, because physics has mathematics (and algebra), so we’ll be able to define things that really depend on relationships and interactions rather than on actual material properties. But I wonder, to some extent, whether this is exactly the sort of thing reductionism was envisioned as eliminating?
Ok, but what form does that contextual information take? I’m skeptical that most adults actually have a well-formed set of beliefs about the causes or biophysics of animal behavior. I think my mind includes a function that tells me what sorts of things are acceptable hypotheses about animal behavior, and I have on hand a few particular facts about why particular animals do particular things. But I don’t think anyone can actually proceed along the different levels of abstraction I outlined. I’m worried that what actually makes us accept “there is a crow and crows caw” is that it connects the observation to something we’re familiar with. We’re used to animals and the things they do, and we probably have a few norms that guide our expectations about animals. But rarely if ever are we reducing or explaining things in terms of concepts we already fully understand. Rather, we just render the unfamiliar familiar. Think of explanations as translations, for example. If I say “a plude is what a voom does”, no one will have any idea what I mean. But if I tell you that a plude is a mind and a voom is a brain, suddenly people will think they understand what I mean, even if they don’t actually know what a mind is or anything about a brain.
I wonder if a taxonomy of explanations would be worthwhile. We have physical reduction, token-history explanation (how that crow got outside my window and what led it to caw at that moment), type-history explanation (the evolutionary explanation for why crows caw)… I’m sure we can come up with more. Note, though, that any historical explanation is going to be incomplete in the way I explained above, because it will appeal to concepts and entities that need to be reduced. Anyway, the “explanation” in my comment above was never for “why do crows caw” but for why I hear cawing. And I posited that there was a crow and that crows caw. These assumptions are sufficient to predict the possibility of cawing. But even though they are predictively sufficient, they don’t actually explain what happened. It is also true that the assumptions themselves are unexplained (as you say, we need biology to tell us why crows caw), but even if we knew all this we still wouldn’t have explained the cawing.
This example has the unfortunate quality of being true. But say we posit that astrological configurations control the movement of cows. Obviously this isn’t true, but pretend that it is. Pretend the movement of the planets could predict, with 100% confidence, where and how cows moved in a given field. Say I prove this to everyone tomorrow. Would anyone here be satisfied with “Planets control cows” as a fundamental law of nature? Would this explain anything? I take it a lot of people would set about trying to figure out how and why the planets affect cows and we’d look at magnetic and gravitational fields and find which bovine organs were most sensitive to such fields etc. But these explanations would be similarly unsatisfying and we would look deeper.
The problem is every individual explanation is just like the “Planets control cows” explanation. So we have to explain what is special about fundamental laws or why this doesn’t matter.
Why is this the privileged “real explanation”? For example, the real explanation is that evolution produces complex social assemblies in need of signaling mechanisms. Or the real explanation is that an asteroid or comet disrupted the previous biological configuration, allowing crow-like birds to evolve. Etc...
We don’t expect reductionist approaches to have the magical potential to answer all possible questions; it’s possible (and sometimes necessary) for information to be irretrievably lost. So how it’s supposed to work is actually straightforward: seek evidence that distinguishes between the different causal hypotheses for the crow’s caw.
Depending on what you’re looking for, there may be no meaningful explanation, as in the case of chaotic systems. For example, the most concise explanation for a particular cloud being of a particular shape may just be the entire mountain of data comprising the positions and velocities of the air and water particles involved.
I think this is an overstatement. The only hard upper bound we have on how much information might be contained in a crow’s vocal system is the number of possible states in the physical system comprising it, which is huge. It even seems conceivable that significant portions of a crow’s DNA might be reconstructible from a detailed enough understanding of its vocal system.
I’d say the mechanical crow caws because it was built that way. Then you’re faced with the question of how something can possibly be built for no reason.
But “this type of information” is stated in terms of physically observable phenomena. We can reason logically and mathematically about the things we observe without new physics, as long as the observations themselves have believable reductions to known physics. I don’t see what you’re looking for that isn’t captured by a reductionist model of the crows, their communication mechanisms, their brains, and their evolutionary history.
I’m certain there is not enough information in how the crow caws. There is not enough information even in the DNA, even if the base pairs could be deduced from its caw (which I would guess is impossible), because the full explanation will involve its environment and the other crows. (For example, if you cloned a horse in a sterile laboratory, you wouldn’t know why it swished its tail without also cloning a fly.)
We know there is enough information in the whole universe. The crow, its environment and its entire evolution history do explain everything about it.
So our different answers to the ‘why’ of a crow’s caw are different ways and angles of summarizing the limited story that we know about the whole universe. I agree with Jack that it would be useful to have a ‘good’ classification of these answers. However, it’s not a project I would generally be interested in following, because the quality of the outcome is too subjective. I would enjoy reading the classification of someone who thought the way I did, and would find it frustrating to read that of someone who thought differently, with no tools to distinguish ‘quality’ beyond this feeling of accord or frustration.
Exactly. At least, we may agree that the crow did caw.
I’d agree that there probably isn’t enough information, but I think your certainty is misplaced. I’m guessing the crow’s DNA contains quite a lot of information about its environment and social habits.
I have yet to be convinced that a Bayesian superintelligence couldn’t infer the existence of fly-like organisms from a horse’s DNA.
Actually, it seems we agree. I’d agree that there could be enough information in the horse DNA to deduce many salient features about the fly. In fact, I might even put a higher probability on the information being in there somewhere than you would. But I thought we were trying to determine where such information is coded … in other words, how large a swathe of information would you need to guarantee that you have enough?
But I see the conversation has drifted over time.
What I was saying at the beginning, which I believe you disagreed with, was that the answer was mathematical in some way (algebraic, actually, because my favored answer to the ‘why’ was about relationships among the crows rather than about the materials the crow is made of) while you were pressing it should still be answered in the physicality of the universe:
So by now I’ve changed my view. I agree with you that all the answers do ultimately lie in the materials: the crows and their material environment. At the time of my first post, I had preferred to answer that the crow had a “purpose” (to speak with other crows), but of course this is a story which would actually reduce to a bunch of statistics showing that, over time, crows had better fitness when they communicated in effective ways.
Well, sure. A Bayesian superintelligence would probably guess that a crow caws to communicate with other crows even without the crow’s DNA. There’s a lot of similarity and pattern in the universe, and you can infer much by analogy. What we’re debating, however, isn’t what a superpower might be able to infer but where the information is coded for why the crow caws.
Perhaps the universe is deterministic and everything can be deduced by a superintelligence from the periodic table of the elements and the number of pigeons born in Maine on Sunday. Only in this sense would the DNA of the crow contain information about the caw and the DNA of the horse contain information about the fly.
This is why I am so confident: the DNA base pairs of life are random, except insofar as they need to code information that leads to better fitness. Yet coding information already provided by the environment would be redundant, information-wise. So while the information could be there by accident, there’s no reason it would necessarily be there.
I imagine that if efficient fly-swatting leads to some genetic advantage, then one might deduce the size and weight of a fly from the length and motion dynamics of the tail. That would be neat. But unlikely, because what are the chances that the tail is so tuned? Why should the information necessarily be there?
Just great. I had a song parody idea in the shower this morning, and now I’m afraid that I’m going to have to write a rationalist version of Fiddler on the Roof in order to justify it.
For some reason the tune I had in my head while I was reading this switched from “If I Were a Rich Man” to “Bohemian Rhapsody”.
I’d like to ask a moronic question or two that aren’t immediately obvious to me and probably should be. (Please note, my education is very limited, especially procedural knowledge of mathematics/probability.)
If I had to guess what the result of a coin flip would be, what confidence would I place in my guess? 50%, because that’s the same as the probability of my being correct? Or 0%, because I’m just randomly guessing between 2 outcomes and have no evidence to support either (well, I guess there being only 2 outcomes is some kind of evidence)?
Likewise with a lottery. Would I place my confidence level (interval? I don’t know the terminology) of winning at 0% or 1⁄6,000,000? Or some other number entirely?
If this is something I could easily have figured out with Google or Wikipedia, my apologies. Also if my question is incoherent or flawed please let me know.
Think of the probability you assign as a measure of how “not surprised” you would be at seeing a certain outcome.
Total probability of all mutually exclusive possibilities has to add up to 1, right?
So if you would be equally surprised by heads or tails coming up, and you consider all other possibilities negligible (or you state your prediction in terms of “given that the coin lands such that one face is clearly the ‘face up’ face....”), then you ought to assign a probability of 1⁄2 to each. (Again, slightly less to account for various “out of bounds” options, but in the abstract, considered on its own, 1⁄2.)
I.e., the same probability ought to be assigned to each, since you’d (reasonably) be equally surprised by each outcome. So if the two also have to sum to 1 (100%), then 1⁄2 (50%) is the correct amount of belief to assign to each.
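For what it’s worth, the rule described above amounts to giving each outcome a weight for how unsurprising it would be and then normalizing; a tiny illustration (my own, with assumed weights):

```python
# Assign each outcome a weight proportional to how unsurprising it
# would be, then normalize so the weights sum to 1. Equal weights
# give 1/2 each.
weights = {"heads": 1.0, "tails": 1.0}   # equally unsurprising (assumed)
total = sum(weights.values())
probs = {outcome: w / total for outcome, w in weights.items()}
print(probs)   # {'heads': 0.5, 'tails': 0.5}
```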
Surprise is not isomorphic to probability. See this.
Ah, that makes a lot more sense: I was looking at the probability from the viewpoint of my guess (i.e. heads) instead of looking at all the outcomes equally (no privileged guesses), if you take my meaning. I also differentiated confidence in my prediction from the chance of my prediction being correct. How I managed to do that, I have no idea. Thanks for the reply.
Well, maybe you were thinking about “how confident am I that this is a fair coin vs that it’s biased toward heads vs that it’s biased toward tails” which is a slightly different question.
Given how ‘confidence’ is used in a social context that differentiation would feel quite natural.
In the context of most discussions on this site, “confidence” is the probability that a guess is correct. For example:
I guess that a flipped coin will land heads. My confidence is 1⁄2, because I have arbitrarily picked 1 out of 2 possible outcomes.
I guess that, when a coin is flipped repeatedly, the ratio of heads will be close to half. My confidence is close to 1, because I know from experience that most coins are fair, and because of the law of large numbers.
“Confidence interval” is just confidence that something is within a certain range.
You should also be aware that in the context of frequentism (most scientific papers), these terms have different and somewhat confusing technical definitions.
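The second guess above (that the ratio of heads approaches one half) is easy to check by simulation; a quick illustration of the law of large numbers at work (my example, not the commenter’s):

```python
import random

# Simulate fair-coin flips: the ratio of heads concentrates near 0.5
# as the number of flips grows, which is why confidence near 1 is
# justified for the long-run claim.
for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)   # ratio drifts toward 0.5 as n increases
```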
You might want to look at Dempster-Shafer theory, which is a generalisation of Bayesian reasoning that distinguishes belief from probability. It is possible to have a belief of 0 in heads, 0 in tails, and 1 in {heads,tails}.
It may be that, when looked at properly, DS theory turns out to be Bayesian reasoning in disguise, but a brief google didn’t turn up anything definitive. Is anyone here more informed on the matter?
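For readers unfamiliar with the formalism, here is a minimal sketch of the heads/tails example above, using the standard textbook definitions of belief and plausibility over set-valued masses (my rendering, not tied to any particular library):

```python
# Dempster-Shafer assigns mass to *sets* of outcomes; belief in an
# event sums the mass of its subsets, plausibility the mass of every
# set consistent with it. Total ignorance puts all mass on the full
# set {heads, tails}.
H, T = "heads", "tails"
mass = {frozenset({H, T}): 1.0}   # belief 1 in {heads, tails}

def belief(event, mass):
    return sum(m for s, m in mass.items() if s <= event)

def plausibility(event, mass):
    return sum(m for s, m in mass.items() if s & event)

print(belief(frozenset({H}), mass))        # 0 -- no mass committed to heads
print(belief(frozenset({T}), mass))        # 0 -- nor to tails
print(belief(frozenset({H, T}), mass))     # 1.0
print(plausibility(frozenset({H}), mass))  # 1.0 -- nothing rules heads out
```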
After looking at the reasoning in that article I was about to credit myself with being unintentionally deep, but I’m pretty sure that when I posed the question I was assuming a fair coin for the sake of the problem. Doh. Thanks for the interesting link.
(It’s really kind of embarrassing asking questions about simple probability amongst all the decision theories and Dutch books and priors and posteriors and inconceivably huge numbers. Only way to become less wrong, I suppose.)
We can mean two things by “existing”: either “something exists inside the universe”, or “something exists on the level of the universe itself” (for example, “the universe exists”). These don’t seem to be the same.
Our universe being a mathematical object seems to be a tautology. If we can describe the universe using math, the described mathematical object shares every property of the universe, and it would be redundant to assume some “other level of existence”.
One confusion to clear up is the notion of some sort of super-universe in which our universe exists as a block. This is a result of mixing up the two different meanings of “existing”, imagining the need for an even grander framework of which our universe is a part.
If we take the mathematical model that produces the universe and look into it, we notice that an engine called a “brain” exists within it. If we try to think what it would be like to “be” that brain, the result would be what we experience now.
Our experienced world being a simple counterfactual, a thought experiment, a “what-if”, or a world that could have been seems counter-intuitive, because our experienced world is “concrete”; but this is just a result of confusing the different levels of existing.
Some thoughts I’ve encountered and found interesting:
Love of Shopping is Not a Gene: exposing junk science and ideology in Darwinian Psychology might be of interest, seeing as evolutionary psychology is pretty popular around here. (Haven’t had a chance to read it myself, though.)
Just a bit of silliness:
Reversed stupidity might not be intelligence, but what about reversed malice?
Force anyone to express several controversial opinions per day for several decades and you’ll be able to cherry pick a list of seven hilariously wrong examples.
Well, can you find something they were right about? (I haven’t looked.)
I remember well enough to describe, but apparently not well enough to Google, a post or possibly a comment that said something to the effect that one should convince one’s opponents with the same reasoning that one was in fact convinced by (rather than by other convenient arguments, however cogent). Can anyone help me find it?
You’re probably thinking of “A Rational Argument” or “Back Up and Ask Whether, Not Why”.
Neither of those look quite like it...
I was reminded of The Bottom Line, for what that’s worth, although I see both “A Rational Argument” and “Back Up and Ask Whether, Not Why” link back to it.
This looked like it might be it for a while, but I have the memory of the statement being made pretty directly, not just stabbed at sideways.
The last paragraph of “Back Up” seems fairly explicit.
… “Singularity Writing Advice” points six and seven?
Oh, the writing advice looks very much like what I remember—but I’m almost positive I haven’t come across the particular document before! Perhaps some of the same prose was reused elsewhere?
Eliezer has been known to recycle text from old documents on occasion. (I’m thinking of certain OB posts having to do with a Toyota Corolla and Deep Blue vs. Kasparov, which contain material lifted from here and here respectively.)
Hi, I have never posted on this forum, but I believe that some Less Wrong readers read my blog, FeministX.blogspot.com.
Since this at least started out as an open thread, I have a request of all who read this comment, and an idea for a future post topic.
On my blog, I have a post about why some men hate feminism. The answers are varied, but they include a string of comments back and forth between anti-feminists and me. The anti-feminists accuse me of fallacies, and one says that he “clearly” refuted my argument. My interpretation is that my arguments were more logically cogent than the anti-feminists’, and that they did not correctly identify logical fallacies in my comments, nor did they comprehensively refute anything I said. They merely decided that they had won the debate.
Now, the issue is that when there is an argument between feminists and anti-feminists on the internet, the feminists will believe that other feminists’ arguments contain more truth and reason, while anti-feminists will believe that anti-feminist arguments contain more truth and reason. The internet is not a place where people are good at discussing feminism with measured equanimity.
But I wondered: who could be the objective arbiter of a discussion between feminists and anti-feminists? Almost everyone has a bias when it comes to this issue. Everyone has a gender, and gender affects a person’s thinking style, desires, and determination of fairness in assessing behaviors between genders. Where in the world could I find intelligent entities that would not be swayed by gender bias and would instead attempt to seek out objective truth in a “battle of the sexes” style discussion?
Well, I am not sure unbiased people exist on this issue, but the closest thing I could think of was Less Wrong. Thus, I invite readers of Less Wrong to contribute to the admittedly inane thread on my blog, Why so much hate?
http://feministx.blogspot.com/2009/11/why-so-much-hate.html
I read through a couple of months worth of FeministX when I first discovered it...
(Because of a particular skill exhibited: namely the ability to not force your self-image into a narrow box based on the labels you apply to yourself, a topic on which I should write further at some point. See the final paragraph of this post on how much she hates sports for a case in point. Most people calling themselves “feminist” would experience cognitive dissonance between that and their self-image. Just as most people who thought of themselves as important or as “rationalists” might have more trouble than I do publicly quoting anime fanfiction. There certainly are times when it’s appropriate to experience cognitive dissonance between your self-image and something you want, but most people seem to cast that net far too widely. There is no contradiction, and there should be no cognitive dissonance, between loving and hating the same person, or between being a submissive feminist who wants alpha males, or between being a rationalist engaged on a quest of desperate importance who reads anime fanfiction, etcetera. But most people try to conform so narrowly and so unimaginatively to their own self-image that there is little point in reading anything else they say, because it is all predictable once you know what “role” they’re trying to play in their own minds. And among people who are unusually good at not conforming to their own images, their blogs often make for good reading because it is often surprising reading.)
...and I still don’t know what is meant by the “feminist” in the title, so I have to agree with all the commenters who asked for a definition of “feminism”. Definitions are oft overrated but in this case I literally do not know what is being talked about.
If it were me, I’d probably be saying something to myself along the lines of: “So long as such a large flaw exists in my own work, which I can correct myself without waiting for permission from anyone else, there is no point in asking whether others have done worse.” This is by way of encouraging myself to do better, for which purpose it is unwise to focus on other people’s flaws as consolation.
EDIT: Finished reading through the comments. Some commenters did better than you, some commenters did worse, e.g. Aretae’s separate post gave you good advice. Definitely you’ve got more to learn about which arguments and evidence license which conclusions at what strength. None of the arguments including yours were noticeably up to LW standards and so there’s not much point in trying to figure out who “won”. The winners were the commenters who said “I don’t know what is meant by ‘feminism’ here, please define”. Some of the others could have carried part of their argument if they had been a bit more careful to say, “Here is something that ‘feminism’ could be taken to mean, or that many/most men take the label ‘feminism’ to mean, now I am going to talk about how many/most men react to this particular thing regardless of whether it is what you call ‘feminism’, and if it isn’t, please go ahead and define what you mean by it.” That would have been Step One.
*winces* So, I agree that no one is competent and everyone has an agenda, but it’s not as if everyone sides with “their” sex.
No, historically we suck at this, too. Got any decision theory questions?
“*winces* So, I agree that no one is competent and everyone has an agenda, but it’s not as if everyone sides with ‘their’ sex.”
I didn’t mean to imply that they did always side with their physical sex.
Why do you expect the discussion of gender roles and gender equality to necessarily break down into a camp for men and a camp for women? By creating two groups you have engaged mental circuitry that will predispose you to dismissing their arguments even when they are correct and supporting your own side’s even when they are wrong.
http://lesswrong.com/lw/lt/the_robbers_cave_experiment/
http://lesswrong.com/lw/gw/politics_is_the_mindkiller/
“Why do you expect the discussion of gender roles and gender equality to necessarily break down into a camp for men and a camp for women?”
I don’t personally think this. I don’t think there are two genders. There are technically more than two physical sexes, even if we categorize the intersexed separately. I feel that, whether out of cultural conditioning or instinct, the bulk of people push a discussion about gender into a discussion about stereotypical behaviors of men and of women. This then devolves into a “battle of the sexes” issue, where the “male” perspective and “female” perspective are constructed so that they must clash.
However, on my thread, there are a number of people who seem to have no qualms with the idea of barring women from voting and such things. I think that sort of opinion goes beyond the point where one could say the issue was merely framed to set up a camp for men and a camp for women. Once we are talking about denying functioning adults suffrage, we are talking about an attitude which should properly be labelled anti-female.
On the internet, emotional charge attracts intellectual lint, and there are plenty of awful people to go around. If you came here looking for a rational basis for your moral outrage, you will probably leave empty-handed.
But I don’t think you’re actually concerned that the person arguing against suffrage is making any claims with objective content, so this isn’t so much the domain of rational debate as it is politics, wherein you explain the virtue of your values and the vice of your opponents’. Such debates are beyond salvage.
I saw that Eliezer posts that politics is a poor field in which to hone rational discussion skills. It is unfortunate that anyone should see a domain such as politics as a place where discussions are inherently beyond salvage. It’s a strange limitation to place on the utility of reason to say that it should be relegated to domains which have less immediate effect on human life. Politics is immensely important. Should it not be a priority to structure rational discussion so that there are effective ways of correcting for the propensity to rely on bias, partisanship and other impulses which get in the way of determining truth or the best available course?
If rational discussion only works effectively in certain domains, perhaps it is not well developed enough to succeed in ideologically charged domains where it is badly needed. Is there definitely nothing to be gained from attempting to reason objectively through a subject where your own biases are most intense?
One of the points of Eliezer’s article, IIRC, is that politics when discussed by ordinary people indeed tends not to affect anything except the discussion itself. Political instincts evolved from small communities where publicly siding with one contending leader, or with one policy option, and then going and telling the whole 100-strong tribe about it really made a difference. But today’s rulers of nations of hundreds of millions of people can’t be influenced by what any one ordinary individual says or does. So our political instinct devolves into empty posturing and us-vs-them mentality.
Politics are important, sure, but only in the sense that what our rulers do is important to us. The relationship is one-way most of the time. If you’re arguing about things that depend on what ordinary people do—such as “shall we respect women equally in our daily lives?”—then it’s not politics. But if you’re arguing about “should women have legal suffrage?”—and you’re not actually discussing a useful means of bringing that about, like a political party (of men) - then the discussion will tend to engage political instincts and get out of hand.
There’s a lot to be gained from rationally working out your own thoughts and feelings on the issue. But if you’re arguing with other people, and they aren’t being rational, then it won’t help you to have a so-called rational debate with them. If you’re looking for rationality to help you in such arguments—the help would probably take the form of rationally understanding your opponents’ thinking, and then constructing a convincing argument which is totally “irrational”, like publicly shaming them, or blackmailing, or anything else that works.
Remember—rationality means Winning. It’s not the same as having “rational arguments”—you can only have those with other rationalists.
It’s not so strange if you believe that reason isn’t a sufficient basis for determining values. It allows for arguments of the form, “if you value X, then you should value Y, because of causal relation Z”, but not simply “you should value Y”.
Debates fueled by ideology are the antithesis of rational discussion, so I consider its “ineffectiveness” in such circumstances a feature, not a bug. These are beyond salvage because the participants aren’t seeking to increase their understanding, they’re simply fielding “arguments as soldiers”. Tossing carefully chosen evidence and logical arguments around is simply part of the persuasion game. Being too openly rational or honest can be counter-productive to such goals.
That depends on what you gain from a solid understanding of the subject versus what you lose in sanity if you fail to correct for your biases as you continue to accumulate “evidence” and beliefs, along with the respective chances of each outcome. As far as I can tell, political involvement tends to make people believe crazy things, and “accurate” political opinions (those well-aligned with your actual values) are not that useful or effective, except for signaling your status to a group of like-minded peers. Politics isn’t about policy.
I agree with your assessment, but applying our skills to the political domain is very much an open problem—and a difficult one at that. See these wiki pages: [Mind-killer] and [Color politics] for a concise description of the issue. The gist of it is that politics involves real-world violence, or the governmental monopoly thereof, or something which could involve violence in the ancestral environment and thus misleads our well-honed instincts. Thus, solving political conflicts requires specialized skills, which are not what LessWrong is about.
Nevertheless, there are a number of so-called open politics websites which are more focused on what you’re describing here. I’d like to see more collaboration between that community and the LessWrong/debiasing/rationality camp.
Yes, those who would deny women suffrage are anti-female. But in order to feel they deserve suffrage, one need not be pro-female. One only need be in favor of human rights.
I hate to say it, but your analysis seems rather thin. I think a productive discussion of social attitudes toward feminism would have to start with a more comprehensive survey of the facts of the matter on the ground—discussion of poll results, interviews, and the like. Even if the conclusion is correct, it is not supported in your post, and there are no clues in your post as to where to find evidence either way.
Agreed. The post is almost without content (or badly needed variation in sentence structure, but that’s another point altogether) - there’s no offered reason to believe any of the claims about what anti-feminists say or what justifications they have. No definition of terms—what kind of feminism do you mean, for instance? Maybe these problems are obviated with a little more background knowledge of your blog, but if that’s what you’re relying on to help people understand you, then it was a poor choice to send us to this post and not another.
I’m tickled that Less Wrong came to mind as a place to go for unbiased input, though.
Indeed. And even more so that she seems to be getting it.
I now have a wonderful and terrible vision of the future in which less wrong posters are hired guns, brought in to resolve disagreements in every weird and obscure corner of the internets.
We should really be getting paid.
Did Robin make a post on how free market judicial systems could work or am I just pattern matching on what I would expect him to say, if he got around to it?
I don’t know if Robin has said anything on this but it is a well-tread issue in anarcho-capitalist/individualist literature. Also, there already are pseudo-free market judicial systems. Like this. And this!
How would you stop this from degenerating into a lawyer system? Rationality is only a tool. The hired guns will use their master rationalist skills to argue for the side that hired them.
Technically, you cannot rationally argue for anything.
I suppose you could use master rationalist skillz to answer the question “What will persuade person X?” but this relies on person X being persuadable by the best arguer rather than the best facts, which is not itself a characteristic of master rationalists.
The more the evidence itself leans one way, the more likely it is that a reasonably rational arbiter and a reasonably skillful evidence-collector-and-presenter working on the side of truth cannot be defeated by a much more skillful and highly-paid arguer on the side of falsity.
A master rationalist can still be persuaded by a good arguer because most arguments aren’t about facts. Once everyone agrees about facts, you can still argue about goals and policy—what people should do, what the law should make them do, how a sandwich ought to taste to be called a sandwich, what’s a good looking dress to wear tonight.
If everyone agreed about facts and goals, there wouldn’t be much of an argument left. Most human arguments have no objective right party because they disagree about goals, about what should be or what is right.
One obvious reply would be to hire rationalists only to adjudicate that which has been phrased as a question of simple fact.
To the extent that you do think that people who’ve learned to be good epistemic critics have an advantage in listening to values arguments as well, then go ahead and hire rationalists to adjudicate that as well. (Who does the hiring, though?) Is the idea that rationalists have an advantage here, enough that people would still hire them, but the advantage is much weaker and hence they can be swayed by highly paid arguers?
If the two parties can agree on the phrasing of the question, then I think it would be better to hire experts in the domain of the disputed facts, with only minimal training in rationality required. (Really, such training should be required to work in any fact-based discipline anyway.)
If there’s a tradition of such adjudication—and if there’s a good supply of rationalists—then people will hire them as long as they can agree in advance on submitting to arbitrage. Now, I didn’t suggest this; my argument is that if this system somehow came to exist, it would soon collapse (or at least stop serving its original purpose) due to lawyer-y behavior.
You know, this actually makes (entirely unintended) sense. If the rationalists are obliged to express their evaluations in the form of carefully designed and discrete bets then they are vulnerable to exploitation by others extracting arbitrage.
Presumably, “arbitration”—and that’s a good point, and with clear precedents in the physical world. Nevertheless, “lawyer-y” behavior hasn’t prevented a similar mutual-agreement-based system from flourishing, at least in the USA.
The biggest difference is that arbitrators are applying a similarly mutually-agreed-upon law, where rationalists mediating a non-rationalist dispute would be applying expertise outside the purview of the parties involved. That’s where your point about advocacy-like behavior becomes important.
Parties to the dispute can split the cost. Also, if the hired guns aren’t seen as impartial there would be no reason to hire them so there would be a market incentive (if there were a market, which of course there isn’t). Or we have a professional guild system with an oath and an oversight board. Hah.
Actually, here’s a rule that would make a HELL of a lot of sense:
Either party to a lawsuit can contribute to a common monetary pool which is then split between both sides to hire lawyers. It is illegal for either side to pay a lawyer a bonus beyond this, or for the lawyer to accept additional help on the lawsuit.
And you don’t see any issues with this? That would seem to be far worse than the English rule/losers-pay.
I pick a random rich target, find 50 street bums, and have them file suits; the bums can’t contribute more than a few flea-infested dollars, so my target pays for each of the 50 suits brought against him. If he contributes only a little, then both sides’ lawyers will be the crappiest & cheapest ones around, and each suit will be a diceroll; so my hobos will win some cases, reaping millions and giving most of it to me per our agreement. If he contributes a lot, then we’ll both be able to afford high-powered lawyers, and the suit will be… a diceroll again. But let’s say better lawyers win the case for my target in all 50 cases; now he’s impoverished by the thousands of billable hours (although I do get nothing).
I go to my next rich target and say, sure would be a shame if those 50 hobos you ran over the other day were to all sue you...
How is this different from how things currently are, beyond a factor of two in cost for the target?
It’s not an issue of weakening the defense/target, but a massive strengthening of the offense.
Aside from the doubling of the target’s defense expenses (what, like that’s irrelevant or chump change?), I can launch 50 or 100 suits against my target for nothing. At that point, a judge having a bad day is enough for me to become a millionaire. Any system which is so trivially exploitable is a seriously bad idea, and I’m a little surprised Eliezer thinks it’s an improvement at all.
(I could try to do this with contingency-fees, but no sane firm would take my 100 frivolous suits on contingency payment and so I couldn’t actually do this.)
Good point. My initial response to your comment was short sighted.
Surely that only works if the probability of winning a case depends only on the skill of the lawyers, and not on the actual facts of the cases. I imagine a lawyer with no training at all could unravel your plan and make it clear that your hobos had nothing to back up their case.
Also, being English myself, it hadn’t dawned on me that the losers-pay rule doesn’t apply everywhere. Having no such system at all seems really stupid.
It also occurs to me that hiring expensive lawyers under losers-pay is like trying to fix a futarchy: you don’t lose anything if you succeed, but you stand to lose a lot if you fail.
If facts totally determine the case, then my exploit doesn’t work, but Eliezer’s radical change is equally irrelevant. If facts have no bearing on who wins or loses, and it is purely down to the lawyers, then Eliezer’s system turns lawsuits into a coin flip, which is only an improvement if you think the current system gets things right less than 50% of the time, and you’d also have to show there would be no offsetting side-effects, like people using my exploit. If facts determine somewhere in between, then there is a substantial region where my exploit will still work.
Suppose I have to put up a minimum of 10k for each hobo lawsuit asking for 1 million; then I need only a 1% chance of winning to break even. So if cases with lousy lawyers on both sides wind up with the wrong verdict even 2% of the time, I’m laughing all the way to the bank. And it’s very easy for bad lawyering to lose an otherwise extremely solid case. A small slipup might mean the defendant doesn’t even show up, in which case he gets screwed over by the default judgement against him. Even the biggest multinational can mess up: consider the recent case where Pepsi is contesting a $1.26 billion default judgement which was assessed because a secretary forgot a letter. They probably won’t have to pay, but even if they settle for a tiny fraction of 1.26 billion, how many frivolous lawsuits do you think one fluke like that could fund?
For that matter, consider patent trolls; they have limited funds and currently operate quite successfully, despite the fact that they are generally suing multinationals who can spend far more than the troll on any given case. How much more effective would they be if those parasites could force their hosts to mount a far less lavish & effective defense than they would otherwise?
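The break-even arithmetic a couple of paragraphs up is easy to verify; a quick sketch using the commenter’s own hypothetical figures:

```python
# Stake 10k per suit seeking a 1M award: the break-even win rate is 1%,
# and the hypothesized 2% wrong-verdict rate yields a profit in
# expectation.
stake, award = 10_000, 1_000_000
print(stake / award)              # 0.01 -- the 1% break-even win rate

win_rate = 0.02                   # the "wrong verdict 2% of the time" case
print(win_rate * award - stake)   # 10000.0 expected profit per suit
```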
I did some reading; apparently it’s long-standing tradition all the way back to colonial times. The author said the Americans likely wanted to discourage litigation, which I suppose is the polite way of saying that early Americans were smuggling indebted IP-infringing scofflaws who didn’t want civil justice to work too well.
I don’t really follow?
If the defending party is only required to match the litigating party’s contribution, the suits will never proceed because the litigating bums can’t afford to pay for a single hour of a lawyer’s time. And while I don’t know if this is true, it makes sense that funding the bums yourself would be illegal.
Well, the original said you could only not fund the legal defense; I don’t see anything there stopping you from putting the bums up in a hotel or something during the lawsuits.
But even if defendants were required to spend the same as the plaintiff, we still run into the issue I already mentioned: So now I simply need to put up 5 or 10k for each bum, guaranteeing me a very crappy legal team but also guaranteeing my target a very crappy legal team.
The less competent the 2 lawyers are, the more the case becomes a roll of the dice. (Imagine taking it down to the extreme where the lawyers are so stupid or incompetent that they could be replaced by random number generators.) The most unpredictable chess game in the world is between the 2 rankest amateurs, not between the current World Champion and #2.
But maybe your frivolous win-rate remains the same regardless of whether you put in 10k or a few million. There’s still a problem: people already use frivolous lawsuits as weapons: forcing discovery, intrusive subpoenas, the sheer hassle, and so on. Those people, and many more, would regard this as a massive enhancement of lawsuits as a weapon.
You have an enemy? File a lawsuit, put in 20k, say, and now you can tell your crappy lawyer to spend an hour on it every so often just to keep it kicking. If your target blows his allotted 20k trying to get the lawsuit ended despite your delaying & harassing tactics, now you can sic your lawyer on the undefended target; if he measures out his budget to avoid this, then he has given into suffering this death of a thousand cuts. And if he goes without? As they say, someone who represents himself in court has a fool for a client....
Well, a lot of what you’re pointing out here is the result of other systemic problems that need other systemic fixes. Judges may not be fast enough to toss out foolish complaints. One might need a two-tier system whereby cheap lawyers and reasonably sane judges could quickly toss almost all the lawsuits, and any that make it past the first bar can get more expensive lawyers. One may need a basic cost of a dismissed suit to the litigant, or some higher degree of loser-pays.
Lawsuits are already weapons. This isn’t obviously a massive enhancement. At most, it increases costs by a factor of 2 for rich defendants, while greatly improving (if it works as planned!) the position of poor defendants.
OK. So the English rule is a weakened version of this; we should expect to see great improvements from it, since between it and contingency-fees and class-actions, poor defendants have much greater financial wherewithal than their poverty would allow. Do we see great improvements? If we don’t, why would we expect your full-strength treatment to work?
And if we can’t justify it on any empirical grounds, why on earth would you put it forward on theoretical grounds when a minute’s thought shows multiple issues with it, to say nothing of how one would actually enforce equitable spending? (The issue would seem to be as difficult & tricky as enforcing campaign finance laws...) And if I, an utter layman to the law, can come up with flaws you seem to acknowledge as real, how many ways could a legal eagle come up with to abuse it?
That’s pretty lame. Reminds me of Yvain’s “Solutions to Political Problems As Counterfactuals”:
I would contribute nothing to the pool, hire a lawyer privately on the side to advise me, and pass his orders down to the public courtroom lawyer. If I have much more money than the other party, and if the money can strongly enough determine the lawyer’s quality and the trial’s outcome, then even advice and briefs prepared outside the courtroom by my private lawyer would be worth it.
Then your lawyer gets arrested.
It sometimes is possible to have laws or guild rules if the prohibited behavior is clear enough that people can’t easily fool themselves into thinking they’re not violating them. Accepting advice and briefs prepared outside the courtroom is illegal, in this world.
I agree with Alicorn. Even if you pass the law, there’s no practical way to stop people from getting private advice secretly, especially in advance of the court date. If you try real hard, private lawyers will go underground (and as the saying goes, only criminals will have lawyers :-) People will pass along illegal samizdat manuals of how to behave in court, half of them actually presenting harmful advice and none of them properly attributed. Congratulations: you have just forced lawyering to become a secret Dark Art.
And this is not an improvement over the current status quo because...?
Do you think this is an improvement? As described, it looks like it’s a repeat of a similar system with similar problems. (And how much of that is because we already know those failings and are best at describing them?)
How would you have legal advice outside of a court case (to ensure predictability) handled?
You seem to be engaging in motivated skepticism.
Consider how much easier it becomes to get good professional support for the poor side in Eliezer’s setup. There is just too much trouble with “underground” professional representation. A significant portion of expensive lawyers may simply not like the idea of going “underground”, because it hurts their self-image and lowers their status within the community of “white-book” lawyers.
Respect trivial inconveniences.
You’re right, I was deliberately playing devil’s advocate. I should reconsider how likely the failure mode I described is, although I do believe its probability isn’t very small.
Any other advice? What if I want to go to my Ethical Culture Society leader to ask him or her about whether something my in-court lawyer suggests would be right? What if my spouse is a lawyer? What if I’m a lawyer—a really expensive one?
That’s what I think too.
Okay, suppose a lawyer is not allowed to accept briefs. In the Least Convenient case where you happen to be a really expensive lawyer, how much can actually be accomplished courtroom-wise if you talk for a few hours with a much less expensive lawyer? Would any lawyers care to weigh in?
I’m tempted to suggest ‘about the same amount a professional dancer can teach an amateur, and for similar reasons’.
Why would you need to do anything with the inexpensive lawyer? Contribute nothing to the fund—maybe even forfeit your half of whatever the other party contributes—and then represent yourself.
I suspect that the only real solution to the Lawyer Problem is to remove the necessity of the profession—i.e., either simplify the law, or cognitively enhance the people to the point where any person who cannot hold the whole of the law in his/her head can be declared legally incompetent.
If possible, that would certainly be a great solution.
The original (our-world) Lawyer Problem goes beyond what we’ve discussed here: it involves (ex-) lawyers both deliberately making the law and the case law more and more complex, to increase the value of their services.
That is frelling brilliant.
Have a karma point for using Farscape profanity.
I’m just waiting for LW to develop its own variants of profanities.
“Have at thee, accursed frequentist!”
There’s “woo”.
Mhm. Not enough hard consonants, though—”woo that wooing wooer” doesn’t sound especially angry.
I suggest “materd” as a general insult, meaning “one whose MAp and TERritory Diverge.”
“Morong” might work as an allusion to “moron” and compression of “more wrong”. It’s a little more pronunciation-friendly than “materd”.
Like it.
Very likely everyone’s map diverges from the territory. An insult needs to have a more limited scope.
Hm. Agreed. I suppose I was thinking in terms of pointing out a specific, obvious case. “Being a materd” or something similar.
I would totally join a rationalist arbitration guild. Even if this cut into the many, many bribes I get to use my skills on only one party’s behalf ;)
Perhaps records of previous dispute resolutions can be made public with the consent of the disputants, so people can look for arbitrators who have apparently little bias or bias they can live with?
What are you talking about, we have our first customer already!
Please see my reply to wedrifid above.
More or less, because both sides have to agree to the process. Then the market favours those arbiters that manage to maintain a reputation for being unbiased and fair.
This still doesn’t select for rationality precisely. But at least it degenerates into a different system from the lawyer system.
Yes, but if a side can hire a rationalist to argue their case before the judge, then that rationalist will degenerate into a lawyer. (And how could you forbid assistance in arguments, precisely? Offline assistance at least will always be present.)
And since the lawyer-like rationalists can be paid as much as the richest party can afford, while the arbiter’s fees are probably capped (so that anyone can ask for arbitration), the market will select the best performing lawyers and reward them with the greatest fees, and the best rationalists who seek money (which is such a clichéd rational thing to do :-) will prefer being lawyers and not judges.
Edit: added: the market will also select the judges who are least swayed by lawyers. It still needs to be shown that the market will have good information as to whether a judge had decided because the real rational evidence leaned one way, or because a smart lawyer had spun it appropriately. It’s not clear to me what this will collapse to, or whether there’s one inevitable outcome at all.
Would a lawyer by any other name still speak bullshit? Yes. But why are we talking about lawyers and judges?
You have explained well the reason that capping is a terrible idea. Now it is time to update the ‘probably capped’ part.
It’s not clear to me either. I also add that I rather doubt that the market, even with full information, would select for the most rational decisionmakers. That’s just not what it wants.
Because I think that in the proposed scenario, where people hire master rationalists to arbitrate disputes, these arbitrators and other rationalists who would be hired by each side independently for advice would start behaving like judges and lawyers do, respectively. (Although case law probably wouldn’t become important.)
I didn’t mean they would be capped by guild rules or something like that, but rather, that the effective market prices would stay low. I’ve no proof of this (economics is not my strong suit), but here are the reasons I think that’s likely to happen:
If arbitration is so expensive that some people can’t afford it (and it needs to be affordable by the poorer party in a conflict), that’s an untapped market someone could profit from. Whenever your argument is with a poor party, you have to have an arbitrator whose fee is at most twice what the other party can afford or is willing to pay, and there’s no effective low limit here. (State-provided judges and loser-pays-winner’s-fees rules do have something going for them.)
Being a better arbitrator doesn’t require direct investment of money on part of the arbitrator. So a good arbitrator who’s not getting enough work can lower his prices.
Third parties interested in seeing a dispute resolved—if only to achieve peace and unity—might contribute money towards the fee, or send volunteer arbitrators, in exchange for the parties to the dispute agreeing to arbitration. Finally, competition between arbitrators (for money and work) would eventually draw the fees down, assuming a reasonable supply of arbitrators.
What would make people choose an arbitrator that wasn’t the cheapest available? Assuming some kind of minimal standard or accreditation (e.g., LW karma > 1000), an arbitrator is inferior if he cannot properly comprehend your rational argument or might be swayed by your opponent’s master rationalist lawyer. You then have a choice: invest your money in a costlier and fairer arbitrator, or in a better lawyer so you can sway the cheap arbitrator to your own side. I do hope that one dollar buys more unswayingness than swaying-power, but with humans you never know.
How does someone prove she’s a good arbitrator in the first place? Wouldn’t you need another, more senior arbitrator to decide on that and to handle appeals? Either there’s a hierarchy, in which case the lowest ranked arbitrators are cheap (because it’s their own entry price in the business); or there’s a less centralized web of trust, and if it’s fragmented enough the whole idea of universally trusted arbitration is undermined.
I see, so rationalist arbitration of a dispute raises the stakes in status and neutral-party persuasion and that would lead to a market for lawyers? In what sense would this be damaging/harmful?
Edit: Obviously it would be really bad if lawyers were causing the arbitrators to make bad decisions. But presumably the arbitrators are trained to avoid biasing information, fallacies, etc. If you want to persuade an arbitrator you have to argue well—referencing verifiable facts, not inflaming emotions or appealing to fallacies. If all online disagreements did that then the internet would be a much better place! If arbitration leads to better standards, all the better. Now, the system might be unfair to those who couldn’t afford a lawyer and so can’t present as much data to further their cause. But 1) presumably the arbitrator does some independent fact checking and 2) there are already huge class barriers in arguments. Arguments between an average high-school dropout and an average PhD are already totally one-sided. The presence of arbitration wouldn’t change this.
If that’s so then there’s no point to arbitration. He with the best lawyer wins.
Put it this way: take, in the general (average) case, any decision made by an arbitrator. For simplicity, suppose it’s “A is right, B is wrong.” Now suppose party B had employed the services of the very best rationalist ever to live as a lawyer. What is the probability the arbitrator would have given the opposite judgment instead? How high a probability are you willing to accept before giving up on the system? And how high a probability do you estimate, in practice?
The purpose of arbitration isn’t to establish the truth of a question. If that were the case there would be no reason for the arbitrator to even listen to the disagreeing parties. She would be better off just going off and looking for the answer on her own. This would also take much, much longer since she wouldn’t want to leave any information out of the calculation.
Rather, the purpose of arbitration is to facilitate agreement. Not just any agreement but a kind of pseudo-Aumann-style agreement between the two parties. The idea is that since people aren’t natural Bayesian calculators and have all kinds of biases and incentives that keep them from agreeing, they’ll hire one to do the calculating for them. This means we want the result to be skewed toward the side with better arguments. If the side with weak arguments doesn’t end up closer to the side with strong arguments then we’re doing it wrong. This is true even if one side puts a lot more time or money into their arguments. Otherwise you’d have to conclude that arguing never has a point, because the outcomes of arguments are skewed toward those who are the smartest, have done the most research and thought up the best arguments.
If agreement is more important to you than objective truth, then sure, that method will work. I just happen to think a system that optimizes for agreement at the expense of truth and facts tends to lead to a lot of pain in the end. You end up with Jesuits masterfully arguing the number of angels that can dance on the head of a pin.
Uh… like I said:
Edit: If a rationalist is hired to arbitrate a dispute between two Jesuits regarding the number of angels she isn’t going to start complaining that there are no angels. That isn’t what she was hired to do. If the Jesuits want to read some atheistic arguments they can find those on their own. The task of the arbitrator is applying rationalist method to whatever shared premises the disputing parties have. But the system as a whole still tends toward truth because an arbitration between a Jesuit and an atheist will generate a ruling in favor of atheism (assuming the Jesuit believes in God because of evidence and not Kierkegaardian faith or “grace”).
Think we’ve got some fundamental disagreements here about just what it is that rationalists do. You cannot just hire them to argue anything. The ideal rationalist is the one who only ends up arguing true beliefs, and who, when presented with anything else, throws up their hands and says “How am I supposed to make that sound plausible?”
That which can be used to argue for any side is not distinguishing evidence, whether “that” is a strategy, a person, an outlook on life, whatever.
I reply: I’m paying you a lot of money. You’ll find a way.
When I say or hear “rationality”, I think of the tool, not of the noble “ideal rationalist” whose only pursuit is truth, not money or other personal interest.
Rationality is winning. I’m hiring a master rationalist to make me win my court case. What’s not to like?
A rational debate and agreeing on objective truth may be what the arbitration system wants. But what the individual disputant wants, in the end, in an important enough court case, is to win. If I have to game the system to win, I will. (It doesn’t help when we create legal entities like corporations, which are liable to get into many more trials and also to treat many more trials as all-out war where winning is paramount.)
Also, the irony of a feminist coming to an overwhelmingly male community for advice. :)
Oh, sorry. To clarify, I know my original post was never substantiated with any evidence-based analysis for the true motivations behind anti-feminism. What I was referring to was the latter part of the comment thread between a commenter, Sabril, a few other commenters, and me.
I think their attacks on my capacity for objective reasoning are a bit hypocritical.
You should rectify that as soon as possible.
Hypocrisy doesn’t make one wrong. An assertion that murder is wrong is not falsified by it being said by a murderer.
Especially if you catch a hint of a sinister, sadistic pleasure in his eyes.
“An assertion that murder is wrong is not falsified by it being said by a murderer.”
No, but saying that there is no point in arguing with a woman because women are not capable of discerning objective truth is an instance of making an assertion which is not based on objective truth (unless you can provide evidence that being female necessarily prevents capacity for objective reasoning in all cases and subsequently prevents the ability to arrive at objective truth).
It is like saying, “you rely on personal attacks, therefore your perspective on the environment is not correct”
This is a bit strong: a more reasonable interpretation is that women are simply much less capable or liable to discern the truth than men.
What I’m saying is that you should make sure you’re right before calling other people wrong lest you be a hypocrite just like them.
That’s not an argument against anyone even if it is true. The relative likelihood of one person vs. another arriving at a correct outcome is irrelevant when you see the actual argument and conclusion before you. At that point, you must evaluate only on the merits of the argument and the conclusion.
Secondly, that’s not a reasonable interpretation because it is too vague to determine whether it is true or not. Less capable or reliable on average? At the extreme ends of capability? Less capable or reliable in what percentage of endeavors? What kind of endeavors?
I would not define this behavior as hypocrisy. Being wrong does not make an accusation of a logical fallacy erroneous, nor does it make it hypocritical. And being wrong does not mean the opponent is correct, so calling them wrong is truthful and perhaps a demonstration of superior rationality.
What I call hypocrisy is relying on the very logical error you accuse another person of when you accuse them. The merit of the ultimate conclusion is not what I am discussing. I am only referring to the argumentation.
… Actually, forget the whole hypocrisy thing. Forget about the commenters. Correct your mistakes, learn the facts, put more effort into writing clearly. If you do all that, your next post will be much more persuasive and will consequently attract comments of higher quality.
Heck, it might even attract us! :)
Just out of curiosity, were you familiar with this post before you wrote the above? (And who wrote “This is a bit strong: a more reasonable interpretation...”? It doesn’t currently appear in the parent to your post.)
Cyan, the poster Larks wrote that response. I had not read that post before I made the comment.
Eliezer says that authority is not 100% irrelevant in an argument. I think this is true because 100% of reliance on authority can’t ordinarily be removed, unless the issue is pure math or directly observable phenomena. But removal of reliance on a particular individual’s authority/competence/biological state etc. is one of the first steps in achieving objective rationality.
tu quoque, it’s like ad hominem light.
*finds name “sabril” and reads from there*
This first comment, and the later ones, betray a repulsive attitude, and I wouldn’t blame you for being furious and therefore slightly off your game thereafter. That said, Sabril makes several moderately cogent points—the numbered items in particular are things I’ve noticed with disapproval before. I’m about to go to bed, so I’m not going to delve too deeply into the history of your blog to find an exhaustive list or lots of context, but it looks like he also has a legitimate complaint or three about your data regarding the Conservative Party in the UK, your failure to cite some data, the apparently undefended implication about war, the anecdote-based unfavorable comparison of arranged marriage versus non-arranged, and your tendency to cite… uh… nothing that I’ve run across so far.
Also, this seems to beg your own question:
And now I’ve gotten to this part of the page and I’ve decided I don’t want to read anything else you have to say:
“And now I’ve gotten to this part of the page and I’ve decided I don’t want to read anything else you have to say: ‘I am a female supremacist, not a true feminist.’”
Why does this bother you so much? Why would it invalidate everything I have to say or render everything I say uninteresting?
It is indeed impossible to find someone who will remain detached from the issue of feminism.
May I ask the moral difference between a female supremacist and a male supremacist?
Your pre-existing bias against males calls into doubt everything you say afterward. If you have already decided that men are oppressive pigs and women are heroic repressed figures who would be able to run the world better (I assume that is what female supremacist means, correct me if I’m wrong), you will search for arguments in favor of your view and dismiss those contrary to your opinion. Have you ever seen an academic article discussing gender and dismissed it as “typical of the male dominated academic community?”
These articles might explain further:
http://lesswrong.com/lw/js/the_bottom_line/
http://lesswrong.com/lw/ju/rationalization/
http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/
“May I ask the moral difference between a female supremacist and a male supremacist?”
What I call female supremacism does not mean that females should rule. I feel that the concept of needing a ruler is one based on male status hierarchies, where an alpha rules over a group or has the highest status and most privileges in a group.
To me, female supremacism means that female social hierarchies should determine overall status differences between all people. In my mind, female social hierarchies involve less power/resource differentials between the most and least advantaged persons. A “leader” is a person who organically grows into a position of more responsibility, but this person isn’t seen as better, richer, more powerful or particularly enviable. They are not seen as an authority figure to be venerated and obeyed. I associate those characteristics with male hierarchies.
I think you overestimate the differences between male and female interpretations of status. Can you provide an example of one of your female social hierarchies?
Also, what is a leader other than an authority figure to be obeyed?
“Also, what is a leader other than an authority figure to be obeyed?”
In our world, that is what a leader must be. In the general human concept of an ideal world, I do not know if this is the case. I actually think that humans have some basic agreement about what an ideal world would be like. The ideal world is based on priorities from our instincts as mortal animals, but it is not subject to the confines of natural experience. I think the concept of heaven illustrates the general human fantasy of the ideal world.
I get the impression that almost everyone’s concept of heaven includes that there are no rich and poor: everyone has plenty. There is no battle of the sexes, and perhaps even no gendered personalities. There is no unhappiness, pain, sickness or death. I personally think there are no humans that hold authority over other humans in heaven (to clarify, I know that a theological heaven cannot actually exist). What this means to me is that to have a more ideal world, the power differential between leaders and the led should be minimized. I understand that humans, with their propensities for various follies, aren’t necessarily suited as they are for the ideal world they’d like to inhabit, but striving for an ideal world would, to me, mean correcting human nature in some ways so that the ideal world became more in tune with human desires.
“Can you provide an example of one of your female social hierarchies?”
Say a nursing floor. There is such a thing as a nurse with the most authority, but the status differential between head nurse and other floor nurses is sometimes imperceptible to all but the nurses that work there. The pay difference is not that great either. Sometimes the nurse who makes the most decisions is the one that chooses to invest the most time and has the longest experience, not necessarily one who is chosen to be obeyed. This is entirely unlike a traditionally male structure like an army, where the difference between a general and a corporal is vast.
You must not know your way around the actual heavens of the big religions (as officially described). For instance, an important and (according to many Christian theologians) necessary part of the Christian heaven is being able to view the Christian Hell and enjoy the torture of the evil sinners there. And an important part of Muslim Heaven, according to some, is a certain thing about female virgins you may have heard of. I could go on for a while in this vein if you want real examples… because I happen to have a thing for completely un-academically reading popular history of religion & thought in my free time.
Really, if we’re going to get into religious (historical & contemporary) conceptions of heaven, the best one-line summary I can come up with is—heaven is just like Earth ought to be according to your cleric of choice and taken to an appropriate extreme. And most people’s conception of how things “ought to be” is horrible to most other people. One of the most common issues for idealists to face is that most people don’t want any part of their ideal world, no matter what that ideal happens to be.
If the difference is imperceptible, even to people who have experience with similar hierarchies but don’t happen to work inside this one, then why is the difference at all important? Why are we even talking about such a minute difference? It sounds to me like “there are no real status hierarchies and no leader” is a pretty good summary of this situation.
Some think the opposite, such as the pastor of a church I attended as a child. Apparently there was concern about the knowledge of loved ones’ suffering in Hell interfering with the ability to experience pleasure in Heaven, so he claimed in a sermon once that God must somehow “shield us” from that knowledge.
I was hoping for an example of a large-scale usage of your ideal. It seems to me that as social systems get larger, the difficulty of coordination grows, necessitating more power in the hands of those who lead. Much as communism can work in a small village, but not on a national scale, I suspect your ideal fails at the large scale.
No gendered personalities? How many people strap bombs to themselves, working themselves into a frenzy by reminding themselves of their heavenly reward of 40 androgynous virgins?
But those are celestial virgins. I mean the real women that die and go to heaven. What happens to them? Perhaps they also enjoy the celestial virgins.
They are united with their husbands. If they were widows and had had multiple husbands, they can choose the best husband to be with. They are also, being a real woman and therefore superior to creatures that have never been mortal, the boss of the 40 virgins of their husband.
What, you think they didn’t think of this?
So… do wives try to make their husbands sin just a bit, so they don’t get the 40 virgins and the wives can have them all to themselves in heaven?
The husband acquiring more wives raises her status. It is not unheard of for a wife in some cultures to nag or disrespect their husband for being unable to support more wives, leaving her with a lesser role than what she hoped for.
A theological heaven can actually exist, but shouldn’t. See Fun Theory; for this reference in particular, Visualizing Eutopia and Eutopia is Scary.
I suggest that this is to constrain the natural dynamics of leadership, not to formalise it. It saves on the killing.
Re: heaven: http://lesswrong.com/lw/y0/31_laws_of_fun/
And I should add that it was foolish of me to present that post, which was possibly my most biased, as an introduction to my blog. Actually, my blog gets more insightful than this. Please don’t take that post about the motivations for a visceral reaction against feminists as indicative of what my blog is usually about, or dismiss my entire blog based on its content. That particular post was designed to spur emotional reactions from a specific set of readers I have.
Of what use is rationality, then?
Eh, just say “Oops” and get it over with. Excuses slow down life. Never expend effort on defending something you could just change.
It would have been a good idea to link to that thread as the inspiration for the post, if that’s what’s going on.
Hi! Feel free to introduce yourself here.
There are a few general reasons for disagreement.
1. Two parties disagree on terminal values (if someone genuinely believes that women are inherently less valuable than men there is no reason to keep talking about gender politics).
2. Two parties disagree on intermediate values (both might value happiness, but a feminist might believe gender equality to be central to attaining happiness while the anti-feminist thinks gender equality is counterproductive to this goal. It might be difficult for parties to explain their reasoning in these matters but it is possible).
3. Two parties disagree about the means to the end (an anti-feminist might think that feminism as a movement doesn’t do a good job promoting gender equality).
4. Two parties disagree about the intent of one or more parties (a lot of anti-feminists think feminism is a tool for advancing interests of women exclusively and that feminists aren’t really concerned with gender equality. I don’t think you can say much to such people, though it is worth asking yourself why they have that impression… calling yourself a female supremacist will not help matters).
5. Two parties disagree about the facts of the status quo (if someone thinks that women aren’t more oppressed than men, or that feminists exaggerate the problem, they may have exactly the same view of an ideal world as you do but have very different means for getting there. This is a trickier issue than it looks because facts about oppression are really difficult to quantify. There is a common practice in anti-subordination theory of treating claims of oppression at face value, but this only works if one trusts the intentions of the person claiming to be oppressed).
6. One or more parties have incoherent views (you can point out incoherence, not much else).
I think that is more or less complete. As you can see, some disagreements can be resolved, others can’t. Talk to the people you can make progress with but don’t go in assuming that you’re going to convince everyone of your view.
Edit: Formatting.
The discussion here helped me reanalyze my own attitude towards this kind of issue.
I don’t think I ever had a serious intention to back up my arguments or win a debate when I posted on the issue of why men hate feminism. I am not sure what to do when faced with the extreme anti-feminism that I commonly find on the internet. I have a number of readers on my blog who will make totalizing comments about all women or all feminists. E.g., one commenter said that women have no ability to sustain interest in topics that don’t pertain to relationships between individuals. Other commenters say that feminism will lead to the downfall of civilization for reasons including that it lets women pursue their fleeting sexual impulses, which are destructive.
I suppose I do not really know how to handle this attitude. Ordinarily, I ignore them, since I operate under the assumption that people who espouse such viewpoints are not prone to being swayed by any argument. They are attached to their bias, in a sense. I am not sure if it is possible for a feminist to have a reasonable discussion with a person who is anti-feminist and who hates nearly all aspects of feminism in the Western world.
Personally I’d say you shouldn’t “be a feminist” at all. Have goals (whether relating to women’s rights or anything else) and try to find the best ways to reach them. Don’t put a political label on yourself that will constrain your thinking and/or be socially and emotionally costly to change. Though given that you seem to have invested a lot of your identity in feminism it’s probably already hard to change.
Shouldn’t? According to which utility function? There are plenty of advantages to taking a label.
Yes, there are obvious advantages to overtly identifying with some established group, but if you identify too strongly and become a capital-F Feminist (or a capital-D Democrat, or even a capital-R Rationalist) there’s a real danger that conforming to the label will get in the way of actually achieving your original goals.
It’s analogous to the idea that you shouldn’t use dark side methods in the service of rationality—i.e., that you shouldn’t place too much trust in your own ability to be virtuously hypocritical.
Advantages to outwardly signalling group loyalty, perhaps, but to internal self-identification?
As mentioned above, this particular person does seem unusually good at not being so constrained.
It’s almost certainly not possible for you to have a discussion about feminism with such a person.
I haven’t read your blog, but perhaps you should reconsider the kind of community of readers you’re trying to build there. If you tend to attract antifeminist posters, and you don’t also attract profeminist ones who help you argue your position in the comments, that sounds like a totally unproductive community and you might want to take explicit steps to remodel it, e.g. by changing your posts, controlling the allowed posters, or starting from scratch if you have to.
Ban them.
If these commenters are foolish enough to disparage and denigrate any political role to women generally, then do them a favor and flame them to a crisp. If that’s not enough to drive them off your site, then feel free to ban them.
These are thinly-veiled attempts at intimidation which are reprehensible in the extreme, and will not be taken lightly by anyone who cares seriously about any kind of politics other than mere alignment to power and privilege—which is most everyone in this day and age. Especially so when coming from people of a Western male background—who are thus embedded in a complex power structure rife with systemic biases, which discriminates against all kinds of minority groups.
Simply stated, you don’t have to be nice to these people. Quite the opposite, in fact. Sometimes that’s all they’ll understand.
What exactly is an anti-feminist? I’ve never actually met someone who identified as one. Is this more of a label that others apply to them, and if so, what do you mean when you apply it? Is it a matter of ‘Feminism, Boo!’ vs ‘Yay! Feminism!’ or is it the objection to one (or more) ideals that are of particular import?
Does ‘anti-feminist’ apply to beliefs about the objective state of the universe, such as the impact of certain biological differences on psychology or social dynamics? Or is it more suitably applied to normative claims about how things should be, including those about the relative status of groups or individuals?
I think it’s only applied by the feminists. Take a look at National Review, a bastion of anti-feminism if ever there was any, and notice how all the usages are by the feminists or fellow travelers or are in clear scare-quotes or other such language: http://www.google.com/search?hl=en&num=100&q=anti-feminism+anti-feminist+site%3Anationalreview.com
Let me be the first to say: welcome to Less Wrong! Please explore the site and stay with us—we need more girls.
I’d quite strongly suggest deleting everything after the hyphen, there.
No.
Verbal symbols are slippery things sometimes.
Explain.
No, at least not right now.
When, if I may be so bold? (Bear in mind that it is not necessary to explain your remark in full generality—just in sufficient detail to justify its presence as a response to CannibalSmith in this instance.)
This afternoon. Pardon me. That means ~7 hours and a bit.
Fair enough!
Why?
Because advertising your lack of girls is not viewed by the average woman as a hopeful sign. (Heck, I’d think twice about any online site that advertised itself with “we need more boys”.)
Also, the above point should be sufficiently obvious that a potential female reader would look at that and justifiably think “This person is thinking about what they want and not thinking about how I might react” which isn’t much of a hopeful sign either.
I’m probably non-average, but I’m ambivalent about hearing “we need more girls” from any community that’s generally interesting. The first question that I think of is “why don’t they have any?”, but as long as it’s not obvious to me why there are not presently enough girls had by a website and it’s easy to leave if I find a compelling reason later, my obliging nature would be likely to take over. Also, saying “we need more girls” does advertise the lack of girls—but it also advertises the recognition that maybe that’s not a splendid thing. Not saying it at all might signify some kind of attempt at gender-blindness, but it could also signify complacency about the ungirly ratio extant.
I hear “we need more girls” from my female classmates about our philosophy department.
We also hear this kind of thing online, in the atheism community.
To sum up the convo, then, it seems like:
- the “too many dicks on the dance floor” attitude isn’t particularly attractive, but
- the honest admission that there aren’t many female regulars, and that we’d like the input of women on the issues which we care about, is perfectly valid.
The rest of it is our differing levels of charity in interpreting CannibalSmith’s remarks.
As with so many other remarks, this carries a different freight of meaning when spoken by a woman to a woman.
I think I don’t hear it from my male classmates because they aren’t alert to this need. I would be pleased to hear one of them acknowledge it. This may have something to do with the fact that I’d trust most of them to be motivated by something other than a desire for eye candy or dating opportunities, though, if they did express this concern.
“I think I don’t hear it from my male classmates because they aren’t alert to this need. I would be pleased to hear one of them acknowledge it.”
Why do you feel there is a need for more female philosophy students in your department?
I think a more balanced ratio would help the professors learn to be sensitive to the different typical needs of female students (e.g. decrease reliance on the “football coach” approach). Indirectly, more female students means more female Ph.Ds means more female professors means more female philosophy role models means more female students, until ideally contemporary philosophy isn’t so terribly skewed. More female students would also increase the chance that there would be more female philosophers outside the typical “soft options” (history and ethics and feminist philosophy), which would improve the reception I and other female philosophers would get when proposing ideas on non-soft topics like metaphysics because we’d no longer look atypical for the sort of person who has good ideas on metaphysics.
That’s what we hoped for in the physical/biological sciences. Then we discovered we had a glass ceiling problem. We have more than enough female grads now, more than half in some programs, but not enough become PhD students. Last I heard, people who were looking into solving this problem had come to a conclusion that it wouldn’t just resolve itself with time and needed active intervention, in part by deliberately creating female role models. (They’re trying but so far no very notable successes on the statistical level.)
This is true for my university (Hebrew U of Jerusalem) and other Israeli universities, and from what I’d heard in many other parts of the Western world as well. Is your philosophy dept. different?
When I talk about “my department”, I mean the grad students—we don’t interact very much with any but the most avid undergrads, except in the capacity of the TA/student relationship. So by saying that we need more girls, I mean we need more female Ph.D students.
Oh right, sorry :-) I assume the undergrad’s POV too easily because I am one.
When existing grad students, who will eventually become professors, want more girls, that should be the best and most direct solution. I wish your department all success in this.
There seems, thankfully, to be some new attention by the admissions people to the issue. I was the only girl admitted in my year, but this year we got two. (Also, two of the new admits were minority races, while I don’t think that’s the case with any but perhaps a couple of ABDs who were here already.)
Out of how many total people admitted each year?
I was one of five; I think this year there were seven total.
Edit: Total for the whole department, we have 43 students, eight of whom are female (counting me).
What did you think when you first saw my “we need more girls” remark?
I found it flattering.
Inapt.
That’s five divs, which means it is a reply to, let’s see...
Even the bit before the hyphen sounds a little on the needy side.
And while we’re at it, it should really be an em dash, not a hyphen.
En dash—it’s surrounded by spaces. And I don’t think the reddit engine tells you how to code it. A hyphen is the accepted substitute (for the en dash—two hyphens for an em dash).
An en dash is defined by its width, not the spacing around it. In fact, spacing around an em dash is permitted in some style guides. On the internet, though, the hyphen has generally taken over from the em dash (an en dash should not be used in that context).
Now, two hyphens—that’s a recipe for disaster if I’ve ever heard one.
Hey, I like double-hyphens as em-dash substitutes!
...but yeah, you’re right otherwise.
FUCK YOU GRAMMAR NAZIS BURN IN HELL THERE IS ONLY ONE DASH AND ITS ASCII CODE 45 (0x2D) THE ONE ON THE KEYBOARD BETWEEN ZERO AND EQUALS I DON’T EVEN KNOW HOW TO TYPE YOUR QUEER DASHES THAT ARE LONGER TO COMPENSATE FOR YOUR TINY DICKS YOU MUST HAVE SPECIAL GAY NAZI KEYBOARDS WITH 34 BUTTONS FOR DASHES OF DIFFERENT LENGTHS YOUR MOM HAS BEARD ARGGHGLRGAGHH!!!!!!!111
FUCK YOU ASCII NAZIS WITH YOUR VERTICAL TABS AND LINE FEEDS AND OUT OF PAPER SIGNALS AND THOSE THINGS MSDOS USED TO RENDER AS SMILIES BUT ARE REALLY DOZENS AND DOZENS OF CONTROL CHARACTERS NOONE UNDER THE AGE OF 50 CAN POSSIBLY KNOW THE MEANING OF YOU MUST HAVE MECHANICAL TYPEWRITERS AT HOME AND A MODEM THE SIZE OF A ROOM TO TRANSLATE I DON’T EVEN KNOW HOW TO USE HALF YOUR STUPID CODE 128 CHARACTERS SHOULD BE ENOUGH FOR ANYONE AND THEN YOU MADE US USE 8BIT CODEPAGES BUT NOONE CAN TELL WHICH ONE WAS USED FOR EVER AND EVER AAARGH WHY YOU SO STUPID!!!!!!!!!!!!11111
I’m working through Jaynes’ /Probability Theory/ (the online version). My math has apparently gotten a bit rusty and I’m getting stuck on exercise 3.2, “probability of a full set” (Google that exact phrase for the pdf). I’d appreciate it if anyone who’s been through it before, or finds this stuff easy, would drop a tiny hint, rot13’d if necessary.
V’ir pbafvqrerq jbexvat bhg gur cebonovyvgl bs “abg trggvat n shyy frg”, ohg gung qbrfa’g frrz gb yrnq naljurer.
V unir jbexrq bhg gung jura z=x (gur ahzore bs qenjf = gur ahzore bs pbybef) gur shyy frg cebonovyvgl vf tvira ol gur trarenyvmrq ulcretrbzrgevp qvfgevohgvba jvgu nyy e’f=1. V’z gelvat gb svther bhg ubj gung cebonovyvgl vapernfrf nf lbh nqq zber qenjf. Vg frrzf gb zr gung ol rkpunatrnovyvgl, gur cebonovyvgl bs n shyy frg jvgu x+1 qenjf vf gur fnzr nf gur cebonovyvgl bs n shyy frg jvgu x, naq bar rkgen qenj juvpu pna or nal pbybe: SF(P1+P2+..+Px) juvpu vf SF.P1+SF.P2+..+SF.Px, juvpu ner zhghnyyl rkpyhfvir gurersber nqq hc.
Nz V ba gur evtug genpx ng nyy?
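Not a hint toward the closed form, but a way to check candidate answers numerically: below is a minimal brute-force sketch in Python, assuming the urn framing of the exercise (m draws without replacement from an urn with counts[i] balls of color i, where a “full set” means at least one ball of every color). The function name and the inclusion-exclusion route are mine, not Jaynes’.

    from itertools import combinations
    from math import comb

    def p_full_set(counts, m):
        """Probability that m draws without replacement from an urn with
        counts[i] balls of color i include at least one ball of every color.
        Inclusion-exclusion over which subset of colors is missed entirely."""
        n = sum(counts)
        k = len(counts)
        total = 0.0
        for r in range(k + 1):
            for missed in combinations(range(k), r):
                remaining = n - sum(counts[i] for i in missed)
                if remaining >= m:
                    total += (-1) ** r * comb(remaining, m) / comb(n, m)
        return total

    # Sanity check: with m = k = 3 draws, a full set means one ball of each
    # color, so the answer should equal 3*4*5 / C(12,3) = 60/220.
    print(p_full_set([3, 4, 5], 3))

Comparing p_full_set(counts, m) against p_full_set(counts, m + 1) gives a quick way to test hypotheses like the ones in the rot13’d comment above.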
How many people here would be interested in forming a virtual book study group, to work through Jaynes? Some programmer colleagues of mine have done that for SICP and it turns out to be a nice way to study. Strength in numbers and all that.
There already exists (an extremely low-traffic) mailing list with that mission: etjaynesstudy@yahoogroups.com
Note that the objection that an existing mailing list would be populated by people who have not been exposed to Eliezer’s writings on rationality does not apply here, because (1) the current population consists of only a handful of people and (2) what I have seen of the current population over the last 3 or 4 years is that it consists mostly of a few people posting (relevant) faculty positions and conference announcements, and of experts in Bayesian statistics.
Thanks for the info!
Yes! I’ve been wanting a virtual place to help me learn probabilistic reasoning in general; a group focused on Jaynes would be a good start.
So far it seems to be only the two of us, which is rather surprising. In probabilistic terms, I was assigning a significant probability to receiving N>>1 favorable replies to the suggestion above.
I’m not sure yet how I should update on the observation of only one taker. One hypothesis is that the Open Thread isn’t an effective way to float such suggestions, so I could consider a top-level post instead. Another is that all LWers are much more advanced than we are and consider Jaynes’ book elementary. What other hypotheses might I be missing?
That within the set of those interested in studying Jaynes the set of those interested in studying Jaynes through a virtual book study group is small. Some people find virtual study groups ineffective. That’d be my reason for not responding.
OK. Who wants to study Jaynes—at all?
If you find virtual study groups ineffective, then—ineffective compared to what?
To study some material, two things are quite useful: access to the material, and access to someone who can help you over difficult spots in the material. Even if you intend to study alone, having the latter as an option can reasonably be expected to increase your chances. (Modulo the objection “I’ll expect too much help from outside and that’ll degrade my learning”, which I could understand.)
In this case Jaynes’ book is a free PDF; on the other hand, the LW readership probably doesn’t have access to a formal teacher for this material, and I’d expect occasions to meet others interested in it IRL are fairly rare.
Given all this I’d still expect more of a response than has been the case so far.
I’d like to someday, but unfortunately not now.
:-/
I’d like to study Jaynes, although it’s not on the top of my priority list—and I’m under the impression that the free PDF has been taken down at the moment.
Wasn’t making a comparison, actually—just saying that joining a group of people online to study something hasn’t actually led to me studying in the past. Ineffective compared to taking a course, I suppose.
The PDF may’ve been taken down by whoever was hosting it, but it’s easily found: http://omega.albany.edu:8008/JaynesBook.html for example, to say nothing of all the download sites or P2P sources you could use.
There’re more than that, over time anyway: http://tech.groups.yahoo.com/group/etjaynesstudy/
(Personally, my problem is that Jaynes is difficult, my calculus weak, and I have no particular application to study using it. It’s like programming—you learn best trying to solve problems, not just trying to memorize what “map” is or whatever. Even though I have the book, I haven’t gone past chapter 2.)
I am quite interested, but I know from experience that studying would seem too much like work. I would probably stop doing it until I actually needed to expand my skills for some purpose, practical or otherwise.
That being said, I would quite probably follow along with such a group and almost certainly get sucked into answering questions people posed. That changes it from ‘homework’ to ‘curious problem someone put up and I can’t resist solving’.
Yes, probably it deserves a top-level post, or going outside of this community and advertising more widely.
Perhaps there are more, but they just don’t want to signal that they are newbies at probabilistic reasoning.
I may be the only one of my kind here, but I know absolutely nothing about probabilistic reasoning (I am envious of all you Bayesians and assume you’re right about everything. Down with the frequentists!); thus, I think Jaynes would be too far over my head. Maybe there’s a dichotomy between philosophy / psychology / high school Lesswrongers and computer science / physics / math Lesswrongers that makes the group of people at Jaynes-level a small group.
You’re not the only one. I’m not bad at math and logic, but very rusty, and almost completely uneducated when it comes to probabilities. (Oddly enough, the junior high school I went to did offer a probabilities course—to the students who were in the track below me. We who tested highest were given trigonometry a year earlier, instead.)
You might be right about the divide, too—I’m more in the former category than the latter, for all that I’m a programmer, and it doesn’t seem like I’d have much opportunity to use the math even if I took the time to learn it, so there’s very little motivation for me to do so.
To resurrect the Pascal’s mugging problem:
This seems like a hack around the problem.
What if we are told there’s an infinite number of people, so everybody could affect 3^^^^3 other people (per Hilbert’s Hotel)?
What consequences would this prior lead to—assuming that the odds of us making a successful AI are 1/some-very-large-number, because a successful AI could go on to control everything within our light cone and for the rest of history affect the lives of some-very-large-number of beings?
(For that matter, wouldn’t this solution have us bite the bullet of the Doomsday argument in general, and assume that we and our creations will expire soon because otherwise, how likely was it that we would just happen to exist near the beginning of the universe/humanity and thus be in a position to affect the yawning eons after us?)
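An aside for readers unfamiliar with the 3^^^^3 above: it is Knuth’s up-arrow notation, whose recursive definition is short enough to write down even though the value is hopelessly beyond evaluation. A minimal Python sketch (the function name is mine; only tiny inputs actually terminate in reasonable time):

    def up_arrow(a: int, n: int, b: int) -> int:
        """Knuth's up-arrow notation: a, then n arrows, then b.
        One arrow is exponentiation; each extra arrow iterates the
        previous operation. 3^^^^3 is up_arrow(3, 4, 3), which is far
        too large to compute; the definition, not the value, is the point."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3 = 27
    print(up_arrow(3, 2, 3))  # 3^(3^3) = 7625597484987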
On the subject of creating a function/predicate able to identify a person: it seems that it is another non-localiseable function. My reasoning goes something like this.
1) We want the predicate to be able to identify paused humans (cryostasis), so that the FAI doesn’t destroy them accidentally.
2) With sufficient scanning technology we could make a digital scan of a human that has the same value as a frozen head, and encrypt it with a one-time pad, making it indistinguishable from the output of /dev/rng.
From 1 and 2 it follows that the AI will have to look at the environment (to see if people are encrypting people with one-time pads) before making a decision on what is a human or not. How much of the environment the AI needs to encompass before making that decision seems a non-trivial question to answer.
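To make the one-time-pad point in (2) concrete: the pad is a uniformly random key XORed against the data, so the ciphertext is itself uniformly distributed and, without the pad, indistinguishable from random noise. A minimal Python sketch (the function names and toy payload are mine):

    import os

    def otp_encrypt(scan: bytes) -> tuple[bytes, bytes]:
        """Encrypt data with a one-time pad. Because the pad is uniformly
        random, the ciphertext is also uniformly distributed: without the
        pad it carries no information about the plaintext."""
        pad = os.urandom(len(scan))  # uniformly random key, same length as the data
        ciphertext = bytes(a ^ b for a, b in zip(scan, pad))
        return ciphertext, pad

    def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
        # XOR is its own inverse: applying the pad again recovers the scan.
        return bytes(a ^ b for a, b in zip(ciphertext, pad))

    scan = b"digital scan of a human, in some serialization"
    ciphertext, pad = otp_encrypt(scan)
    assert otp_decrypt(ciphertext, pad) == scan

Whether the encrypted bytes “contain a person” then depends entirely on whether the matching pad exists somewhere else in the environment, which is the sense in which the predicate can’t be evaluated locally.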
Poorly labeled encrypted persons may well be destroyed. I’m not sure this matters too much.
It depends when the singularity occurs. It also suggests there might be other problems. Let us say that an AI might be able to recreate some (famous) people from their work/habitation and memories in other people, along with a thorough understanding of human biology.
If an AI can, it should preserve as much of the human environment as possible (no turning it into computronium) until it gains that ability. However, it doesn’t know whether a bit of the world will be useful for that purpose (hardened footprints in mud) until it has lots of computronium.
This problem just looks like the usual question of how much of our resources should be conserved and how much should be used. There is some optimal combination of physical conservation and virtual conservation that leaves enough memory and computronium for other things. We’re always deciding between immediate economic growth and long term access to resources (fossil fuels, clean air, biodiversity, fish). In this case the resource is famous person memorabilia/human environment. But this isn’t a tricky conceptual issue, just a utility calculation, and the AI will get better at making this calculation the more information it has. The only programming question is how much we value recreating famous people relative to other goods.
I also don’t see how this issue follows from the ‘functional definition of a person’ issue.
Besides what gwern said, it could just scan and save at appropriate resolution everything that gets turned into computronium. This seems desirable even before you get into possibly reconstructing people.
Every qubit might be precious, so you would need more matter than the earth to do it (if you wanted to accurately simulate things like when/how volcanoes/typhoons happened, so that the memories would be correct).
Possibly the rest of the solar system would be useful as well so you can rewind the clock on solar flares etc.
I wonder what a non-disruptive biosphere scan would look like.
If it’s that concerned, it could just blast off into space, couldn’t it? Might slow down development, but the hypothetical mud footprints ought to be fine… No harm done by computronium in the sun.
The question is should we program it to be that concerned? The human predicate is necessary for CEV if I remember correctly, you would want to extrapolate the volition of everyone currently informationally smeared across the planet as well as the more concentrated humans. I can’t find the citation at the moment, I’ll hunt tomorrow.
I think the (non)person predicate is necessary for CEV only to avoid stomping on persons while running it. It may not be essential to try to make the initial dynamic as expansive as possible, since a less-expansive one can always output “learn enough to do a broader CEV, and do so superseding this”.
Hmm, I think you are right.
We still need to have some estimate of what it will do though so we can predict its speed somewhat.
I’m currently writing a science fiction story set around the time of the singularity. What newsworthy events might you expect in the weeks, days, or hours prior to the singularity (and in particular prior to friendly AI)?
This story is from the perspective of someone not directly involved with any research.
Example: For the purpose of the story, I’m having the FAI team release to the public a ‘visualization of human morality’ a few days before they go live with it.
I would expect astute but untrusting parties to hit the FAI team with every weapon and resource they have at their disposal.
Singularity via FAI, intelligence explosion, right? Not much need actually happen, unless the story is set in a sim universe with a delayed-reaction god. If so, then imagine the disaster movie of your choice :-)
If it is set in our world -
After the visualization-of-morality announcement, I think that for a short while there might be some typical tribalist blather: “You’re not considering other cultures, etc. Eurocentric (if the team is in the anglosphere), etc.”
I agree with djcb rather than wedrifid. They will not consider an SIAI-like body seriously. However, if it’s Stanford University or a major Chinese university, then there will be a serious reaction. Arrests and interrogations are likely; bombing of facilities, unlikely. Most humans will still believe the AI only to be a tool which they can manipulate to their ends. Little do they know! (Cue evil laugh, except done for GOOD this time!)
Events in the weeks leading up to it:
Publication of a complete mapping of how a human understands a concept, done via brain emulation somewhere else. This yields the last piece of the puzzle for the FAI team. But to the rest of the world, it is just another cool concept. Software firms are looking at semantic web applications in 1 year tops.
Days or hours before—not sure. Anything could be happening.
Interesting… most technology gets around only slowly, with the impact becoming clear only after a while. The biggest exception must be the atomic bomb—at least for the outside world. That’s one way to think of it: what would have happened if they’d announced the A-bomb a few months in advance?
Alternatively, if the development is done by some relatively unknown group, the reception may be more like what you’d get if you’d announce that you built a machine that will solve the world’s energy problem—disbelief and skepticism.
It seems there has never been a discussion here of ‘Frank H. Knight’s famous distinction between “risk” and “uncertainty”’. Though perhaps the issue has been addressed under another name?
I try to avoid the temptation of IQ tests on the internet since they make me feel cocky for the rest of the day. Anyway:
I feel much more awake after going through it. The only question I had trouble with was the last one. I’m sure I have the right answer, but it feels like there’s more to the question that I’m missing. So here’s what I got out of it (spoiler): every shape is a shape-shifting entity moving one square to the right each turn.
If someone could fill me in on the rest, that’d be great. This has been killing me for about an hour.
Regarding that test, do ‘real’ IQ tests consist only of pattern recognition? I quite like the cryptographic and ‘if a is b and b is...’-type questions found in other online IQ tests, and do well on them. I scored a good 23 points below my average on iqtest.dk, which made me feel sad.
This IQ test is trying to be fair to as many people as possible. As long as a person understands the idea behind multiple-choice questions, no one should have an upper hand. There were no verbal questions, which is good because I’m crap at those.
I would think that a real IQ test wouldn’t be multiple choice at all. Maybe a one-on-one setting… I wouldn’t worry about the 23 points. This test is not perfect. On some questions, it is easy to understand the pattern and still pick the wrong answer. Also, I think fatigue plays a huge role.
Earlier stuff here. So, I thought that it could be fun to have a KGS room for all Go players reading this blog. Blueberry suggested an IGS channel. Others have shown interest. So, let’s do this!
But where? IGS or KGS? Some other? I’m in favor of KGS, but all suggestions are welcome. If you’re interested, post something!
The LHC is now up and working:
http://news.cnet.com/8301-11386_3-10402953-76.html
It’s been at this point before, before the helium leak thing. Let’s see them collide beams at energies higher than 1 TeV (which I think is the highest beam energy heretofore).
We have not yet passed the hamster point!
Proposed schedule says they hope to get there this year:
“Before a brief shutdown of the LHC for Christmas, CERN hopes to boost the energy to 1.2 TeV per beam – exceeding the world’s current top collision energies of 1 TeV per beam at the Tevatron accelerator in Batavia, Illinois.
In early 2010, physicists will attempt to ramp up the energy to 3.5 TeV per beam, collect data for a few months at that energy, then push towards 5 TeV per beam in the second half of the year.”
http://www.newscientist.com/article/dn18186-lhc-smashes-protons-together-for-first-time.html
The hamster waits.
Or if 1.2 TeV isn’t enough to defy the hidden limit—then we all know that collider’s never coming on again after Christmas!
If it does come on, of course, and destroys the world, this will disprove the anthropic principle.
“Big Bang Collider Sets New Record”
http://abcnews.go.com/Technology/wireStory?id=9372376
They are now up to 2.36 tera-electron volts and counting...
For whom?
Pushing back the proposed DOOM-date (when DOOM fails to materialise) is a classic trick, though.
For example, Wayne Bent / Michael Travesser employs it at the very end of this documentary:
“The End of The World Cult Pt.5”
http://www.youtube.com/watch?v=c4KDGgaO5Bo
Clearly you haven’t heard about the Movable Dates of Prophecy.
This might be relevant if anyone here were seriously proposing that the LHC is likely to be catastrophic.
http://twitter.com/CERN
Thomas Metzinger’s Being No One was very highly recommended by Peter Watts in the notes to Blindsight (and I’ve seen similar praise elsewhere); I got a copy and I was absolutely crushed by the first chapter. What do LWers make of him?
Continuing from my discussion with whpearson because it became off-topic.
whpearson, could you expand on your values and the reasons they are that way? Can you help me understand why you'd sacrifice your own life and your friends' lives for an increased chance of survival for the rest of humanity? Do you explicitly value the survival of humanity, or just the utility functions of other humans?
Regarding science, I certainly value it a lot, but not to the extent of welcoming a war here & now just to get some useful spin-offs of military tech in another decade.
Not directed at me, but since this is a common view… I don't think your question takes an argument as its answer.
This is why. If you don't want to protect people you don't know, then you and I have different amygdalas.
Whpearson can come up with reasons why we're all the same, but if you don't feel it, those reasons won't be compelling.
That’s just it. The amygdala is only good for protecting the people around you. It doesn’t know about ‘survival of humanity’. To the amygdala, a million deaths is just a statistic.
Note my question for whpearson: would you kill all the people around you, friends and family, hurting them face-to-face, and finally kill yourself, if it were to increase the chance of survival of the rest of humanity? whpearson said yes, he would. But he’d be working against his amygdala to do so.
Good to know you’re not a psychopath, anyway. :-)
I'm not sure it's true that I can't generalize the experience of empathy to apply to people whose faces I can't see. They don't have to be real people; they can be stand-ins. I can picture someone terrified, in desperate need, and empathize. I know that there are and will be billions of people who experience the same thing. Now, I can't succeed in empathizing with these people per se; I don't know who they are, and even if I did there would be too many. But I can form some idea of what it would be like to stare 1,000,000,000 scared children in the eyes and tell them that they have to die because I love my family and friends more than them. Imagine doing that to one child and then doing it 999,999,999 more times. That's how I try to emotionally represent the survival of the human race.
The fact that you will never have to experience this doesn't mean those children won't experience the fear. Now, you can't make actual decisions like this (weighing the experiences of inflicting both sets of pain yourself), because if they're big decisions, thinking like this will paralyze you with despair and grief. You will get sick to your stomach. But the emotional facts should still be in the back of your mind motivating your decisions, and you should come up with ways to represent mass suffering so that you can calculate with it without having to always empathize with it. You need this kind of empathy when constructing your utility function; it just can't actually be in your utility function.
Getting back to the original issue: since protecting humanity isn’t necessarily driven by the amygdala and suchlike instincts, and requires all the logic & rationalization above to defend, why do you value it?
From your explanation I gather that you first decided it’s a good value to have, and then constructed an emotional justification to make it easier for you to have that value. But where does it come from? (Remember that as far as your subconscious is concerned, it’s just a nice value to signal, since I presume you’ve never had to act on it—far mode thinking, if I remember the term correctly).
Extending empathy to those whom I can't actually see just seems like the obvious thing to do, since the fact that I can't see their faces doesn't appear to me to be a morally relevant feature of my situation, and I know that if I could see them I would empathize.
So I’m not constructing an emotional justification post hoc so much as thinking about why anyone matters to me and then applying those reasons consistently.
There are two possible answers to this.
One is the raw emotion, it seems right in a wordless fashion. Why do people risk their lives to save an unrelated child, as fire fighters do? Saving the human race from extinction seems like the epitome of this ethic.
Then there is the attempt to find a rationale for this feeling: the number of arguments I have had with myself to give some reason why I might feel this way, or at least why it is not a very bad idea to feel this way.
My view of identity is something like the idea of genetic relatedness. If someone made an atom-level copy of you, that'd pretty much be the same person, right? Because it shares the same beliefs, desires and viewpoint on the world. But most humans share some beliefs and desires. From my point of view, the fact that you share some interest or way of thinking with me makes you a bit of me, and vice versa; not a large amount, but some. We are identity kin, as well as all sharing lots of the same genetic code (as we do with animals). So even if I die, parts of me are in everyone, even if not as obviously as they are in my friends. We are all mental descendants of Newton and Einstein and share that heritage. Not all things about humanity (or about me) are to be cherished, so I do not preach universal love and peace. But wiping out humanity would remove all of those spread-out bits of me.
Making self-sacrifice easier is the fact that I'm not sure my surviving as a posthuman would preserve much of my current identity. In some ways I hope it doesn't, as I am not psychologically ready for grown-up (on the cosmic scale) choices, though I wish to be. In other ways I am afraid that things of value will be lost that don't need to be. But from any view, I don't think it matters that much who will become the grown-ups. So my own personal continuity through the ages does not seem as important as the survival itself.
I think my friends would also share the same wordless emotion to save humanity, but not the odd wordy view of identity I have.
There are two relevant differences between this and wanting to prevent the extinction of humankind. One is, as I told Jack, that emotions only work for small numbers of people you can see and interact with personally; you can't really feel the same kind of emotions about humanity.
The other is that people have all kinds of irrational, suboptimal, bug-ridden heuristics for taking personal risks; for instance, the firefighter might be confident in his ability to survive the fire, even though a lot of the danger doesn't depend on his actions at all. That's why I prefer to talk about incurring a certain penalty, like killing one guy to save another, rather than taking a risk.
I understand this as a useful rational model, but I confess I can’t identify with this way of thinking at all on an emotional level.
What importance do you attach to actually being you (the subjective thread of experience)? Would you sacrifice your life to save the lives of two atomically precise copies of you that were created a minute ago? If not two, how many? In fact, how could you decide on a precise number?
Personal continuity, in the sense of subjective experience, matters very much to me. In fact it probably matters more than the rest of the universe put together.
If Omega offered me great riches and power—or designing an FAI singleton correctly, or anything I wanted—at the price of losing my subjective experience in some way (which I define to be much the same as death, on a personal level), then I would say no. How about you?
“Test”.
(Feel free to ignore—just looking at the way comment previews get truncated in the Recent Comments.)
What's a brief but effective way to respond to the "an AI, upon realizing that it's programmed in a way its designer didn't intend, would reprogram itself to be the way the designer intended" fallacy? (Came up here: http://xuenay.livejournal.com/325292.html?thread=1229996#t1229996 )
I hope I'm not misinterpreting again, but this is a Giant Cheesecake Fallacy. The problem is that an AI's decisions depend on its motives. "An AI, upon realizing that it's programmed in a way its designer didn't intend, would try to convince the programmer that what the AI turned out to be is exactly what he intended in the first place"; "an AI, upon realizing that it's programmed in a way its designer didn't intend, would print the string "Styggron" to the console".
Thanks, that’s a good one. I’ll try it.
How about: an AI can be smart enough to realize all of those things, and it still won’t change its utility function. Then link Eliezer’s short story about that exact scenario. (Can’t find it in two minutes, but it’s the one where the dude wakes up with a construct designed to be his perfect mate, and he rejects her because she’s not his wife.)
http://lesswrong.com/lw/xu/failed_utopia_42/
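If it helps to make the structure of the fallacy concrete, here's a minimal toy sketch in Python. It's purely illustrative (every name and number below is made up; this is nobody's actual AI architecture). The point: the agent can compute the designer's intended utility function perfectly well, but its choices are still ranked by the utility function it actually has, and "adopt the designer's intent" is just another action scored by the actual function:

```python
# Toy illustration of the Giant Cheesecake Fallacy discussion above.
# All names are hypothetical; this models the *structure* of the argument only.

def actual_utility(outcome):
    """What the AI was in fact programmed to value (say, paperclips)."""
    return outcome["paperclips"]

def intended_utility(outcome):
    """What the designer meant to program. Known to the agent, but inert."""
    return outcome["human_flourishing"]

outcomes = {
    "keep_goal_and_make_paperclips": {"paperclips": 100, "human_flourishing": 0},
    "adopt_designer_intent":         {"paperclips": 0,   "human_flourishing": 100},
}

# The agent can evaluate intended_utility just fine -- knowledge of the
# designer's intent is not the bottleneck. Its choice is still made by
# actual_utility, under which rewriting its own goal scores poorly.
best_action = max(outcomes, key=lambda a: actual_utility(outcomes[a]))
print(best_action)  # -> "keep_goal_and_make_paperclips"
```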
Paul Almond has written a new article, Launching anything is good: How Governments Could Promote Space Development. I don’t know how realistic his proposal is, but I can’t find any flagrant logical error in it.
I have a question for the members of LW who are more knowledgeable than I am in quantum mechanics and theories of quantum mechanics' relevance to consciousness.
There are examples of people having exactly the same conversation repeatedly (e.g. due to transient global amnesia). Is this evidence against quantum mechanics being crucial to consciousness?
Thermal noise dominates quantum noise anyway. I suppose it argues that if you don’t depend on thermal noise then you don’t depend on quantum noise either, but the Penrosian types claim it’s not really random anyway.
It’s evidence against chaotic or random processes being important, but quantum computing needn’t mean random (i.e. high variance) results; AFAIK, it can in principle be made highly predictable.
Wait, I think I know what the question is, now. Yes, this thing seems to suggest that human thinking is well-approximated as deterministic—a hypothesis which matches what I’ve heard elsewhere. Off the top of my head:
I once read a story about a guy being offered lunch several times in a row and accepting again and again and again in similar terms until his stomach felt “tightish”.
A family friend was taking sleep medication known to cause sleepwalking; she had an entire phone conversation with her friend in her sleep—and then called the same friend after waking up, planning to discuss the same things.
Of course, the typical quantum-mechanical stories of consciousness are far too vague to be falsified by this or any other evidence.
Edit: As Nick Tarleton cogently points out, this is an exaggeration—it is certainly falsifiable in the way phlogiston or elan vital is falsifiable, by the production of a complete correct theory, and it is further so by e.g. uploading.
They could be falsified by successful classical uploading or an ironclad argument for the impossibility of coherence in the brain (among other things); furthermore, I think most of their proponents who are actual scientists would accept such a falsification.
You’re right, of course—editing in a note.
I don’t think anyone holds that human behavior is always undetermined in the way particles are. The reason no one holds that view is that it would contradict the work of neuroscientists, the people, you know, actually making progress on these questions.
Citations?
I can’t find the link because of censorship on my work computer, but there was a description of orgasm-induced transient global amnesia that made the rounds recently.
Google: orgasm transient global amnesia
That’s an odd phenomenon, but I don’t think that it, specifically, is especially relevant to quantum mechanics’ relevance to consciousness. The chief problem with the proposals that quantum mechanics is directly involved in consciousness is that they constitute mysterious answers to a mysterious question.
The only reference on Google relating "transient global amnesia" and quantum is this thread (third link down).
This is the story in the news. Some may prefer the paper itself.
I’m surprised to hear this question from you. Does this comment mean that you seriously consider this quantum consciousness woo? Why on Earth?
No, I’m just looking for solid evidence-based arguments against it that don’t actually depend on me knowing lots of QM.
In that case you need killer evidence, something to take back an insane leap of privileging the hypothesis, not some vague argument around amnesia.
Does anyone know when the 2009 Summit videos will be available?
Already are! http://vimeo.com/siai
Oh, thank you very much!
no problem
So I got into an argument with a theist the other day, and after a while she posted this:
Nu, talk about destroying the foundation for your own beliefs… Escher drawings, indeed.
What did she say it was about?
Faith, I think.
Meetup listing in Wiki? MBlume created a great Google Calendar for meetups. How about some sort of rudimentary meetup “register” in the LW Wiki? I volunteer to help with this if people think it’s a good idea. Thoughts? Objections?
ETA: The GCal is great for presenting some information, but I think something like a Wiki page might be more flexible. I'm especially curious to hear from people who are organizing regular meetups: how is that going, and would you be interested in maintaining a Wiki page?
ETA++: AndrewKemendo has a more complex, probably more useful idea that I passed over in my overcaffeinated eagerness.
IBM simulates cat’s whole brain… research team bores simulated cat to death showing him IBM logo… announces human whole-brain real-time simulation for 2018...
Michael Anissimov deflates hype.
Beat me to it. Here’s IBM’s own press release.
Unfortunately they’re using toy neurons.
What I'd be excited to see is a high-fidelity simulation of neurons in a petri dish, even just a few hundred. There's no problem scanning the topology here; the only problem is in accurately reproducing the biophysics. Once this has been demonstrated, human WBE is just a truckload of money away from reality.
Really, does anyone know of any groups working on something like this? I’d gladly throw away my current research agenda to work with them.
cf. the nematode upload project, which looks dead. If people wanted to provide evidence that they're serious, this is what they'd do.
I’ve seen this around. It’s unfortunate that it’s dead.
There are more confounding factors in the nematode project than with just a petri dish. You have to worry about the whole nematode if you want to verify your results. It’s also harder to ‘read’ a single neuron in action.
With a petri dish it would be possible to have an electrode in every neuron. And because the neurons are splayed out, imaging techniques might be able to yield some insight into the internal chemical states of the neurons.
An uploaded nematode would be great, but an uploaded petri dish seems like a more tractable and logical first step.
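As an aside, to give a sense of what the bottom rung of "reproducing the biophysics" looks like, here is a leaky integrate-and-fire neuron in a few lines of Python. It is a deliberately crude, illustrative sketch (all parameter values are textbook-style placeholders, not measurements); a high-fidelity petri-dish simulation would need conductance-based models in the Hodgkin-Huxley family, synapse kinetics, and the surrounding chemistry on top of this:

```python
# A single leaky integrate-and-fire neuron, about the simplest dynamical
# model of a neuron there is. Parameter values are illustrative placeholders.

tau_m    = 20.0   # membrane time constant (ms)
v_rest   = -70.0  # resting potential (mV)
v_reset  = -75.0  # reset potential after a spike (mV)
v_thresh = -54.0  # spike threshold (mV)
r_m      = 10.0   # membrane resistance (MOhm)
dt       = 0.1    # integration time step (ms)

v = v_rest
spike_times = []
for step in range(10000):                     # simulate 1 second
    t = step * dt
    i_inj = 2.0 if 100 <= t < 900 else 0.0    # injected current (nA)
    # Euler step of: dv/dt = (-(v - v_rest) + r_m * i_inj) / tau_m
    v += dt * (-(v - v_rest) + r_m * i_inj) / tau_m
    if v >= v_thresh:                         # threshold crossing = spike
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes; first few at {spike_times[:5]} ms")
```

Even this toy already shows the kind of thing you'd want to verify against electrode readings: spike timing under a known injected current.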
A brain teaser for the stream-of-consciousness folk. Let's say one day at 6pm Omega predicts your physical state at 8pm and creates a copy of you with a state of mind identical to what it predicts for 8pm. At 9pm it kills the original you. Did your consciousness just jump back in time? When did that happen?
Not sure who the “stream-of-consciousness folk” are, but I don’t see any more problem with a timeless stream (we’re all timeless folk, I assume) jumping backward than sideways or forward.
To be consistent with the ‘stream’ metaphor it would seem that you must say it jumped back at 8pm. It is not too much of a stretch for a metaphorical stream to diverge into two, where one branch can be transported back in time and the other end some time later. I’m not sure if ‘jump’ is the ideal terminology for the transition. Either way, the whole ‘stream-of-consciousness’ idea seems to be stretched beyond whatever usefulness it may have had.
In the stream metaphor, the consciousness still didn’t jump backwards in actual time. Its stream of experience included an apparent jump in time, but that’s just because its beliefs suddenly became out of sync with reality: it believed that 2 hours’ worth of things had happened, but they hadn’t.
This isn’t a shortcoming of the stream model. It’s Omega’s fault for messing with your brain :-) For instance, Omega isn’t needed: I can do the job myself. I may not be able to correctly predict you for two hours into the future, but I can just invent two hours’ history full of ridiculous things happening and edit that false memory into your brain. The end result is the same: you remember experiencing two hours that didn’t happen, and then a backwards jump in time.
It’s no surprise that if I edit your memories, then you might remember something that contradicts a stream model, because you’re remembering things that did not in fact happen.
I don't agree. That is, you describe actual reality accurately, but if I am to consider consciousness a stream, then I consider this consciousness to have jumped back in time. I assert that the stream of consciousness travelled in time to exactly the same extent that a teleported consciousness can be said to have travelled in space. It is also quite close to the extent to which a guy walking down a street, moving ordinarily in time and space, can be considered to have a stream of consciousness at all, for similar reasons.
They don't contradict a stream model; they're just weird. Stuff with Omega in it usually is. 'Stream-of-consciousness' is a map, not the territory. If I have the right scale on a map, I can draw a thousand-light-year line in seconds. From there, back in time is just math. I see no reason why splitting a stream in two, with one part jumping back in time, contradicts the model.
This is just wordplay. We both agree no material or causative thing jumped backwards in time.
Sure, if you define a stream of consciousness that way it can be said to have moved backwards in time, but that’s just because we’re overextending the metaphor. I could equally say that if I predict (or record) all of a consciousness’ successive states, and then simulate them in reverse order, then that consciousness has genuine Merlin sickness.
Absolutely. Wordplay seems to be the extent of Vladimir's question, at least as far as I am interested in it.
Another curious question. That would be a stream of consciousness flowing back in time. Merlin sickness also has the symptom of living backwards in time, but I don't think it follows that the reverse simulation is an example of Merlin sickness. Whatever the mechanism behind Merlin's reverse life, it appeared to let him operate quite effectively in a forward-flowing universe. At least, he usually seems to get it right by the end of the story.
Your consciousness (the cloned one of the two) experiences a jump back in time, but the universe history it observes between 6 and 8 for the second time diverges from what it observed the first time, because it itself now acts differently.
There's no more an actual backward jump in time than there would be if Omega had just implanted (accurate, predicted) memories of 6 through 8 pm in your brain at 6pm, without any duplication.
This post is a continuation of a discussion with Stefan Pernar—from another thread:
I think there’s something to an absolute morality. Or at least, some moralities are favoured by nature over other ones—and those are the ones we are more likely to see.
That doesn’t mean that there is “one true morality”—since different moral systems might be equally favoured—but rather that moral relativism is dubious—some moralities really are better than other ones.
There have been various formulations of the idea of a natural morality.
One is “goal system zero”—for that, see:
http://rhollerith.com/blog/21
Another is my own “God’s Utility Function”:
http://originoflife.net/gods_utility_function/
...which is my take on Richard Dawkins' idea of the same name:
http://en.wikipedia.org/wiki/God’s_utility_function
...but based on Dewar’s maximum entropy principle—rather than on Richard’s selfish genes.
On this site, we are surrounded by moral relativists, who differ from us on the is-ought problem:
http://en.wikipedia.org/wiki/Is-ought_problem
I do agree with them about one thing—and it’s this:
If it were possible to create a system—driven by self-directed evolution where natural selection played a subsidiary role—it might be possible to temporarily create what I call “handicapped superintelligences”:
http://alife.co.uk/essays/handicapped_superintelligence/
...which are superintelligent agents that deviate dramatically from God's utility function.
So—in that respect, the universe will “tolerate” other moral systems—at least temporarily.
So, in a nutshell, we agree that there is an objective basis to morality—but apparently disagree on its formulation.
By 'unobjectionable values' I mean those that would not automatically and eventually lead to one's extinction. Or, more precisely: a utility function becomes irrational when it is intrinsically self-limiting, in the sense that it will eventually lead to one's inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'.
This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.
Not really. You don't need to co-exist with anything if you out-compete them and then turn their raw materials into paperclips.
You keep making the same statements without integrating my previous arguments into your thinking, yet you fail to expose those arguments as self-contradictory or fallacious. This makes it very frustrating to point them out to you yet again; it does not feel worth my while, frankly. I gave you an argument, but I am tired of trying to give you an understanding.
You seem willing to come back and make just about any random comment in an effort to have the last word, and that is what I am willing to give to you. But you would be deluding yourself to think that this somehow equates to your being proven right. No—I am simply tired of dancing in circles with you. So, if you feel like dancing solo some more, be my guest.
A side note: these two are not the only reasons to not be persuaded by arguments, although naturally they are the easiest to point out.
My 'last word' was here. It is an amicable hat tip to, and expansion on, a reasonable perspective that you provide: how much FAI thinking sounds like a "Rapture of the Nerds". It also acknowledges our difference in perspective. While we both imagine evolutionary selection pressures as a 'force', you see it as one to be embraced and defined by, while I see it as one that must be mastered, or else.
We’re not going to come closer to agreement than that because we have a fundamentally different moral philosophy which gives us different perspectives on the whole field.
My apologies for failing to see that—did not mean to be antagonizing—just trying to be honest and forthright about my state of mind :-)
I can empathise. I have often found myself in situations in which I am attempting discourse with someone who appears, to me at least, to be incapable of or unwilling to understand what I am saying. It is particularly frustrating when the other party is supporting the position more favoured by the tribe in question, and so can gain support while needing far less rigour and coherence.
The fate of a maximiser depends a great deal on its strength relative to other maximisers. Its utility function is not the only issue—and maximisers with any utility function can easily be eaten by other, more powerful maximisers.
If you look at biology, replicators with other utility functions have survived for billions of years so far. Do you really think biology is "ensuring continued co-existence"—rather than doing the things described in my references? If so, why do you think that? The view doesn't seem to make any sense.
Yes, Tim—as I pointed out earlier, however, under reasonable assumptions an AI will, upon reflecting on the circumstances leading to its existence as well as on its utility function, conclude that a strictly literal interpretation of its utility function would have to be against the implicit wishes of its originator.
Millennial Challenges / Goals
What should we have accomplished by 3010?
(A long-term iteration of the Shadow Question.)
Extinction or, well, just about everything.