Open thread, Nov. 10 - Nov. 16, 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I’ve been commenting on the site for a few months now, but so far just replies and responses. I’ve been thinking about potential contributions for a top-level discussion post, and I thought I’d ask about it here first to gauge interest.
I have taught university classes in the past, usually with traditional methodology but in one memorable case with some experimental methods. There were a few ways this was different; as an example, we used ‘high expectations, low stakes’- we allowed students to retake any assignment as many times as they liked, but their grade for the entire class was basically the lowest grade they got on any assignment. (This was partly inspired by video games, actually.)
It will obviously be of particular interest to anyone else who does teaching, but there’s reasonable hope that some of my experiences there would be of use to autodidacts. Do you think this would be a good use of my time?
I think the phrase in the education literature is mastery learning: my exposure to it was discussion of how Khan Academy does math tests. Because they’re on a computer-based system, they can generate an arbitrary number of problems of a particular form (like, for example, ‘multiply two three-digit numbers together’) and give each student as many problems as it takes for them to get 10 right in a row. Sometimes the student gets the lesson and only does 10 questions; sometimes the student takes 200 tries to get 10 right in a row, but they always master the skill before they move on (or they spend a lot of time getting very lucky).
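For concreteness, here’s a rough sketch of that loop in Python. The success probability is a stand-in for actual skill, and the names are made up for illustration; Khan Academy’s real system generates and grades actual problems rather than flipping a weighted coin.

```python
import random

def mastery_session(p_correct, streak_needed=10, max_attempts=10_000):
    """Serve problems until the student gets `streak_needed` right in a row.

    `p_correct` is a stand-in for the student's skill at this problem type.
    Returns the total number of attempts it took."""
    streak, attempts = 0, 0
    while streak < streak_needed and attempts < max_attempts:
        attempts += 1
        if random.random() < p_correct:   # stand-in for grading a generated problem
            streak += 1
        else:
            streak = 0                    # any miss resets the streak
    return attempts

# A student at ~95% usually finishes within a couple dozen attempts;
# a student at ~60% can easily take hundreds.
print(mastery_session(0.95), mastery_session(0.60))
```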
I think your account will be received fairly well in Discussion if you present it like the above.
Isn’t that a high-stakes situation?
Holistically, yes. But they are free to fail any given assignment any number of times- in fact, many would sign up to take quizzes before studying, as a preview of what the assessment would look like and a way to rapidly jump through sections they might have studied elsewhere.
Does anyone know how Eliezer first met Robin? How did the former end up as a co-editor of the latter’s blog?
I don’t know, but I’m guessing the Extropians mailing list.
That’s my guess, too. I know that both Eliezer and Robin posted there. Eliezer had definitely come to Robin’s attention by 1999; he is cited in Robin’s “Comments on Vinge’s Singularity” page.
Of course, the most straightforward way to answer this question is to simply ask either of them.
Your guess and your evidence are both correct.
Even better, ask both.
Ok. How? Can I just send Eliezer an email?
You can send any user a private message, which shows up in their inbox next time they check this forum, so they will definitely see it. Whether they choose to reply is up to them.
There is a handy link on this page.
OB was sponsored by the Future of Humanity Institute (IIRC), so perhaps they encouraged RH and EY to post there? You could always ask at Robin Hanson’s monthly open thread.
You’ve got a lot of backwards arrows in that diagram there.
Is that on overcoming bias?
Yes.
Nick Bostrom toured a bunch to promote Superintelligence after it came out. This included presentations in which he would give a very condensed summary of the book, roughly the same each time. Anyone who has read the book probably won’t hear anything new in them.
However, the Q&As that took place after these presentations are a somewhat interesting look at how people react to hearing about this subject matter.
Talks at Google; the Q&A starts at 45:14. The first person in the audience to speak up is Ray Kurzweil.
His talk at the Oxford Martin School, Q&A starts at 51:38.
His talk at UC Berkeley, hosted by MIRI. Q&A starts at about 53:37.
There have been a lot of posts over the years about the fungibility of money and time—but strangely (at least to me), they all fill up with suggestions for how to turn money into time. Personally, I have the opposite desire—to turn time into money. I have found it extremely hard to find a decently-paid part-time job that fits around my main job. I also don’t know how to get into freelancing.
Does anyone have any good suggestions? Possibly relevant info: I live in the UK, and am a programmer, of good but not phenomenal skill.
Why do you think that a part time job is the way to go? Maybe it’s better to switch your main job to a higher paying job that’s more demanding?
I would love to make such a switch, and am currently working on it. But that is a long-term goal, and in the meantime I’m looking to turn some time into money on the margin.
Freelancing is your best bet. Listing yourself online at places like Elance is a good start. Submit aggressive bids. Understand that feedback is CRITICAL—bad reviews can sink you, even 4 stars out of 5 is “bad”, and you’ll have to bid low until you have a good feedback record.
Alternately, you could do something like make an app. You probably won’t make much money directly, but it could be a good longer-term investment in skills and resume.
Also, if you are considering selling some stuff you have, make an effort to shop around and get the best price. Putting it on ebay will yield higher prices than a garage sale, but take more time.
Hm. This advice runs exactly counter to the Charge Your Happy Price advice. Why is that?
Freelancing is brutally competitive, especially if you have no track record.
My advice is second hand. Maybe if he has a good network already, or is just willing to wait a while for work, he can start charging his “happy price” immediately.
Attach your “(” to your “]”.
Dudeism? What in the world are they blathering about?
It turns out Dudeism is a thing. Wikipedia summarizes it best:
I thought it’d be worth bringing to attention here, because if there’s one adjective that would not apply to the online LW community, it’s “laid back”. Note that many of us are lazy, but we struggle with laziness, we keep looking for self-help and trying to figure out our motivation systems and trying so hard to achieve. Other urges and “sufferings” we struggle with are the need to fit in, the need to make sense of the world, the need to be perfectly clear in thought and expression (and the need to demand that of others), and so on and so forth.
How much could we benefit from being more laid-back, from openly and deliberately saying “fuck it”? From doing what we actually want to do without regard for what’s expected of us?
The thoughts on this post aren’t very well-articulated, and perhaps I’m misjudging LW completely, but, um, you know, that’s just, like, uh, my opinion, man. Obviously, it’s open for debate; that’s what we’re here for, yes?
Hmm. I have pretty strong Daoist / Stoic tendencies, and a large part of that deals with rejection of “should-ness;” that is, things are as they are, and carrying around a view of how the world “should” be that disagrees with the actual world is, on net, harmful.
I’ve gotten some pushback from LWers on that view, as they use the delta between their should-world and their is-world to motivate themselves to act. As far as I can tell, that isn’t necessary; one can be motivated by the is-world directly, and if one reasons in the is-world one is more likely to make successful plans than if one reasons in the should-world (which is where one will reason, since diffs between the should-world and is-world are defects!).
But I think that LW is useful at dissolving that pushback; the practice of cashing out beliefs as predictions about the world rather than tribal identifications or moral claims is basically the practice of living in the is-world instead of the should-world.
My understanding is that defects are like speed-bumps and potholes, pieces where the harmonious flow of reality is interrupted, dissonances and irregularities. Going with the flow and being in harmony with the world requires more sensitivity, training, and awareness than simply letting oneself get carried by the current. It’s the difference between ‘surfing’ waves, and ‘getting engulfed’ by them, yes?
I would say substantially. LW largely seems to advocate for preference utilitarianism, whereas EA and animal rights subsets of the group often come suspiciously close to deontological “whatever you do care about, here is what you should care about”. As a matter of fact, the whole advocacy for consistency in ethics (e.g. “shut up and multiply”) can backfire since System 1’s values are not necessarily consistent. I’m not suggesting giving up on these attempts, but I guess that many people would benefit from being able to listen to System 1’s voice saying “I want to invent a lightsaber” without having System 2 immediately scream “but people in Africa are suffering, and you’re just being scope insensitive”.
Well, sorting out system 1’s inconsistencies can help one feel happier and more at peace with oneself. You can’t achieve serenity just by giving in to all your impulses, because they contradict each other.
Sure, and I found that incredibly useful in my life as well—particularly, it helps to stop feeling bad about what’s considered morally questionable, but doesn’t in fact hurt anybody. But some people may go way over the top on that, and it may be useful to throttle down as well.
I think the steelman of “The Dude” is that you shouldn’t run your mind like a police state, it’s cutting against the grain.
But “The Dude” is kinda “trampy,” for lack of a better word, I don’t think he’s a diamond in the rough or anything like that.
Nah, he’s no hero, he’s just a selfish man. But, of all the characters, he is the only one who is honest about doing nothing, while every other character in the film (and many, many people in Real Life) goes to great lengths to sustain the illusion of activity and productiveness.
That doesn’t make him any better, he’s just failing in a different way. Nul points.
And then, some people are active and productive. I don’t know the film, but from your description of it, it’s about a bunch of losers, in a fictional world from which every other possibility is excluded. Why should I take notice of anything in it?
Because there’s a grain of truth in it that extends far beyond its admittedly limited scope.
I prefer larger doses.
Raymond Smullyan’s The Tao is Silent is possibly relevant.
There is, well, y’know, early Christianity… :-)
Matthew 6:25-27:
and Matthew 6:34:
Jesus and the early Christians were all about proselytising, sacrifice, and martyrdom. The Dude is about none of those things. He doesn’t try and persuade others to become Dude-like, and he doesn’t stand up for what he believes in—if he believes in anything. The Dude isn’t a preacher, he’s a bowler. He’s all about going with the flow, in his own little way.
All true, my comment was somewhat tongue-in-cheek :-) The early Christians tended to be awfully serious fellows. That quote from Matthew, though, was popular in the California hippy scene much, much later :-D
The pre-ecclesiastical Jesus of Nazareth is referenced as a Dude avant-la-lettre (as are Siddhartha, Laozi, Epicurus, Heraclitus, and other counter-culturals that gained a cult following). Not to be confused with Jesus of North Hollywood, who is the opposite of a Dude.
Also, again with the smileying. :-(
Chill, dude :-P
The Dude abides… {B{í=
QFT
I could speculate why this is the way it is but that would be too much work to type up.
I am constructing a political bias quiz together with Spencer Greenberg, who runs the site Clearer Thinking, and wonder if people could help me come up with questions. The quiz will work like this. First, you’ll respond to a number of questions regarding your political views: e.g., republican or democrat, pro-life or pro-choice, pro- or anti-immigration, etc. Then you’ll be given a number of factual questions. On the basis of your answers, you’ll be given two scores:
1) The number of correct answers—your degree of political knowledge.
2) Your degree of political bias.
The assignment of political bias will be based on the following reasoning. Suppose you’re a hard-core environmentalist, and are consistently right about the questions where hard-core environmentalists like the true answer (e.g. climate change) but consistently wrong about the questions where they don’t (e.g. GMOs). This suggests that you have not reviewed these questions impartially, but that you acquire whatever factual beliefs suit your political opinions—i.e. that you’re biased. Hence, the higher the ratio between the correct answers you like and the correct answers you dislike, the more biased you are.
(The argument is slightly more complicated, but this should suffice for the present purposes. Also, the test shouldn’t be taken too seriously—the main purpose is to make people think more about political bias as a problem).
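To make the scoring idea concrete, here is a toy sketch of how such a bias score might be computed. This is only my guess at the reasoning above (and it uses a difference of accuracies rather than a raw ratio, to avoid dividing by zero); it is not the actual Clearer Thinking implementation:

```python
def bias_score(answers):
    """answers: list of (is_correct, likes_true_answer) pairs, where
    likes_true_answer means the respondent's politics favor the true answer."""
    liked    = [correct for correct, liked in answers if liked]
    disliked = [correct for correct, liked in answers if not liked]
    acc_liked    = sum(liked) / len(liked) if liked else 0.0
    acc_disliked = sum(disliked) / len(disliked) if disliked else 0.0
    # 0 = even-handed; near 1 = only right when the answer is politically convenient
    return acc_liked - acc_disliked

# A hard-core environmentalist who aces the climate questions but misses the GMO ones:
print(bias_score([(True, True), (True, True), (False, False), (False, False)]))  # 1.0
```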
The questions are intended for an American audience. I have come up with the following questions so far:
1) Which of the following statements best describes expert scientists’ views of the claim that global temperatures are rising due to human activities? (This question is taken from a great paper by Dan Kahan.) (Most agree it’s true, divided, most agree it’s false)
2) Which of the following statements best describes expert scientists’ views of the claim that genetically modified foods are safe? (Same possible answers)
3) Which of the following statements best describes expert scientists’ views of the claim that humans are causing mass extinction of species at a rate that is at least 100 times the natural rate? (Same possible answers)
4) Which of the following statements best describes expert scientists’ views of the claim that radioactive wastes from nuclear power can be safely disposed of in deep underground storage facilities? (Same possible answers)
5) Which of the following statements best describes expert scientists’ views of the claim that humankind evolved from other species through natural selection? (Same possible answers)
6) Which of the following statements best describes expert scientists’ views of the claim that the death penalty increases homicide rates? (Same possible answers)
7) Studies show that on spatial reasoning tests, male mean scores are higher than females’, whereas the converse is true of emotional intelligence tests. (True/false)
8-10) (These are taken from Bryan Caplan’s excellent The Myth of the Rational Voter ) Expert economists were given the following possible explanations for why the economy isn’t doing better. For each one, please indicate whether they thought it is a major reason the economy is not doing better than it is, a minor reason, or not a reason at all:
8) “Taxes are too high”
9) “Foreign aid spending is too high”
10) “Top executives are paid too much”
11) How much does the US spend on foreign aid, as a share of GDP? (0-1%, 1-3%, 3+%)
I need perhaps 10-15 additional questions. The questions need to have the following features:
1) The answer needs to be provable. That is why, in many of the questions, I ask what expert scientists believe about P – on which there are surveys I can point to – rather than about P itself. However, you can also have questions about P itself if you can point to reliable sources such as government statistics, as I do in question 11.
2) They should be “baits” for biased people; i.e. such that biased people should be expected to give the wrong answer if they don’t like the true answer, and the true answer if they like it.
3) The questions shouldn’t be very difficult. If you give people questions on, e.g., numbers, you have to give fairly large intervals, as I do in question 11. Also, you cannot ask overly outlandish questions (e.g., questions on small parts of the federal budget).
At present I seem to have more questions where the liberal answer is the true one, so “pro-conservative” questions are particularly welcome.
Any suggestions of questions or other forms of input are highly appreciated! :)
I see no reason to bundle those claims.
True. I’ll split them. Thanks!
I would rather rephrase “X is too high” as “X should be reduced” if that’s what you want to ask. Otherwise it seems to shift the perspective from policy-making to emotional evaluation.
How about a question about the average IQ in some sub-Saharan country?
Hm, good idea. Could be very controversial, though, and I’m not sure whether it would be sufficiently provable. But yes, a question where the true answer is some negative fact about an African country is a good idea. Thanks!
If you don’t want to go into the IQ area, personal values are a good topic.
The World Value Survey seems a good source.
In some African countries more Muslims believe that homosexuality should be punishable by death than most Western liberals would like.
A number of the above questions are not asking, “Is X true?” but rather “Do group Y believe that X is true?”
But once you get into asking “Do group Y believe that X should be done?” you’re not talking about respondents’ model of others’ factual “is” beliefs, but respondents’ model of others’ moral “ought” beliefs.
That might be a very different thing.
Excellent! Yes those sorts of questions are even better.
Without knowing anything about the extinction of species, I could guess that the answer is “most scientists agree”.
If the correct number is not 100 but is large, the question would incorrectly conclude that some people who are not biased are biased (since someone who falsely thinks the number is 100 when it’s really 75 or 125 is not biased, but would incorrectly answer “scientists agree”).
If the correct number is not 100 but is small, the question would incorrectly conclude that some people who are biased are not biased (since someone who falsely thinks the number is 75 or 125 when it’s really 1 is biased, but this bias would be undetectable since he would correctly answer the question with “scientists disagree”)
Therefore the correct number is 100.
This question should be phrased using words like “many”, not using the exact number 100.
For pro-conservative questions, one could be: In a recent poll of 15,000 police officers, did a large majority think assault weapon bans are effective, a small majority think they are effective, opinion split about equally, a small majority think they are ineffective, or a large majority think they are ineffective? http://www.policeone.com/Gun-Legislation-Law-Enforcement/articles/6183787-PoliceOnes-Gun-Control-Survey-11-key-lessons-from-officers-perspectives/
Generally, however, people should be suspicious of such questions because in real political discourse, these questions are used to set the terms of the debate.
-- Does it matter whether the death penalty increases homicide rates?
-- Does it matter that humans cause climate change regardless of the size of the change?
-- I would think that any truly expert economist would say “we have no way to know the answers to these questions to the same degree as we know physics or chemistry answers. I could give you my educated opinion, but there’s still a lot of disagreement within economics”.
Hm, you’re right that question 3) is not formulated rightly. Great comment!
Thanks for the poll data. One worry is, though, that the police officers might not be seen as proper “experts” in the same sense as climate scientists are. I need to think about that.
The data are very interesting, though. US police officers seem to be very conservative indeed.
Thanks!
The point of the exercise is detecting bias. As such it’s not important whether the answer to the question is important.
It doesn’t work that way.
Imagine that you don’t know much about homeopathy, but you do know that experts oppose it. Then someone asks you the question “The number of homeopathic cures of all types rejected by the FDA for not being effective is (much less than) (less than) (equal to) (greater than) (much greater than) the number of allopathic cancer cures.”
If you approached this question out of context, you would think “I know that experts believe homeopathy isn’t effective. The FDA uses experts. So experts probably rejected lots of homeopathic remedies.”
If you approached this question in context, however, you would reason “I know that experts believe homeopathy isn’t effective. But given the way this question is phrased, it’s being asked by a homeopath. He’s probably asking this question because it makes homeopathy look good, so this must be an unusual situation where experts’ belief on homeopathy doesn’t affect the answer, and he’s falsely trying to imply that it does. So the FDA probably rejected few homeopathic remedies for being ineffective, but for some reason this doesn’t reflect the belief of experts.”
For instance, if most homeopathic treatments are not submitted to the FDA, they would not have a chance to reject them.
Actually, one of the sponsors of the act that created the FDA was a homeopath and he wrote in an exception for homeopathy, so homeopathic treatments don’t have to prove they are safe and effective.
Also, keep note of who this question would falsely mark as biased. Someone who opposes homeopathy and correctly knows that experts also oppose homeopathy, who tries to reason the first way, would be marked down as biased, because he answered in a way favorable to his own position but contrary to the facts. Yet answering the first way doesn’t mean bias, it just means he ignored the agenda of the person asking the question.
On your question 1, I would rephrase it to say that human activities tend to cause global temperatures to rise. Or that human activities have caused global temperatures to rise. Otherwise, you get stuck in the whole issue about the “pause,” which might show that temperatures are not currently rising for reasons that are not fully understood and are subject to much debate. The paper you cite was from early 2010, and was based on research before that, so the pause had not become much-discussed by then.
One thing that I think will be interesting if you run the quiz is to identify a group who resist polarization and are between the extremes. For example, I think there are plenty of people that agree that carbon dioxide causes temperatures to rise (all else being equal) but believe that the feedback loop is not significantly positive. People from each extreme tend to lump these middle-grounders in with the people at the other extreme: “You’re an alarmist!” “You’re a denier!” etc.
Good point—I’ll change the formulation.
What do economists think of the American Reinvestment and Recovery Act of 2009?
Or, the following based on http://ew-econ.typepad.fr/articleAEAsurvey.pdf. (I’ve bolded the answers I think are supported, but you should check my work!)
“What do economists think about taxes on imported goods?” Most favor; divided; most disfavor.
“What do economists think about laws restricting employers from outsourcing jobs to other countries?” Most favor; divided; most disfavor.
“What do economists think about anti-dumping laws, which prohibit foreign manufacturers from selling goods below cost in the US?” Most favor; divided; most disfavor.
“What do economists think about subsidizing farming?” Most favor; divided; most disfavor.
“What do economists think about proposals to replace public-school financing with vouchers?” Most favor; divided; most disfavor.
“What do most economists think about the proposal of raising payroll taxes to close the funding gap for Social Security?” Most favor; divided; most disfavor.
“What do economists believe the effect of global warming will be on the US economy by the end of the 21st century?” Most believe it will help significantly; divided; most believe it will hurt significantly.
“What do economists think about marijuana legalization?” Most favor; divided; most disfavor.
“What do economists believe about legislation for universal health insurance?” Most favor; divided; most disfavor.
“Do more economists believe that the minimum wage should be raised by more than $1 or should be abolished?”
Great!
For the true/false(/divided) questions, it’d be wise to aim for an even split in true/false(/divided) answers to minimize acquiescence bias. At the moment disproportionately few answers for questions 1-7 are “false”, so someone who just likes agreeing with things has an unfair advantage there!
Haha! Good point!
You could have a question about the scientific consensus on whether abortion can cause breast cancer (to catch biased pro-lifers). For bias on the other side, perhaps there is some human characteristic the fetus develops earlier than the average uninformed pro-choicer would guess? There seems to be no consensus on fetus pain, but maybe some uncontroversial-yet-surprising fact about nervous system development? I couldn’t find anything too surprising on a quick Wiki read, but maybe there is something.
I would expect that even as a fairly squishy pro-abortion Westerner (incredibly discomforted with the procedure but even more discomforted by the actions necessary to ban it), I’m likely to underestimate the health risks of even contragestives, and significantly underestimate the health risks of abortion procedures. Discussion in these circles also overstates the effectiveness of conventional contraception and often underestimates the number of abortions performed yearly. The last number is probably the easiest to support through evidence, although I’d weakly expect it to ‘fool’ smaller numbers of people than qualitative assessments.
I’m also pretty sure that most pro-choice individuals drastically overestimate its support by women in general—this may not be what you’re looking for, but the intervals (40% real versus 20% expected for women who identify as “pro-life”) are large enough that they should show up pretty clearly.
These are good ideas. You’ve got it quite right—these are exactly the kinds of questions I’m looking for. Possibly the health risks questions are the best ones—I’ll see what evidence I can find on those issues. Thanks!
It wouldn’t surprise me if people generally overestimate the safety and effectiveness of drugs and medical procedures—would you want to compare the accuracy of people’s evaluation of contraceptives and abortions to their evaluation of medicine in general?
It also wouldn’t surprise me if there’s a minority who drastically underestimate the safety and effectiveness of medicine.
If you want to catch the other side on the global warming debate as well, there are a bunch of claims where I suspect the average liberal is overconfident. Maybe something like “Hurricane X wouldn’t have happened without global warming”. The IPCC report shows their confidence for various claims, and there’s likely something there to catch liberals.
Yes, something like that could probably catch some liberals, that’s true.
Not unlikely at all. Try “There would have been many fewer hurricanes in the past 10 years without global warming” instead.
I said “something like” because I just wanted to illustrate an idea, not make the suggestion in that form. It makes sense to take a claim directly from the IPCC report instead of making up your own claim.
Some domestic questions would be nice. Opinions about school choice for example.
Or homeschooling. Possibilities:
“Studies show that home-schooled children score worse on tests related to socialization than conventionally educated children.” This is false according to the first paragraph under “Socialization” on en.wikipedia.org/wiki/Homeschooling in that always true resource, Wikipedia.
“The most cited reason for parents to choose homeschooling over public schools is the public schools’ (a) the lack of religious or moral instruction, (b) social environment, or (c) quality of instruction.” The actual answer is (b), with (a) taking second place and (c) taking third. See http://nces.ed.gov/pubs2006/homeschool/parentsreasons.asp.
I was homeschooled and hated every minute of it! But I think it can be alright in a few cases. I came out pretty good.
Did you also attend public school? If so, which did you dislike more? If you didn’t, which do you think you would have disliked more?
I’m also curious if you don’t mind me asking: what did you hate about it?
I went to private schools, a Montessori school and a private Christian school. I hated the isolation, the lack of intellectual curiosity, and my browbeating perfectionist mother who never let me learn and just expected perfection at all times in all subjects. It led to a lot of abuse in my family, especially being an only child.
When I was trying to make sense of Peter Watts’ Echopraxia, it occurred to me that there may be two vastly different but both viable kinds of epistemology.
First is the classical hypothesis-driven epistemology, promoted by positivists and Popper, and generalized by Bayesian epistemology and Solomonoff induction. In the most general version, you have to come up with a set of hypotheses with assigned probabilities, and look for information that would change the entropy of this set the most. It’s a good idea. It formalizes what is science and what is not; it provides the framework for research; and, given an infinite amount of computing power on a hypercomputer, it extracts the theoretical maximum of utility from sensory information. The main problem is that it doesn’t provide an algorithmic way to come up with hypotheses, and the suggestion to test infinitely many of them (aleph-1, as far as I can tell) isn’t very helpful either.
On the other hand, you can imagine data-driven epistemology, where you don’t really formulate any hypotheses. You just have a lot of pattern-matching power, completely agnostic of the knowledge domain, and you use it to try to find any regularities, predictability, clustering, etc. in the sensory data. Then you just check if any of the discovered knowledge is useful. That can barely (if at all) distinguish correlation from causation, it does not really distinguish scientific from non-scientific beliefs, and it doesn’t even guarantee that the findings will be meaningful. However, it does work algorithmically, even with finite resources.
They actually go together rather nicely, with data-driven epistemology being the source of hypotheses for the hypothesis-driven epistemology. However, Watts seems to be arguing that given enough computing power, you’d be better off spending it on data-driven pattern matching than on generating and testing hypotheses. And since brains are generally good at pattern matching, System 1, slightly tweaked with yet-to-be-invented technologies, can potentially vastly outperform System 2 running hypothesis-driven epistemology. I wonder to what extent it may actually be true.
Reminds me of “The Cactus and the Weasel”.
The philosopher Isaiah Berlin originally proposed a (tongue-in-cheek) classification of people into “hedgehogs”, who have a single big theory that explains everything and view the world in that light, and “foxes”, who have a large number of smaller theories that they use to explain parts of the world. Later on, the psychologist Philip Tetlock found that people who were closer to the “fox” end of the spectrum tended to be better at predicting future events than the “hedgehogs”.
In “The Cactus and the Weasel”, Venkat constructs an elaborate hypothesis of the kinds of belief structures that “foxes” and “hedgehogs” have and how they work, talking about how a belief can be grounded in a small number of fundamental elements (typical for hedgehogs) or in an intricate web of other beliefs (typical for foxes). The whole essay is worth reading, but a few excerpts that are related to what you just wrote:
That is very interesting and definitely worth reading. One thing though, it seems to me that a rationalist hedgehog should be capable of discarding their beliefs if the incoming information seems to contradict them.
When you say “pattern-matching,” what do you mean? Because when I imagine pattern-matching, I imagine that one has a library of patterns, which are matched against sensory data, and that library of patterns is the set of ‘hypotheses.’
But where does this library come from? It seems to be something along the lines of “if you see it once, store it as a pattern, and increase the relevance as you see it more times / decrease or delete if you don’t see it enough” which looks like an approximation to “consider all hypotheses, updating their probability upward when you see them and try to keep total probability roughly balanced.”
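A crude sketch of that store/strengthen/decay idea, for concreteness (the weights and thresholds here are made up for illustration, not taken from any particular system):

```python
from collections import defaultdict

def update_library(library, observation, bump=1.0, decay=0.99, drop_below=0.05):
    """Store each observed pattern, bump its weight when seen again, decay
    everything a little each step, and drop patterns whose weight falls too low."""
    for pattern in list(library):
        library[pattern] *= decay            # everything fades unless re-observed
        if library[pattern] < drop_below:
            del library[pattern]
    library[observation] += bump             # seen it (again): raise its relevance
    return library

library = defaultdict(float)
for obs in ["rain", "rain", "sun", "rain"]:
    update_library(library, obs)
print(dict(library))   # "rain" ends up weighted well above "sun"
```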
That is, I think we agree; but I think when we use phrases like “pattern-matching” it helps to be explicit about what we’re talking about. Distinguishing between patterns and hypotheses is dangerous!
Probably a better term would be “unsupervised learning”. For example, deep learning and various clustering algorithms allow us to figure out whether the data has any sort of non-temporal regularities. Or we may try to see if the data predicts itself—if we see X, in Y seconds we’ll see Z. That doesn’t seem to be equivalent to considering infinitely many hypotheses. In Solomonoff induction, a hypothesis is an algorithm capable of generating the data, and based on the new incoming information, we can decide whether the algorithm fits the data or not. In unsupervised learning, on the other hand, we don’t necessarily have an underlying model, or the model may not be generative.
I think it’s useful to think of the parameter-space for your model as the hypothesis-space. Saying “our parameter-space is R^600” instead of “our parameter-space is all possible algorithms” is way more reasonable and computable, but what it would mean for an unsupervised learning algorithm to have no hypotheses would be that it has no parameters (which would be worthless!). Remember that we need to seed our neural nets with random parameters so that different parts develop differently, and our clustering algorithms need to be seeded with different cluster centers.
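As an illustration, here’s a minimal 1-D k-means sketch: the k cluster centers are the model’s entire parameter space, i.e. its hypothesis space, and they have to be seeded randomly before there is anything to update (the data and seed here are arbitrary):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 1-D k-means: the k cluster centers are the model's only parameters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)              # random initial "hypothesis"
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                         # assign each point to its nearest center
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]   # move centers to cluster means
    return centers

data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
print(sorted(kmeans(data, 2)))   # roughly [1.03, 5.03]
```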
Does it mean then that neural networks start with a completely crazy model of the real world, and slowly modify this model to better fit the data, as opposed to jumping between model sets that fit the data perfectly, as Solomonoff induction does?
This seems like a good description to me.
I’m not an expert in Solomonoff induction, but my impression is that each model set is a subset of the model set from the last step. That is, you consider every possible output string (implicitly) by considering every possible program that could generate those strings, and I assume stochastic programs (like ‘flip a coin n times and output 1 for heads and 0 for tails’) are expressed by some algorithmic description followed by the random seed (so that the algorithm itself is deterministic, but the set of algorithms for all possible seeds meets the stochastic properties of the definition).
As we get a new piece of the output string—perhaps we see it move from “1100” to “11001”—we rule out any program that would not have output “11001,” which includes about half of our surviving coin-flip programs and about 90% of our remaining 10-sided die programs. So the class of models that “fit the data perfectly” is a very broad class of models, and you could imagine neural networks as estimating the mean of that class of models instead of every instance of the class and then taking the mean of them.
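A toy version of that filtering step (this is not Solomonoff induction proper—no universal prior, no weighting by description length—just a small finite family of seeded “coin-flip programs” standing in for the space of all programs):

```python
import random

def coin_program(seed, n):
    """Deterministic given its seed: flip a seeded coin n times."""
    rng = random.Random(seed)
    return "".join("1" if rng.random() < 0.5 else "0" for _ in range(n))

candidates = list(range(1000))   # 1000 candidate "programs", one per seed

def surviving(observed):
    """Keep only programs whose output starts with the observed string."""
    return [s for s in candidates if coin_program(s, len(observed)) == observed]

print(len(surviving("1100")))    # roughly 1000 / 2**4 survive
print(len(surviving("11001")))   # the extra bit rules out about half of those
```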
I see a lot of self-help books and posts following the general pattern: Don’t read all the advice and apply it all at once, but read and master it step by step (mostly really urging not to continue reading). I think this is a sound approach which could be applied more often. It is kind of clicker-training advice applied at a high level. I wonder about the best granularity. The examples below use between 3 and about 100 steps. And I’d guess that more is better here—if possible. But it may depend on the topic at hand.
Examples:
Peter Hurford’s Productivity 101
The 7 Habits of Highly Effective People
Athol Kay’s Map
Rules of the Game
Probably you can think of lots more...
Carnegie’s How to Win Friends and Influence People has a slightly parallel approach: reread the book on a regular schedule, as you’ll notice things the fourth time you didn’t notice the third because your skill growth puts you in a different place relative to the material. (Maybe he also recommends not reading the whole book at once and I just forgot that part; I think he does encourage people to read just the parts they want to.)
It seems to me that rereading is likely to be more effective than staggering the reading, and that rereading enables staggering. One of my childhood English teachers was more forgiving of reading in class than other teachers, and had a small rack of books available- one of them was Watership Down, which I read cover to cover probably ~7 times, and afterwards would just open to a random page and then would be able to immediately place myself in the story at that point and read from there.
This also calls to mind the practice among more serious Christians of reading the Bible once a year- it takes about four pages a day, and does not take many years for much of it to be very familiar. Muslims have the term “Hafiz” for someone who has memorized the Quran, which typically takes several years of focused effort, and I don’t think Christians have a comparable term, but I’ve definitely noticed phrases along the lines of “quote chapter and verse” for when people had sizeable blocks of the Bible memorized.
Now when I think about rationality seminars—aren’t they analogous to reading the whole book at the same time? So the proper approach would instead be an hour or two, once every week or two weeks (as long as it takes to master the lesson). But that would make travelling really expensive, so instead the lessons would have to be remote. Perhaps explaining the topic in a YouTube video, then having a Skype debate, a homework assignment, and a mailing list only for debating the current homework.
Three CFAR tools come to mind that reduce this somewhat:
First is the practice of “delegating to specific future selves.” You plan a specific time (“two weeks from now, Sunday, in the morning”) to do a specific task (“look through my workshop notes to figure out what things I want to focus on, and again delegate those things to specific future selves”), and they explicitly suggest using this on the seminar materials and notes.
Second is the various alumni connection mechanisms—a few people have set up groups to go through the materials again, there are people that chat regularly on Skype, and so on.
Third is the rationality dojo in the CFAR office (so only applicable for the local / visiting alums) that meets weekly, I believe.
If you have not yet read Jaynes’ Probability Theory I urge you to do so. If you are not willing to read almost a thousand pages, just read the preface.
Started yesterday and I can’t keep my eyes off it.
Seconded. There are a lot of clever ideas that I haven’t seen anywhere in other probability books: A_p distributions, group invariance, the derivation of an ignorance prior as a multi-agents problem, etc.
The only chapter that is lacking (due to obsolescence) is the one about quantum mechanics. Jaynes advocates (although implicitly) a hidden-variables theory, but Bell’s and Kochen-Specker’s theorems have since imposed heavy constraints on those.
You seem to imply that Jaynes was writing before Bell. That is not true by many decades. I suppose it is possible that the chapter is based on a paper he wrote before Bell, but he had half a century to revise it.
Jaynes thought he had found an error in Bell’s theorem, but he was wrong. (I wrote a comment somewhere on LW about this before; I’ll link to it as soon as I find it.)
I’m under the impression that he was so committed to the idea that probabilities reflect our ignorance rather than any intrinsic indeterminacy of nature that he got mind-killed. (I wonder whether he had ever heard (and seriously thought) about the MWI.)
http://arxiv.org/pdf/physics/0411057.pdf
That is a remarkable error, actually. As far as I can tell, it’s basically denying that conditional independence is possible in Nature (!?)
The existence of Bell’s inequality is basically a theorem about marginals of Bayesian networks with hidden variables. If you get an independence in the underlying Bayes net, you sometimes get an inequality in the marginal. This is not about causality at all, or about physics at all, this is a logical consequence of a conditional independence structure. It does not matter if it is causal or physical or not. Bell’s theorem is about this graph: A → B ← H → C ← D, where we marginalize out H. My “friend in Jesus” Robin Evans has some general conditions on graphs for when this sort of thing happens.
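For what it’s worth, here’s a toy numerical check of that point of view in the CHSH form of the inequality: whatever distribution you put on the hidden variable H in that graph, the marginal correlations obey the bound. (This just samples random hidden-variable models; it is not Robin Evans’s general graphical criterion.)

```python
import random

random.seed(0)
for trial in range(1000):
    # H picks one of 50 deterministic "strategies": each strategy fixes all four
    # +/-1 outcomes (two measurement settings on each side of the graph).
    strategies = [tuple(random.choice([-1, 1]) for _ in range(4)) for _ in range(50)]
    weights = [random.random() for _ in strategies]
    total = sum(weights)
    chsh = sum(w / total * (b1 * c1 + b1 * c2 + b2 * c1 - b2 * c2)
               for w, (b1, b2, c1, c2) in zip(weights, strategies))
    assert -2 - 1e-9 <= chsh <= 2 + 1e-9   # the CHSH bound for any hidden-variable mixture
print("CHSH stayed in [-2, 2] for every sampled hidden-variable model")
```

Quantum-mechanical correlations can push the same quantity up to about 2.83, which is exactly why the marginal constraint matters.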
Jaynes was aware of MWI. Jaynes and Everett corresponded with one another, and Jaynes read a short version of Everett’s Ph.D. dissertation (in which MWI was first proposed and defended) and wrote a letter commenting on it. You can read the letter here. He seems to have been very impressed by the theory, describing it as “the logical completion of quantum mechanics, in exactly the same sense that relativity was the logical completion of classical theory”. Not entirely sure what he meant by that.
I don’t think it’s mind-killed. It’s possible to reject the premise of the Bell inequality by rejecting counterfactual definiteness, and this is a small but substantial minority view. MWI then takes this a step further and rejects factual definiteness, but this is not the standard way in which it’s presented, so someone who has issues with the notion of “Alice makes a decision ‘of her own free choice’, unaffected by events in her past light cone” but has never encountered the descriptions of MWI which mention factual and counterfactual definiteness, can justifiably believe that contrary to appearances, some hidden-variable or superdeterminist theory must be true.
I speak from personal experience, here. Up until about a year ago, I held two beliefs that I recognized were in defiance of the standard scientific conclusions, both on logical grounds. One was belief in hidden variable theories of quantum physics; the other was belief that the Big Crunch theory must be correct, rather than the Big Chill (on counter-anthropic grounds; a Big Chill universe would be the last of all universes, and that we should happen to live in the last universe, which happens to be well-tuned for life, strains credulity). Upon realizing that MWI solved the problems that led me to hidden-variable theories, and also removed the necessity for an infinite succession of universes, thus reconciling the logical non-exceptionalist argument and the Big Chill data, I switched to believing in MWI.
So I guess you haven’t read very far yet? The first chapter is great, but the demand for mathematical fluency goes up fast from there.
Working my way through chapter three currently. I have a strong background in mathematics relative to the average physics student. In any case, so far the exact derivations were not important and the substance is repeated in prose, as should be standard in any good mathematics book.
I’ve been rereading some of the older threads and urge you to do so too. There might be stuff you didn’t see the first time around and there will be stuff you noted but have forgotten.
Seconding this. A good way to find good old threads is to go through the highest ranked comments and click through to the thread, or look at the back comments of good but non-prolific commenters like vassar.
Wei Dai provided a fantastic tool to read the highest rated comments for a given user. You can find it linked to here.
I also found that browsing through the comments and submissions of the top contributors (karma total or last 30 days) produces a trove of insightful and interesting material. The downside: it eats a lot of time. I needed to limit it. Procrastination warning.
Sidetrack: Maybe we should cultivate a habit to automatically turn away from procrastination tasks if not otherwise mandated.
I’ve read Yvain’s Meditations on Moloch and some of the stuff from the linked On Gnon, but I’m still unclear on what exactly Gnon is supposed to signify. Does someone have a good explanation?
IANANRx, but I think the maximally charitable answer is “Nature; especially, the biological, physiological, and game-theoretical constraints within which any society and culture must operate.” By extension, a culture neglecting these constraints is necessarily in a state of collapse- a faux perpetual motion machine may move for a few moments because of initial momentum in the system, but it must necessarily halt.
As an additional corollary, homeostatic societies (which were, presumably, not in a state of collapse) must have been acting within these constraints. Therefore, long-running traditional cultures most clearly illustrate the terms of compliance with Gnon.
This is why deep ecologists and ‘Soylent Greens’ often advocate tribal-like structures, as found in hunter-gatherer societies. But this clearly raises a question: how do we know which kinds of technological or social evolution are compatible with Gnon? Is greenwashed “natural capitalism” good enough, or do we need to radically simplify our lives in the name of sustainability? Or even forsake all kinds of technology and go primitivist?
Thanks, your answer together with jaime2000′s clarified things considerably.
A critique of the general concept: A culture may remain “in a state of collapse” for a long, long time. It’s a little like saying “as soon as you’re born, you start dying” — it’s a statement more about the speaker’s attitude toward life or society than about the life or society being described.
(Moreover, homeostasis only works until invaded. That’s why there ain’t no more moa in old Aotearoa.)
In terms of instrumental goals (‘keep society functioning’), I think these are secondary concerns. A person might believe that we are all in a perpetual state of decay; a doctor finds it necessary to understand the kidneys of a high-functioning adult so that later problems may be diagnosed and fixed. Even if decay itself might take a long time- and even if decay is ultimately inevitable- there are reasons to want to understand and replicate the rules that provide access to ‘doing okay, for now’.
Departing from my steelman for a moment, I think a more pressing concern with the model might be a poor understanding of the environmental pressures on specific societies. Homeostasis is contextual- gills are a bad organ for somebody like me to have. In the case of human societies, it’s not obvious what these environmental pressures might be, or what consequences they might have. Technology is certainly one of them, as are other human societies, as are material resources and so on, but it’s just not a well constrained problem. Does internet access alter the most stable implementations of copyright law? Does cheap birth control change the most economically viable praxis of women’s education? Would we expect Mars colonization to result from a new cold war? So I think it is not enough to show that a society endured- you have to show that the organs of that society act as solutions to currently existing problems, otherwise they are likely to multiply our miseries.
(Rejoinder to the rejoinder: Chesterton’s Fence.)
I think the “in a state of collapse” expression is a bit misleading with wrong connotations. A culture neglecting the real-world constraints is not necessarily collapsing. A better analogy would be swimming against the current—you can do it for a while by spending a lot of energy, but sooner or later you’ll run out and the current will sweep you away.
What is energy in this analogy, and where does it come from?
In the most general approach, negentropy. In the context of human societies, it’s population, talent, economic production, power. Things a society needs to survive, grow, and flourish.
A lot of that doesn’t look like the kind of thing societies consume, more like the substrate they run on. At least aside from a few crazy outliers like the Khmer Rouge.
I’m having a hard time thinking of policy regimes that require governments to trade off future talent, for example, for continued existence. Maybe throwing a third of your male population into a major war would qualify, but wars that major are quite rare.
Tentatively—keeping the society poor and boring. Anyone who can leave, especially the smarter people, does leave. This is called a brain drain.
Literally borrowing ever increasing amounts of money against future generations’ productivity.
Having social policies that lead to high IQ people reproducing less.
They are now, anyway.
The Ottoman Empire lost 13-15% of its total population in WWI but had by far the worst proportional losses of that war, particularly from disease and starvation.
In WWII, Poland lost 16%, the Soviet Union lost 13%, and Germany 8-10%.
In the U.S. Civil War, the U.S. as a whole lost 3% of its population, including 6% of white Northern males and 18% of white Southern males.
Rare, not nonexistent. The World Wars are the main recent exception I was gesturing towards, although more extreme examples exist on a smaller scale: the Napoleonic Wars killed somewhere on the order of a third of French men eligible for recruitment, for example. And they were rarer before modern mass conscription, although exceptions did exist.
Gnon is reality, with an emphasis towards the aspects of reality which have important social consequences. When you build an airplane and fuck up the wing design, Gnon is the guy who swats it down. When you adopt a pacifist philosophy and abolish your military, Gnon is the guy who invades your country. When you are a crustacean struggling to survive in the ocean floor, Gnon is the guy who turns you into a crab.
Basically, reality has mathematical, physical, biological, economical, sociological, and game-theoretical laws. We anthropomorphize those laws as Gnon.
Thanks, your answer together with Toggle’s clarified things considerably.
(Also, that crab thing is fascinating.)
Oh, definitely. It’s a really good analogy for the NRx view of civilization, too. That’s why Gnon’s symbol is a crab.
If you want to read another non-obscurantist explanation of Gnon, try Nyan Sandwich’s “Natural Law and Natural Religion”.
Gnon’s symbol is a crab because someone had to slip subtle pro-Maryland propaganda into the memeplex.
See also: List of examples of convergent evolution
Seems like they take Feynman’s “reality must take precedence over public relations, for nature cannot be fooled” and add a bit of mysticism, resulting in “Nature is out to get you, constant vigilance, citizen!”. Certainly makes the message easier to internalize for those who already think they live in a hostile environment.
That’s just a less-pithy version of “The perversity of the Universe tends towards a maximum”, one of the formulations of Finagle’s Law.
And then we get to the hard question—how do we decide what is true about nature?
The usual way, make models (and metamodels) and refine them to explain and predict better.
As I understand it, the apatheist statement of “the laws of nature as they justify traditional societal hierarchy”.
From Body Learning by Michael Gelb, while he’s quoting another book in the middle. I know there are a handful of other LWers out there who do Alexander, but it’s a very interesting technique because it seems useful to know (I wouldn’t be surprised if it’s the inspiration for a lot of the skillful movement stuff in Dune, which made an impression on me as a child) but useless to discuss: there’s not really a way to teach it except by touch, which doesn’t scale very well.
Alexander Technique is one somatic technique of many. All of them lead to skillful movement, and without background knowledge there’s no reason to assume that a specific technique inspired Dune.
I don’t think that discussing it is inherently impossible. It’s just hard. It’s even harder when the people you are talking with don’t have the background to understand the basic claims and then want peer-reviewed research for every claim.
If you want to discuss issues around shifting your center of gravity, then the people you talk to have to know what you mean by shifting one’s center of gravity, and that’s not something that can be easily conveyed via a blog post.
I was guessing based on timing, but looking into Herbert I’m not seeing any obvious influence. It’s more a statement of Dune’s impact on how I think about movement than it is about Herbert, it looks like.
Well, you can certainly elaborate the basic intellectual edifice, as done for Zen here. My stab at it:
Humans are ‘psychophysical systems’ (read this as a rejection of dualism in practice rather than just philosophy). Most people don’t use themselves skillfully. One of the key skills in using yourself skillfully is learning the skill of not doing habitual wasteful or harmful actions, and this entails unlearning habits and defaults.
The sort of person who reads LW probably has significant experience entering strange new conceptual territory, to wrap their mind around beliefs or opinions that seem totally alien to them; they probably don’t have significant experience entering strange new physical territory, in the sense of moving or keeping their body in a manner that seems totally alien to them. And just as concepts that start off seeming alien can turn out to be helpful and grow familiar, so can physical mannerisms.
Communicating concepts in words is difficult but mostly doable. Communicating mannerisms in words is many times more difficult, and the illusion of transparency is even worse. Communicating mannerisms by touch is difficult but mostly doable. The communication difficulty is increased by the fact that the ‘mannerism’ involves the level of tension in the muscles and resistance to movement as well as the position of the joints. (I can show you a picture of how my shoulder is oriented; can I show you a picture of how readily it moves when you push or pull my hand?) Note also that many people spend years of focused effort in learning how to better communicate with words (both listening/reading and speaking/writing), and very little focused effort in learning how to better communicate with mannerisms (observing with sight or touch and demonstrating with example or touch).
There seems to be a general heuristic of “if you can’t articulate how you know X, I don’t believe that you know X” that I am deeply ambivalent about using. On the one hand, it serves as an impetus to abstract and formalize knowledge and is useful as a cautionary principle against trickery. On the other hand, much (if not most!) knowledge cannot be easily articulated because it is stored in the form of muscle memory or network associations rather than clear logical links. I don’t seem to rely on that heuristic very much, for reasons I haven’t fully unpacked.
Last Sunday I went to a 1 1⁄2 hour Grinberg Method presentation by a Grinberg teacher. At the end I innocently asked a deep question. After a bit of back and forth the teacher did understand my question. On the other hand, someone practicing Grinberg professionally with 1 year of professional training didn’t even understand my question.
Not only that. If a specific concept withstood 100 separate attempts to falsify it, I can be pretty confident in the concept. On the other hand, summarizing those 100 separate attempts to falsify it can’t be done in a LW post. Of course, the concepts for which that’s true are also quite central to the way I view the world.
When talking about somatics, it often also useful to think “if you don’t articulate how you know X, then I have no good idea what you mean when you say that you know X”. Unfortunately that’s quite unavoidable in the topic.
If I claim “lowering my center of gravity will ground me and make it harder for someone to push me”, then the average person on LW likely does not have a concept of what that sentence means.
Not only that. It also involves movement intentions. Movement intentions are not something trivial to explain.
At the beginning of the year I was at a Bachata congress taking a workshop. The teacher announced to the group that he would do something and the audience was supposed to tell him what he did. He did the basic step and changed his movement intention from up to down and back a few times. I was the only person who noticed that. He said that nobody had ever noticed it before in his workshops, and he teaches at a different congress most weeks. The kind of people who go to dance congresses are not totally incompetent at human movement, and still he usually does this and nobody can tell him what he’s doing. For me it looks quite obvious, but then I’ve spent a lot of time with somatics (though I still have no professional training).
Concepts like tension, muscles, resistance to movement, and position of joints are all ideas for which I assume most people on LW have phenomenological primitives. Movement intention isn’t like that. It’s not something somebody told you about in school.
As far as the concept of muscles goes, I’m currently reading Anatomy Trains, which includes the nice passage:
That’s where it gets conceptually interesting, and unfortunately that’s not ground that’s easy to discuss on LW or, for that matter, on any online forum I know of.
I don’t know how to be less boring. It does not help that most people aren’t the least bit interesting.
Boring means different things to different people. I personally like a deep intellectual discussion.
Other people value other characteristics. Many people are not boring when they are unstifled and just follow their impulses.
Even two erudites can find each other boring if their views are irreconcilable.
That assumes the goal of discussion is to reconcile views. If I meet someone who thinks radically differently from me, that’s an interesting opportunity for me to understand a new perspective.
Do you set apart a specific mix of circumstances for the pursuit of Aumann agreement?
Agreement is seldom a goal for me in intellectual discussion. The goal is rather to learn something new or to let the person I’m discussing with learn something new. It’s about exploration of ideas.
“I drink to make other people more interesting”—Ernest Hemingway
But don’t forget that this didn’t work out well for him.
Otherwise, the solution is to find interesting people. Internet helps a LOT.
My spouse has agreed to give up either chicken or beef. Beef is significantly worse than chicken from an environmental standpoint, but more chickens die (possibly after suffering) to feed us. How can I compare the two different ethical dimensions and decide which to eliminate?
Which would you eliminate? [pollid:801]
If you can afford pasture-raised chicken or grass-fed beef, the animal suffering consideration becomes less important than if you’re eating factory-farmed animals.
That is a fair point. We already do that, having sourced our chicken, beef, and pork locally and made sure that the animals are treated about as well as you could expect. In fact, there may even be a Hansonian argument that these animals generally have lives that are worth living, even taking into account that their last day may be fairly awful. I’m not sure how to factor that in either. I guess if the crappy lives of many factory-farmed chickens outweigh the crappy life of one factory-farmed cow, the nice lives of many hippy-farmed chickens should still outweigh the nice life of one hippy-farmed cow.
Also consider which choice is more likely to stick, or make future ethical choices seem reasonable.
Why choose one? If you aren’t sure which is worse, maybe you should assume that they are about equal. Then you should reduce total consumption. Is eliminating one option going to help you do that? Or will the other grow to fill the void?
It’s easier to follow a hard-and-fast rule than it is to promise yourself you’ll do less of something.
Yes, but it’s just a commitment to an instrumental goal. To repeat myself: will it actually change, or will the one fill the void of the other? If you go to a wedding where you are offered a choice between fish and beef, the right ban does force the choice of fish, but most menus are longer than that; in particular, cooking at home offers the longest menu.
My goal is to get to neither. My partner is willing to eliminate one and I think that showing that we can substitute veggies for one form of meat will make an emotionally stronger case later that we can make the same substitution for a different meat.
If you think it is going to be a temporary phase, then it is even less important which one you choose.
But, again, flesh and fowl are fungible. Will eliminating one actually reduce your consumption? Perhaps setting a quota for how much meat to buy on weekly grocery store trips, or going by days of the week (the most popular method in the world!) would be more effective.
The current plan is to eliminate meat from lunch and substitute veggie soups and some other kinds of sandwiches (we have a panini grill and I know a few good vegetarian options). Also, we’re going to swap in a veggie pizza once per week. It may be a temporary phase, but that is not the goal for either of us.
If your goal is for this to be a temporary step, pick whichever one will make a stronger argument. I.e. if one has much better substitutes available, get rid of it now.
I thought the same. From the way the choice is framed, animal suffering is not a factor to consider. It should be, but if you really were considering it, you’d give up both.
Animal suffering and environmental impact are the primary factors for me, but I’m weakly motivated and don’t think I’ll be able to change my habits without my partner changing her eating habits as well (she prepares most of our meals because she likes cooking and I do not). Animal suffering is not important to her, and she’s had some health problems on a vegetarian diet before, so she’s only willing to cut one form of meat and see how that goes before cutting further. I’d like to first cut the one that generates the most problems, replace it with vegetable products, and establish a new, better equilibrium. I do think that I’ll be better at planning vegetarian replacements than she was, so I’m optimistic that eventually we’ll get at least to pescetarian, but I wanted to get input on how to think about the first step.
Chicken. Far more chickens die per amount of meat, and I suspect that they have worse lives, since it is probably easier to keep a whole bunch of chickens in a small space and cut off bits of them without anaesthesia. Brian Tomasik writes about this question here, although be warned that there are some pretty nasty pictures that you will have to scroll past to get to his estimate of the numbers.
I don’t worry too much about their living conditions. We already eschew factory-farmed meats, so the chicken is free range and the cattle are raised and butchered by people with a religious obligation to treat the animals relatively well. These are definitely things that are good to consider, though.
I eat chicken but I don’t eat mammals. This is partly for environmental reasons, but it is also because my ethics are not cosmopolitan. I think beings that are more cognitively similar to me are owed more moral concern (by me, not everyone else), not merely because they are more likely to be sentient or sapient or whatever, but because they are more likely to share my interests and outlook on the world, have emotions that I can identify with, etc. So I believe that I have greater moral obligations to my family and friends than to strangers, greater moral obligations to humans than to great apes, and so on. In the absence of contrary evidence, I use distance on the evolutionary tree as a proxy for cognitive distance. On those grounds, I am pretty uncomfortable with the suffering that cattle (and other mammals) undergo in the factory farming industry. I am significantly less uncomfortable about the suffering that chickens undergo.
So I guess my point is that you shouldn’t be weighing chicken suffering against cattle suffering on a one-to-one scale, because completely cosmopolitan ethical systems are wrong. Our sphere of moral concern shouldn’t work like an absolute threshold, where we have equal concern for all entities within the sphere and no concern for any entity outside it. Instead, it should gradually attenuate with distance. I probably can’t convince you of all this in a single comment, but perhaps you should at least consider it as a morally relevant possibility.
I would add that (vague recollection upcoming:) chicken might be healthier than beef, if you’re just going to eat one of them.
Beef. Chickens are even less likely to be sentient by a considerable margin.
Related to this and your recent poll on Facebook: Do you distinguish between sentience and sapience?
I agree that chickens are less likely to be sentient, but is killing an animal with 50 sentience units worse than killing 10 animals with 5 sentience units? How is suffering likely to scale with sentience?
Odds on the Philae comet lander finding a prevalence of the “right” kind of amino acids (conditional on the lander surviving long enough to run the experiment and transmit the results)?
I would guess < 1%.
I don’t know much about organic chemistry, but amino acids are fairly simple molecules, and if memory serves they’ve been found free-floating in molecular clouds. I’d tentatively bet at 100:1 odds that the right isomers of at least the simpler ones will be found, conditional on the experiment being run, since a comet’s basically a giant slushball of compounds left over from those molecular clouds.
On the other hand, I’d put the odds at well under 1% that we’ll see terrestrial-like isomer ratios. That would be tantamount to finding a biosphere—an active one, not just the remains of one, since amino acids are photosensitive and tend toward an equilibrium isomer ratio unless biological processes are leaning on it.
Well, the amino acid racemization rate is high (under 1M years) at room temperature, but probably much lower at 3K or whatever the comet’s ambient temperature is, and the warm layers of the comet are regularly blown off anyway. Not sure about the effects of X- and gamma rays on the racemization rate, but if it is anything significant, then yeah, little chance of finding anything but 50⁄50.
EDIT: actually, having looked a bit more, a deviation from equal ratio is quite common in extraterrestrial objects, and is an open problem, so the odds of finding something non-racemized are much higher than 1%. D’oh.
xkcd on efficiency.
Is there actually good AI research somewhere in Europe? (Apart from what the FHI is doing.) Or: can the mission for FAI benefit at all from me doing my PhD at the AI lab of some university? (Which is my plan currently.)
Do you mean AI research or FAI research? (The FHI does not do AI research.)
Maybe he uses good as a synonym for friendly?
Suppose we believe that stock market prices are very good aggregators of information about companies future returns. What would be the signs that the “big money” is predicting (a) a positive postscarcity type singularity event or (b) an apocalypse scenario AI induced or otherwise?
For (a), it would probably look like the late ’90s dot-com bubble run-up, except that it wouldn’t end with a bubble burst and most of the companies going under; instead it would just keep going while the world dramatically changed.
For (b), I don’t think we would really know until it had started, at which point things would go bad very, very quickly. I doubt that you could use price movements far in advance to predict it coming.
In general, markets can go down in price much faster than they went up. Scenario (a) would look like a continual parabolic rise, while (b) would just be a massive crash.
Why? In both cases money becomes meaningless post-singularity.
If you expect a happy singularity in the near future, you should actually pull your money out of investments and spend it all on consumption (or risk mitigation).
My idea was that for (a), money was becoming worthless but ownership of the companies driving the singularity was not. In that case, the price of shares in those companies would skyrocket towards infinity as everyone piled all of that soon-to-be-worthless money into it.
Of course, if the ownership of those companies was not going to matter either, then what you said would be true.
This is something that I think is neglected (in part because it’s not the relevant problem yet) in thinking about friendly AI. Even if we had solved all of the problems of stable goal systems, there could still be trouble, depending on whose goals are implemented. If it’s a fast take-off, whoever cracks recursive self-improvement first basically gets godlike powers (in the form of a genie that reshapes the world according to your wish). They define the whole future of the expanding visible universe. There are a lot of institutions that I do not trust to have the foresight to think “We can create utopia beyond anyone’s wildest dreams” rather than defaulting to “We’ll skewer the competition in the next quarter.”
Essay about wish-granting in Madoka (an anime).
I have a hypothesis based on systems theory, but I don’t know how much sense it makes.
A system can only simulate a less complex system, not one at least as complex as itself. Therefore, human neurologists will never come up with a complete theory of the human mind, because they won’t be able to hold it in mind; i.e., the human brain cannot contain a complete model of itself. Even if collectively they get to understand all the parts, no single brain will be able to see the complete picture.
Am I missing some crucial detail?
I think you may be missing a time factor. I’d agree with your statement if it were “A system can only simulate a less complex system in real time.” As an example, designing the next generation of microprocessors can be done on current microprocessors, but simulating microseconds of chip behavior often takes minutes or even hours.
Institutions are bigger than humans.
Also the time thing.
The whole point of a theory is that it’s less complex than the system you want to model. You are always making some simplifications.
It’s my understanding that nobody understands every part of major modern-day engineering projects (e.g. the space shuttle, large operating systems) completely, and the brain seems more complex than those, so this is probably right. That said, we still have high-level theories describing those, so we’ll likely have high-level theories of the brain as well, allowing one to understand it in broad strokes if not in every detail.
It probably depends on what you mean by complexity. Surely a universal Turing machine can emulate any other universal Turing machine, given enough resources.
On the other hand, neurological models of the brain need not be as complex as the brain itself, since much of the complexity is probably accidental.
Seems unlikely, given the existence of things like quines, and the fact that self-reference comes pretty easily. I recommend reading Gödel, Escher, Bach; it discusses your original question in the context of this sort of self-referential mathematics, and is also very entertaining.
Quines don’t say anything about human working memory limitations or the amount of time a human would require to learn to understand the whole system, and furthermore they only talk about printing the source code, not understanding it, so I’m not sure how they’re relevant here.
I wouldn’t be too surprised if the hypothesis is true for unmodified humans, but for systems in general I expect it to be untrue. Whatever ‘understanding’ is, the diagonal lemma should be able to find a fixed point for it (or at the very least, an arbitrarily close approximation) - it would be very surprising if it didn’t hold. Quines are just an instance of this general principle that you can actually play with and poke around and see how they work—which helps demystify the core idea and gives you a picture of how this could be possible.
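For anyone who wants to poke at one, here is a minimal Python quine (my own toy example, not anything from the book): a program whose output is exactly its own source, which is the self-reference trick the diagonal lemma generalizes.

```python
# A minimal quine: running these two lines prints the two lines themselves,
# character for character (this comment is not part of the quine proper).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The %r slot re-inserts the string’s own representation into itself, which is the whole trick: the program carries a description of itself and applies that description to itself.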
Unless, from the beginning, you design the system around a certain set of tasks and then build it up to complete them. That can mean creating systems and subroutines in order to accomplish a larger goal. Take stocking a store, for example:
There are a few tasks to consider:
Price Changes
Presentation
Stocking product
Taking away old product (or excess)
A large store like Target has 8 different, loosely connected teams that accomplish these tasks. That is a store system within a building: 8 different subroutines making up a system which, when it works at its best, makes sure that the store is perfectly stocked with the right presentation, the right amount of product, and the correct prices. That system of 8 subroutines is backed up by the 3 backroom subroutines that create the backroom system that takes in product and makes it available for stocking, and that system is backed up by the distribution center system, which is backed up by the transportation system (each truck and contractor working as a subroutine).
These systems and subroutines are created to accomplish one goal: to make sure that customers can find what they are looking for and buy it. I think using this idea we can start to create systems and subroutines that make it possible to replicate very complicated systems without losing anything.
io9 popularizing Bayes’ theorem.
http://en.wikipedia.org/wiki/Project_Cybersyn
What of it?
A friend of mine recently succumbed to using the base rate fallacy in a line of argumentation. I tried to explain that it was a base rate fallacy, but he just replied that the base rate is actually pretty high. The argument was him basically saying something equivalent to “If I had a disease that had a 1 in a million chance of survival and I survived it, it’s not because I was the 1 in a million, it’s because it was due to god’s intervention”. So I tried to point out that either his (subjective) base rate is wrong or his (subjective) conditional probability is wrong. Here’s the math that I used, let me know if I did anything wrong:
Let’s assume that the prior probability for aliens is 99%. The probability of surviving the disease given that aliens cured it is 100%. And of course, the probability of surviving the disease at all is 1 out of a million, or 0.0001%.
Pr(Aliens | Survived) = Pr(Survived | Aliens) x Pr(Aliens) / Pr(Survived)
Pr(Aliens | Survived) = 100% x 99% / 0.0001%
Pr(Aliens | Survived) = 1.00 * .99 / .000001
Pr(Aliens | Survived) = .99 / .000001
Pr(Aliens | Survived) = 990,000 or 99,000,000%
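If it helps, here is the same arithmetic as a few lines of Python (the variable names are mine); the point is just that the stated numbers force a “probability” far greater than 1:

```python
# The three inputs from the argument above (they turn out to be inconsistent).
p_aliens = 0.99               # prior probability that aliens exist
p_survive_given_aliens = 1.0  # aliens always cure the disease
p_survive = 1e-6              # overall survival rate: 1 in a million

# Bayes' theorem: Pr(Aliens | Survived)
posterior = p_survive_given_aliens * p_aliens / p_survive
print(posterior)  # 990000.0 -- far above 1, so the inputs cannot all be right
```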
There’s a 99,000,000% chance that aliens exist!! But this is probability theory, and a probability has to lie between 0% and 100%. If we end up with a result that is over 100% or under 0%, something in our numbers is wrong.
The Total Probability Theorem gives the denominator of Bayes’ Theorem. In this aliens instance, that is the overall probability of surviving the disease, which is 1 out of a million. That denominator, 1 out of a million, is also equal to Pr(Survived | Aliens) x Pr(Aliens) + Pr(Survived | Some Other Cause) x Pr(Some Other Cause):
1 in a million = Pr(Survived | Aliens) x Pr(Aliens) + Pr(Survived | Some Other Cause) x Pr(Some Other Cause)
0.0001% = 100% x 99% + ??? x 1%
0.0001% = 99% + 1%*???
If we want to find ??? (in this case, Pr(Survived | Some Other Cause)), we solve for it just as we would solve for x in any basic algebra course. Here the equation is 0.000001 = 0.99 + 0.01x.
If we solve for x, we get −98.9999, or −9899.99%. Meaning that Pr(Survived | Some Other Cause) is −9899.99%. Again, a number that is outside the range of allowable probabilistic values. This means that there is something wrong with our input. Either the 1 in a million is wrong, the 99% base rate of alien existence is wrong, or the 100% conditional probability that you would survive your 1 in a million disease due to alien intervention is wrong. The 1 in a million is already set, so either the base rate or the conditional probability is wrong. And this is why that sort of “I could only have beaten the odds on this disease due to aliens” (or magic, or alternative medicine, or homeopathy, or Cthulhu, or...) reasoning is wrong.
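The same algebra in Python, for anyone who wants to check it (again, the variable names are just mine):

```python
p_aliens = 0.99
p_other = 1 - p_aliens        # 0.01
p_survive_given_aliens = 1.0
p_survive = 1e-6

# Solve p_survive = p_survive_given_aliens * p_aliens + x * p_other for x,
# where x stands for Pr(Survived | Some Other Cause).
x = (p_survive - p_survive_given_aliens * p_aliens) / p_other
print(x)  # about -98.9999 -- not a valid probability, so some input is wrong
```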
Again, remember the base rate. And you can’t cheat by trying to jack up the base rate or you’ll skew some other data unintentionally. Probability is like mass; it has to be conserved.
This.
Since P(S) = P(S|A)P(A) + P(S|-A)P(-A), and P(S|A)P(A) is already .99, then P(S) cannot be .000001.
Those two assertions contradict each other: you cannot coherently believe a composite event (surviving an illness) less than you believe each factor (surviving the illness with the aid of magic).
If you believe that God will cure everyone who gets the disease (P(S|A) = 1) and God is already close to a certainty (P(A) = .99), then why do so few people survive the illness?
One possibility is that it’s P(S|-A) that is one in a million (surviving without God is extremely rare). In this case:
P(A|S) = P(S|A) P(A) / P(S) -->
P(A|S) = P(S|A) P(A) / (P(S|A) P(A) + P(S|-A) P(-A)) -->
P(A|S) = 1 * .99 / (1 * .99 + .000001 * .01) -->
P(A|S) = .99 / (.99 + .00000001) -->
P(A|S) = .99 / .99000001 -->
P(A|S) = .9999999...
If you already believe that the curing aliens are close to a certainty, then for sure surviving an illness that otherwise has only a one-in-a-million chance will bring your belief up to almost a certainty.
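A quick numerical check of this interpretation (a sketch with my own variable names, taking P(S|-A) to be the one-in-a-million figure):

```python
p_aliens = 0.99                 # P(A)
p_survive_given_aliens = 1.0    # P(S|A)
p_survive_given_other = 1e-6    # P(S|-A): surviving without intervention

# Total probability for P(S), then Bayes' theorem for P(A|S).
p_survive = (p_survive_given_aliens * p_aliens
             + p_survive_given_other * (1 - p_aliens))
posterior = p_survive_given_aliens * p_aliens / p_survive
print(posterior)  # about 0.99999999 -- an already-high prior goes to near certainty
```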
Another possible interpretation, one that keeps P(S) = .000001, is that P(S|A) is not a certainty. Possibly God will not cure everyone who gets the disease, but only those who deserve it, and this explains why so few survive.
In this case:
P(S) = x * .99 + y * .01 = .000001 -->
P(A|S) = x * .99 / .000001
a number that depends on how many people God considers worthy of surviving.
I think your denominator in your original equation is missing a second term. That is why you get a non-probability for your answer. See here: http://foxholeatheism.com/wp-content/uploads/2011/12/Bayes.jpg
There is no reason why God in principle should be unable to choose which one person out of the million survives. If you don’t have a model of how the one in a million gets cured, you don’t know that it wasn’t the God of the gaps.
In medicine you do find some people with theories according to which nobody should recover from cancer. The fact that there are cases in which the human immune system manages to kick out cancer does suggest that the orthodox view, according to which cancer develops when a single cell mutates and the immune system has no way to kill mutated cells, is wrong.
Today we have sessions with a psychologist as part of the standard of care for cancer patients, and we have pushed back breast cancer screening because a lot of the “cancers” that the screening found just disappear on their own and it doesn’t make sense to operate on them.
I am reposting my question from the last open thread: where can I find reading groups for arbitrary books? I saw the Superintelligence reading group and realised that I am currently in a reading group for another book. Since my reading list is huge, I could use the mild social incentive of a reading club. Also, the comments are usually enlightening and draw my attention to points I have not considered before.
I prefer discussion board based groups as I can skip the meaningless discussions.
Have you tried Goodreads?
Embarrassingly, I am a member of that site and have not seen that kind of option. Where is it, and can you point me to relevant groups?
The About page links to the Groups section:
http://www.goodreads.com/group
Joined the LessWrong group. Thanks.