Example #149 of why it’s difficult to specify bets...
Louie texted me a screenshot showing that Zagat had given an opinion on Subway (the fast-food chain). My girlfriend said “No way,” so we both specified a bet that if we went to the Zagat website, we wouldn’t be able to find a Zagat rating for Subway. She said 40% and I said 65%. When we checked, it turned out Zagat had conducted a survey of people who visit fast food joints, and Subway had been one of the restaurants they got survey results for. So does that count as Zagat giving Subway a rating? I don’t know. I was just thinking of “official Zagat ratings,” rather than survey ratings, but it’s technically true that there’s a rating for Subway on the Zagat website because of that survey of random people who eat fast food.
What I really need is a panel of 5 trusted judges to decide whether my bets are right or wrong, in contested cases.
Digging behind the bet, I think you were betting that Louie had been spoofed. If the screenshot Louie sent really came from Zagat and wasn’t spoofed, then Zagat did indeed have the opinion lukeprog thought it didn’t.
If the purpose of bets is to measure your model against the world, it seems to me that the more valuable lesson is how often you are surprised in the process of evaluating the bet, not how often you are correct. If you put 40 or 65% on the hypothesis that the restaurant falls in a particular bin, you aren’t surprised either way by the answer, but you both erred in believing that there were just two bins.
I tried to code a simple bot for recurring threads on LW based on bots written for Reddit. It doesn’t work, as there is apparently either no API or one that differs from vanilla Reddit’s. If there is an API, is there documentation for it that I can access?
I don’t know about documentation, but you can start looking here.
I was looking for an old Robin Hanson post to use as an example in an upcoming post of mine, and tried to get there through the Opposite Sex, an old post of Eliezer’s. When I click that link, though, I get a “You aren’t allowed to do that.” error, which appears to be a change in the last two years. Anyone know what happened? (My guess is Eliezer or someone decided to retract the article, but it would be nice to know for sure.)
On Facebook one time, there was some discussion or other about gender, and a link to the post was made. EY said something to the effect of ‘I no longer endorse that post sufficiently enough to keep it up’, and took it down.
Thanks!
Here is a working link.
Being sick makes me stupid. Yesterday I was teaching economics while I had a mild cold. I made multiple simple math mistakes, far more than normal. I need to be mindful that being sick reduces my cognitive capacities.
At my workplace, the question came up of how best to publicly recognise people for good work, while minimising the amount of politics/friction/jealousy that comes about as a direct result of it. We have only just grown past the point where we all know each other well; hence why this sort of thing is becoming interesting.
My initial response to the question was “Make being praised unpleasant, using ugly trophies (sports team strategy) or stupid hats (university graduation strategy)” but I would like to say something more upbeat as well.
Is anyone aware of good writing on the subject/google keywords I could use to find the literature?
You don’t want to make being praised unpleasant for the recipient—that leads to perverse incentives. And you don’t want to give an award a stupid name or an embarrassing shape—part of the point of this sort of thing is that it looks good on your resume or perched over your desk. You want to mark their achievement in a way they’ll genuinely appreciate, but simultaneously add symbolism to make their coworkers feel that their status hasn’t been diminished.
I think what you’re looking for is a little temporary public humiliation, not intrinsic to the award but coming along with more standard recognition. You could do this in several different ways. If it’s a fairly small group and the awards are a fairly big deal, for example, you could run a roast as part of the party following the award. You could probably contrive ways to add this kind of symbolism to physical awards, too.
Isn’t being praised already unpleasant enough?
Most people rather like it. It appears you don’t; what makes you dislike it?
Public attention of any kind is just embarrassing. Probably not unrelated to past experience of politics, friction, and jealousy resulting from praise. But if I were threatened with public praise in my job I would be strongly tempted to quit before it could strike.
Woah, that’s paranoid!
Find a way to make it non-zero-sum. Only 1 person can be employee of the year, so others lose out and may resent their colleague. Maybe a gold/silver/bronze border around your portrait on the intranet, with a tooltip that explains what you got it for?
Money. Give the person a token bonus and let it be known that if you get a bonus you are not supposed to tell other people about it.
Money is expensive and not public recognition.
Cryonics ideas in practice:
“The technique involves replacing all of a patient’s blood with a cold saline solution, which rapidly cools the body and stops almost all cellular activity. “If a patient comes to us two hours after dying you can’t bring them back to life. But if they’re dying and you suspend them, you have a chance to bring them back after their structural problems have been fixed,” says surgeon Peter Rhee at the University of Arizona in Tucson, who helped develop the technique.”
http://www.newscientist.com/article/mg22129623.000-gunshot-victims-to-be-suspended-between-life-and-death.html
I welcome criticism of my new personal favorite population axiology:
The value of a world-history that extends the current world-history is the average welfare of every life after the present moment. For people who live before and after the current moment, we need to evaluate the welfare of the portion of their life after the current moment. The welfare of a person’s life is allowed to vary nonlinearly with the number of years the person lives a certain kind of life, and it’s allowed to depend on whether the person’s experiences are veridical.
This axiology implies that it’s important to ensure that the future will contain many people who have better lives than us; it’s consistent with preferring to extend someone’s life by N years rather than creating a new life that lasts N years. It’s immune to Parfit’s Repugnant Conclusion, but doesn’t automatically fall prey to the opposite of the Repugnant Conclusion. It implies that our decisions should not depend on whether the past contained a large, prosperous civilization.
There are straightforward modifications for dealing with general relativity and splitting and merging people.
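If it helps to see the proposal written out, one way to formalize it (my notation, purely illustrative) is V(h) = (1 / |L|) * sum over i in L of w_i(h after t_0), where t_0 is the present moment, L is the set of lives that have experiences after t_0, and w_i is the welfare of the post-t_0 portion of life i (allowed to be nonlinear in the years lived and to depend on whether i’s experiences are veridical).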
The one flaw is that it’s temporally inconsistent: If future generations average the welfare of lives after their “present moments”, they will make decisions we disapprove of.
I build a robot that hibernates until the last person presently alive dies, then exterminates all people who are poor, unhappy, or don’t like my robot. Good thing?
A person that has a life worth living could have the welfare of their life increase monotonically with their lifespan. In that case, ending a life usually makes the world-history worse.
If future generations average the welfare of lives after their “present moments”, they will make decisions we disapprove of.
Can you give an example? It seems to me that if they decide at t_1 to maximise average welfare from t_1 to ∞, then given that welfare from t_0 to t_1 is held fixed, that decision will also maximise average welfare from t_0 to ∞.
Edit: oh, I was thinking of an average over time, not people.
Earth produces a long and prosperous civilization. After nearly all the resources are used up, the lean and hardscrapple survivors reason, “let’s figure out how to squeeze the last bits of computation out of the environment so that our children will enjoy a better life than us before our species goes extinct”. But from our perspective, those children won’t have as much welfare as the vast majority of human lives in our future, so those children being born would bring our average down. We would will the hardscrapple survivors to not produce more people.
it’s important to ensure that the future will contain many people who have better lives than us
Are you sweeping the complexity of value under the terms “better” and “veridical”? Does following your axiology prevent humanity from evolving into a race of happy-go-lucky clones?
Yes, I mean (for example) that if a person believes they’re married to someone, their life’s welfare could depend on whether their spouse is a real person or if it’s a simple chatbot. Also, if a person feels that they’ve discovered a deep insight, their life’s welfare could depend on whether they have actually discovered such an insight.
It took me a long time to find LessWrong, and I found it through a convoluted and ultimately entirely random series of events. English is not my first language, though, nor do I live in an anglophone country, so I’d love to find a similar community in my language, German, or more generally interesting smaller, though active, communities in languages other than English. How would I go about that?
I’m pretty sure that due to the free rider problem, active and lively communities are much more likely to grow via word of mouth than via public advertising or easily googled websites.
Maybe try Mensa? I don’t know if they’re any good, but they’re in Germany and they’re big enough to know an LWish community if one is available.
In Berlin we found that having Quantified Self events in English makes more sense than holding them in German. All the interested people speak English. Our local Berlin LW meetups are also in English. There’s no good reason to use German if you want to discuss intellectual topics on a deep level. English is the language of science. English is the language of programming.
That said, see whether there’s a LW meetup, QS meetup, Chaos Computer Club Erfa, or hackerspace in your city. That’s where the kind of people who are here hang out. Meetup.com in general is good for finding interesting groups.
When the Chaos Computer Congress was still in Berlin I went there multiple years in a row. Now I don’t travel to Hamburg, but if you are free on 27-30 December, going there is a very worthwhile experience; you get to be in the company of very smart people.
As far as general strategies for finding communities, talk to people. Ask them to which communities they belong.
Who actually speaks it at the meetup, or who can speak it? People in Paris speaking German at a public meetup would go against my idea of French people looking out to protect their language.
LW may be interested to learn about Amazon Smile, which gives 0.5% of your Amazon purchases to charity, and the Smile Always Chrome extension that will route your browser to smile.amazon.com by default. (Yes, you can support MIRI through Amazon Smile.) Total setup time estimated at under 5 minutes.
Oh yeah, it looks like they’re having some kind of promotion where if you sign up and make a purchase by March 31, they will give an extra $5 to your chosen charity.
I have been using Amazon Smile and Smile Always for MIRI for about a year.
IIRC, Amazon Smile used to be listed on MIRI’s Donate for free page, but has since been replaced by “Good Shop”. Good Shop appears to give a higher percentage, but I was unable to get the browser extension working so that it happened automatically, so I still use Smile. If anyone knows of a way to get it working, I’d be happy to hear it. But I tried to do it manually for a while, and I just don’t remember often enough.
Facebook bought Oculus Rift for $2 billion. What makes this, and so many other large deals, such clean numbers? Are the press rounding the details? Are the companies only releasing approximate or estimate numbers? Can the value of a company like Oculus really not be estimated to the nearest 10%? Or do these whole numbers just serve as nice Schelling points on which to hinge a bargain? Or am I forgetting lots of ugly-numbered deals?
(WhatsApp purchase was 2 significant figures, and this list on Wikipedia does show mostly 2-3 significant figures though some figures are probably converted from other currencies.)
In cases like this, a large portion of the “money” paid is actually in the form of shares, which can vary wildly on a day-to-day basis (especially during a takeover!). It doesn’t make sense to specify the value of it too precisely because nobody knows what the shares are going to be worth tomorrow.
What makes you think that these numbers are determined by some kind of rational cost-benefit analysis rather than those with the money rattling off numbers until those with the property give in?
Facebook today announced that it has reached a definitive agreement to acquire Oculus VR, Inc., the leader in immersive virtual reality technology, for a total of approximately $2 billion. This includes $400 million in cash and 23.1 million shares of Facebook common stock (valued at $1.6 billion based on the average closing price of the 20 trading days preceding March 21, 2014 of $69.35 per share). The agreement also provides for an additional $300 million earn-out in cash and stock based on the achievement of certain milestones.
There’s rounding right there (23.1m * 69.35 is not a round number). And there’s plenty of uncertainty about how much they will actually pay: how can anyone know how much of that earn-out will ultimately be paid?
There’s rounding right there (23.1m * 69.35 is not a round number).
That sounds backwards to me. It looks to me like they started at the round $2.0 billion, set 20% in cash and 80% in stock, found that required 23.1m shares of Facebook, and rounded that to three digits to get within 2% of the 1.6 they wanted.
Also, the incentive pay of 300m is 15% of the fixed payment, less uncertainty than rounding to 1 significant figure.
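For what it’s worth, the rounding story is easy to check against the press-release figures quoted above (a quick sketch, nothing more):

```python
# Quick check of the Oculus deal arithmetic, using the press-release figures above.
shares = 23.1e6            # Facebook shares issued
price = 69.35              # 20-day average closing price used in the release
cash = 400e6               # cash component

stock_value = shares * price
print(stock_value)         # 1,601,985,000 -- reported as "$1.6 billion"
print(cash + stock_value)  # ~2.00e9       -- reported as "approximately $2 billion"

# Working backwards from a round $1.6B stock target instead:
print(1.6e9 / price)       # ~23.07 million shares, consistent with quoting 23.1M
```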
It absolutely could be all of the above! But see, I write questions like this fairly frequently: I notice something surprising and don’t have a good explanation for it. I then write down the question and pose a couple of possible explanations, which makes me think of more possible explanations. Frequently I realize that, taken together, the possibilities I thought of are enough to explain what I was surprised at, and I don’t even ask the question. Other times, like this one, I still feel like I’m missing something. So I ask the question.
In this case it looks like the biggest thing I missed was how much these sales’ values depend upon the moment-to-moment stock prices of the parties involved, so much so that stating them more precisely hardly even makes sense, as you and bramflakes point out. Thanks!
no matter how much is discovered about neurobiology and the measurable correlates of consciousness, it seems to me that stoners will always be able to ask each other, “dude, what if like, my red is your blue?”
no matter how much is discovered about mathematics and the measurable regularities of reality, it seems to me that stoners will always be able to ask each other, “dude, what if like, two plus two isn’t four?”
Seriously though, that’s a really bad argument, why have you added it here?
Are you indicating that only the relation between wavelengths and the brain’s information processing counts, and that differing conscious perceptions of these wavelengths are analogous to the use of different sets of symbols used to denote the additive relationship between “two-ness” and “four-ness” (two plus two equals four and deux plus deux égalent quatre)?
No, I just meant that, just because a ‘stoner’ can ask a question, doesn’t mean the answer to the question is permanently unknowable.
Edit: Or even that difficult to answer. In fact, that a stoner can ask a question is almost no evidence of anything at all. Applying principle of charity, if Scott meant that anyone can always ask that question, that’s true for any question; you can always keep asking if two plus two equals four. Now, if in fact Aaronson wants to present evidence for the claim that we can never know if your and my ‘blues and reds’ are the same, that would be cool, but there was no real argument given.
Recently I changed some of my basic opinions about life, in large part because of interaction with LessWrong (mostly along the axes Deism → Atheism, ethical naturalism → something else (?)).
It inspired me to try to summarize my most fundamental beliefs. The result is as follows:
Epistemology
1.1. Epistemic truth is to be determined solely by the scientific method / Occam’s razor.
1.2. The worldview of mainstream science is mostly correct.
1.3. The many religious / mystical traditions are wrong.
Philosophy of mind
2.1. Consciousness is the result of computing processes in the brain. In particular, if a machine would implement the same computations it would be conscious. However, in general I don’t know what consciousness is.
2.2. Identity is not fundamentally meaningful. However, there might be useful “fuzzy” variants of the concept.
Metaethics
3.1. Humans are agents with (approximately) well-defined utility functions.
3.2. The moral value of an action is the expectation value of the utility function of the respective agent.
3.3. I should take actions with as much value as possible. This is the only meaningful interpretation of “should”.
4.2. I cannot give anything close to a full description of my utility function, but it seems to involve terminal values such as: beauty, curiosity, humor, kindness, friendship, love, sexuality / romance, pleasure… These values are computed on all sufficiently human agents (but I don’t know what “sufficiently human” means). The weights for myself and my friends / loved ones might be higher but I’m not sure.
Less fundamental and less certain are:
Metaphysics
5.1. UDT is the correct decision theory.
5.2. Epistemic questions don’t make fundamental sense (I realize the apparent contradiction with 1.1 but 1.1 is still a useful approximation and there’s also a meta-epistemic level on which UDT itself follows from Occam’s razor) as opposed to decision-theoretic questions. Subjective expectations are ill-defined.
5.3. Tegmark’s level IV multiverse is real, or at least as “real” as anything is.
I’m curious to know how many LessWrongers have similar vs different worldviews.
1.1- Disagree, but I may not understand the claim (what’s ‘epistemic truth’?).
1.2- Agree.
1.3- Agree.
2.1- Agree that consciousness is the result of computing processes in the brain, disagree that a machine implementing the same computations would necessarily be conscious. (i.e. agree with physicalism, don’t agree with functionalism).
2.2- I don’t understand the claim. But I think I disagree.
3.1- Agnostic.
3.2- Disagree.
3.3- Disagree, especially with the claim that this is the only meaningful interpretation of ‘should’.
4.1- Agnostic.
5.1- Agnostic.
5.2- I don’t understand this at all.
5.3- I don’t understand your use of the word ‘real’.
Morality is a fundamental component of human cultures and has been defined as prescriptive norms regarding how people should treat one another, including concepts such as justice, fairness, and rights. Using fMRI, the current study examined the extent to which dispositions in justice sensitivity (i.e., how individuals react to experiences of injustice and unfairness) predict behavioral ratings of praise and blame and how they modulate the online neural response and functional connectivity when participants evaluate morally laden (good and bad) everyday actions. Justice sensitivity did not impact the neuro-hemodynamic response in the action-observation network but instead influenced higher-order computational nodes in the right temporoparietal junction (rTPJ), right dorsolateral and dorsomedial prefrontal cortex (rdlPFC, dmPFC) that process mental states understanding and maintain goal representations. Activity in these regions predicted praise and blame ratings. Further, the hemodynamic response in rTPJ showed a differentiation between good and bad actions 2 s before the response in rdlPFC. Evaluation of good actions was specifically associated with enhanced activity in dorsal striatum and increased the functional coupling between the rTPJ and the anterior cingulate cortex. Together, this study provides important knowledge in how individual differences in justice sensitivity impact neural computations that support psychological processes involved in moral judgment and mental-state reasoning.
I’ve seen a lot of discontent on LW about exercise. I know enough about physical training to provide very basic coaching and instruction to get people started, and I can optimize a plan for a variety of parameters (including effectiveness, duration of workout, frequency of workout, cost of equipment, space of equipment, gym availability, etc.). If anyone is interested in some free one-on-one help, post a request for your situation, budget, and needs and I’ll write up some basic recommendations.
I don’t have much in the way of credentials, except that I’ve coached myself for all of my training and have made decent progress (from sedentary fat weakling to deadlifting 415lbs at 190lb BW and 13% BF). I’ve helped several friends, all of whom have made pretty good progress, and I’ve been able to tailor workouts to specific situations.
I’ve been reading a few books lately that I guess could be classified as “pop psychology” and “pop economics”. (In this category I include books like Kahneman’s Thinking Fast and Slow. Hence what I mean by “pop” is not that it’s shallow but rather that it has a wide lay audience.) Now I’d like to turn to sociology—arguably the most general and all-encompassing of the social sciences. But when I google “pop sociology”, all the books seem to have been written by economists or psychologists or non-academics such as Malcolm Gladwell. For instance, see here:
Are there no well-known pop-sociological books written by sociologists and, if so, what does this say about sociology as a discipline? You very seldom hear about sociological research in the media if you compare with economics and psychology, and surely there has to be an explanation of this?
You can try Everything Is Obvious by Duncan Watts. He’s a multi-disciplinary type, but he is at least partially a sociologist. He also discusses a lot in there about sociology as a discipline.
Erving Goffman is said to be accessible and said to be a sociologist. One issue is that “sociology” has several meanings. His version shades into psychology.
I’m currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:
P(data at least as extreme as your data | Null hypothesis)
This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis (which is the complement of the hypothesis that you are trying to test).
Put another way:
P(data | hypothesis) = 1 - p-value
and if 1 - p-value is high enough then you accept the hypothesis. (My use of “data” is handwaving and not quite correct but it doesn’t matter.)
But it seems more useful to me to calculate P(hypothesis | data). And that’s not quite the same thing.
So what I’m wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it’s true or not, the data has a certain propensity of turning out one way or the other. It’s meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.
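To make the contrast concrete, here’s a toy coin-flip sketch (purely illustrative; the specific numbers and the two-hypothesis setup are just assumptions I’m making for the example). The p-value conditions on the null, while P(hypothesis | data) requires committing to a set of hypotheses and a prior over them:

```python
from scipy.stats import binom

# Toy example: 60 heads in 100 flips. Null hypothesis: the coin is fair (p = 0.5).
n, k = 100, 60

# Frequentist p-value: P(data at least as extreme as ours | null), two-sided.
p_value = 2 * (1 - binom.cdf(k - 1, n, 0.5))
print(p_value)  # roughly 0.057

# Bayesian P(hypothesis | data): needs an explicit hypothesis set and a prior.
# Suppose the only candidates are "fair" (p = 0.5) and "biased" (p = 0.6), each with prior 0.5.
prior_fair, prior_biased = 0.5, 0.5
like_fair = binom.pmf(k, n, 0.5)
like_biased = binom.pmf(k, n, 0.6)
posterior_fair = (like_fair * prior_fair) / (like_fair * prior_fair + like_biased * prior_biased)
print(posterior_fair)  # probability the coin is fair, given the data and *this* prior
```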
I’m currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:
P(data at least as extreme as your data | Null hypothesis)
This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis.
This is correct.
Put another way:
P(data | hypothesis) = 1 - p-value
and if 1 - p-value is high enough then you accept the hypothesis. (My use of “data” is handwaving and not quite correct but it doesn’t matter.)
This is not correct. You seem to be under the impression that P(data | hypothesis) + P(data | null hypothesis) = 1, where your “hypothesis” is the complement of the null. But:
complement(null hypothesis) may not have a well-defined distribution (frequentists might especially object to defining a prior here), and
even if complement(null hypothesis) were well defined, the sum could fall anywhere in the closed interval [0, 2].
More generally, most people (both frequentists and bayesians) would object to “accepting the hypothesis” based on rejecting the null, because rejecting the null means exactly what it says, and no more. You cannot conclude that an alternative hypothesis (such as the complement of the null) has higher likelihood or probability.
But it seems more useful to me to calculate P(hypothesis | data).
That may be true if you have little influence over what data is available.
Frequentists are mainly interested in situations where they can create experiments that cause P(hypothesis) to approach 0 or 1. The p-value is intended to be good at deciding whether the hypothesis has been adequately tested, not at deciding whether to believe the hypothesis given crappy data.
So what I’m wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it’s true or not, the data has a certain propensity of turning out one way or the other. It’s meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.
is correct. Frequentists do indeed claim that P(hypothesis | data) is meaningless for exactly the reasons you gave. However there are some little details in the rest of your post that are incorrect.
null hypothesis (which is the complement of the hypothesis that you are trying to test).
The hypothesis you are trying to test is typically not the complement of the null hypothesis. For example we could have:
H0: theta = 0
H1: theta > 0
where theta is some variable that we care about. Note that the region theta < 0 isn’t in either hypothesis. If we were instead testing
H1′: theta is not equal to 0
then frequentists would suggest a different test. They would use a one-tailed test to test H1 and a two-tailed test to test H1′. See here.
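If it helps, here is a small sketch of the one-tailed/two-tailed difference using a plain z-test (made-up data; in practice you’d use a t-test for a sample this small):

```python
import numpy as np
from scipy.stats import norm

# Sketch: testing H0: theta = 0 against H1: theta > 0 versus H1': theta != 0.
data = np.array([0.3, 0.1, 0.5, -0.2, 0.4, 0.6, 0.2, 0.1])  # made-up sample
z = data.mean() / (data.std(ddof=1) / np.sqrt(len(data)))

p_one_tailed = 1 - norm.cdf(z)              # only theta > 0 counts as evidence against H0
p_two_tailed = 2 * (1 - norm.cdf(abs(z)))   # departures in either direction count
print(p_one_tailed, p_two_tailed)           # the two-tailed p-value is twice the one-tailed one here
```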
P(data | hypothesis) = 1 - p-value
No. This is just mathematically wrong. P(A|B) is not necessarily equal to 1-P(A|¬B). Just think about it for a bit and you’ll see why. If that doesn’t work, take A=”sky is blue” and B=”my car is red” and note that P(A|B)=P(A|¬B)~1.
So what I’m wondering is whether under frequentism P(hypothesis | data) is actually meaningless.
It’s not meaningless, but people who follow R. A. Fisher’s ideas for rejecting the null do not use p(hypothesis | data). “Meaningless” would be if frequentists literally did not have p(hypothesis | data) in their language, which is not true because they use probability theory just like everybody else.
Don’t ask lesswrong about what frequentists claim, ask frequentists. Very few people on lesswrong are statisticians.
“Meaningless” would be if frequentists literally did not have p(hypothesis | data) in their language, which is not true because they use probability theory just like everybody else.
Many frequentists do insist that P(hypothesis) is meaningless, despite “using probability theory.”
There are two misconceptions that you must be aware of, as you will certainly hear these. The first is thinking that we calculate the probability of the null hypothesis being true or false. Whether the null hypothesis is true or false is not subject to chance; it either is true or it is false—there is no probability of one or the other.
So from this statement you conclude that frequentists think P(hypothesis) is meaningless? Bayesians assign degrees of belief to things that are actually true or false also. The coin really is either fair or not fair, but you will never find out with finite trials. This is a map/territory distinction, I am surprised you didn’t get it. This quote has nothing to do with B/F differences.
A Bayesian version of this quote would point out that it is a type error to confuse the truth value of the underlying thing, and the belief about this truth value.
You have successfully explained why it is irrational for frequentists to consider P(hypothesis) meaningless. And yet they do. They would say that probabilities can only be defined as limiting frequencies in repeated experiments, and that for a typical hypothesis there is no experiment you can rerun to get a sample for the truth of the hypothesis.
Yes, you’re right. Clearly many people who identify as frequentists do hold P(hypothesis) to be meaningful. There are statisticians all over the B/F spectrum as well as not on the spectrum at all. So when I said “frequentists believe …” I could never really be correct because various frequentists believe various different things.
Perhaps we could agree on the following statement: “Probabilities such as P(hypothesis) are never needed to do frequentist analysis.”
For example, the link you gave suggests the following as a characterisation of frequentism:
Goal of Frequentist Inference: Construct procedure with frequency guarantees. (For example, confidence intervals.)
Frequency guarantees are typically of the form “for each possible true value of theta, doing the construction blah on the data will, with probability at least 1-p, yield a result with property blah”. Since this must hold for each theta, the distribution over the true value of theta is irrelevant.
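A small simulation makes the “guarantee for each theta” point concrete (a sketch with made-up numbers; sigma assumed known for simplicity):

```python
import numpy as np

# Whatever the true theta is, a 95% confidence interval built this way covers it
# in ~95% of repeated experiments. No prior over theta appears anywhere.
rng = np.random.default_rng(0)
theta = 3.7                 # any fixed "true value"; the guarantee holds for each one
n, trials = 50, 10_000
covered = 0
for _ in range(trials):
    sample = rng.normal(theta, 1.0, n)      # sigma = 1 assumed known
    half_width = 1.96 / np.sqrt(n)
    if abs(sample.mean() - theta) <= half_width:
        covered += 1
print(covered / trials)     # ~0.95, regardless of which theta you picked
```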
I could never really be correct because various frequentists believe various different things.
The interesting questions to me are: (a) “what is the steelman of the frequentist position?” (folks like Larry are useful here), and (b) “are there actually prominent frequentist statisticians who say stupid things?”
By (b) I mean “actually stupid under any reasonable interpretation.”
Clearly many people who identify as frequentists
Quote from the url I linked:
One thing that has harmed statistics — and harmed science — is identity statistics. By this I mean that some people identify themselves as “Bayesians” or “Frequentists.” Once you attach a label to yourself, you have painted yourself in a corner.
When I was a student, I took a seminar course from Art Dempster. He was the one who suggested to me that it was silly to describe a person as being Bayesian or Frequentist. Instead, he suggested that we describe a particular data analysis as being Bayesian or Frequentist. But we shouldn’t label a person that way.
I think Art’s advice was very wise.
“Keep your identity small”—advice familiar to a LW audience.
Perhaps we could agree on the following statement: “Probabilities such as P(hypothesis) are never needed to do frequentist analysis.”
I guess you disagree with Larry’s take: B vs F is about goals, not methods. I could do Bayesian-looking things while having a frequentist interpretation in mind.
In the spirit of collaborative argumentation, can we agree on the following:
We have better things to do than engage in identity politics.
But it seems more useful to me to calculate P(hypothesis | data). And that’s not quite the same thing.
It is not the same thing and knowing P(hypothesis | data) would be very useful. Unfortunately, it is also very hard to estimate because usually the best you can do is calculate the probability, given the data, of a hypothesis out of a fixed set of hypotheses which you know about and for which you can estimate probabilities. If your understanding of the true data-generation process is not so good (which is very common in real life) your P(hypothesis | data) is going to be pretty bad and what’s worse, you have no idea how bad it is.
Not having a good grasp on the set of all hypotheses does not distinguish bayesians from frequentists and does not seem to me to motivate any difference in their methodologies.
Added: I don’t think it has much to do with the original comment, but testing a model without specific competition is called “model checking.” It is a common frequentist complaint that bayesians don’t do it. I don’t think that this is an accurate complaint, but it is true that it is easier to fit it into a frequentist framework than a bayesian framework.
I have said nothing about the differences between bayesians and frequentists. I just pointed out some issues with trying to estimate P(hypothesis | data).
I don’t think that’s really the fallacy of grey. If I have to map it to a piece of LessWrong-speak, I’d call it privileging the hypothesis: it’s not saying you can’t be sure of anything (if you’re a good Bayesian it’s not really telling you anything new there); it’s just bringing one particular super-implausible hypothesis to your attention so that it occupies much more of your mind than it has any right to.
I am assembling a list of interesting blogs to read and for that purpose I’d love to see the kind of blog the people in this community recommend as a starting point. Don’t see this just as a request to post blogs according to my unknown taste but as a request to post blogs according to your taste in the hope that the recommendation scratches an itch in this community.
John Baez, “from math to physics to earth science and biology, computer science and the technologies of today and tomorrow,” plus stuff on catastrophic risk w.r.t. climate change.
Andrew Gelman, pointing out bad statistics in social science. More than once, I’ve revised my beliefs about some study months later when he points out a failure to replicate or other problem.
Timothy Gowers, math, analysis I stuff recently, mostly over my head
Terry Tao, math, number theory, mostly over my head
And finally my blog, I try to share surprising information, cogsci/relationships/computer science/math
gwern posts on google+ and Kaj Sotala posts interesting stuff on Facebook. I also subscribe to a number of journals’ tables of contents via this site to keep up with research, and some stuff on arxiv.
I have to admit the intersection with my feed list is most definitely non-empty: I’d add Good Math Bad Math, mathematics, computer science and, sometimes, recipes and playlists.
After discussing diffusion of interesting news at the most recent London meetup, I was planning on asking something like this myself.
Futility Closet is nothing but “interesting stuff”. It describes itself as “a collection of entertaining curiosities in history, literature, language, art, philosophy, and mathematics, designed to help you waste time as enjoyably as possible”. It has more chess than I personally care for, but is updated with what I find to be novel content three times a day.
Conscious Entities is a blog on Philosophy of Mind. It takes an open position on a lot of questions we would consider to be settled on LessWrong, but I think it has value in a steel-manning / why-do-people-believe-this capacity.
(The categories on my feed reader are “Blogs”, “CS”, “Dance”, “Econ”, “Esoterica”, “Maths/Stats”, “Philosophy”, “Science” and “Webcomics”. I’d be interested in finding out how other people classify theirs.)
In the Pipeline is a very good blog for keeping up with what’s happening in big pharma and chemistry in general. It’s written by someone employed inside a pharma company, not someone who criticizes pharma policy from an outsider standpoint.
There are some posts about specific chemical reactions that I skip, but if you want to understand how the healthcare industry works, I can recommend the blog very much. It also provides good coverage when a new relevant biomedical finding comes up in the news.
The comment section is usually full of people with domain experience.
I used to read Matt Taibbi’s Rolling Stone blog column for financial news. Stories about Libor are explained in detail on that blog.
Matt has now left Rolling Stone and will lead his own magazine under the umbrella of First Look Media, which is funded by eBay cofounder and tech billionaire Pierre Morad Omidyar. At the moment First Look Media already publishes Glenn Greenwald & company, whose primary job is still processing the pile of Snowden documents.
I read Glenn Greenwald for a long time, but reading about the NSA every other day became boring, so I won’t read everything that comes out in The Intercept, but I do subscribe to the feed.
Alternativlos is an interesting project. Two members of the Chaos Computer Club basically concluded that speaking to the public and explaining to them how things work politically is “without alternative”, and started explaining topics in two-hour episodes.
It covers a topic like Stuxnet in enough detail to explain, for example, what exploits cost on the black market.
Today some public radio stations rebroadcast the podcast.
I don’t really read many blogs frequently. The one sort-of-bloggish site I visit daily is 3 Quarks Daily. They link to a number of interesting articles every day, and their taste in topics coincides quite nicely with mine.
Some factors pointing towards an increase: decreased emotional, health, and financial cost of commutes.
Some factors pointing towards a decrease: decreased cost and increased flexibility and convenience of car-sharing services, which work better in higher density locations.
I think the primary driver of urban sprawl is schooling (and thus home prices), not commuting. The growing acceptance of online schooling will likely decrease urban sprawl significantly.
Here’s an economics analysis: self-driving cars reduce the effective cost of a commute, by allowing the passenger to focus on other things than driving during the commute (reading, watching TV, playing games, etc). Since a significant limit to sprawl is the cost, broadly construed, of long commutes, self-driving cars will increase sprawl.
Based on the idea that you get what you incentivize, and irrespective of other factors, I’d expect a marginal to mild increase. Self-driving cars can make commutes a bit more pleasant and substantially less dangerous, but they can’t reduce commute times (until they reach saturation or close to it), and time’s the main limitation.
Some of the effects will depend on details of the implementation. For example, if self-driving cars are constrained to obey highway speed limits, the commute time may increase in some cases, at least initially. Upon achieving saturation of self-driving cars, I would expect shorter commute times on non-highways. Also, upon saturation, it may be seen as desirable to raise the highway speed limit.
Keep in mind that non-self-driving cars will always be cheaper and will always have a market no matter how good autonomous ones get since autonomous vehicles have more parts and maintenance needs. You will never have an area with only autonomous vehicles, absent massive government intervention.
I place the likelihood of massive government intervention or the equivalent (insurance becoming flat out unavailable for manual drivers) at somewhere north of 90%. Driver error has a really high cost in quality-adjusted life years, every year. If eliminating that cost becomes an option, it will get used.
A friend of mine has mild anorexia (she’s on psych meds to keep it contained) and recently asked me some advice about working out. She told me that she is mainly interested in not being so skinny. I offered to work out with her one day of the week to make sure she’s going about things correctly, with proper form and everything.
The thing is, just going to the gym and working out isn’t effective if her diet and sleeping cycle aren’t also improved. I would normally be really blunt about these other facts, but her dealing with anorexia probably complicates things a bit… especially the proper diet part. I was thinking that if she has trouble eating enough, maybe she could try drinking some protein shakes. But I’m not sure if that would actually be effective in helping her reach her goal of putting on more weight if she’s not eating properly other times of the day. If anyone has any advice on how I could more effectively broach that subject without being insulting or belittling I would appreciate it.
If she’s on medicine to contain her anorexia she knows she has an issue. You could start by simply asking her what she eats and listening empathically.
I would also suggest that you think about your relationship with her. What does she want? Does she want your approval? Does she want you to tell her what to do, so she doesn’t have to take responsibility for herself? Does she care about looking beautiful to you? Does she want a relationship with you? Do you want a relationship with her?
Knowing answers to questions like that is important when you deal with deep psychological issues. It shapes how the words you say will be understood.
That’s actually a good question. Without disclosing too much of her psych history, she seems to be really impulsive and might even be prone to addiction. I suppose she could get an exercise disorder… this makes it even more complicated than I thought.
I’d caution that suspecting (out loud) that she might develop an exercise disorder would be one of those insulting or belittling things you were worried about (either because it seems like a cheap shot based on the anorexia diagnosis, or because this might be one approach to actually getting out from under the anorexia by exerting control over her body).
Likely a better approach to this concern would be to silently watch for those behaviours developing and worry about it if and when it actually does happen. (Note that refusing to help her with training and diet means she gets this help from someone who is not watching out for the possibility of exercise addiction).
There are a few approaches that might work for different people:
Talk as though she doesn’t have anorexia. Since you are aware, you can tailor your message to avoid saying anything seriously upsetting (i.e you can present the diet assuming control of diet is easy, or assuming control of diet is hard). I don’t recommend this approach.
Confront the issue directly (“Exercise is what tells your body to grow muscle, but food is what muscles are actually built out of, so without a caloric surplus your progress will be slow. I’m aware that this is probably a much harder challenge for you than most people...”). I don’t recommend this approach.
Ask her how she feels about discussing diet. (“Do you feel comfortable discussing diet with me? Feel free to say no. Also, don’t feel constrained by your answer to this question; if later you start wishing you’d said no, just say that, and I’ll stop.”). I recommend this approach.
In any case, make it clear from the outset you want to be respectful about it.
You may directly ask her in what terms she prefers to discuss those matters. That way you’ll get across your message, i.e. that proper diet is worth talking about, with little risk of involving the wrong message in the mix.
This form of question seems no less likely to raise problems than asking about the topic itself.
I’d suggest something more along the lines of “This is pretty standard advice, and it works for most people, but it’s built on some assumptions about an average diet. How much should I be tailoring it for you?” Which is basically an indirect request for a status report, without implying anything about whether or not her current eating pattern is unhealthy. From that response, you can probably gauge to what degree you can safely bring it up directly.
If she wants to get bigger, then I’d get her started with Greyskull LP. It’s a fairly basic beginner weight lifting program that, when combined with a caloric surplus, will get good results for size and strength. There isn’t much work involved (just three sets on 2-3 exercises; doing more is counterproductive for beginners) so it won’t use as much energy as a cardio or circuit intensive routine.
A couple of protein shakes with milk/almond milk are enough to get a caloric surplus going. You only need 250-500 extra calories to make good gains, and you can easily get that with a shake or two.
A friend of mine has mild anorexia (she’s on psych meds to keep it contained)
I don’t have good ideas about dealing with anorexia, but I think you should suggest to your friend that she is being used as a pawn by the psycho-pharmaceutical industry to extract dollars from her health insurance provider.
Every “proof” of Godel’s incompleteness theorem I’ve found online seems to stop after what I would consider to be the introduction. I find myself saying “yes, good, you’ve shown that it suffices to prove this fixed point theorem… now where’s the proof of the fixed point theorem, surely that’s the actual meat of the proof?” Anyone have a good source that shows the full proof, including why for a particular encoding of sentences as numbers the function “P → P is not provable” must have a fixed point?
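For what it’s worth, here is the usual fixed-point (diagonal lemma) argument, compressed; I’m reproducing it from memory, so treat it as a sketch and check a logic textbook (e.g. Boolos or Peter Smith) for the careful version. Fix an encoding and let sub(n, m) denote the Gödel number of the formula obtained by substituting the numeral for m into the free variable of the formula with Gödel number n; sub is computable, hence representable in the theory, and showing that is where most of the real work lives. Now, given any formula F(x), such as “x is not provable”, define D(x) := F(sub(x, x)), let n be the Gödel number of D, and set G := D(n) = F(sub(n, n)). By construction, sub(n, n) is exactly the Gödel number of D(n), i.e. of G itself, so the theory proves G ↔ F(⌜G⌝): G is the fixed point you were asking about. The “meat”, in other words, is mostly the bookkeeping needed to arithmetize substitution and provability; the fixed point then falls out of this short diagonalization.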
Here’s a cute/vexing decision theory problem I haven’t seen discussed before:
Suppose you’re performing an interference experiment with a twist: Another person, Bob, is inside the apparatus and cannot interact with the outside world. Bob observes which path the particle takes after the first mirror, but then you apply a super-duper quantum erasure to Bob so that they remember observing the path of the particle, but they don’t remember which path it took. Thus, at least from your perspective, the superposed versions of Bob interfere, and the particle always hits detector 2. (I can’t find the reference for super-duper quantum memory erasure, probably because it’s behind a paywall. Perhaps (Deutsch 1996) or (Lockwood 1989).)
Suppose that after Bob makes their observation, but before you observe Bob, you offer to play a game with Bob: If the particle hits detector 2, you give them $1; but if it hits detector 1, they give you $2. Before the experiment ran, this would have seemed to Bob like a guaranteed $1. But during the experiment, it seems to Bob that the game has expected value -$.50. What should Bob do?
If it seems unfair to wipe Bob’s memory, there’s an equivalent puzzle in which Bob doesn’t learn anything about the particle’s state, but the particle nevertheless becomes entangled with Bob’s body. In that case, the super-duper quantum erasure doesn’t change Bob’s epistemic state.
My grasp of quantum physics is rudimentary; please let me know if I’m completely wrong.
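For whatever it’s worth, the interference part of the setup can be checked with a few lines of linear algebra (a sketch using a Hadamard-style beam splitter; it says nothing about how the “super-duper erasure” would actually be done):

```python
import numpy as np

# Mach-Zehnder sketch: model each beam splitter as a Hadamard-like unitary on the two paths.
BS = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
psi = np.array([1.0, 0.0])                       # particle enters along path 0

rho_coherent = np.outer(BS @ psi, BS @ psi)      # paths stay coherent (the outside view after erasure)
rho_decohered = np.diag(np.diag(rho_coherent))   # which-path info persists (Bob's collapsed description)

for label, rho in [("coherent", rho_coherent), ("decohered", rho_decohered)]:
    out = BS @ rho @ BS.T                        # second beam splitter
    print(label, np.diag(out))                   # detector probabilities
# coherent:   [1. 0.]   -- everything reaches one detector (the "guaranteed $1" case)
# decohered:  [0.5 0.5] -- 50/50, which is where Bob's -$0.50 expectation comes from
```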
I disagree that Bob’s expected value drops to −$0.50 during the experiment. If Bob is aware that he will be “super-duper quantum memory erased”, then he should appropriately expect to receive $1.
There may be more existential dread during the experiment, but the expectations about the outcome should stay the same throughout.
Ok, User:Manfred makes the same point here. It implies that at any point, heretofore invisible worlds could collide with ours, skewing the results of experiments and even leaving us with no future whatsoever (although admittedly with probability 0). Would you agree with that?
Worlds only interfere when they evolve into the same state. Because the state space is exponentially large, only worlds that are already almost-equivalent to our world are likely to “collide with us”.
If you’ve based a decision on some observation, worlds where that observation didn’t happen are not almost-equivalent. They differ in trillions (note: massive underestimate) of little ways that would all need to be corrected simultaneously, lest the differences continue to compound and push things even further apart. Their contribution to the branch we’re in is negligible.
Your thought experiment used a “super duper quantum eraser”, but in reality I don’t think such a thing is actually possible. The closest analogue I can think of is a quantum computer, but those prevent decoherence/collapse. They don’t undo it.
Bob cannot become entangled with the outside world while in the middle of a quantum erasure experiment, or else it doesn’t work. So he doesn’t really get to do anything :P
If Bob knows that the particle becomes entangled with him, then he still makes the same predictions.
If Bob knows that the particle becomes entangled with him, then he still makes the same predictions.
Ok, that’s surprising. Here’s why I thought otherwise: From Bob’s perspective, a particle is prepared in a superposition of states B and C. Then Bob observes or becomes entangled with the particle, thus collapsing its state. Then the super-duper quantum erasure is performed, which preserves the state of the particle. Then the particle strikes the second half-silvered mirror. A collapse interpretation tells Bob to expect two outcomes with equal probability. Is this, then, an experiment where a collapse interpretation and a many-worlds interpretation give different predictions?
The collapse interpretation predicts that you can’t do the super-duper quantum erasure. Once the collapse has occurred the wavefunction can’t uncollapse.
Basically, there are a variety of collapse interpretations depending on where you make the collapse happen. Every time we’ve tested these hypotheses (e.g. by this sort of experiment), we haven’t been able to see an early collapse.
At this point, all actual physicists I know just postpone the collapse whenever necessary to get the right answer.
Hm, so that means that quantum physics predicts that our observations depend on the presence of parallel worlds in the universal wavefunction, which in theory might interfere with our experiments at any time, right?
Hm, good point. We could set aside a few instants for him to send a few photons that wouldn’t depend on the state of the particle. From a practical standpoint that’s pretty impossible, but forget practicality.
So, sure; Bob should accept the bet. Although if he makes his answer to the bet depend on the state of the particle at all, then he shouldn’t accept the bet :P There might be some interesting applications of this, say where the option “don’t accept” has some positive payoff if the particle changes directions. Bob can precommit to send out an entangled qubit to get some chance at that reward.
Rational thinking against fear in a TED talk by (ex) astronaut Chris Hadfield. Has anyone else seen it? I really enjoyed it, in particular the spider example.
It seems clear that for people with a bachelor’s in CS, from a purely monetary viewpoint, getting a master’s in the same area usually is dumb unless you plan on programming a long time.
This article says the average mid-career pay for MSc holders is $114,000. This says the mid-career bachelor’s salary is $102,000. A master’s means 12 to 24 months of lost pay, plus anywhere from a $20,000/year salary in some lucky cases to $50,000+ of debt. You need at least a decade of future work to justify this. And that likely overstates the benefits since it does not control for ability.
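A rough back-of-the-envelope version of that calculation, using only the figures above (so take it as illustrative, not as a careful model with discounting or career effects):

```python
# Back-of-the-envelope: years of the MSc salary premium needed to recover the upfront cost.
salary_gap = 114_000 - 102_000     # mid-career MSc premium per year (figures quoted above)
lost_pay = 1.5 * 102_000           # ~12-24 months out of the workforce, priced at the BSc figure
debt = 50_000                      # worst case quoted; some programs pay ~$20k/year instead
upfront_cost = lost_pay + debt
print(upfront_cost / salary_gap)   # ~17 years in this worst case, ignoring ability bias and discounting
```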
I don’t necessarily trust these statistics but employers can always make people write code on whiteboards to assess actual skill.
An exception might be if you want technically cutting-edge CS: Google for example prefers MSc/PhD guys. But I think most programming jobs are not like that.
FYI, it is possible to do a part-time masters program and some employers will pay for you to get a graduate degree (usually as part of an agreement to keep working for the company several years afterwards).
IMO the real problem is that academia teaches computer science whereas what programmers need to know to be valuable is software engineering. Those seem to be rather different disciplines.
Disclaimer: I didn’t study CS myself and this opinion is based on indirect evidence.
So I’ve kind of formulated a possible way to use markets to predict quantiles. It seems quite flawed looking back on it two and a half weeks later, but I still think it might be an interesting line of inquiry.
My comments earlier. Wolfram is a beautiful language with well designed libraries. If you are new to programming, you will learn a lot mucking about with it. It’s probably not going to take the world by storm.
I’m not sure whether this is a bigger problem than already exists in general, but if some of the information in the libraries is out of date or just plain wrong, it would be a challenge to notice that there’s a problem.
Apparently, founding mathematics on Homotopy Type Theory instead of ZFC makes automated proof checking much simpler and more elegant. Has anybody tried reformulating Max Tegmark’s Level IV Multiverse using Homotopy Type Theory instead of sets to see if the implied prior fits our anthropic observations better?
Homotopy type theory differs from ZFC in two ways. One way is that it, like ordinary type theory, is constructive and ZFC is not. The other is that it is based in homotopy theory. It is that latter property which makes it well suited for proofs in homotopy theory (and category theory). Most of the examples in slides you link to are about homotopy theory.
Tegmark is quite explicit that he has no measure and thus no prior. Switching foundations doesn’t help.
It is that latter property which makes it well suited for proofs in homotopy theory (and category theory). Most of the examples in slides you link to are about homotopy theory.
I found a textbook after reading the slides, which may be clearer. I really don’t think their mathematical aspirations are limited to homotopy theory, after reading the book’s introduction—or even the small text blurb on the site:
Homotopy type theory offers a new “univalent” foundation of mathematics, in which a central role is played by Voevodsky’s univalence axiom and higher inductive types. The present book is intended as a first systematic exposition of the basics of univalent foundations, and a collection of examples of this new style of reasoning
Which implied prior? My understanding is that the problem with Multiverse theories is that we don’t have a way to assign probability measures to the different possible universes, and therefore we cannot formulate an unambiguous prior distribution.
The two usual implied priors taken from Level IV are a) that every possible universe is equally likely and b) that universes are likely in direct proportion to the simplicity of their description. Some attempts have been made to show that the second falls out of the first.
Well, I don’t really math; but the way I understand it, computable universe theory suggests Solomonoff’s Universal prior, while the ZFC-based mathematical universe theory—being a superset of the computable—suggests a larger prior; thus weirder anthropic expectations. Unless you need to be computable to be a conscious observer, in which case we’re back to SI.
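(For reference, the universal prior being gestured at here weights a string x by roughly m(x) = sum over programs p with U(p) = x of 2^(-length(p)), for a prefix universal machine U, so hypotheses count in proportion to the length of their shortest descriptions. Extending that kind of weighting from computable descriptions to the full ZFC-style Level IV ensemble is exactly the part nobody knows how to do, which is the missing-measure point made above.)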
I enjoy reading perceptive / well-researched futurism materials. What are some good resources for this? I’m looking for books, blogs, newsfeeds, etc.. Also, I’m only looking for popular-level rather than academic material—I have neither the time nor the knowledge to read through most scholarly articles on the subject.
I think what you’re looking for is a little temporary public humiliation, not intrinsic to the award but coming along with more standard recognition. You could do this in several different ways. If it’s a fairly small group and the awards are a fairly big deal, for example, you could run a roast as part of the party following the award. You could probably contrive ways to add this kind of symbolism to physical awards, too.
Isn’t being praised already unpleasant enough?
Most people rather like it. It appears you don’t; what makes you dislike it?
Public attention of any kind is just embarrassing. Probably not unrelated to past experience of politics, friction, and jealousy resulting from praise. But if I were threatened with public praise in my job I would be strongly tempted to quit before it could strike.
Woah, that’s paranoid!
Find a way to make it non-zero-sum. Only 1 person can be employee of the year, so others lose out and may resent their colleague. Maybe a gold/silver/bronze border around your portrait on the intranet, with a tooltip that explains what you got it for?
Money. Give the person a token bonus and let it be known that if you get a bonus you are not supposed to tell other people about it.
Money is expensive, and it isn’t public recognition.
Cryonics ideas in practice:
“The technique involves replacing all of a patient’s blood with a cold saline solution, which rapidly cools the body and stops almost all cellular activity. “If a patient comes to us two hours after dying you can’t bring them back to life. But if they’re dying and you suspend them, you have a chance to bring them back after their structural problems have been fixed,” says surgeon Peter Rhee at the University of Arizona in Tucson, who helped develop the technique.”
http://www.newscientist.com/article/mg22129623.000-gunshot-victims-to-be-suspended-between-life-and-death.html
I welcome criticism of my new personal favorite population axiology:
The value of a world-history that extends the current world-history is the average welfare of every life after the present moment. For people who live before and after the current moment, we need to evaluate the welfare of the portion of their life after the current moment. The welfare of a person’s life is allowed to vary nonlinearly with the number of years the person lives a certain kind of life, and it’s allowed to depend on whether the person’s experiences are veridical.
This axiology implies that it’s important to ensure that the future will contain many people who have better lives than us; it’s consistent with preferring to extend someone’s life by N years rather than creating a new life that lasts N years. It’s immune to Parfit’s Repugnant Conclusion, but doesn’t automatically fall prey to the opposite of the Repugnant Conclusion. It implies that our decisions should not depend on whether the past contained a large, prosperous civilization.
There are straightforward modifications for dealing with general relativity and splitting and merging people.
The one flaw is that it’s temporally inconsistent: If future generations average the welfare of lives after their “present moments”, they will make decisions we disapprove of.
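To make the averaging concrete, here is a toy calculation (the welfare numbers and the tiny helper function are mine, purely illustrative):

```python
# Toy sketch of the axiology above: the value of a candidate future is the average
# welfare of the life-segments lived after "now" (welfare numbers are made up).
def value(welfares_after_now):
    return sum(welfares_after_now) / len(welfares_after_now)

flourishing_future = [90] * 1_000                      # a thousand lives at welfare 90
repugnant_future = [90] * 1_000 + [1] * 1_000_000      # plus a million lives barely worth living

print(value(flourishing_future))   # 90.0
print(value(repugnant_future))     # ~1.09, so the "repugnant" expansion scores worse
```

That is the sense in which piling on barely-worth-living lives can never count as an improvement here.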
I build a robot that hibernates until the last person presently alive dies, then exterminates all people who are poor, unhappy, or don’t like my robot. Good thing?
A person that has a life worth living could have the welfare of their life increase monotonically with their lifespan. In that case, ending a life usually makes the world-history worse.
Can you give an example? It seems to me that if they decide at t_1 to maximise average welfare from t_1 to ∞, then given that welfare from t_0 to t_1 is held fixed, that decision will also maximise average welfare from t_0 to ∞.
Edit: oh, I was thinking of an average over time, not people.
Earth produces a long and prosperous civilization. After nearly all the resources are used up, the lean and hardscrapple survivors reason, “let’s figure out how to squeeze the last bits of computation out of the environment so that our children will enjoy a better life than us before our species goes extinct”. But from our perspective, those children won’t have as much welfare as the vast majority of human lives in our future, so those children being born would bring our average down. We would will the hardscrapple survivors to not produce more people.
Are you sweeping the complexity of value under the terms “better” and “veridical”? Does following your axiology prevent humanity from evolving into a race of happy-go-lucky clones?
Yes. It’s hard enough to come up with a decent way of aggregating individual welfares without making a comprehensive theory of value.
Is this different from whether their perception of their experiences is correct, or is it jargon?
Yes, I mean (for example) that if a person believes they’re married to someone, their life’s welfare could depend on whether their spouse is a real person or if it’s a simple chatbot. Also, if a person feels that they’ve discovered a deep insight, their life’s welfare could depend on whether they have actually discovered such an insight.
So it’s just jargon. OK.
It took me a long time to find LessWrong, and I found it through a convoluted and ultimately entirely random series of events. However, English is not my first language and I don’t live in an anglophone country, so I’d love to find a similar community in my own language, German, or more generally interesting smaller but active communities in languages other than English. How would I go about that?
There are LessWrong meetups in many countries, in particular there are 4 in Germany.
See http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups
I’m pretty sure that due to the free rider problem, active and lively communities are much more likely to grow via word of mouth than via public advertising or easily googled websites.
Maybe try Mensa? I don’t know if they’re any good, but they’re in Germany and they’re big enough to know an LWish community if one is available.
In Berlin we found that having Quantified Self events in English makes more sense than holding them in German. All the interested people speak English. Our local Berlin LW meetups are also in English. There’s no good reason to use German if you want to discuss intellectual topics on a deep level. English is the language of science. English is the language of programming.
That said, see whether there’s a LW meetup, QS meetup, Chaos Computer Club Erfa, or hackerspace in your city. That’s where the kind of people who are here hang out. Meetup.com in general is good for finding interesting groups.
When the Chaos Computer Congress was still in Berlin I went there multiple years in a row. Now I don’t travel to Hamburg, but if you are free on 27-30 December, going there is a very worthwhile experience; you get to be in the company of very smart people.
As far as general strategies for finding communities, talk to people. Ask them to which communities they belong.
At Paris meetups we occasionally have more people who speak German than people who speak French :)
Who actually speak it at the meetup or who can speak it? People in Paris speaking German at a public meetup would go against my idea of French people looking out to protect their language.
Who can speak it; we usually speak English though not if all participants speak good French.
LW may be interested to learn about Amazon Smile, which gives 0.5% of your Amazon purchases to charity, and the Smile Always Chrome extension that will route your browser to smile.amazon.com by default. (Yes, you can support MIRI through Amazon Smile.) Total setup time estimated at under 5 minutes.
Oh yeah, it looks like they’re having some kind of promotion where if you sign up and make a purchase by March 31, they will give an extra $5 to your chosen charity.
I have been using Amazon Smile and Smile Always for MIRI for about a year.
IIRC, Amazon Smile used to be listed on MIRI’s Donate for free page, but has since been replaced by “Good Shop”. Good Shop appears to give a higher percentage, but I was unable to get the browser extension working so that it happened automatically, so I still use Smile. If anyone knows of a way to get it working, I’d be happy to hear it. But I tried to do it manually for a while, and I just don’t remember often enough.
Is this for the EU too? 0.5% seems a bit low, for every $200 you’d spend the charity receives $1. Then again, it is about the aggregate effect.
Edit: Browsing through the charities, they are all located in the US, so it seems specific to there.
Facebook bought Oculus Rift for $2 billion. What makes this, and so many other large deals, such clean numbers? Are the press rounding the details? Are the companies only releasing approximate or estimate numbers? Can the value of a company like Oculus really not be estimated to the nearest 10%? Or do these whole numbers just serve as nice Schelling points on which to hinge a bargain? Or am I forgetting lots of ugly-numbered deals?
(WhatsApp purchase was 2 significant figures, and this list on Wikipedia does show mostly 2-3 significant figures though some figures are probably converted from other currencies.)
In cases like this, a large portion of the “money” paid is actually in the form of shares, which can vary wildly on a day-to-day basis (especially during a takeover!). It doesn’t make sense to specify the value of it too precisely because nobody knows what the shares are going to be worth tomorrow.
That’s a good reason for the press to round such figures, but the actual figure is round, on the day it was decided, and that itself is mysterious.
What makes you think that these numbers are determined by some kind of rational cost-benefit analysis rather than those with the money rattling off numbers until those with the property give in?
Why not all of the above? We can see some rounding already; http://investor.fb.com/releasedetail.cfm?ReleaseID=835447 says
There’s rounding right there (23.1m * 69.35 is not a round number). And there’s plenty of uncertainty about how much they will actually pay: how can anyone know how much of that earn-out will ultimately be paid?
That sounds backwards to me. It looks to me like they started at the round $2.0 billion, set 20% in cash and 80% in stock, found that required 23.1m shares of Facebook, and rounded that to three digits to get within 2% of the $1.6 billion they wanted.
Also, the incentive pay of $300m is 15% of the fixed payment, which is less uncertainty than rounding to one significant figure.
It absolutely could be all of the above! But see, I write questions like this fairly frequently: I notice something surprising and don’t have a good explanation for it. I then write down the question and pose a couple of possible explanations, which makes me think of more possible explanations. Frequently I realize that, taken together, the possibilities I thought of are enough to explain what I was surprised at, and I don’t even ask the question. Other times, like this one, I still feel like I’m missing something. So I ask the question.
In this case it looks like the biggest thing I missed was how much these sales’ values depend upon the moment-to-moment stock prices of the parties involved, and so not rounding them hardly even makes sense, as you and bramflakes point out. Thanks!
Google’s IPO was e * 10^9 dollars :)
No open_thread tag. (‘Latest Open Thread’ doesn’t link to here)
Edit: For some reason the one before doesn’t have the tag either…
Fixed, I hope entering the tag post hoc fixes the issue.
People have been posting in both recently.
Scott Aaronson on subjectivity of qualia:
Lol.
Seriously though, that’s a really bad argument, why have you added it here?
You can read the full argument in the comments to http://www.scottaaronson.com/blog/?p=1753
“Dude, what if, like there’s a number in between zero and the smallest positive real?”
Are you indicating that only the relation between wavelengths and the brain’s information processing counts, and that differing conscious perceptions of these wavelengths are analogous to the use of different sets of symbols used to denote the additive relationship between “two-ness” and “four-ness” (two plus two equals four and deux plus deux égalent quatre)?
No, I just meant that, just because a ‘stoner’ can ask a question, doesn’t mean the answer to the question is permanently unknowable.
Edit: Or even that difficult to answer. In fact, that a stoner can ask a question is almost no evidence of anything at all. Applying principle of charity, if Scott meant that anyone can always ask that question, that’s true for any question; you can always keep asking if two plus two equals four. Now, if in fact Aaronson wants to present evidence for the claim that we can never know if your and my ‘blues and reds’ are the same, that would be cool, but there was no real argument given.
How to learn charisma.
Book on the subject
Recently I changed some of my basic opinions about life, in large part because of interaction with LessWrong (mostly along the axes Deism → Atheism, ethical naturalism → something else (?)).
It inspired me to try to summarize my most fundamental beliefs. The result is as follows:
Epistemology
1.1. Epistemic truth is to be determined solely by the scientific method / Occam’s razor.
1.2. The worldview of mainstream science is mostly correct.
1.3. The many religious / mystical traditions are wrong.
Philosophy of mind
2.1. Consciousness is the result of computing processes in the brain. In particular, if a machine would implement the same computations it would be conscious. However, in general I don’t know what consciousness is.
2.2. Identity is not fundamentally meaningful. However, there might be useful “fuzzy” variants of the concept.
Metaethics
3.1. Humans are agents with (approximately) well-defined utility functions.
3.2. The moral value of an action is the expectation value of the utility function of the respective agent.
3.3. I should take actions with as much value as possible. This is the only meaningful interpretation of “should”.
Ethics
4.1. Human utility functions are complex.
4.2. I cannot give anything close to a full description of my utility function, but it seems to involve terminal values such as: beauty, curiosity, humor, kindness, friendship, love, sexuality / romance, pleasure… These values are computed on all sufficiently human agents (but I don’t know what “sufficiently human” means). The weights for myself and my friends / loved ones might be higher but I’m not sure.
Less fundamental and less certain are:
Metaphysics
5.1. UDT is the correct decision theory.
5.2. Epistemic questions don’t make fundamental sense (I realize the apparent contradiction with 1.1 but 1.1 is still a useful approximation and there’s also a meta-epistemic level on which UDT itself follows from Occam’s razor) as opposed to decision-theoretic questions. Subjective expectations are ill-defined.
5.3. Tegmark’s level IV multiverse is real, or at least as “real” as anything is.
I’m curious to know how many LessWrongers have similar vs different worldviews.
What is UDT?
Updateless decision theory.
Is this an epistemic truth?
No :) See below.
1.1- Disagree, but I may not understand the claim (what’s ‘epistemic truth’?). 1.2- Agree. 1.3- Agree. 2.1- Agree that consciousness is the result of computing processes in the brain, disagree that a machine implementing the same computations would necessarily be conscious. (i.e. agree with physicalism, don’t agree with functionalism). 2.2- I don’t understand the claim. But I think I disagree. 3.1- Agnostic. 3.2- Disagree. 3.3- Disagree, especially with the claim that this is the only meaningful interpretation of ‘should’. 4.1- Agnostic. 5.1- Agnostic. 5.2- I don’t understand this at all. 5.3- I don’t understand your use of the word ‘real’.
By “epistemic truth” I mean truth regarding the physical universe. Maybe that is a poor choice of words. Physical truth?
So do you mean ‘the only grounds for knowledge about the physical universe is the scientific method/Occam’s razor’?
Yep. Although under a UDT / multiverse interpretation it becomes “knowledge about the region of the multiverse in which I am located”.
The Good, the Bad, and the Just: Justice Sensitivity Predicts Neural Response during Moral Evaluation of Actions Performed by Others.
I’ve seen a lot of discontent on LW about exercise. I know enough about physical training to provide very basic coaching and instruction to get people started, and I can optimize a plan for a variety of parameters (including effectiveness, duration of workout, frequency of workout, cost of equipment, space of equipment, gym availability, etc.). If anyone is interested in some free one-on-one help, post a request for your situation, budget, and needs and I’ll write up some basic recommendations.
I don’t have much in the way of credentials, except that I’ve coached myself for all of my training and have made decent progress (from sedentary fat weakling to deadlifting 415lbs at 190lb BW and 13%BF). I’ve helped several friends, all of whom have made pretty good progress, and I’ve been able to tailor workouts to specific situations.
I’ve been reading a few books lately that I guess could be classified as “pop psychology” and “pop economics”. (In this category I include books like Kahneman’s Thinking, Fast and Slow. Hence what I mean by “pop” is not that it’s shallow but rather that it has a wide lay audience.) Now I’d like to turn to sociology—arguably the most general and all-encompassing of the social sciences. But when I google “pop sociology”, all the books seem to have been written by economists or psychologists or non-academics such as Malcolm Gladwell. For instance, see here:
https://www.goodreads.com/shelf/show/pop-sociology
Are there no well-known pop-sociological books written by sociologists and, if so, what does this say about sociology as a discipline? You very seldom hear about sociological research in the media compared with economics and psychology, and surely there has to be an explanation for this?
Sociology: A Very Short Introduction (published by Oxford University Press)
You can try Everything Is Obvious by Duncan Watts. He’s a multi-disciplinary type, but he is at least partially a sociologist. He also discusses a lot in there about sociology as a discipline.
Erving Goffman is said to be accessible and said to be a sociologist. One issue is that “sociology” has several meanings. His version shades into psychology.
The sociologist Charles Murray says interesting things. I don’t usually agree with them, but they make me think.
Am I confused about frequentism?
I’m currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:
P(data at least as extreme as your data | Null hypothesis)
This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis (which is the complement of the hypothesis that you are trying to test).
Put another way:
P(data | hypothesis) = 1 - p-value
and if 1 - p-value is high enough then you accept the hypothesis. (My use of “data” is handwaving and not quite correct but it doesn’t matter.)
But it seems more useful to me to calculate P(hypothesis | data). And that’s not quite the same thing.
So what I’m wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it’s true or not, the data has a certain propensity of turning out one way or the other. It’s meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.
This is correct.
This is not correct. You seem to be under the impression that
P(data | null hypothesis) + P(data | complement(null hypothesis)) = 1,
but this is not true because
complement(null hypothesis) may not have a well-defined distribution (frequentists might especially object to defining a prior here), and
even if complement(null hypothesis) were well defined, the sum could fall anywhere in the closed interval [0, 2].
More generally, most people (both frequentists and bayesians) would object to “accepting the hypothesis” based on rejecting the null, because rejecting the null means exactly what it says, and no more. You cannot conclude that an alternative hypothesis (such as the complement of the null) has higher likelihood or probability.
Huh? P(X|Y) + P(X|Y’) = P(X) and an event that has already occurred has a probability of one. Am I missing something?
That may be true if you have little influence over what data is available.
Frequentists are mainly interested in situations where they can create experiments that cause P(hypothesis) to approach 0 or 1. The p-value is intended to be good at deciding whether the hypothesis has been adequately tested, not at deciding whether to believe the hypothesis given crappy data.
Your conclusion is correct. Frequentists do indeed claim that P(hypothesis | data) is meaningless for exactly the reasons you gave. However there are some little details in the rest of your post that are incorrect.
The hypothesis you are trying to test is typically not the complement of the null hypothesis. For example we could have:
H0: theta = 0 versus H1: theta > 0,
where theta is some variable that we care about. Note that the region theta < 0 isn’t in either hypothesis. If we were instead testing
H1′: theta ≠ 0,
then frequentists would suggest a different test. They would use a one-tailed test to test H1 and a two-tailed test to test H1′. See here.
No. This is just mathematically wrong. P(A|B) is not necessarily equal to 1-P(A|¬B). Just think about it for a bit and you’ll see why. If that doesn’t work, take A=”sky is blue” and B=”my car is red” and note that P(A|B)=P(A|¬B)~1.
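To put toy numbers on both points (a made-up coin example of my own, not from anyone’s comment):

```python
from scipy.stats import binom

# Likelihoods of the same data under two different hypotheses about a coin:
# they need not sum to 1, so 1 - p-value is not "the probability of the alternative".
k, n = 8, 10                                     # observed 8 heads in 10 flips
p_data_given_fair = binom.pmf(k, n, 0.5)         # ~0.044
p_data_given_biased = binom.pmf(k, n, 0.8)       # ~0.302
print(p_data_given_fair + p_data_given_biased)   # ~0.35, nowhere near 1

# Getting P(hypothesis | data) requires a prior; with a 50/50 prior over these two:
posterior_biased = p_data_given_biased / (p_data_given_fair + p_data_given_biased)
print(posterior_biased)                          # ~0.87, a different quantity altogether
```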
It’s not meaningless, but people who follow R. A. Fisher’s ideas for rejecting the null do not use p(hypothesis | data). “Meaningless” would be if frequentists literally did not have p(hypothesis | data) in their language, which is not true because they use probability theory just like everybody else.
Don’t ask lesswrong about what frequentists claim, ask frequentists. Very few people on lesswrong are statisticians.
Many frequentists do insist that P(hypothesis) is meaningless, despite “using probability theory.”
Could you give me something to read? Who are these frequentists, and where do they insist on this?
Let us take a common phrase from the original comment “the hypothesis is either true or false”. The first google hit:
So from this statement you conclude that frequentists think P(hypothesis) is meaningless? Bayesians assign degrees of belief to things that are actually true or false also. The coin really is either fair or not fair, but you will never find out with finite trials. This is a map/territory distinction, I am surprised you didn’t get it. This quote has nothing to do with B/F differences.
A Bayesian version of this quote would point out that it is a type error to confuse the truth value of the underlying thing, and the belief about this truth value.
You have successfully explained why it is irrational for frequentists to consider P(hypothesis) meaningless. And yet they do. They would say that probabilities can only be defined as limiting frequencies in repeated experiments, and that for a typical hypothesis there is no experiment you can rerun to get a sample for the truth of the hypothesis.
You guys need to stop assuming frequentists are morons. Here are posts by a frequentist:
http://normaldeviate.wordpress.com/2012/12/04/nate-silver-is-a-frequentist-review-of-the-signal-and-the-noise/
http://normaldeviate.wordpress.com/2012/11/17/what-is-bayesianfrequentist-inference/
Some of the comments are good as well.
Yes, you’re right. Clearly many people who identify as frequentists do hold P(hypothesis) to be meaningful. There are statisticians all over the B/F spectrum as well as not on the spectrum at all. So when I said “frequentists believe …” I could never really be correct because various frequentists believe various different things.
Perhaps we could agree on the following statement: “Probabilities such as P(hypothesis) are never needed to do frequentist analysis.”
For example, the link you gave suggests the following as a characterisation of frequentism:
Frequency guarantees are typically of the form “for each possible true value of theta, doing the construction blah on the data will, with probability at least 1-p, yield a result with property blah”. Since this must hold true for each theta, the distribution of the true value of theta is irrelevant.
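A quick simulation of that kind of guarantee, in case it helps (my own toy setup: known-variance normal mean, 95% interval):

```python
import numpy as np

# For every true theta, the interval mean(x) +/- 1.96/sqrt(n) covers theta about
# 95% of the time; the guarantee holds pointwise in theta, so no prior is needed.
rng = np.random.default_rng(0)
n, z, trials = 25, 1.96, 20_000

for theta in [-3.0, 0.0, 7.5]:                   # arbitrary "true" values
    covered = 0
    for _ in range(trials):
        x = rng.normal(theta, 1.0, size=n)
        half_width = z / np.sqrt(n)
        covered += (x.mean() - half_width <= theta <= x.mean() + half_width)
    print(theta, covered / trials)               # ~0.95 regardless of theta
```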
The interesting questions to me are: (a) “what is the steelman of the frequentist position?” (folks like Larry are useful here), and (b) “are there actually prominent frequentist statisticians who say stupid things?”
By (b) I mean “actually stupid under any reasonable interpretation.”
Quote from the url I linked:
“Keep your identity small”—advice familiar to a LW audience.
I guess you disagree with Larry’s take: B vs F is about goals not methods. I could do Bayesian looking things while having a frequentist interpretation in mind.
In the spirit of collaborative argumentation, can we agree on the following:
We have better things to do than engage in identity politics.
It is not the same thing and knowing P(hypothesis | data) would be very useful. Unfortunately, it is also very hard to estimate because usually the best you can do is calculate the probability, given the data, of a hypothesis out of a fixed set of hypotheses which you know about and for which you can estimate probabilities. If your understanding of the true data-generation process is not so good (which is very common in real life) your P(hypothesis | data) is going to be pretty bad and what’s worse, you have no idea how bad it is.
Not having a good grasp on the set of all hypotheses does not distinguish bayesians from frequentists and does not seem to me to motivate any difference in their methodologies.
Added: I don’t think it has much to do with the original comment, but testing a model without specific competition is called “model checking.” It is a common frequentist complaint that bayesians don’t do it. I don’t think that this is an accurate complaint, but it is true that it is easier to fit it into a frequentist framework than a bayesian framework.
I have said nothing about the differences between bayesians and frequentists. I just pointed out some issues with trying to estimate P(hypothesis | data).
As far as I can tell, you’re correct.
Extreme fallacy of gray illustrated by SMBC.
I don’t think that’s really the fallacy of grey. If I have to map it to a piece of LessWrong-speak, I’d call it privileging the hypothesis: it’s not saying you can’t be sure of anything (if you’re a good Bayesian it’s not really telling you anything new there); it’s just bringing one particular super-implausible hypothesis to your attention so that it occupies much more of your mind than it has any right to.
The Hitlers are grey, though…
:D
I am assembling a list of interesting blogs to read and for that purpose I’d love to see the kind of blog the people in this community recommend as a starting point. Don’t see this just as a request to post blogs according to my unknown taste but as a request to post blogs according to your taste in the hope that the recommendation scratches an itch in this community.
Here’s a sampling of the best in my RSS reader:
Scott Aaronson, theoretical computer science/physics
Tyler Cowen, economics, Cowen is good about sharing surprising info
Lambda the Ultimate, programming language theory
John Baez, “from math to physics to earth science and biology, computer science and the technologies of today and tomorrow,” plus stuff on catastrophic risk w.r.t. climate change.
Jeremy Kun, computer science mostly
The n-Category Cafe, “A group blog on math, physics and philosophy”
Andrew Gelman, pointing out bad statistics in social science. More than once, I’ve revised my beliefs about some study months later when he points out a failure to replicate or other problem.
Timothy Gowers, math, analysis I stuff recently, mostly over my head
Terry Tao, math, number theory, mostly over my head
matthen, math visualizations
Gödel’s Lost Letter, theoretical computer science, complexity theory, often funny
MIRI blog, interviews, research recaps, computer sciencey
Overcoming Bias, X isn’t about Y
Dart Throwing Chimp, global politics
Carl Shulman, effective altruism, thoughtful analysis
Saturday Morning Breakfast Cereal, webcomic
So Your Life is Meaningless, webcomic
XKCD, webcomic
hbd* chick, population genetics, star wars, emoticons
Unqualified Reservations, neoreactionary, questioning everything since the enlightenment
Yvain, psychology, rationality, relationships
Ben Kuhn, effective altruism
Katja Grace, rationality
Brienne Strohl, rationality
And finally my blog, I try to share surprising information, cogsci/relationships/computer science/math
gwern posts on Google+ and Kaj Sotala posts interesting stuff on Facebook. I also subscribe to a number of journals’ tables of contents via this site to keep up with research, and some stuff on arXiv.
I have to admit the intersection with my feed list is most definitely non-empty: I’d add Good Math Bad Math, mathematics, computer science and, sometimes, recipes and playlists.
After discussing diffusion of interesting news at the most recent London meetup, I was planning on asking something like this myself.
Futility Closet is nothing but “interesting stuff”. It describes itself as “a collection of entertaining curiosities in history, literature, language, art, philosophy, and mathematics, designed to help you waste time as enjoyably as possible”. It has more chess than I personally care for, but is updated with what I find to be novel content three times a day.
Conscious Entities is a blog on Philosophy of Mind. It takes an open position on a lot of questions we would consider to be settled on LessWrong, but I think it has value in a steel-manning / why-do-people-believe-this capacity.
(The categories on my feed reader are “Blogs”, “CS”, “Dance”, “Econ”, “Esoterica”, “Maths/Stats”, “Philosophy”, “Science” and “Webcomics”. I’d be interested in finding out how other people classify theirs.)
I have: “News”, “Friends”, “Comics”, “RPG”, “Android”, “LW” , “Climbing” and “Maths”.
There are some blogs mentioned on the wiki.
Mr.Money Mustache on personal finance.
In the Pipeline is a very good blog for keeping up with what’s happening in big pharma and chemistry in general. It’s written by someone employed inside a pharma company, not someone who criticizes pharma policy from an outsider standpoint. There are some posts about specific chemical reactions that I skip, but if you want to understand how the healthcare industry works, I can recommend the blog very much. It also provides good coverage when a new relevant biomedical finding makes the news. The comment section is usually full of people with domain experience.
I used to read Matt Taibbi’s Rolling Stone blog column for financial news. Stories about Libor are explained in detail on that blog.
Matt has now left Rolling Stone and will lead his own magazine under the banner of First Look Media, which is funded by the tech billionaire and eBay cofounder Pierre Morad Omidyar. At the moment First Look Media already publishes Glenn Greenwald & company, whose primary job is still processing the pile of Snowden documents.
I read Glenn Greenwald for a long time, but reading about the NSA every other day became boring, so I won’t read everything that comes out in The Intercept, though I do subscribe to the feed.
As German-language independent news sources I read hintergrund, fefe, and the in-depth podcast Alternativlos.
Alternativlos is an interesting project. Two members of the Chaos Computer Club basically concluded that speaking to the public and explaining to them how things work politically is “without alternative”, and started explaining topics in two-hour episodes. It covers a topic like Stuxnet in enough detail to explain, for example, what exploits cost on the black market.
Today some public radio stations rebroadcast the podcast.
I don’t really read many blogs frequently. The one sort-of-bloggish site I visit daily is 3 Quarks Daily. They link to a number of interesting articles every day, and their taste in topics coincides quite nicely with mine.
I am wondering about the effect of the advent of self-driving cars on urban sprawl. Will it increase or decrease sprawl?
Urban sprawl is said to be an unintended consequence of the development of the US interstate highway system.
Passive voice / finding causes is hard / compare LA and SF.
This is not really a haiku.
Some factors pointing towards an increase: decreased emotional, health, and financial cost of commutes.
Some factors pointing towards a decrease: decreased cost and increased flexibility and convenience of car-sharing services, which work better in higher density locations.
I think the primary driver of urban sprawl is schooling (and thus home prices), not commuting. The growing acceptance of online schooling will likely decrease urban sprawl significantly.
Here’s an economics analysis: self-driving cars reduce the effective cost of a commute, by allowing the passenger to focus on other things than driving during the commute (reading, watching TV, playing games, etc). Since a significant limit to sprawl is the cost, broadly construed, of long commutes, self-driving cars will increase sprawl.
Based on the idea that you get what you incentivize, and irrespective of other factors, I’d expect a marginal to mild increase. Self-driving cars can make commutes a bit more pleasant and substantially less dangerous, but they can’t reduce commute times (until they reach saturation or close to it), and time’s the main limitation.
Actually, I think that there is great potential for self-driving cars to reduce congestion and thus commute time.
Some of the effects will depend on details of the implementation. For example, if self-driving cars are constrained to obey highway speed limits, the commute time may increase in some cases, at least initially. Upon achieving saturation of self-driving cars, I would expect shorter commute times on non-highways. Also, upon saturation, it may be seen as desirable to raise the highway speed limit.
Keep in mind that non-self-driving cars will always be cheaper and will always have a market no matter how good autonomous ones get since autonomous vehicles have more parts and maintenance needs. You will never have an area with only autonomous vehicles, absent massive government intervention.
I place the likelihood of massive government intervention or the equivalent (insurance becoming flat-out unavailable for manual drivers) at somewhere north of 90%. Driver error has a really high cost in quality adjusted life years, every year. If eliminating that cost becomes an option, it will get used.
A friend of mine has mild anorexia (she’s on psych meds to keep it contained) and recently asked me some advice about working out. She told me that she is mainly interested in not being so skinny. I offered to work out with her one day of the week to make sure she’s going about things correctly, with proper form and everything.
The thing is, just going to the gym and working out isn’t effective if her diet and sleeping cycle aren’t also improved. I would normally be really blunt about these other facts, but her dealing with anorexia probably complicates things a bit… especially the proper diet part. I was thinking that if she has trouble eating enough, maybe she could try drinking some protein shakes. But I’m not sure if that would actually be effective in helping her reach her goal of putting on more weight if she’s not eating properly other times of the day. If anyone has any advice on how I could more effectively broach that subject without being insulting or belittling I would appreciate it.
If she’s on medication to contain her anorexia, she knows she has an issue. You could start by simply asking her what she eats and listening empathically.
I would also suggest that you think about your relationship with her. What does she want? Does she want your approval? Does she want you to tell her what to do, so she doesn’t have to take responsibility for herself? Does she care about looking beautiful to you? Does she want a relationship with you? Do you want a relationship with her?
Knowing answers to questions like that is important when you deal with deep psychological issues. It shapes how the words you say will be understood.
Do you have any thoughts about whether she’s at risk for an exercise disorder?
That’s actually a good question. Without disclosing too much of her psych history, she seems to be really impulsive and might even be prone to addiction. I suppose she could get an exercise disorder… this makes it even more complicated than I thought.
I’d caution that suspecting (out loud) that she might develop an exercise disorder would be one of those insulting or belittling things you were worried about (either because it seems like a cheap shot based on the anorexia diagnosis, or because this might be one approach to actually getting out from under the anorexia by exerting control over her body).
Likely a better approach to this concern would be to silently watch for those behaviours developing and worry about it if and when it actually does happen. (Note that refusing to help her with training and diet means she gets this help from someone who is not watching out for the possibility of exercise addiction).
There are a few approaches that might work for different people:
Talk as though she doesn’t have anorexia. Since you are aware, you can tailor your message to avoid saying anything seriously upsetting (i.e you can present the diet assuming control of diet is easy, or assuming control of diet is hard). I don’t recommend this approach.
Confront the issue directly (“Exercise is what tells your body to grow muscle, but food is what muscles are actually built out of, so without a caloric surplus your progress will be slow. I’m aware that this is probably a much harder challenge for you than most people...”). I don’t recommend this approach.
Ask her how she feels about discussing diet. (“Do you feel comfortable discussing diet with me? Feel free to say no. Also, don’t feel constrained by your answer to this question; if later you start wishing you’d said no, just say that, and I’ll stop.”). I recommend this approach.
In any case, make it clear from the outset you want to be respectful about it.
You may directly ask her in what terms she prefers to discuss those matters. That way you’ll get across your message, i.e. that proper diet is worth talking about, with little risk of involving the wrong message in the mix.
This form of question seems no less likely to raise problems than asking about the topic itself.
I’d suggest something more along the lines of “This is pretty standard advice, and it works for most people, but it’s built on some assumptions about an average diet. How much should I be tailoring it for you?” Which is basically an indirect request for a status report, without implying anything about whether or not her current eating pattern is unhealthy. From that response, you can probably gauge to what degree you can safely bring it up directly.
If she wants to get bigger, then I’d get her started with Greyskull LP. It’s a fairly basic beginner weight lifting program that, when combined with a caloric surplus, will get good results for size and strength. There isn’t much work involved (just three sets on 2-3 exercises; doing more is counterproductive for beginners) so it won’t use as much energy as a cardio or circuit intensive routine.
A couple of protein shakes with milk/almond milk are enough to get a caloric surplus going. You only need 250-500 extra calories to make good gains, and you can easily get that with a shake or two.
I don’t have good ideas about dealing with anorexia, but I think you should suggest to your friend that she is being used as a pawn by the psycho-pharmaceutical industry to extract dollars from her health insurance provider.
Evidence? Is this just a general anti-psych-meds comment or do you have a basis for thinking that in this particular case they’re problematic?
Every “proof” of Godel’s incompleteness theorem I’ve found online seems to stop after what I would consider to be the introduction. I find myself saying “yes, good, you’ve shown that it suffices to prove this fixed point theorem… now where’s the proof of the fixed point theorem, surely that’s the actual meat of the proof?” Anyone have a good source that shows the full proof, including why for a particular encoding of sentences as numbers the function “P → P is not provable” must have a fixed point?
http://en.wikipedia.org/wiki/Diagonal_lemma
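In case that link rots, here is a condensed sketch of the diagonal (fixed-point) lemma, which is the step you’re asking about; this is standard textbook material and I’m glossing over the representability details:

```latex
\paragraph{Diagonal lemma (sketch).}
Assume $T$ represents every computable function. Let $\mathrm{sub}(a,b)$ be the
computable function taking the G\"odel number $a$ of a formula $A(x)$ to the
G\"odel number of $A(\underline{b})$ (i.e.\ $A$ with the numeral for $b$ plugged in).
Given any formula $F(y)$, define
\[
  B(x) := F(\mathrm{sub}(x,x)), \qquad
  n := \ulcorner B(x) \urcorner, \qquad
  \psi := B(\underline{n}) = F(\mathrm{sub}(\underline{n},\underline{n})).
\]
Then $\mathrm{sub}(n,n) = \ulcorner B(\underline{n}) \urcorner = \ulcorner \psi \urcorner$,
and since $\mathrm{sub}$ is representable, $T \vdash \psi \leftrightarrow F(\ulcorner \psi \urcorner)$.
Taking $F(y)$ to say ``$y$ is not the G\"odel number of a $T$-provable sentence''
gives the G\"odel sentence, and incompleteness follows by the usual argument.
```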
Or read Godel, Escher, Bach. Actually, read GEB anyway.
I suggest reading a translation.
Here’s a cute/vexing decision theory problem I haven’t seen discussed before:
Suppose you’re performing an interference experiment with a twist: Another person, Bob, is inside the apparatus and cannot interact with the outside world. Bob observes which path the particle takes after the first mirror, but then you apply a super-duper quantum erasure to Bob so that they remember observing the path of the particle, but they don’t remember which path it took. Thus, at least from your perspective, the superposed versions of Bob interfere, and the particle always hits detector 2. (I can’t find the reference for super-duper quantum memory erasure, probably because it’s behind a paywall. Perhaps (Deutsch 1996) or (Lockwood 1989).)
Suppose that after Bob makes their observation, but before you observe Bob, you offer to play a game with Bob: If the particle hits detector 2, you give them $1; but if it hits detector 1, they give you $2. Before the experiment ran, this would have seemed to Bob like a guaranteed $1. But during the experiment, it seems to Bob that the game has expected value -$.50. What should Bob do?
If it seems unfair to wipe Bob’s memory, there’s an equivalent puzzle in which Bob doesn’t learn anything about the particle’s state, but the particle nevertheless becomes entangled with Bob’s body. In that case, the super-duper quantum erasure doesn’t change Bob’s epistemic state.
My grasp of quantum physics is rudimentary; please let me know if I’m completely wrong.
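For concreteness, a toy numerical sketch of the interference claim (my own illustration, with an arbitrary beamsplitter phase convention, not taken from any of the cited papers):

```python
import numpy as np

# Model the apparatus as two 50/50 beamsplitters acting on a 2-path state vector.
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # one 50/50 beamsplitter

psi_in = np.array([1.0, 0.0])                   # particle enters on path 0

# Coherent case: Bob's which-path record is erased, so amplitudes interfere.
psi_out = B @ B @ psi_in
print(np.abs(psi_out) ** 2)                     # [0. 1.]: always the same detector ("detector 2")

# Decohered case: a surviving which-path record kills interference, so the paths
# combine as classical probabilities instead of amplitudes.
p_paths = np.abs(B @ psi_in) ** 2               # [0.5, 0.5] after the first beamsplitter
p_out = (np.abs(B) ** 2) @ p_paths
print(p_out)                                    # [0.5 0.5]: each detector half the time
```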
I disagree that Bob’s expected value drops to −0.5$ during the experiment. If Bob is aware that he will be “super-duper quantum memory erased”, then he should appropriately expect to receive 1$.
There may be more existential dread during the experiment, but the expectations about the outcome should stay the same throughout.
Ok, User:Manfred makes the same point here. It implies that at any point, heretofore invisible worlds could collide with ours, skewing the results of experiments and even leaving us with no future whatsoever (although admittedly with probability 0). Would you agree with that?
No, I don’t think that’s likely at all.
Worlds only interfere when they evolve into the same state. Because the state space is exponentially large, only worlds that are already almost-equivalent to our world are likely to “collide with us”.
If you’ve based a decision on some observation, worlds where that observation didn’t happen are not almost-equivalent. They differ in trillions (note: massive underestimate) of little ways that would all need to be corrected simultaneously, lest the differences continue to compound and push things even further apart. Their contribution to the branch we’re in is negligible.
Your thought experiment used a “super duper quantum eraser”, but in reality I don’t think such a thing is actually possible. The closest analogue I can think of is a quantum computer, but those prevent decoherence/collapse. They don’t undo it.
Bob cannot become entangled with the outside world while in the middle of a quantum erasure experiment, or else it doesn’t work. So he doesn’t really get to do anything :P
If Bob knows that the particle becomes entangled with him, then he still makes the same predictions.
Ok, that’s surprising. Here’s why I thought otherwise: From Bob’s perspective, a particle is prepared in a superposition of states B and C. Then Bob observes or becomes entangled with the particle, thus collapsing its state. Then the super-duper quantum erasure is performed, which preserves the state of the particle. Then the particle strikes the second half-silvered mirror. A collapse interpretation tells Bob to expect two outcomes with equal probability. Is this, then, an experiment where a collapse interpretation and a many-worlds interpretation give different predictions?
The collapse interpretation predicts that you can’t do the super-duper quantum erasure. Once the collapse has occurred the wavefunction can’t uncollapse.
Basically, there are a variety of collapse interpretations depending on where you make the collapse happen. Every time we’ve tested these hypotheses (e.g. by this sort of experiment), we haven’t been able to see an early collapse.
At this point, all actual physicists I know just postpone the collapse whenever necessary to get the right answer.
Hm, so that means that quantum physics predicts that our observations depend on the presence of parallel worlds in the universal wavefunction, which in theory might interfere with our experiments at any time, right?
Calling them parallel worlds is as always dangerous (you can’t go all buckaroo bonzai on them), but basically yes.
He can, in theory, make bets. Just so long as the bet he makes doesn’t depend on which way he saw the particle go.
Hm, good point. We could set aside a few instants for him to send a few photons that wouldn’t depend on the state of the particle. From a practical standpoint that’s pretty impossible, but forget practicality.
So, sure; Bob should accept the bet. Although if he makes his answer to the bet depend on the state of the particle at all, then he shouldn’t accept the bet :P There might be some interesting applications of this, say where the option “don’t accept” has some positive payoff if the particle changes directions. Bob can precommit to send out an entangled qubit to get some chance at that reward.
Rational thinking against fear in a TED talk by (ex) astronaut Chris Hadfield. Has anyone else seen it? I really enjoyed it, in particular the spider example.
Analyzing people’s premises by thinking about comments—the example was a recent tailgating incident.
It seems clear that for people with a bachelor’s in CS, from a purely monetary viewpoint, getting a master’s in the same area usually is dumb unless you plan on programming a long time.
This article says the average mid-career pay for MSc holders is $114,000. This says the mid-career bachelor’s salary is $102,000. A master’s means 12 to 24 months of lost pay and anywhere from a $20,000/year salary in some lucky cases to a $50,000+ debt. You need at least a decade of future work to justify this. And that likely overstates the benefits since it does not control for ability.
I don’t necessarily trust these statistics but employers can always make people write code on whiteboards to assess actual skill.
An exception might be if you want technically cutting-edge CS: Google for example prefers MSc/PhD guys. But I think most programming jobs are not like that.
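Back-of-the-envelope version of the “at least a decade” claim, using the quoted salaries plus some assumptions of my own (roughly 18 months out of the workforce, mid-range tuition):

```python
# All figures are rough; the salary gap is from the articles above, the costs are my guesses.
msc_salary, bsc_salary = 114_000, 102_000
annual_bump = msc_salary - bsc_salary       # $12,000/year mid-career difference

lost_pay = 1.5 * bsc_salary                 # assume ~18 months out of the workforce
tuition = 35_000                            # assume something between a funded program and $50k+ debt
upfront_cost = lost_pay + tuition

print(upfront_cost / annual_bump)           # ~15.7 years to break even, ignoring discounting and ability bias
```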
FYI, it is possible to do a part-time masters program and some employers will pay for you to get a graduate degree (usually as part of an agreement to keep working for the company several years afterwards).
IMO the real problem is that academia teaches computer science whereas what programmers need to know to be valuable is software engineering. Those seem to be rather different disciplines.
Disclaimer: I didn’t study CS myself and this opinion is based on indirect evidence.
So I’ve kind of formulated a possible way to use markets to predict quantiles. It seems quite flawed looking back on it two and a half weeks later, but I still think it might be an interesting line of inquiry.
You want options (as in, the financial market instruments called “options”).
A sufficiently deep and wide options market basically provides most of the market-expected distribution of the future value of the underlying.
Thanks.
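For anyone curious how you’d actually back a distribution (and hence quantiles) out of option prices, here is a rough sketch of the Breeden-Litzenberger idea; the toy “market” prices below are made up purely for illustration:

```python
import numpy as np
from scipy.stats import norm

# If C(K) is the price of a European call struck at K, then (ignoring discounting)
# the market-implied CDF of the underlying at expiry is 1 + dC/dK, so quantiles can
# be read straight off a dense strike grid.
strikes = np.linspace(50, 150, 201)

# Stand-in "market" call prices: zero-rate Black-Scholes, just to have numbers.
S, sigma = 100.0, 0.2
d1 = (np.log(S / strikes) + 0.5 * sigma**2) / sigma
d2 = d1 - sigma
calls = S * norm.cdf(d1) - strikes * norm.cdf(d2)

implied_cdf = 1.0 + np.gradient(calls, strikes)   # implied P(S_T <= K) at each strike
median = np.interp(0.5, implied_cdf, strikes)     # any other quantile works the same way
print(round(median, 1))                           # ~98, the implied median price
```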
How big a deal is this? I only just recently started learning programming so I don’t know enough to understand the implications.
My comments earlier. Wolfram is a beautiful language with well designed libraries. If you are new to programming, you will learn a lot mucking about with it. It’s probably not going to take the world by storm.
I’m not sure whether this is a bigger problem than already exists in general, but if some of the information in the libraries is out of date or just plain wrong, it would be a challenge to notice that there’s a problem.
Seems like a product placement.
Apparently, founding mathematics on Homotopy Type Theory instead of ZFC makes automated proof checking much simpler and more elegant. Has anybody tried reformulating Max Tegmark’s Level IV Multiverse using Homotopy Type Theory instead of sets to see if the implied prior fits our anthropic observations better?
Homotopy type theory differs from ZFC in two ways. One way is that it, like ordinary type theory, is constructive and ZFC is not. The other is that it is based in homotopy theory. It is that latter property which makes it well suited for proofs in homotopy theory (and category theory). Most of the examples in slides you link to are about homotopy theory.
Tegmark is quite explicit that he has no measure and thus no prior. Switching foundations doesn’t help.
I found a textbook after reading the slides, which may be clearer. I really don’t think their mathematical aspirations are limited to homotopy theory, after reading the book’s introduction—or even the small text blurb on the site:
Homotopy type theory offers a new “univalent” foundation of mathematics, in which a central role is played by Voevodsky’s univalence axiom and higher inductive types. The present book is intended as a first systematic exposition of the basics of univalent foundations, and a collection of examples of this new style of reasoning.
Which implied prior? My understanding is that the problem with Multiverse theories is that we don’t have a way to assign probability measures to the different possible universes, and therefore we cannot formulate an unambiguous prior distribution.
The two usual implied priors taken from Level IV are a) that every possible universe is equally likely and b) that universes are likely in direct proportion to the simplicity of their description. Some attempts have been made to show that the second falls out of the first.
Well, I don’t really math; but the way I understand it, computable universe theory suggests Solomonoff’s Universal prior, while the ZFC-based mathematical universe theory—being a superset of the computable—suggests a larger prior; thus weirder anthropic expectations. Unless you need to be computable to be a conscious observer, in which case we’re back to SI.
I enjoy reading perceptive / well-researched futurism materials. What are some good resources for this? I’m looking for books, blogs, newsfeeds, etc.. Also, I’m only looking for popular-level rather than academic material—I have neither the time nor the knowledge to read through most scholarly articles on the subject.
futurismic.com and maybe io9.com