Open thread, Dec. 21 - Dec. 27, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Correlation!=causation: returning to my old theme (latest example: is exercise/mortality entirely confounded by genetics?), what is the right way to model various comparisons?
By which I mean, consider a paper like “Evaluating non-randomised intervention studies”, Deeks et al 2003, which does this:
So we get pairs of studies, more or less testing the same thing except one is randomized and the other is correlational. Presumably this sort of study-pair dataset is exactly the kind of dataset we would like to have if we wanted to learn how much we can infer causality from correlational data.
But how, exactly, do we interpret these pairs? If one study finds a CI of 0 to 0.5 and the counterpart finds 0.45 to 1.0, is that confirmation or rejection? If one study finds −0.5 to 0.1 and the other 0 to 0.5, is that confirmation or rejection? What if they are very well powered and the pair looks like 0.2 to 0.3 and 0.4 to 0.5? A criterion of overlapping confidence intervals is not what we want.
We could try to get around it by making a very strict criterion: ‘what fraction of pairs have confidence intervals excluding zero for both studies, and the studies are opposite-signed?’ This seems good: if one study ‘proves’ that X is helpful and the other study ‘proves’ that X is harmful, then that’s as clearcut a case of correlation!=causation as one could hope for. With a pair of studies like −0.5 to −0.1 and +0.1 to +0.5, that is certainly a big problem.
The problem with that criterion is that it is so strict that we would hardly ever conclude a particular case was correlation!=causation (few of the known examples are so well-powered and clearcut), leading to systematic overoptimism; and it inherits the typical problems of NHST, like generally ignoring costs (if exercise reduces mortality by 50% in correlational studies and 5% in randomized studies, then to some extent correlation=causation, but the massive overestimate could easily tip exercise from being worthwhile to not being worthwhile).
We also can’t simply do a two-group comparison and get a result like ‘correlational studies always double the effect on average, so to correct, just halve the effect and then see if that is still statistically-significant’ (something you can do with, say, blinding or publication bias), because it turns out not to be that conveniently simple: it’s not an issue of researchers predictably biasing ratings toward the desired higher outcome, or publishing only the results/studies which show the desired results. The randomized experiments seem to turn in larger, smaller, or opposite-signed results at, well, random.
This is a similar problem as with the Reproducibility Project: we would like the replications of the original psychology studies to tell us, in some sense, how ‘trustworthy’ we can consider psychology studies in general. But most of the methods seem to diagnose lack of power as much as anything (the replications were generally powered 80%+, IIRC, which still means that a lot will not be statistically-significant even if the effect is real). Using Bayes factors is helpful in getting us away from p-values but still not the answer.
It might help to think about what is going on in a generative sense. What do I think creates these results? I would have to say that the results are generally being driven by a complex causal network of genes, biochemistry, ethnicity, SES, varying treatment methods, etc., which throws up an even more complex & enormous set of multivariate correlations (which can be either positive or negative), while effective interventions are few & rare (and, likewise, can be either positive or negative) but drive the occasional correlation as well. When a correlation is presented by a researcher as an effective intervention, it might be drawn from the large set of pure correlations or it might have come from the set of causals. It is unlabeled and we are ignorant of which group it came from. There is no oracle which will tell us that a particular correlation is or is not causal (that would make life too easy), but then (in this case) we can test it, and get a (usually small) amount of data about what it does in a randomized setting. How do we analyze this?
I would say that what we have here is something quite specific: a mixture model. Each intervention has been drawn from a mixture of two distributions, all-correlation (with a wide distribution allowing for many large negative & positive values) and causal effects (narrow distribution around zero with a few large values), but it’s unknown which of the two it was drawn from and we are also unsure what the probability of drawing from one or the other is. (The problem is similar to my earlier noisy polls: modeling potentially falsified poll data.)
So when we run a study-pair through this, then if they are not very discrepant, the posterior estimate shifts towards having drawn from the causal group in that case—and also slightly increases the overall estimate of the probability of drawing from the causal group; and vice-versa if they are heavily discrepant, in which case it becomes much more probable that there was a draw from the correlational group, and slightly more probable that draws from the correlation group are more common. At the end of doing this for all the study-pairs, we get estimates of causal/correlation posterior probability for each particular study-pair (which automatically adjusts for power etc and can be further used for decision-theory like ‘does this reduce the expected value of the specific treatment of exercise to <=$0?’), but we also get an overall estimate of the switching probability—which tells us in general how often we can expect tested correlations like these to be causal.
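To make this concrete, here is a minimal single-pair sketch in Python. Everything numerical here is a made-up illustration: the 50-50 prior, the component spreads `tau_causal` and `tau_corr`, and the example effect sizes are assumptions, not estimates from real data. The sketch scores the discrepancy between the correlational and randomized estimates under the two mixture components:

```python
import math

def norm_pdf(x, mu, var):
    """Density of a Normal(mu, var) distribution at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_drawn_from_causal(b_corr, se_corr, b_rand, se_rand,
                        p_causal=0.5, tau_causal=0.2, tau_corr=0.5):
    """Posterior probability that the correlational estimate came from the
    causal component (same underlying effect as the randomized study)
    rather than the confound-only component. Hyperparameters are guesses."""
    d = b_corr - b_rand
    # Causal component: both studies measure one effect, so the discrepancy
    # is pure sampling noise.
    var_same = se_corr ** 2 + se_rand ** 2
    # Confounded component: the correlational estimate is an independent
    # draw from a wide distribution of confounded associations.
    var_diff = tau_corr ** 2 + tau_causal ** 2 + var_same
    bf = norm_pdf(d, 0, var_same) / norm_pdf(d, 0, var_diff)
    return p_causal * bf / (p_causal * bf + (1 - p_causal))

# A consistent pair nudges belief toward 'causal'; a sharply discrepant
# pair with small standard errors pushes it toward 'confounded'.
print(p_drawn_from_causal(0.7, 0.2, 0.6, 0.2))  # ~0.67
print(p_drawn_from_causal(0.5, 0.1, 1.1, 0.1))  # ~0.001
```

Iterating this over all study-pairs and re-estimating `p_causal` from the resulting memberships would be one crude EM-style pass at the full model; a proper treatment would put a prior on the switching probability and fit everything jointly (e.g. in JAGS).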
I think this gives us everything we want. Working with distributions avoids the power issues, for any specific treatment we can give estimates of being causal, we get an overall estimate as a clear unambiguous probability, etc.
Hi.
I am not sure I understand your question.
If I got such data I would (a) be very happy, (b) use the RCT to inform policy, and (c) use the pair to point out how correct causal inference methods can recover the RCT result if assumptions hold (hopefully they hold in the observational study). We can try to combine the strengths of the two studies, but then the results live or die by assumptions about how treatments were assigned in the observational study.
I am also not a fan of classifying biases like they do (it’s a common silly practice). For example, it’s really not informative to say “confounding bias,” in reality you can have a lot of types of confounding, with different solutions necessary depending on the type (I like to draw pictures to understand this).
I think Robins et al (?Hernan?) at some point recovered the result of an RCT via his g methods from observational data.
The paper you are referring to is “Observational Studies Analyzed Like Randomized Experiments: An application to Postmenopausal Hormone Therapy and Coronary Heart Disease” by Hernan et al. It is available at https://cdn1.sph.harvard.edu/wp-content/uploads/sites/343/2013/03/observational-studies.pdf
The controversy about hormone replacement therapy is fascinating as a case study. Until 2002, essentially all women who reached menopause got medical advice to start taking pills containing horse estrogen. It was very widely believed that this would reduce their risk of having a heart attack. This belief was primarily based on biological plausibility: Estrogen is known to reduce cholesterol, and cholesterol is believed to increase the risk of heart disease. Also, there were many observational studies that seemingly suggested that women who took hormone replacement therapy (HRT) had less risk of heart disease. (In my view, this was not surprising: Observational studies always show what the investigators expect to find.)
In 2002, the Women’s Health Initiative randomized trial was stopped early because it showed that estrogen replacement therapy actually substantially increased the risk of having a heart attack. Overnight, the medical establishment stopped recommending estrogen for menopausal women. But a perhaps more important consequence was that many clinicians stopped trusting observational studies altogether. In my opinion, this was mostly a good thing.
The largest observational study to show a protective effect of estrogen was the Nurses Health Study. In 2008, my thesis advisor Miguel Hernan re-analyzed this dataset using Jamie Robins’ g-methods (which are equivalent to Pearl’s), and was essentially able to reproduce the results of the WHI trial. Miguel’s paper uses valid methods and gets the correct results. In my view, this shows that the new methods might work, but the paper would have meant much more if it had been published prior to the randomized trials.
Miguel and Jamie’s paper sparked off a very interesting methodological debate with the original investigators at the Nurses Health Study, who are still clinging to their original analysis. See http://www.ncbi.nlm.nih.gov/pubmed/18813017 .
Many people still believe that Estrogen/HRT is beneficial. The most popular theory is that WHI recruited too many old women (sometimes in their 90s!) and that estrogen is harmful if given that long after menopause. A new randomized trial which is limited to women at menopause is currently being conducted. A second theory is that the results in the trial were due to differences in statin usage. I analyzed the second theory for my doctoral thesis, but found that this had negligible impact on the results.
It is also interesting to note that while it is true that the trial found that estrogen increased the risk of heart disease, it also showed a (non-significant) reduction in all-cause mortality. So the increased risk of cardiovascular disease didn’t even result in more deaths. Presumably, people care more about all-cause mortality than heart attacks. However, since it was “non-significant”, not even the most dedicated proponents of estrogen treatment ever point out this fact.
A side question, prompted by an amusing factoid in the Hernan paper: “...we restricted the population to women who had reported plausible energy intakes (2510–14,640 kJ/d)”.
In the statistical analysis in this paper, and also as a general practice in medical publications based on questionnaire data, are there adjustments for uncertainty in the questionnaire responses?
When you have a data point that says, for example, that person #12345 reports her caloric intake as 4,000 calories/day, do you take it as a hard precise number, or do you take it as an imprecise estimate with its own error which propagates into the model uncertainty, etc.?
Keyword is “measurement error.” People think hard about this. Anders_H knows this paper in a lot more detail than I do, but I expect these particular authors to be careful.
This issue is also related to “missing data.” What you see might be different from the underlying truth in systematic ways, e.g. you get systematic bias in your data, and you need to deal with that. This is also related to that causal inference stuff I keep going on about.
People like engineers and physicists think a lot about this. I am not sure that medical researchers think a lot about this. The usual (easy) way is to throw out unreasonable-looking responses during the data cleaning and then take what remains as rock-solid. Accepting that your independent variables are uncertain leads to a lot of inconvenient problems (starting with the OLS regression not being a theoretically-correct form any more).
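The attenuation problem from treating noisy regressors as exact is easy to demonstrate with a toy simulation (pure standard library; the true slope of 2.0 and the unit error variance are arbitrary choices). Classical measurement error on the regressor shrinks the OLS slope by the reliability ratio var(x) / (var(x) + var(error)):

```python
import random

random.seed(0)
n = 100_000
true_beta = 2.0
x = [random.gauss(0, 1) for _ in range(n)]
y = [true_beta * xi + random.gauss(0, 1) for xi in x]
# The observed regressor carries classical measurement error (variance 1):
x_obs = [xi + random.gauss(0, 1) for xi in x]

def ols_slope(xs, ys):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

print(ols_slope(x, y))      # ~2.0: the true slope
print(ols_slope(x_obs, y))  # ~1.0: attenuated by var(x)/(var(x)+1) = 0.5
```

Taking `x_obs` at face value halves the estimated effect here; errors-in-variables methods (e.g. regression calibration or SIMEX) exist precisely to undo this kind of bias.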
Yes, that’s another can of worms. In some areas (e.g. self-reported food intake) the problem is so blatant and overwhelming that you have to deal with it, but if it looks minor not many people want to bother.
Clinicians do not; “methodology people” (who often partner up with “domain experts” to do data analysis) absolutely do.
Yes, I was told the full gory details of this story (not going to repeat it here). Thanks for sharing this!
By the way, are you at Stanford now? I should find a way to drop by, Jacob’s there too.
Just putting the idea out for comment in case there’s some way this fails to deliver what I want it to deliver. Excerpting out all the comparisons and writing up the mixture model in JAGS would be a lot of work; just reading the papers takes long enough as it is.
Indeed. You can imagine that when I stumbled across Deeks and the rest of them in Google Scholar (my notes), I was overjoyed by their obvious utility (and because it meant I didn’t have to do it myself, as I had been musing about doing using FDA trials) but also completely baffled: why had I never heard of these papers before?
I am not following your mixture model idea. For every data point you know if it comes from the RCT or observational study. You don’t need uncertainty about treatment assignment. What you need is figuring out how to massage observational data to get causal conclusions (e.g. what I think about all day long).
If you have specific observational data you want to look at, email me if you want to chat more.
No, the uncertainty here isn’t about which of the two studies a datapoint came from, but about whether (for a specific treatment/intervention) the correlational study datapoint was drawn from the same distribution as the randomized study datapoint or a different one, and (over all treatments/interventions) what the probability of being drawn from the same distribution is. Maybe it’ll be a little clearer if I narrate how the model might go.
So say you start off with a prior probability of 50-50 about which group a result is drawn from, a switching probability that will be tweaked as you look at data. (If you are studying turtles which could be from a large or a small species, then if you find 2 larger turtles and 8 smaller, you’re probably going to update from P=0.5 to a mixture probability more like P≈0.20, since it’s most likely—but not certain—that the 2 larger turtles came from the large species and the 8 smaller ones came from the small species.)
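The turtle digression can be computed outright. A grid posterior over the mixture weight, with made-up species means (30cm vs 10cm shells, sd 2cm) and ten hypothetical measurements, lands near the intuitive 2-in-10 answer:

```python
import math

def norm_pdf(x, mu, sigma):
    """Density of a Normal(mu, sigma^2) distribution at x."""
    return math.exp(-((x - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

# 2 turtles near the 'large' species mean and 8 near the 'small' one:
lengths = [29.0, 31.0, 9.0, 10.0, 10.5, 11.0, 9.5, 10.2, 9.8, 10.6]

grid = [i / 100 for i in range(1, 100)]  # candidate mixture weights
def likelihood(p):
    out = 1.0
    for x in lengths:
        out *= p * norm_pdf(x, 30, 2) + (1 - p) * norm_pdf(x, 10, 2)
    return out

weights = [likelihood(p) for p in grid]  # flat prior over the grid
z = sum(weights)
post_mean = sum(p * w for p, w in zip(grid, weights)) / z
print(post_mean)  # ~0.25
```

Because the components are well separated, the memberships are nearly certain and the posterior over the weight behaves like a Beta posterior on 2 successes in 10 trials: the posterior mean sits near 0.25 and the maximum near p=0.2.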
For your first datapoint, you have a pair of results: xyzcillin reduces all-cause mortality to RR=0.5 from a correlational study (cohort, cross-sectional, case-control, whatever), and the randomized study of xyzcillin has RR=1.1. What does this mean? Now, of course you know that 0.5 is the correlational result and 1.1 is the randomized result, but we can imagine two relatively distinct scenarios here: ‘xyzcillin actually works but the causal effect is really more like RR=0.7 and the randomized trial was underpowered’, or, ‘xyzcillin has no causal effect whatsoever on mortality and it’s just a bunch of powerful confounds producing results like RR=0.6-0.8’. We observe that 1.1 supports the latter more, and we update towards ‘xyzcillin has 0 effect’ and now give ‘non-causal scenarios are 55% likely’, but not too much because the xyzcillin studies were small and underpowered and so they don’t support the latter scenario that much.
Then for the next datapoint, ‘abcmycin reduces lung cancer’, we get a pair looking like 0.9 and 0.7, and we observe these large trials are very consistent with each other and so they highly support the former theory instead and we update towards ‘abcmycin causally reduces lung cancer’ and ‘noncausal scenarios are 39% likely’.
Then for the third datapoint about defracic surgery for backpain, we again get consistency like d=0.7 and d=0.5, and we update the probability that ‘defracic surgery reduces back pain’ and also push even further ‘noncausal scenarios are 36% likely’ because their sample sizes were decent.
And so we update for each pair we finish, and after bouncing back and forth with each pair, we wind up with an estimate that Nature draws from the non-causal scenario 37% of the time (i.e. the switching probability of the mixture is p=0.37). And now we can use that as a prior in evaluating any new medicine or surgery.
If you want to look at specific study-pairs, they’re all listed & properly cited in the papers I’ve collated & provided fulltext links for. I suspect that the more advanced methods will require individual level patient data, which sadly only a very few studies will release, but perhaps you can still find enough of those to make it worth your while and analyze if Robins et al can get a publishable paper out of just 1 RCT.
If I understood you correctly, there are two separate issues here.
The first is what people call “transportability” (how to sensibly combine results of multiple studies if units in those studies aren’t the same). People try all sorts of things (Gelman does random effects models I think?) Pearl’s student Elias Barenboim (now at Purdue) thinks about that stuff using graphs.
I wish I could help, but I don’t know as much about this subject as I want. Maybe I should think about it more.
The second issue is that in addition to units in two studies “not being the same” one study is observational (has weird treatment assignment) and one is randomized properly. That part I know a lot about, that’s classical causal inference—how to massage observational data to make it look like an RCT.
I would advise thinking about these problems separately, that is start trying to solve combining two RCTs.
edit: I know you are trying to describe things to me on the level of individual points to help me understand. But I think a more helpful way to go is to ignore sampling variability entirely, and just start with two joint distributions P1 and P2 that represent variables in your two studies (in other words you assume infinite sample size, so you get the distributions exactly). How do we combine them into a single conclusion (let’s say the “average causal effect”: difference in outcome means under treatment vs placebo)? Even this is not so easy to work out.
I think when you break it into two separate problems like that, you miss the point. Combining two RCTs is reasonably well-solved by multilevel random effects models. I’m also not trying to solve the problem of inferring from a correlational dataset to specific causal models, which seems well in hand by Pearlean approaches. I’m trying to bridge between the two: assume a specific generative model for correlation vs causation and then infer the distribution.
But this is exactly the problem! Apparently, there is no meaningful ‘average causal effect’ between correlational and causational studies. In one study, it was much larger; in the next, it was a little smaller; in the next, it was much smaller; in the one after that, the sign reversed… If you create a regular multilevel meta-analysis of a bunch of randomized and correlational studies, say, and you toss in a fixed-effect covariate and regress ‘Y ~ Randomized’, you get an estimate of ~0. The actual effect in each case may be quite large, but the average over all the studies is a wash.
This is different from other methodological problems. With placebos, there is a predictable systematic bias which gives you a large positive bias. Likewise, publication bias skews effects up. Likewise, non-blinding of raters. And so on and so forth. You can easily estimate with an additive fixed-effect / linear model and correct for particular biases. But with random vs correlation, it seems that there’s no particular direction the effects head in, you just know that whatever they are, they’ll be different from your correlational results. So you need to do something more imaginative in modeling.
OK, let’s imagine all our studies are infinitely sized. I collect 5 study-pairs, correlational vs randomized, d effect size:
0.5 vs 0.1 (difference: 0.4)
-0.22 vs −0.22 (difference: 0)
−0.8 vs 0.2 (difference: −1.0)
0.3 vs 0.3 (difference: 0)
0.5 vs −0.1 (difference: 0.6)
I apply my mixture model strategy.
We see that in study #2 and #4, the correlational and causal effects are identical, 100% confidence, and thus both were drawn from the randomized distribution. With two datapoints −0.22 and 0.3, we begin to infer that the distribution of causal effects is probably fairly narrow around 0 and we update our normal distribution appropriately to be skeptical about any claims of large causal effects.
We see in study #1, #3, and #5 that the correlational and causal effects differ, 100% confidence, and thus we know that the correlational effect for that particular treatment was drawn from the general correlational distribution. The correlational effects are 0.5, −0.8, 0.5, all quite large, and so we infer that correlational effects tend to be quite large and their distribution has a large standard deviation (or whatever).
We then note that in 2⁄5 of the pairs, the correlational effect was the causal effect, and so we estimate that the probability of a correlational effect having been drawn from the causal distribution rather than the correlation distribution is P=2/5. Or in other words, correlation=causality 40% of the time. However, if we had tried to calculate an additive variable like in a meta-regression, we would find that the Randomized covariate was estimated at exactly 0 (
mean(c(0.4, 0, -1.0, 0, 0.6)) ~> [1] 0
) and certainly is not statistically-significant.

Now when someone comes to us with an infinite-sized correlational trial showing that purified Egyptian mummy reduces allergy symptoms by d=0.5, we feed it into our mixture model and we get a useful posterior distribution which exhibits a bimodal pattern: heavily peaked at 0 (reflecting the more-likely-than-not scenario that mummy is mummery), but also peaked at d=0.4 or so (reflecting shrinkage of the scenario that mummy is munificent), which will predict better than if we naively tried to just shift the d=0.5 posterior distribution up or down some units.
The problem with real studies is that they are not infinitely sized, so when the point-estimates disagree and we get 0.45 vs 0.5, obviously we cannot strongly conclude which distribution in the mixture it was drawn from, and so we need to propagate that uncertainty through the whole model and all its parameters.
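The bookkeeping in this toy example is trivial to check (using the difference column as given):

```python
# Correlational-minus-randomized differences from the five study-pairs:
diffs = [0.4, 0.0, -1.0, 0.0, 0.6]

# A naive additive 'Randomized' covariate in a meta-regression estimates
# the average difference, which washes out to zero:
print(sum(diffs) / len(diffs))  # 0.0

# The mixture view instead counts how often the pair agrees exactly
# (possible only in the infinite-sample idealization): that fraction is
# the estimated switching probability.
p_causal = sum(1 for d in diffs if d == 0) / len(diffs)
print(p_causal)  # 0.4
```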
I am pretty sure I am not, but let’s see. What you are basically saying is “analysis ⇒ synthesis doesn’t work.”
Hierarchical models are a particular parametric modeling approach for data drawn from multiple sources. People use this type of stuff to good effect, but saying it “solves the problem” here is sort of like saying linear regression “solves” RCTs. What if the modeling assumptions are wrong? What if you are not sure what the model should be?
Let’s call them “interventionist approaches.” Pearl is just the guy people here read. People have been doing causal analysis from observational data since at least the 70s, probably earlier in certain special cases.
Ok.
This is what we should talk about.
If there is one RCT, we have a treatment A (with two levels a, and a’) and outcome Y. Of interest is outcome under hypothetical treatment assignment to a value, which we write Y(a) or Y(a’). “Average causal effect” is E[Y(a)] - E[Y(a’)]. So far so good.
If there is one observational study, say A is assigned based on C, and C affects Y, what is of interest is still Y(a) or Y(a’). Interventionist methods would give you a formula for E[Y(a)] - E[Y(a’)] in terms of p(A,C,Y). You can then construct an estimator for that formula, and life is good. So far so good.
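A toy version of that recipe in Python (a made-up data-generating process, not the g-methods from the papers discussed elsewhere in this thread): with a single binary confounder C, standardization, E[Y(a)] = sum over c of E[Y | A=a, C=c] P(C=c), recovers the causal contrast that the naive difference in conditional means overstates.

```python
import random

random.seed(1)
n = 100_000
data = []
for _ in range(n):
    c = random.random() < 0.5                  # confounder
    a = random.random() < (0.8 if c else 0.2)  # treatment depends on C
    # True causal effect of A on Y is +0.1; C adds +0.4 on its own.
    y = 0.1 * a + 0.4 * c + random.gauss(0, 1)
    data.append((a, c, y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive contrast E[Y | A=1] - E[Y | A=0] is confounded (comes out ~0.34):
naive = (mean([y for a, c, y in data if a])
         - mean([y for a, c, y in data if not a]))

# Standardization: E[Y(a)] = sum over c of E[Y | A=a, C=c] * P(C=c)
def standardized_mean(a_val):
    total = 0.0
    for c_val in (False, True):
        e_y = mean([y for a, c, y in data if a == a_val and c == c_val])
        p_c = mean([1.0 if c == c_val else 0.0 for _, c, _ in data])
        total += e_y * p_c
    return total

ace = standardized_mean(True) - standardized_mean(False)
print(naive, ace)  # ~0.34 vs ~0.10
```

The identification step (which formula to use) is separate from the estimation step (how to model E[Y | A, C]); here stratification is exact because C is a single binary variable, but with many confounders one needs modeling, which is where the assumptions start to bite.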
Note that so far I made no modeling assumptions on the relationship of A and Y at all. It’s all completely unrestricted by choice of statistical model. I can do crazy non-parametric random forest to model the relationship of A and Y if I wanted. I can do linear regression. I can do whatever. This is important—people often smuggle in modeling assumptions “too soon.” When we are talking about prediction problems like in machine learning, that’s ok. We don’t care about modeling too much we just want good predictive performance. When we care about effects, the model is important. This is because if the effect is not strong and your model is garbage, it can mislead you.
If there are two RCTs, we have two sets of outcomes: Y1(a), Y1(a’) and Y2(a), Y2(a’). Even here, there is no one causal effect so far. We need to make some sort of assumption on how to combine these. For example, we may try to generalize regression models, and say that a lot of the way A affects Y is the same regression across the two studies, but some of the regression terms are allowed to differ to model population heterogeneity. This is what hierarchical models do.
In general we have E[f(Y1(a), Y2(a))] - E[f(Y1(a’),Y2(a’))], for some f(.,.) that we should justify. At this level, things are completely non-parametric. We can model the relationship of A and Y1,Y2 however we want. We can model f however we want.
If we have one RCT and one observational study, we still have Y1(a), Y1(a’) for the RCT, and Y2(a), Y2(a’) for the observational study. To determine the latter we use “interventionist approaches” to express them in terms of observational data. We then combine things using f(.,.) as before. As before we should justify all the modeling we are doing.
I am pretty sure Barenboim thought about this stuff (but he doesn’t do statistical inference, just the general setup).
I am pretty sure it is not going to let you take an effect size and a standard error from a correlational study and get out an accurate posterior distribution of the causal effect without doing something similar to what I’m proposing.
Ok, and how do we model them? I am proposing a multilevel mixture model to compare them.
Which is not going to work, since in most, if not all, of these studies the original patient-level data is not going to be available, and you’re not even going to get a correlation matrix out of the published paper; and I haven’t seen any intervention-style algorithms which work with just the effect sizes, which is what is on offer.
To work with the sparse data that is available, you are going to have to do something in between a meta-analysis and an interventionist analysis.
Ok. You can use whatever statistical model you want, as long as we are clear what the underlying object is you are dealing with. The difficulty here isn’t the statistical modeling, but being clear about what it is that is being estimated (in other words the interpretation of the parameters of the model). This is why I don’t talk about statistical modeling at first.
If all you have is reported effect sizes you won’t get anything good out. You need the data they used.
Is there anyone you would recommend studying in addition?
Depends on what you want. It doesn’t matter “who has priority” when it comes to learning the subject. Pearl’s book is good, but one big disadvantage of reading just Pearl is Pearl does not deal with the statistical inference end of causal inference very much (by choice). Actually, I heard Pearl has a new book in the works, more suitable for teaching.
But ultimately we must draw causal conclusions from actual data, so statistical inference is important. Some big names that combine causal and statistical inference: Jamie Robins, Miguel Hernan, Eric Tchetgen Tchetgen, Tyler VanderWeele (Harvard causal group), Mark van der Laan (Berkeley), Donald Rubin et al (Harvard), Frangakis, Rosenblum, Scharfstein, etc. (Johns Hopkins causal group), Andrea Rotnitzky (Harvard), Susan Murphy (Michigan), Thomas Richardson (UW), Phillip Dawid (Cambridge, but retired, incidentally the inventor of conditional independence notation). Lots of others.
I believe Stephen Cole posts here, and he does this stuff also (http://sph.unc.edu/adv_profile/stephen-r-cole-phd/).
Miguel Hernan and Jamie Robins are working on a new causal inference book that is more statistical, might be worth a look. Drafts available online:
http://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/
You specialise in identifying the determinants of biases in causal inference? Just curious :) Interesting
And how to make those biases go away, yes.
You’re using correlation in what I would consider a weird way. Randomization is intended to control for selection effects to reduce confounds, but when somebody says correlational study I get in my head that they mean an observational study in which no attempt was made to determine predictive causation. When an effect shows up in a nonrandomized study, it’s not that you can’t determine whether the effect was causative; it’s that it’s more difficult to determine whether the causation was due to the independent variable or an extraneous variable unrelated to the independent variable. It’s not a question of whether the effect is due to correlation or causation, but whether the relationship between the independent and dependent variable even exists at all.
(1) Observational studies are almost always attempts to determine causation. Sometimes the investigators try to pretend that they aren’t, but they aren’t fooling anyone, least of all the general public. I know they are attempting to determine causation because nobody would be interested in the results of the study unless they were interested in causation. Moreover, I know they are attempting to determine causation because they do things like “control for confounding”. This procedure is undefined unless the goal is to estimate a causal effect.
(2) What do you mean by the sentence “the study was causative”? Of course nobody is suggesting that the study itself had an effect on the dependent variable?
(3) Assuming that the statistics were done correctly and that the investigators have accounted for sampling variability, the relationship between the independent and dependent variable definitely exists. The correlation is real, even if it is due to confounding. It just doesn’t represent a causal effect.
You are assuming a couple of things which are almost always true in your (medical) field, but are not necessarily true in general. For example,
Nope. Another very common reason is to create a predictive model without caring about actual causation. If you can’t do interventions but would like to forecast the future, that’s all you need.
That further assumes your underlying process is stable and is not subject to drift, regime changes, etc. Sometimes you can make that assumption, sometimes you cannot.
You’d also like a guarantee that others can’t do interventions, or else your measure could be gamed. (But if there’s an actual causal relationship, then ‘gaming’ isn’t really possible.)
(1) I just think calling a nonrandomized study a correlational study is weird.
(2) I meant to say effect; not study; fixed
(3) If something is caused by a confounding variable, then the independent variable may have no relationship with the dependent variable. You seem to be using correlation to mean the result of an analysis, but I’m thinking of it as the actual real relationship which is distinct from causation. So y=x does not mean y causes x or that x causes y.
I don’t understand what you mean by “real relationship”. I suggest tabooing the terms “real relationship” and “no relationship”.
I am using the word “correlation” to discuss whether the observed variable X predicts the observed variable Y in the (hypothetical?) superpopulation from which the sample was drawn. Such a correlation can exist even if neither variable causes the other.
If X predicts Y in the superpopulation (regardless of causality), the correlation will indeed be real. The only possible definition I can think of for a “false” correlation is one that does not exist in the superpopulation, but which appears in your sample due to sampling variability. Statistical methodology is in general more than adequate to discuss whether the appearance of correlation in your sample is due to real correlation in the superpopulation. You do not need causal inference to reason about this question. Moreover, confounding is not relevant.
Confounding and causal inference are only relevant if you want to know whether the correlation in the superpopulation is due to the causal effect of X on Y. You can certainly define the causal effect as the “actual real relationship”, but then I don’t understand how it is distinct from causation.
Right. Which is the problem randomization attempts to correct for, which I think of as a separate problem from causation.
No. Randomization abolishes confounding, not sampling variability.
If your problem is sampling variability, the answer is to increase the power.
If your problem is confounding, the ideal answer is randomization and the second best answer is modern causality theory.
Statisticians study the first problem; causal inference people study the second.
Intersample variability is a type of confound. Increasing sample size is another method for reducing confounding due to intersample variability. Maybe you meant intrasample variability, but that doesn’t make much sense to me in context. Maybe you think of intersample variability as sampling error? Or maybe you have a weird definition of confounding?
Either way, confounding is a separate problem from causation. You can isolate the confounding variables from the independent variable to determine the correlation between x and y without determining a causal relationship. You can also determine the presence of a causal relationship without isolating the independent variable from possible confounding variables.
The nonrandomized studies are determining causality; they’re just doing a worse job at isolating the independent variable, which is what gwern appears to be talking about here.
No, it isn’t.
I use the standard definition of confounding, based on whether E(Y | X=x) = E(Y | do(X=x)), and think about it in terms of whether there exists a backdoor path between X and Y.
The concept of confounding is defined relative to the causal query of interest. If you don’t believe me, try to come up with a coherent definition of confounding that does not depend on the causal question.
With standard statistical techniques you will be able to determine the correlation between X and Y. You will also be able to determine the correlation between X and Y conditional on Z. These are both valid questions, and both are true correlations. Whether either of those correlations is interesting depends on your causal question and on whether Z is a confounder for that particular query.
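Numerically, the gap between E(Y | X=x) and E(Y | do(X=x)) under a backdoor path looks like this. All the probabilities below are made-up illustrations of mine; the backdoor adjustment formula itself is standard:

```python
# Structure X <- Z -> Y with no causal effect of X on Y: the naive
# conditional expectation is biased, the backdoor adjustment over Z is not.
import random
import statistics

random.seed(2)
n = 200_000
data = []
for _ in range(n):
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)       # z influences treatment choice
    y = (1.0 if z else 0.0) + random.gauss(0, 0.1)  # y depends on z only
    data.append((z, x, y))

# Naive E(Y | X=1): biased, because treated units are mostly z = True
e_y_given_x1 = statistics.mean(y for z, x, y in data if x)

# Backdoor adjustment: sum over z of E(Y | X=1, Z=z) * P(Z=z)
adjusted = 0.0
for zval in (True, False):
    stratum = [y for z, x, y in data if z == zval and x]
    p_z = statistics.mean(1.0 if z == zval else 0.0 for z, x, y in data)
    adjusted += statistics.mean(stratum) * p_z

print(round(e_y_given_x1, 2))  # ≈ 0.8
print(round(adjusted, 2))      # ≈ 0.5, the true E[Y] under do(X=1)
```

Both numbers are "true" answers to different questions; only the causal query tells you which one you want, which is the point about confounding being defined relative to that query.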
No you can’t. (Unless you have an instrumental variable, in which case you have to make the assumption that the instrument is unconfounded instead of the treatment of interest)
Anders_H, you are much more patient than I am!
(re: last sentence, also have to assume no direct effect of instrument, but I am sure you knew that, just emphasizing the confounding assumption since discussion is about confounding).
Grandparent’s attitude is precisely what is wrong with LW culture’s complete and utter lack of epistemic/social humility (which I think they inherited from Yudkowsky and his planet-sized ego). Him telling you of all people that you are using a weird definition of confounding is incredibly amusing.
I just realized the randomized-nonrandomized study was just an example and not what you were talking about.
I just published an article in the conservative FrontPageMag on college safe spaces. It uses a bit of LW like reasoning.
Congrats on the courage to pick this fight.
Thanks.
Publishing that article in Front Page Mag doesn’t take much courage, so far as I can see. It’s not as if many FPM readers are going to disagree with it. What would take courage would be a vigorous defence of “safe spaces” in FPM, or an article like James’s in a lefty magazine.
[EDITED to add:] For the avoidance of doubt, I’m not objecting to James’s article or saying that publishing it was an act of cowardice! Only that it doesn’t seem to require any unusual bravery.
That’s not the point. There is a very active witchhunt going on at many US colleges. Mobs with torches and pitchforks are diligently searching for white males guilty of, and I quote
Any published text or blog post or a tweet, etc. can be construed as evidence of wrongthink. It’s basically the Chinese Cultural Revolution, this time as a farce. Most people in academia keep their heads down—see e.g. this.
Well… from my point of view, there’s a lot of that in the US. And I come from Italy, for Omega’s sake; we get news about the Pope every lunch and dinner. I cannot imagine what the view from Denmark must be like.
With this, I don’t mean that witchhunting is the answer, just that it can possibly be understood as a transient overshoot, to which the proper response is a transient undershoot.
I would venture a guess that your estimate has a lot to do with what kind of media you read and a fair amount to do with what your baseline is :-/
Note that the person on your last link, despite professing to be terrified of his students, seems to have been happy enough to publish that article with his real name on it. Note also that he links to a number of other pieces by other academics expressing similar opinions, all also apparently not so terrified as to avoid publishing such opinions with their names attached.
So far as I know, no academic has in fact got into any sort of trouble for expressing opinions like those, or like the (milder) ones expressed in James’s article.
People died in the Cultural Revolution. This is not in any useful sense “basically the Cultural Revolution”. Nor does it bear anything like the same resemblance to the Cultural Revolution as Louis Napoleon’s assumption of power did to his uncle’s. Calling someone courageous for daring to say openly that the idea of “safe spaces” may have gone too far is like calling someone courageous for daring to say “Merry Christmas”. Can we get a bit of perspective here?
Vox:
Edward Schlosser is a college professor, writing under a pseudonym.
Ha. I actually checked for that, but obviously not carefully enough. My apologies.
[EDITED to add:] OK, so I went back and searched the page, and it doesn’t say that anywhere. (Though buried in the middle of the article is a statement along the lines of “all controversial things I write, like this article, are anonymous or pseudonymous”, so I still should have known.) Perhaps it’s because I’m reading on a mobile device?
You get that line if you click on the author’s name.
The article starts by saying:
I'm a professor at a midsize state school.
If you read between the lines, that’s a decision against revealing the name of the school, and thus a decision to protect anonymity. In general the media like to use pseudonyms when they can’t use the real name, so the fact that there is a name at the top is no good evidence that the article wasn’t written anonymously or under a pseudonym.
That’s why I looked for a statement at the start or end that the name was a pseudonym. I think not finding such a thing genuinely was evidence of non-pseudonymity, though clearly not enough evidence, as it turned out. I didn’t think of clicking on the name because I’m an idiot.
Let me enhance your knowledge.
“History repeats itself, first as tragedy, second as farce.”—Karl Marx
You link to three stories, but so far as I can see only the first of them is actually anything like an example of what we were talking about. Still, that’s one more than I knew of, so thank you.
The way many students at Yale responded to Christakis is shocking, for sure. But, again, this is a long long long way from the Cultural Revolution. She didn’t lose her life or even her job. And this is an unusually extreme case.
I knew where the quotation comes from, and what it refers to, and what it means, as you could have worked out:
These days everybody can share a link to an article, and students at James Miller’s college can pass around an article written by their prof.
You could test this hypothesis by providing them a way to give you anonymous feedback. The provided method would have to avoid two things:
Despite anonymity, you have to prevent spamming, where one student could give you a dozen answers indistinguishable from a dozen answers given by a dozen students. This rules out methods like “send me an e-mail from a throwaway account”.
Not only the content of the feedback, but even whether a student gave you feedback at all, must be kept secret. Otherwise it is easy for the majority to decide not to give you feedback, exposing every student giving feedback as a likely traitor. This rules out methods like “here is a questionnaire, check the appropriate boxes and throw it into this basket”.
Of course, an overly complicated feedback method would be a trivial inconvenience, so fewer people would respond. Also, a complicated method would make them question whether it is really anonymous.
Great idea. If I’m ever asked to speak at Amherst I could give out forms to be immediately filled out. To protect people against being discovered I could first say “Everyone think of a number between 1 and 10. OK if you thought of the number 3 give false answers on this form.”
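That trick is a form of randomized response: if a known fraction of respondents answer falsely, the true rate can still be recovered from the observed rate, while no individual answer incriminates anyone. A tiny sketch; the 1-in-10 liar rate and the example numbers are mine:

```python
# Randomized response: each respondent inverts their true answer with a
# known probability, so observed = (1 - lie_rate)*true + lie_rate*(1 - true).
lie_rate = 0.10            # fraction told to give a false answer

def recover_true_rate(observed_yes):
    # Invert the mixing equation to estimate the true "yes" rate
    return (observed_yes - lie_rate) / (1 - 2 * lie_rate)

# e.g. if 46% of forms say "yes", the estimated true rate is 45%
print(round(recover_true_rate(0.46), 2))  # 0.45
```

The estimate gets noisier as the lie rate approaches 50%, which is the price paid for each individual's deniability.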
Last week was a gathering of physicists in Oxford to discuss string theory and the philosophy of science.
From the article:
That the Bayesian view is news to so many physicists is itself news to me, and it’s very unsettling news. You could say that modern theoretical physics has failed to stay in touch with other areas of science, but you could also argue that the rationalist community has failed to properly reach out and communicate with scientists.
The character from Molière learns a fancy name (“speaking in prose”) for the way he already communicates. David Gross isn’t saying that he is unfamiliar with the Bayesian view, he’s saying that “Bayesian confirmation theory” is a fancy name for his existing epistemic practice.
The rationalist community needs to learn a little humility. Do you realize the disparity in intellectual firepower between “you guys” and theoretical physicists?
This is the overgeneralized IQ fantasy. A really smart physicist may be highly competent at, say, string theory, but know very little about French pastries or CUDA programming or—more to the point—Solomonoff induction.
As I said, here I am. Tell me how Solomonoff induction is going to change how I do my business. I am listening.
You are already a lesswronger—would you say that lesswrong has changed the way you think at all? Why do you keep coming back?
I post here, but I don’t identify as a rationalist. Two most valuable ideas (to me) that circulate here are tabooing and steelmanning (but they were not invented here).
I think I try to cultivate what you would call the “rationalist mindset” in order to do math. But I view it as a tool for certain problems only, not a part of my identity.
Do you want me to leave?
I like you being here.
That wasn’t my point. My point was, you are the best one to answer your own question.
Solomonoff induction is uncomputable, so it’s not going to help you in any way.
But Jaynes (who was a physicist) said that using Bayesian methods to analyze magnetic resonance data helped him gain an unprecedented resolution. Quoting from his book:
jacob_cannell above seems to think it is very important for physicists to know about Solomonoff induction.
Solomonoff induction is one of those ideas that keeps circulating here, for reasons that escape me.
If we are talking about Bayesian methods for data analysis, almost no one on LW who is breathlessly excited about Bayesian stuff actually knows what they are talking about (with 2-3 exceptions, who are stats/ML grad students or up). And when called on it, they retreat to the “Bayesian epistemology” motte.
Bayesian methods didn’t save Jaynes from being terminally confused about causality and the Bell inequalities.
I still haven’t figured out what you have against Bayesian epistemology. It’s not like this is some sort of LW invention—it’s pretty standard in a lot of philosophical and scientific circles, and I’ve seen plenty of philosophers and scientists who call themselves Bayesians.
My understanding is that Solomonoff induction is usually appealed to as one of the more promising candidates for a formalization of Bayesian epistemology that uses objective and specifically Occamian priors. I haven’t heard Solomonoff promoted as much outside LW, but other similar proposals do get thrown around by a lot of philosophers.
Of course Bayesianism isn’t a cure-all by itself, and I don’t think that’s controversial. It’s just that it seems useful in many fundamental issues of epistemology. But in any given domain outside of epistemology (such as causation or quantum mechanics), domain-relevant expertise is almost certainly more important. The question is more whether domain expertise plus Bayesianism is at all helpful, and I’d imagine it depends on the specific field. Certainly for fundamental physics it appears that Bayesianism is often viewed as at least somewhat useful (based on the conference linked by the OP and by a lot of other things I’ve seen quoted from professional physicists).
I don’t have any problem with Bayesian epistemology at all. You can have whatever epistemology you want.
What I do have a problem with is this “LW myopia” where people here think they have something important to tell to people like Ed Witten about how people like Ed Witten should be doing their business. This is basically insane, to me. This is strong evidence that the type of culture that gets produced here isn’t particularly sanity producing.
Solomonoff induction is useless to know about for anyone who has real work to do (let’s say with actual data, like physicists). What would people do with it?
In many cases I’d agree it’s pretty crazy, especially if you’re trying to go up against top scientists.
On the other hand, I’ve seen plenty of scientists and philosophers claim that their peers (or they themselves) could benefit from learning more about things like cognitive biases, statistics fallacies, philosophy of science, etc. I’ve even seen experts claim that a lot of their peers make elementary mistakes in these areas. So it’s not that crazy to think that by studying these subjects you can have some advantages over some scientists, at least in some respects.
Of course that doesn’t mean you can be sure that you have the advantage. As I said, probably in most cases domain expertise is more important.
Absolutely agree it is important for scientists to know about cognitive biases. Francis Bacon, the father of the empirical method, explicitly used cognitive biases (he called them “idols,” and even classified them) as a justification for why the method was needed.
I always said that Francis Bacon should be LW’s patron saint.
So it sounds like you’re only disagreeing with the OP in degree. You agree with the OP that a lot of scientists should be learning more about cognitive biases, better statistics, epistemology, etc., just as we are trying to do on LW. You’re just pointing out (I think) that the “informed laymen” of LW should have some humility because (a) in many cases (esp. for top scientists?) the scientists have indeed learned lots of rationality-relevant subject matter, perhaps more than most of us on LW, (b) domain expertise is usually more important than generic rationality, and (c) top scientists are very well educated and very smart.
Is that correct?
Yup!
edit: Although I should say LW “trying to learn better statistics” is too generous. There is a lot more “arguing on the internet” and a lot less “reading” happening.
I nominate Carneades, the inventor of the idea of degrees of certainty.
Harry J.E. Potter did receive Bacon’s diary as a gift from his DADA teacher, after all.
I think a more charitable read would go like this: being smarter doesn’t necessarily mean that you know everything there’s to know nor that you are more rational than other people. Since being rational or knowing about Bayesian epistemology is important in every field of science, physicists should be motivated to learn this stuff. I don’t think he was suggesting that French pastries are literally useful to them.
Well, LW was born as a forum about artificial intelligence. Solomonoff induction is like an ideal engine for generalized intelligence, which is very cool!
That’s unfortunate, but we cannot ask anyone, even geniuses, to transcend their time. Leonardo da Vinci held some beliefs that are ridiculous by our standards, just like Ramanujan or Einstein. With this I’m not implying that Jaynes was a genius of that caliber; I would ascribe that status more to Laplace. On the ‘bright’ side, in our time nobody knows how to reconcile epistemic probability and quantum causality :)
That seems to be a pretty big claim. Can you articulate why you believe it to be true?
As far as I am aware, Solomonoff induction describes the singularly correct way to do statistical inference in the limits of infinite compute. (It computes generalized/full Bayesian inference)
All of AI can be reduced to universal inference, so understanding how to do that optimally with infinite compute perhaps helps one think more clearly about how practical efficient inference algorithms can exploit various structural regularities to approximate the ideal using vastly less compute.
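Since the real thing is uncomputable, any concrete illustration has to be a caricature. Here is one of mine: the hypothesis class (repeating bit patterns) stands in for "all programs", and prior mass 2^-length stands in for 2^-(program length); everything else about the setup is a toy choice:

```python
# A finite caricature of Solomonoff-style induction: weight each hypothesis
# by 2^-(description length) and keep only those consistent with the data.
from itertools import product

def fits(pattern, observed):
    """Does repeating `pattern` reproduce every observed bit?"""
    return all(observed[i] == pattern[i % len(pattern)] for i in range(len(observed)))

def predict_next(observed, max_len=4):
    """Posterior probability that the next bit is 1."""
    total = mass_on_one = 0.0
    for k in range(1, max_len + 1):
        for pattern in product((0, 1), repeat=k):
            if fits(pattern, observed):
                w = 2.0 ** -k   # shorter descriptions get more prior mass
                total += w
                mass_on_one += w * pattern[len(observed) % k]
    return mass_on_one / total

print(predict_next([0, 1, 0, 1, 0]))  # 1.0: every surviving pattern says "1"
```

The 2^-length prior is what makes it Occamian: the short pattern (0, 1) carries four times the posterior mass of the observationally equivalent (0, 1, 0, 1).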
Because AIXI is the first complete mathematical model of a general AI and is based on Solomonoff induction.
Also, a computable approximation to the Solomonoff prior has been used to teach a small AI to play video games unsupervised.
So, yeah.
If you don’t consider Jaynes to be contemporary, which author do you consider to be his successor, who updated where Jaynes went wrong?
While Bretthorst is his immediate and obvious successor, unfortunately nobody that I know of has taken up the task to develop the field the way Jaynes did.
I am pretty sure jacob_cannell specifically brought up Solomonoff induction. I am still waiting for him to explain why I (let alone Ed Witten) should care about this idea.
How do you know what is important in every field of science? Are you a scientist? Do you publish? Where is your confidence coming from, first principles?
Whether Solomonoff induction is cool or not is a matter of opinion (and “mathematical taste,”) but more to the point the claim seems to be it’s not only cool but vital for physicists to know about. I want to know why. It seems fully useless to me.
Jaynes died in 1997. Bayesian networks (the correct bit of math to explain what is going on with Bell inequalities) were written up in book form in 1988, and were known about in various special case forms long before that.
???
Well, yes, of course. Cox’s theorem. Journals are starting to reject papers based on the “p&lt;0.05” principle. Many studies in medicine and psychology cannot be replicated. Scientists are using inferior analysis methods when better ones are available, just because they were not taught them.
I do say there’s a desperate need to popularize Bayesian thinking.
I wasn’t referring to that. Jaynes knew that quantum mechanics was incompatible with the epistemic view of probability, and from his writing, while never explicit, it’s clear that he was thinking about a hidden variables model.
Indisputable violations of the Bell inequalities were demonstrated only this year. Causality was published in 2001. We still don’t know how to reconcile epistemic probabilities with quantum causality.
What I’m saying is that the field was in motion when Jaynes died, and there is still a great deal we don’t know. As I said, we cannot ask anyone not to hold crazy ideas from time to time.
Datapoint: in [biological] systematics in its broadest sense, Bayesian methods are increasingly important (molecular evolution studies,...), but I’ve never heard about pure Bayesian epistemology being in demand. Maybe because we leave it all to our mathematicians.
Part of the issue I keep harping about is people keep confusing Bayes rule, Bayesian networks, Bayesian statistical inference, and Bayesian epistemology. I don’t have any issue with a thoughtful use of Bayesian statistical inference when it is appropriate—how could I?
My issue is people being confused, or people having delusions of grandeur.
Nah—I was just using that as an example of things physicists (regardless of IQ) don’t automatically know.
Most physicists were trained to think in terms of Popperian epistemology, which is strictly inferior to (dominated by) Bayesian epistemology (if you don’t believe that, it’s not worth my time to debate). In at least some problem domains, the difference in predictive capability between the two methodologies is becoming significant.
Physicists don’t automatically update their epistemologies; it isn’t something they are used to having to update.
Heh, ok. Thanks for your time!
Ok, so I lied, I’ll bite.
I equate “Bayesian epistemology” with a better approximation of universal inference. It’s easy to generate example environments where Bayesian agents dominate Popperian agents, while the converse is never true. Popperian agents completely fail to generalize well from small noisy datasets. When you have very limited evidence, Popperian reliance on hard logical falsifiability just fails.
This shouldn’t even really be up for debate—do you actually believe the opposite position, or are you just trolling?
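The small-noisy-data claim can at least be illustrated. A toy of my own devising: both agents see 9 heads in 10 flips of a coin of unknown bias, and the "falsificationist" rule (drop a hypothesis only when the data's likelihood under it falls below a hard threshold) is my stand-in for strict falsifiability, not anything Popper wrote:

```python
# Bayesian vs caricatured falsificationist prediction of the next flip.
heads, tosses = 9, 10                          # observed data
hypotheses = [i / 10 for i in range(1, 10)]    # candidate biases 0.1 .. 0.9

def likelihood(p):
    """Probability of the observed sequence under bias p."""
    return p ** heads * (1 - p) ** (tosses - heads)

# Bayesian: uniform prior over candidates, predict the next flip by the
# posterior-weighted average bias
weights = [likelihood(p) for p in hypotheses]
bayes_pred = sum(p * w for p, w in zip(hypotheses, weights)) / sum(weights)

# Falsificationist: discard only hypotheses the data have (nearly)
# falsified; treat all survivors as equally live
survivors = [p for p in hypotheses if likelihood(p) > 0.002]
popper_pred = sum(survivors) / len(survivors)

print(round(bayes_pred, 2))   # 0.82: pulled toward the high-bias hypotheses
print(round(popper_pred, 2))  # 0.75: diluted by barely-surviving hypotheses
```

The Bayesian grades hypotheses continuously, so likely ones dominate; the hard-threshold agent wastes the evidence against hypotheses it cannot yet outright reject.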
French pastries (preferably from a Japanese/Korean pastry shop) are better than Solomonoff induction—they are yummier.
Ha, but a robot programmed with a Solomonoff induction-like software will learn to do French pastries long before pastries will learn how to do Solomonoff induction!
French pastries correspond to a pretty long bit-string so you may have to wait for a very long time (and eat a lot of very bad possibly-pastries in the meantime :-P). A physicist can learn to make pastries much quicker.
It could be that the attitude/belief that theoretical physicists are far smarter than anyone else (and therefore, by implication, do not need to listen to anyone else) is part of the problem I’m outlining.
It could be, but I think theoretical physicists actually are very intelligent. Do you disagree?
edit: But let’s leave them aside, and talk about me, since I am actually here. I am not in the same league as Ed Witten, not even close. Do you (generic sense) have something sensible to communicate to me about how I go about my business?
When did you become a theoretical physicist?
I am not. But I do theory work, and some of it is even related to analyzing data (and I am actually here to have this conversation, whereas Ed is not). So—what do you have to teach me?
I dunno. I have PhD in engineering. In my graduate research and in my brief life as a practicing scientist, I used rationalist skills like “search for more hypotheses” and “think exclusively about the problem for five minutes before doing anything else” and generally leveraged LW-style thinking, that I didn’t learn in school, to be more successful and productive than I probably would have been otherwise. I could probably write a lengthy article about how I perceive LW to have helped me in my life, but I know that it would seem extremely post hoc and you could also probably say that the skills I’m using are not unique to LW. All I can say is that the core insight that formed the crux of my dissertation arose because I was using a very LW-style approach to analyzing a problem.
The thing about rationalist skills is that LW does not and cannot have a monopoly on them. In fact, the valuable function of LW (at least in the past) has been to both aggregate and sort through potentially actionable strategic directives and algorithms.
What’s interesting to me is that school doesn’t do that at all. I got through however-many years of schooling and earned a PhD without once taking a class about Science, about how to actually do it, about what the process of Science is. I absorbed some habits from advisers and mentors, that’s about it. The only place that I even know of where people talk at length about the inner operations of mind that correspond to the outer reality where one observes discoveries being made is Less Wrong.
And if you’re an entrepreneur and don’t care about science, then Less Wrong is also one of a few places where people talk at length about how to marshal your crappy human brain and coax it to working productively on tasks that you have deliberately and strategically chosen.
One problem is that I’m probably thinking of the Less Wrong of four years ago rather than the Less Wrong of today. In any case, all those old posts that I found so much value in are still there.
I feel like this is an important point that goes a long way to give one the intellectual / social humility IlyaShpitser is pointing at, and I agree completely that the value of LW as a site/community/etc. is primarily in sorting and aggregating. (It’s the people that do the creating or transferring.)
You are correct in that surveys of IQ and other intelligence scores consistently show physicists having some of the highest. But mathematics, statistics, computer science, and engineering score about the same, and most studies I’ve seen find very little, if any, significant difference in intelligence scores between these fields.
‘Rationalist’ isn’t a field or specialization, it’s defined more along the lines of refining and improving rational thinking. Based on the lesswrong survey, fields like mathematics and computer science are heavily represented here. There are actually more physicists (4.3%) than philosophers (2.4%). If this is inconsistent with your perception of the community, update your prior.
From all of this it is safe to assume that the average LW’er is ‘very smart’, and that LW contains a mini-community of rationalist scientists. One data point: Me. I have a PhD in engineering and I’m a practising scientist. Maybe I should have phrased my initial comment as: “It might be better if the intersection of rationalists and scientists were larger.”
While 4.3% of LW people are physicists, the reverse isn’t true.
If only smart people were automatically bias free...
Could you expand on this further? I’m not sure I understand your argument. Also, intellectual humility or social humility?
Re: your last question: yes.
(a) It is very difficult to perceive the qualitative difference of people 1 sigma+ above “you” (for any value of “you”), but the difference is enormous.
(b) How much “science process” does this community actually understand? How many are practicing scientists, as in publish real stuff in journals?
The outside view worry is there might be a bit of a “twenty something knowitall” going on. You read some stuff, and liked it. That’s great! If the stuff isn’t universally adopted by very smart folks, there are probably very good reasons for that! Read more!
My argument boils down to: “no, really, very smart people are actually very smart.”
The median IQ at LessWrong is 139, the average Nobel laureate is reputed to have an IQ of 145. Presumably that means many people at LessWrong are in a position to understand the reasoning of Nobel laureates, at least.
The gap between the average Nobel laureate (in physics, say) and the average LWer is enormous. If your measure says it isn’t, it’s a crappy measure.
I calculate about 128 for the average IQ of a survey respondent who provides one and I suspect that nonresponse means the actual average is closer to 124 or so. (Thus I agree with you that there is a significant gap between the average Nobel laureate and the average LWer.)
I think the right way to look at LW’s intellectual endowment is that it’s very similar to a top technical college, like Harvey Mudd. There are a handful of professor/postdoc/TA types running around, but as a whole the group skews very young (graph here, 40 is 90th percentile) and so even when people are extraordinarily clever they don’t necessarily have the accomplishments or the breadth for that to be obvious. (And because of how IQ distributions work, especially truncated ones with a threshold, we should expect most people to be close to the threshold.)
I agree with this. I think looking at a typical LWer as a typical undergrad at Harvey Mudd is a good model. (This is not a slur, btw, Harvey Mudd is great).
I was confused for a moment.
What makes you so confident that your model is correct, instead of the data disproving it?
No sarcasm, it’s a honest question.
I look at the IQ results for the survey every year. A selected handful of comments:
Karma vs. multiple IQ tests: positive correlation (.45) between self-report and Raven’s for users with positive karma, negative correlation (-.11) between self-report and Raven’s for users without positive karma.
SATs are very high: 96th percentile in the general population is lower quartile here. (First place I make the Harvey Mudd comparison.)
SAT self-report vs. IQ self-report: average SAT, depending on which one you look at and how you correct it, suggests that the average LWer is somewhere between 98th and 99.5th percentile. (IQ self-report average is above 99.5th percentile, and so I call the first “very high” and the second “extremely high.”)
I’ve interacted with a handful of Nobel laureates, I’m familiar with the professors and students at two top-15 graduate physics programs, and I’ve interacted with a bunch of LWers. LW as a whole seems roughly comparable to an undergraduate physics department, active LWers roughly comparable to a graduate physics department, and there are top LWers at the level of the Nobel laureates (but aging means the ~60-year-old Nobel laureates are not a fair comparison to the ~30-year-old top LWers, and this is selecting just the math-genius types from the top LWers, not the most popular top LWers). Recall Marcello comparing Conway and Yudkowsky.
That last link is kinda cringeworthy.
I just spent the last minute or so trying to figure out what you didn’t like about my percentile comparisons. ;)
The underlying subject is often painful to discuss, so even handled well there will be things to cringe about.
I don’t know if you knew that my question was directed at IlyaShpitser and not at you… I do not doubt your data.
Because I hung out with some top academic people, I know what actual genius is like.
Incidentally, when I talk about people being “very smart” I don’t mean “as measured by IQ.” As I mentioned lots of times before, I think IQ is a very poor measure of math smarts, and a very poor measure of generalized smarts at the top end. Intelligence is too heterogeneous and too high-dimensional. But there is such a thing as being “very smart”; it’s just a multidimensional thing.
So in this case, I just don’t think there is a lot of info in the data. I much prefer looking at what people have done as a proxy for their smarts. “If you are so smart, where are all your revolutionary papers?” This also correctly adjusts for people who actually are very smart, but who bury their talents (and so their hypothetical smarts are not super interesting to talk about).
I’ve already had this discussion with someone else, about another topic: I pointed out that, statistically, lottery winners end up no happier than they were before winning. He said that he knew how to spend the money well enough to be effectively much happier.
In our discussion, you have some insights that from my perspective are biased, but from your point of view are not. Unfortunately, your data rely on uncommunicable evidence, so we should just disagree and call it a day.
Lottery winners do end up happier.
Thanks, I updated!
Well, you don’t have to agree if you don’t want.
But the situation is not as hopeless as it seems. Try to find some people at the top of their game, and hang out with them for a bit. Honestly, if you think “Mr. Average Less Wrong” and Ed Witten are playing in the same stadium, you are being a bit myopic. But this is the kind of thing more info can help with. You say you can’t use my info (and don’t want to take my word for it), but you can generate your own if you care.
Will do! We’ll see how it pans out :D
Is there a reliable source for this?
[1] is one source. Its method is: “Jewish IQ is distributed like American-of-European-ancestry IQ, but a standard deviation higher. If you look at the population above a certain IQ threshold, you see a higher fraction of Jews than in the normal population. If you use the threshold of 139, you see 27% Jews, which is the fraction of Jews who are Nobel laureates. So let’s assume that Nobel laureate IQ is distributed like AOEA IQ after you cut off everyone with IQ below 139. It follows that Nobel laureates have an average IQ of 144.”
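For what it’s worth, the final step of that method can be sanity-checked numerically: the “mean of everyone above the cutoff” is just the mean of a normal distribution truncated below at 139, using the conventional IQ normalization of mean 100, SD 15. A quick sketch (this only checks the arithmetic, not the dubious assumptions):

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mean, sd, cutoff = 100, 15, 139
z = (cutoff - mean) / sd  # 2.6 standard deviations above the mean

# Mean of a normal distribution truncated below at the cutoff
# (inverse Mills ratio formula).
truncated_mean = mean + sd * phi(z) / (1 - Phi(z))
print(round(truncated_mean))  # 144
```

So the arithmetic does reproduce the 144 figure; the weight of the dubiousness falls on the distributional assumptions, not the calculation.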
I hope you’ll agree that this seems dubious.
[2] agrees that it’s dubious, and tries to calculate it a different way (still based on fraction of Jews), and gets 136. (It’s only reported by field, but it would be the same as chemistry and literature, because they’re both 27% Jews.) It gets that number by doing a bunch of multiplications which I suspect are the wrong multiplications to do. (Apparently, if IQ tests had less g loading, and if self-identified ethnicity correlated less with ancestry, then the g loading of Jewishness would go up?) But even if the calculations do what they’re supposed to, it feels like a long chain of strong assumptions and noisy data, and this method seems about equally dubious to me.
What gets me more is the guy who was complaining that the atomic theory is left in the same framework with 1-epsilon probability.
No, this is not a problem.
I tried to get a discussion going on this exact subject in my post this week, but there seemed to be little interest. A major weakness of the standard Bayesian inference method is that it assumes a problem only has two possible solutions. Many problems involve many possible solutions, and many times the number of possible solutions is unknown, and in many cases the correct solution hasn’t been thought of yet. In such instances, confirmation through inductive inference may not be the best way of looking at the problem.
Where did you get this from? Maintaining beliefs over an entire space of possible solutions is a strength of the Bayesian approach. Please don’t talk about Bayesian inference after reading a single thing about updating beliefs on whether a coin is fair or not. That’s just a simple tutorial example.
If I have 3 options, A, B, and C, and I’m 40% certain the best option is A, 30% certain the best option is B, and 30% certain the best option is C, would it be correct to say that I’ve confirmed option A instead of say my best evidence suggests A? This can sort of be corrected for with the standard Bayesian confirmation model, but the problem becomes larger as the number of possibilities increases to the point where you can’t get a good read on your own certainty, or to the point where the number of possibilities is unknown.
I don’t understand your question. Is this about maintaining beliefs over hypotheses or decision-making?
I’m arguing that Bayesian confirmation theory as a philosophy was originally conceived as a model using only two possibilities (A and ~A), and then this model was extrapolated into problems with more than two possibilities. If it had been originally conceived using more than two possibilities, it wouldn’t have made any sense to use the word confirmation. So explanations of Bayesian confirmation theory will often entail considering theories or decisions in isolation rather than as part of a group of decisions or theories.
So if there are 20 possible explanations for a problem, and there is no strong evidence suggesting any one explanation, then I will have 5% certainty of the average explanation. Unless I am extremely good at calibration, then I can’t confirm any of them, and if I consider each explanation in isolation from the other explanations, then all of them are wrong.
It doesn’t matter whether we’re talking about hypotheses or decision-making.
I’m not sure whether this is true, but it’s irrelevant. Bayesian confirmation theory works just fine with any number of hypotheses.
If by “confirm” you mean “assign high probability to, without further evidence”, yes. That seems to me to be exactly what you’d want. What is the problem you see here?
You sound confused. The “confirmation” stems from
(source)
So what if p(H) = 1, p(H|A) = .4, p(H|B) = .3, and p(H|C) = .3? The evidence would suggest all are wrong. But I have also determined that A, B, and C are the only possible explanations for H. Clearly there is something wrong with my measurement, but I have no method of correcting for this problem.
H is Hypothesis. You have three: HA, HB, and HC. Let’s say your prior is that they are equally probable, so the unconditional P(HA) = P(HB) = P(HC) = 0.33
Let’s also say you saw some evidence E and your posteriors are P(HA|E) = 0.4, P(HB|E) = 0.3, P(HC|E) = 0.3. This means that evidence E confirms HA because P(HA|E) > P(HA). This does not mean that you are required to believe that HA is true or bet your life’s savings on it.
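Those numbers can be reproduced mechanically with Bayes’ rule. A minimal sketch (the likelihoods P(E|H) are invented so the posteriors come out 0.4/0.3/0.3):

```python
# Bayesian update over three mutually exclusive hypotheses.
priors = {"HA": 1 / 3, "HB": 1 / 3, "HC": 1 / 3}
likelihoods = {"HA": 0.4, "HB": 0.3, "HC": 0.3}  # P(E | H), made up

# P(E) by total probability, then Bayes' rule for each hypothesis.
evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h in priors:
    verdict = "confirmed" if posteriors[h] > priors[h] else "disconfirmed"
    print(h, round(posteriors[h], 2), verdict)
# HA 0.4 confirmed
# HB 0.3 disconfirmed
# HC 0.3 disconfirmed
```

“Confirmed” here means nothing more than “the posterior exceeds the prior”; HA is confirmed even though its posterior is still below 0.5.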
That’s a really good explanation of part of the problem I was getting at. But that requires considering the three hypotheses as a group rather than in isolation from all other hypotheses to calculate 0.33.
No, it does not.
Let’s say you have a hypothesis HZ. You have a prior for it, say P(HZ) = 0.2, which means that you think that there is a 20% probability that HZ is true and 80% probability that something else is true. Then you see evidence E and it so happens that the posterior for HZ becomes 0.25, so P(HZ|E) = 0.25. This means that evidence E confirmed hypothesis HZ, and that statement requires nothing from whatever other hypotheses (HA, HB, HC, etc.) there might be.
How would you calculate that prior of 0.2? In my original example, my prior was 1, and then you transformed it into 0.33 by dividing by the number of possible hypotheses. You wouldn’t be able to do that without taking the other two possibilities into account. As I said, the issue can be corrected for if the number of hypotheses is known, but not if the number of possibilities is unknown. However, frequently philosophical theories of bayesian confirmation theory don’t consider this problem. From this paper by Morey, Romeijn, and Rouder:
You need to read up on basic Bayesianism.
Priors are always for a specific hypothesis. If your prior is 1, this means you believe this hypothesis unconditionally and no evidence can make you stop believing it.
You are talking about the requirement that all mutually exclusive probabilities must sum to 1. That’s just a property of probabilities and has nothing to do with Bayes.
Yes, it can. To your “known” hypotheses you just add one more which is “something else”.
Really, just go read. You are confused because you misunderstand the basics. Stop with the philosophy and just figure out how the math works.
I’m not arguing with the math; I’m arguing with how the philosophy is often applied. Consider the condition where my prior is greater than my evidence for all choices I’ve looked at, the number of possibilities is unknown, but I still need to make a decision about the problem? As the paper I was originally referencing mentioned, what if all options are false?
You are not arguing, you’re just being incoherent. For example,
...that sentence does not make any sense.
Then the option “something else” is true.
But you can’t pick something else; you have to make a decision
What does “have to make a decision” mean when “all options are false”?
Are you thinking about the situation when you have, say, 10 alternatives with the probabilities of 10% each except for two, one at 11% and one at 9%? None of them are “true” or “false”, you don’t know that. What you probably mean is that even the best option, the 11% alternative, is more likely to be false than true. Yes, but so what? If you have to pick one, you pick the RELATIVE best and if its probability doesn’t cross the 50% threshold, well, them’s the breaks.
Yes that is exactly what I’m getting at. It doesn’t seem reasonable to say you’ve confirmed the 11% alternative. But then there’s another problem, what if you have to make this decision multiple times? Do you throw out the other alternatives and only focus on the 11%? That would lead to status quo bias. So you have to keep the other alternatives in mind, but what do you do with them? Would you then say you’ve confirmed those other alternatives? This is where the necessity of something like falsification comes into play. You’ve got to continue analyzing multiple options as new evidence comes in, but trying to analyze all the alternatives is too difficult, so you need a way to throw out certain alternatives, but you never actually confirm any of them. These problems come up all the time in day to day decision making such as deciding on what’s for dinner tonight.
In the context of the Bayesian confirmation theory, it’s not you who “confirms” the hypothesis. It’s evidence which confirms some hypothesis and that happens at the prior → posterior stage. Once you’re dealing with posteriors, all the confirmation has already been done.
Do you get any evidence to update your posteriors? Is there any benefit to picking different alternatives? If no and no, then sure, you repeat your decision.
No, it would not. That’s not what the status quo bias is.
You keep on using words without understanding their meaning. This is a really bad habit.
When I say throw out I’m talking about halting tests, not changing the decision.
If your problem is which tests to run, then you’re in the experimental design world. Crudely speaking, you want to rank your available tests by how much information they will give you and then do those which have high expected information and discard those which have low expected information.
True.
All you have to do is not simultaneously use “confirm” to mean both “increase the probability of” and “assign high probability to”.
As for throwing out unlikely possibilities to save on computation: that (or some other shortcut) is sometimes necessary but it’s an entirely separate matter from Bayesian confirmation theory or indeed Popperian falsificationism. (Popper just says to rule things out when you’ve disproved them. In your example, you have a bunch of things near to 10% and Popper gives you no licence to throw any of them out.)
Yes, sorry. I’m considering multiple sources which I recognize the rest of you haven’t read, and trying to translate them into short comments which I’m probably not the best person to do so, so I recognize the problem I’m talking about may come out a bit garbled, but I think the quote from the Morey et al. paper I quoted above describes the problem the best.
You see how Morey et al call the position they’re criticizing “Overconfident Bayesianism”? That’s because they’re contrasting it with another way of doing Bayesianism, about which they say “we suspect that most Bayesians adhere to a similar philosophy”. They explicitly say that what they’re advocating is a variety of Bayesian confirmation theory.
The part about deduction from the Morey et al. paper:
Who has been claiming that Bayesian confirmation theory is a tool for generating models?
(It can kinda-sorta be used that way if you have a separate process that generates all possible models, hence the popularity of Solomonoff induction around here. But that’s computationally intractable.)
As stated in my original comment, confirmation is only half the problem to be considered. The other half is inductive inference which is what many people mean when they refer to Bayesian inference. I’m not saying one way is clearly right and the other wrong, but that this is a difficult problem to which the standard solution may not be best.
You’d have to read the Andrew Gelman paper they’re responding to to see a criticism of confirmation.
You don’t need to know the number, you need to know the model (which could have infinite hypotheses in it).
Your model (hypothesis set) could be specified by an infinite number of parameters, say “all possible means and variances of a Gaussian.” You can have a prior on this space, which is a density. You update the density with evidence to get a new density. This is Bayesian stats 101. Why not just go read about it? Bishop’s machine learning book is good.
True, but working from a model is not an inductive method, so it can’t be classified as confirmation through inductive inference which is what I’m criticizing.
You are severely confused about the basics. Please unconfuse yourself before getting to the criticism stage.
??? IlyaShpitser if I understand correctly is talking about creating a model of a prior, collecting evidence, and then determining whether the model is true or false. That’s hypothesis testing, which is deduction; not induction.
You don’t understand.
You have a (possibly infinite) set of hypotheses. You maintain beliefs about this set. As you get more data, your beliefs change. To maintain beliefs you need a distribution/density. To do that you need a model (a model is just a set of densities you consider). You may have a flexible model and let the data decide how flexible you want to be (non-parametric Bayes stuff, I don’t know too much about it), but there’s still a model.
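The simplest conjugate example makes this concrete: the hypothesis set is every real number (candidate means of a Gaussian with known variance), the belief over it is a Gaussian density, and updating that density is two lines of algebra per observation. The numbers below are purely illustrative:

```python
# Maintaining a belief *density* over an infinite hypothesis set:
# each real number is a candidate mean; prior belief is Gaussian,
# so by conjugacy the posterior stays Gaussian after each datum.
prior_mu, prior_var = 0.0, 10.0  # belief about the unknown mean
noise_var = 1.0                  # known observation variance
data = [2.1, 1.9, 2.4, 2.0]

mu, var = prior_mu, prior_var
for x in data:
    # Standard conjugate update for one Gaussian observation:
    # precisions add, and the mean is the precision-weighted average.
    var_new = 1.0 / (1.0 / var + 1.0 / noise_var)
    mu = var_new * (mu / var + x / noise_var)
    var = var_new

print(round(mu, 2), round(var, 2))  # 2.05 0.24
```

The belief concentrates around 2 as data accumulate; no finite enumeration of hypotheses is ever needed.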
Suggesting for the third and final time to get off the internet argument train and go read a book about Bayesian inference.
Oh, sorry I misunderstood your argument. That’s an interesting solution.
That interesting solution is exactly what people doing Bayesian inference do. Any criticism you may have that doesn’t apply to what Ilya describes isn’t a criticism of Bayesian inference.
As much as I hate to do it, I am going to have to agree with Lumifer, you sound confused. Go read Bishop.
Not really. A hypothesis’s prior probability comes from the total of all of your knowledge; in order to determine that P(HA)=0.33 Lumifer needed the additional facts that there were three possibilities that were all equally likely.
It works just as well if I say that my prior is P(HA)=0.5, without any exhaustive enumeration of the other possibilities. Then evidence E confirms HA if P(HA|E)>P(HA).
(One should be suspicious that my prior probability assessment is a good one if I haven’t accounted for all the probability mass, but the mechanisms still work.)
Which is one of the other problems I was getting at.
If you start with inconsistent assumptions, you get inconsistent conclusions. If you believe P(H)=1, P(A&B&C)=1, and P(H|A) etc. are all <1, then you have already made a mistake. Why are you blaming this on Bayesian confirmation theory?
You are confused. If p(H) = 1, p(H, anything) = 1 or 0, so p(H | anything) = 1 or 0, if p(anything) > 0.
Wait, how would you get P(H) = 1?
Fine. p(H) = 0.5, p(H|A) = 0.2, p(H|B) = 0.15, p(H|C) = 0.15 It’s not really relevant to the problem.
The relevance is that it’s a really weird way to set up a problem. If P(H)=1 and P(H|A)=0.4 then it is necessarily the case that P(A)=0. If that’s not immediately obvious to you, you may want to come back to this topic after sleeping on it.
Fair enough.
\sum_i p(H|i) need not add up to p(H) (or indeed to 1).
No, it doesn’t.
Edit—I’m agreeing with you. Sorry if that wasn’t clear.
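A tiny numeric check of the point: take A, B, C as an equal-probability partition with H likely under each of them. The conditionals sum to well over 1 while P(H) stays below it (toy numbers, chosen only for illustration):

```python
# P(H|A) + P(H|B) + P(H|C) is not constrained to equal P(H) or 1.
p = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}  # partition of the space
p_H_given = {"A": 0.9, "B": 0.9, "C": 0.9}  # P(H | each cell)

p_H = sum(p[i] * p_H_given[i] for i in p)  # total probability: 0.9
total = sum(p_H_given.values())            # sum of conditionals: 2.7
print(p_H, total)
```

Only the *weighted* sum (by P(A), P(B), P(C)) is pinned to P(H); the raw sum of conditionals can be anything from 0 to the number of cells.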
This is not true at all.
A large chunk of academics would say that it is. For example, from the paper I was referencing in my post:
That doesn’t at all say Bayesian reasoning assumes only two possibilities. It says Bayesian reasoning assumes you know what all the possibilities are.
True, but how often do you see an explanation of Bayesian reasoning in philosophy that uses more than two possibilities?
. . .
This is a weird sentence to me. I learned about Bayesian inference through Jaynes’ book and surely it doesn’t portray that inference as having only two possible solutions.
The other book I know about, Sivia’s, doesn’t do this either.
You’re referring to how it is described in statistics textbooks. I’m talking about confirmation theory as a philosophy.
From Omnilibrium:
The politics of invasive species
Gives me the Wellkept Gardens Die by Pacifism feel.
How isomorphic are society and online communities? Can the Wellkept Gardens argument be applied that liberally?
How much do you trust economic data released by the Chinese government? I had assumed that economic indicators were manipulated, but recent discussion suggests it is just entirely fabricated, at least as bad as anything the Soviet Union reported. For example, China has reported a ~4.1% unemployment rate for over a decade. Massive global recession? 4.1% unemployment. Huge economic boom? 4.1% unemployment.
One of the largest, most important economies in the world, and I don’t know that we can reliably say much about it at all.
Not much.
If you want to explore further, I recommend this, for example this post.
One interesting point, not expanded upon, is this:
Balding dismisses this by citing Premier Li Keqiang, but I think this objection illustrates a deeper problem with the way the phrase “conspiracy theory” is used. It’s frequently used to dismiss any suggestion that someone in authority is behaving badly regardless of whether an actual conspiracy would be required.
Let’s look at what it would take for Chinese economic data to be bad. The data is gathered by the central government by delegating gathering the data to appropriate individual branches, by province, industry, etc. So what happens if someone at that level decides to fudge with the data for whatever reason (possibly to make his province and/or industry look better). The aggregate data will be wrong. And that’s just one person on one level. In reality, of course, there are many levels in the hierarchy and many corrupt people in all of them.
Stamp’s Law.
Josiah Stamp, 1st Baron Stamp
You misunderstand Balding—he asserts loudly and explicitly that the Chinese authorities misbehave with respect to statistical data. The conspiracy theories he is talking about are the conspiracies of China-watchers and the point of them would be to sow FUD about the Chinese economic development, presumably after shorting China.
I believe VoiceOfRa was talking about the person Balding quoted.
Do you have a 90% confidence interval for what you consider China’s real GDP to be?
Why would I care?
That was a bit… strange.
Huw Price, a professional philosopher who happens to be one of the founders and the Academic Director of the Centre for the Study of Existential Risk (the one in Cambridge, UK), wrote a piece which is quite optimistic about cold fusion in general and Andrea Rossi in particular.
I don’t follow LENR research closely, but Rossi seems like one of the least trustworthy people in the field, which speaks poorly of Huw Price’s judgement, since he especially emphasizes the plausibility of E-Cat.
I’m very OK with using “sociological” factors to make judgments about these things. Rossi has been involved in a number of extremely suspicious operations and did a stint in prison for fraud. Here’s a skeptic’s look at the “independent tests” verifying Rossi’s device.
LENR is under-populated. Independent of whether it is valid or not, the social effects dominate the scientific ones.
Also interesting: the Fleischmann-Pons effect may be unreliable in general, but the heat/helium ratio is claimed to be stable.
Added: I don’t think the paper by the Swedish physicists is smelly either (except insofar as it mentions the E-Cat): Nuclear Spallation and Neutron Capture Induced by Ponderomotive Wave Forcing. Note that the specific resonance frequency of the effect could explain the unreliability of the experiment.
It may appear strange that one of the authors, Rickard Lundin, is an astrophysicist, but he is well established there (look at the citations) and does have significant experience with interactions of ions in strong fields.
Saying this implies that you know what the proper population level is. How do you know?
Social effects dominate the attempts to build a perpetuum mobile as well.
In this I rely on the evaluation of Huw Price who surely has a much better grasp of the field(s) than I do.
Indeed strange. Following up on the linked citations finds things that smell pretty dubious.
Could you link to the citations you find smelly?
Top comment here on the Alexander Parkhomov replication: http://www.e-catworld.com/2014/12/30/alexander-parkhomov-on-calibration-in-his-test/
Claims that this: http://animpossibleinvention.com/2015/10/15/swedish-scientists-claim-lenr-explanation-break-through/ bolsters the case smell like a typical aggrandizing claim, since it is not a replication but simply a speculative paper on the causal mechanism, if such an effect exists. As has been repeated many times, no one is questioning that the energy is there; it’s the mechanism by which it actually provides excess power at low temperatures that is under question. See the comments thread in the Next Big Future piece here: http://nextbigfuture.com/2015/06/chinas-lenr-is-getting-excess-600-watts.html#soa_062bbe85
A review of the more credible replication does cause an update in the positive direction, but only a small one: http://www.infinite-energy.com/iemagazine/issue118/analysis.html
I am confused about free will. I tried to read about it (notably from the sequences) but am still not convinced.
I make choices, all the time, sure, but why do I choose one solution in particular?
My answer would be the sum of my knowledge and past experiences (nurture) and my genome (nature), with quantum randomness playing a role as well, but I can’t see where free will intervenes.
It feels like there is something basic I don’t understand, but I can’t grasp it.
Let’s try a car analogy for a compatibilist position, as I understand it: there is a car, and why does it move? Because it has an engine and wheels and other parts all arranged in a specific pattern. There is no separate “carness” that makes it move (“automobileness” if you will); it is the totality of its parts that makes it a car.
Will is the same, it is the totality of your identity which creates a process by which choices are made. This doesn’t mean there is no such thing any more than the fact that a car is composed of identifiable parts means that no car exists, it is just not a basic indivisible thing.
Your insight is pretty consistent with a lot of philosophers, including my own personal favorite Daniel Dennett. Even if there is a pseudorandom number generator (or a quantum random number generator which might not be pseudo), that our “choices” would be random in this way does not really feel like what people want free will to mean. My reading of Dennett is that our “choices” arise from the law-like operation of our minds, which may be perfectly predictable (if there is no randomness, only the pseudorandomness of classical thermal noise) or may be as predictable as any other physical phenomenon within the limits of quantum unpredictability (if you accept that explanation for what is seen in experiments such as the two-slit experiment and so on).
The thing that amazes me about “free will” is that the “inputs” to what our brain does include the previous “outputs” of what our brain does. So I have decided that, obviously, if I believe that will power exists, the choices my brain makes in the future will be more likely consistent with what I consciously want my brain to do. So in some sense free will does exist, well, almost: if I get myself believing I have choices, my brain will fall more often in the direction of choosing what I consciously want.
There is no substitute for reading Dennett in my opinion, and it is not an easy thing to do.
That sum you speak of is encoded in a massive biological calculator called a brain. Free will is the introspective module of that computer as it examines its own calculations, and that data affects the state of the network to become part of future calculations.
Is that actually the ‘strange loop’ that Hofstadter writes about?
Hofstadter (as I remember—it’s been a long time) took it a step further, granting consciousness to our models of others, and to the models of us that we model in others, etc....
You’ve stated compatibilism, and from that perspective free will tends to look trivial (“you can choose things”) or like magical thinking.
Many people have wanted there to be something special about the act of choosing or making decisions. This is necessary for several moral theories, as they demand a particular sense in which you are responsible for your actions that does not obtain if all your actions have prior causes. This is often related to theories that call for a soul, some sort of you apart from your body, brain, genetics, environment, and randomness. You have a sense of self and many people want that to be very important, as you think of yourself as important (to you, if no one else).
You may have read Douglas Adams and recall him describing the fundamental question of philosophy as what life is all about when you really get down to it, really, I mean really. A fair amount of philosophy can be understood as people tacking “really” onto things and considering that a better question. “Sure you choose, but do you choose what you choose to choose? Is our will really free? I mean really, fundamentally free, when you take away everything else, really?”
Thoughts this week:
Career strategy
Thiel isn’t decisive on the topic. Is the definite-optimist view the dominant approach to candidacy in the grand marketplace of talent today?
Kumon
Kumon franchises are cheap. The branding and reputation are good. Tutoring is a very attractive market in general, and Kumon makes it easier for the teachers. But is it ethical, I wonder? To me it’s ethical if it delivers value to the students. A caveat is that the mind-numbing kind of maths done by my classmates who attended Kumon as kids seemed cruel.
A study with a very small sample size says ‘that there may be a significant relationship between participation in the Kumon programme and development in computation skills (p = 0.053), but not with development in mathematical reasoning skills (p = 0.867)’
Going by that alone, it would seem short-term experimental studies don’t explain the small-sample (2 out of 2) correlation between Kumon attendance as kids and extreme adult success that I see in my friends today. A StackExchange-esque Q&A suggests the rest of the experimental Kumon literature finds no further effects.
Political psychology
If human rights were reframed as entitlements, and laws that protect those contrasted with laws that protect discretions, those two having obvious tensions, I wonder what impact that would have on human rights law reform. Branding is influential.
Effective Altruism politics
According to Sam Deere, an Effective Altruism affiliate and former ALP staffer, the tractability of political ideas for conversion to legislation comes down to: novelty, impact, cost-effectiveness, budgetary considerations, interaction with other policies, and popular, ideological, and strategic/tactical considerations.
A heuristic for lobbying is to focus on Ministers, not departments, other politicians, yada yada...
Nara
I have heard of an exotic Namibian desert plant crowned with thorns, called Nara, that bears a fruit which, when dried, tastes like chocolate but is highly nutritious. I’m skeptical; it sounds too good to be true. I wonder why it isn’t a common treat around the world by now. If the rumours are true, I would love some to grow in the Australian outback, biosecurity permitting.
Is data science a profitable industry?
Anyway, it’s super easy to transmit machine-learning how-to online; it’s the most popular class at Stanford. Can I get a ‘market efficiency’? It’s not long till we automate the basic tasks, since machine learning is an empirical field too, so it can be subject to its own learning in polynomial time. I reckon people neglect this (and thus the market efficiency is quickly gained) because there is little public education for self-taught programmers about this kind of thing, outside of boring lectures.
If you’re a data science training provider—yes.
If you’re employing data scientists in a data rich operating environment in 2016 - yes.
If you’re anyone else...I doubt it.
I’m extremely skeptical about the data science boom. Just because a field is valuable doesn’t mean the companies around it are strategically placed to capture that value in a market for their owners.
Data science currently operates as:
(1) Big data products—on marketplaces like Amazon Web Services where machine learning algorithms are available via the cloud.
(2) Product—offline data analysis automation tools
(3) Service—manually doing (1) or (2)
Amazon currently dominates (1). Microsoft is a close second. Their delivery is on point. They’ve basically created a platform for people to trade their knowledge about data science as algorithms. Since there is a finite number of algorithms, which can be combinatorially generated and tagged for particular applications, the only real challenge is creating a system for triaging a user’s needs into the right algorithmic application.
If there is any big money to be made in this space by new players, it will be in solving that problem.
The issue of algorithm generation for statistical analysis should not be confused with the sophisticated tasks of software and application development. The former requires little creativity, while the latter can utilise immense creativity. I say that as someone whose forte is data analysis rather than software.
As for (2), there is likely to be high efficiency in a market between cloud-based algorithms and algorithms implemented offline, due to extremely low barriers to entry. Basically, the first people in with a good method of translating those algorithms offline, surmounting potential legal hazards, and scaling up (non-trivial tasks) will make a quick buck. Though these are problems that I can define, and if I can define them, the big names probably already have and are working on solutions. You’re out of luck, garage entrepreneurs.
Now (3), the one lay people think of when they think data science, and the one aspiring data scientists entering Kaggle competitions and hopping onto DataCamp think of. This is a highly commodifiable area, subject to total automation and outsourcing. There may be some money to be made here as the ubiquity of data rises and you find loyal, computer-illiterate clients. However, you’ll be picking up scraps the same way web design worked, and continues to work, as a profitable avenue for freelancing oddballs: by essentially ripping off people who don’t know better, when absurdly user-friendly DIY web design options are a reality, or when the guy down the road is a million times better than you and your client just doesn’t know it. Sure, it may be profitable, but ethically it’s questionable.
First principles
Elon Musk cites first principle thinking in physics as a key to identifying neglected market opportunities. Can someone give me an example of how it may work in that application?
Social skills
Simply reframing “approach anxiety” as the crude, macho “bitch butterflies” has done wonders to dampen the phenomenon. I wonder whether that formula could dampen other anxieties...
Concept learning
I wonder what it is like to have genius-level verbal abstract reasoning: e.g. 2SD+, as reported by the usual neuropsychological tests. The Wechsler Adult Intelligence Scale (WAIS) ‘Similarities’ subtest measures verbal abstract reasoning or ‘concept learning’ (see concept learning on Wikipedia and the Edutechwiki). Subjects are asked to say how two seemingly dissimilar items might in fact be similar.
When an average person talks to someone 30 points of IQ below average (IQ 70, the cut-off point for intellectual disability), that experience may be comparable to a genius (130) talking to an average person. When it comes to particular subscales that relate to ‘understanding’, such as concept learning, this may not in fact match up with IQ. It’s conceivable there may be savants with incredibly high concept formation and incredibly low IQs. This presents an additional layer of complexity to a hypothetical interaction between a low-IQ concept savant and a high-IQ concept lay-person that I can’t even simulate mentally. So I’m opening it to the floor as a fun thought experiment.
The big reason for the rise of “data science” is that all operating environments are now, or will soon become, data rich.
An example: I have a friend who is a chemical engineer by training and works for E-Ink. His mandate is to improve the efficiency of the chemical manufacturing plants that produce the material. This work involves a small amount of actual chemistry, and a large amount of statistical analysis of the vast trove of sensor readings and measurements produced by the plant’s operation.
Recently moridinamael wrote about dishwashers:
As a pampered modern person, the worst part of my life is washing dishes. (Or, rinsing dishes and loading the dish washer.) How long before I can buy a robot to automate this for me?
If you reason from first principles, there’s nothing stopping the existence of a device into which you put a pile of dishes and which afterwards sorts them into the cupboard. Especially with the recent advances in machine vision and Google open-sourcing TensorFlow.
Another non-automated kitchen task is cutting vegetables. There’s no good reason why a robot shouldn’t cut vegetables as well as humans do.
http://www.robot-coupe.com/en-exp/catalogue/vegetable-preparation-machines,3/
You could just have a two-dishwasher system where the dishwasher takes the place of the cupboard.
It seems like a robot that automated the task of moving clean dishes into a cupboard would be an idea where the potential benefits, if any, are too small to currently justify the major development effort that would be required. Maybe in the future when AI becomes far more widespread and ‘easy’ to develop.
I think that there are people who don’t like dealing with washing dishes even though they have a dishwasher. I don’t think the task is trivial in the sense that people wouldn’t be willing to invest money into a device that fixes the issue.
Apart from that, a redesigned device that builds on smart sensors and nanotech filters could also operate with a lot less water.
GE’s design of a kitchen of the future with a smart sink that can automatically wash dishes is also interesting.
If I look into my kitchen, the most recent invention is the microwave.
A few health-conscious people I know have nanotech water filters for the water in their sink, but apart from that the kitchen has mostly not changed.
I think that it would be possible to build something better by investing the kind of money that went into Tesla and SpaceX.
I would expect that in a decade we will see a lot more sensors in the average kitchen than today.
The Orbital Systems shower is a good example of how nanotech plus sensors can produce a shower that performs better than the old one.
I think my dream system would be something I can pile all the dirty dishes into which melts them down, separates out the food into something that goes into the trash (this doesn’t have to be automated) and then reconstitutes the dishes.
Something like a Star Trek transporter for tableware?
Imagine what it was like before the dishwasher.
/goes off to watch Downton Abbey :-P
Yes, but the full solution to this is basically AI-complete.
Do you feel the same way about all within-firm IT services? (Including stuff like internal web design in ‘IT.’)
I think that various debates about postmodernism could be of that nature. Postmodernists often employ a large number of concepts.
If you hire a good web designer to do your website, the designer has experience in creating websites and ordering information. A good designer can do a better job.
In the same sense a person who doesn’t understand what concepts like sensitivity and specificity mean won’t be able to use data analysis tools well.
People are snapping up data scientists in preparation for the move from data science as a product to data science as a commodity. Historically, big companies have been awful at making this kind of transition, but once they realize their mistake, they are eager to make up for lost time. An entrepreneur who aims to be bought up by one of these market laggards could do really well.
The classic Musk example is pricing the raw materials needed to make a spaceship: he saw that the actual price of a spaceship was many times its raw-material cost, so he figured there were probably efficiency problems that people simply hadn’t solved.
Could somebody who has the English translation of The Spanish Ballad by Feuchtwanger post that piece about Lancelot being in disgrace over his hesitation to sit in the cart into rationality quotes thread? Thank you.
Could you post your own translation?
I only have the Russian translation from the German. :(
The Fed recently announced a small interest rate hike, but rates remain astonishingly low in the US and in most other countries. In several countries the interest rate is negative—you have to pay the bank to hold your money—a bizarre situation which many economists previously dismissed as a theoretical impossibility.
How should individuals respond to this weird macroeconomic situation? My naive analysis is that demand for investment opportunities far outstrips supply, so we should be trying to find new ways to invest money. Perhaps we should all be doing part-time real estate investing? Are there other simple investment strategies that individuals are in a better position to pursue than big investment firms?
Do not buy bonds and be wary of bubbles (a lot of underutilized money sloshing around tends to lead to asset inflation).
I would probably say that the supply of investment funds far outstrips demand :-) but you would be concerned about “new ways to invest money” only if you had significant money to invest, which is not a common situation on LW.
Yes, there is a class of investment strategies which go by the name of “liquidity constrained”. If there is a small… market inefficiency out of which you can extract, say, $100,000/year but no more, none of the big investment firms would bother—it’s not worth their time. But for an individual it often is.
Can you please say more about these and how to find them?
Liquidity is a characteristic of a financial asset which, without going into technicalities, is an indicator of how quickly and how cheaply one can buy or sell large amounts of this particular asset in the open market.
Some assets—like the common stock of Apple or US Treasury bills—are very liquid. There is a continuous market, very large volumes are changing hands daily, and orders to buy and sell are filled rapidly, with low transaction costs and without pushing the market.
Some assets—like specific bonds or, say, tracts of land in Maine—are not liquid. Buying or selling them will take time and will be expensive in terms of transaction costs. If you want to buy (or sell) a lot of these, you will likely push the market (if you’re buying you’ll push the price up, if you’re selling you’ll push the price down), sometimes considerably so.
The problem with investment strategies which rely on buying and selling illiquid assets is that they do not scale. You might be able to achieve high returns on small amounts of capital, but you cannot put more capital into this trade because the trade will then break. Hedge funds and such are not interested in investment strategies which do not scale because it’s too few dollars for too much hassle.
What this means is that trade opportunities in small, obscure, illiquid niches of financial markets are not exploited by the big fish and so could remain “open” for a long time. Remember that the self-adjusting feature of the market is not magic, it only works if somebody does commit capital to “fixing” the market inefficiency. If no one does, the inefficiency does not go away on its own.
This implies that if you search the (preferably obscure) little nooks and crannies of markets, your chances of finding a free lunch are much higher than in popular, liquid markets that everyone likes to play in.
Two warnings, though. First, what appears to be free cheese might turn out to be located in a mousetrap. Examine the circumstances carefully. Second, niche markets often have local players which understand this particular market better than you do, so see the first point.
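A toy numerical sketch of why such strategies don’t scale (the `annual_profit` function and all numbers here are invented for illustration, not taken from any real trade): the niche offers a good edge, but only on a capped number of extractable dollars per year, so the percentage return collapses as capital grows.

```python
# Toy model of a liquidity-constrained strategy: the inefficiency
# yields a fixed edge, but the niche can only absorb so much
# capital before the trade breaks (all numbers are made up).

def annual_profit(capital, edge=0.20, cap=100_000):
    """Profit grows with capital only up to a hard dollar cap."""
    return min(capital * edge, cap)

for capital in [100_000, 500_000, 5_000_000, 50_000_000]:
    profit = annual_profit(capital)
    print(f"${capital:>11,}: profit ${profit:>8,.0f}, "
          f"return {profit / capital:.1%}")
```

At $100k of capital the return is 20%/year, which is excellent; at $50M it is 0.2%, which is why a hedge fund never bothers while an individual happily does.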
Could you give examples of what you mean that existed a few years ago and are now exploited, so that no further money can be made and you don’t lose anything by openly sharing the information?
During the ’80s and ’90s a number of firms sprouted up around buying and selling penny stocks via strategies like cold calling.
I’m not sure that
hiring a bunch of people to do annoying phone calls
is what Lumifer has in mind when he talks about trading opportunities in illiquid niches of the financial markets. Here’s an example that did not scale well: The New York Times Magazine: “Paper Boys”.
I don’t think that the article says something about problems that come with scale. It rather suggests that the first attempts were lucky.
There are also ethical issues with the business model of buying up debt and then hiring ex-convicts to collect that debt.
Another, much smaller, example.
Edit: typo.
I don’t think that’s an example of someone investing money into an asset. It’s a bet on default rates not changing.
Do you believe that people without special expertise are capable of finding and evaluating those opportunities?
What’s “special expertise”? Like most everything in life, this requires the capability and the willingness to learn and figure things out.
How many hours do you think the average person on LW would need to invest to pick up the relevant skills?
The question is ill-defined; it’s like asking how many hours it takes to learn to program in, say, R. The answers can plausibly range from “a few hours” to “a lifetime”.
Besides, in this case some skills will be general (e.g. risk management), equivalent to the ability to code, and some will be very very specific (e.g. knowledge of the regulatory regime in a particular narrow field), equivalent to understanding some library very well.
I don’t think that anyone ever thought that paying the bank to hold your money was a theoretical impossibility—paid checking accounts are not a new thing. What is supposed to be ‘impossible’ is for bank loans to have a negative interest rate—if the bank pays you to borrow money. Of course, even that was/is only ‘impossible’ with certain exceptions (specifically, deflation is bad for lenders; but they try to predict deflation, and try not to loan at a negative real rate).
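A quick sketch of the nominal-vs-real distinction behind that last parenthetical, using the exact Fisher relation (the rate numbers here are invented for illustration): what the lender actually earns is the nominal rate adjusted for inflation, so under deflation even a small positive nominal rate yields a solid real return, while a nominal rate can dip slightly negative without the real rate going negative only if deflation is deep enough.

```python
# Real rate via the exact Fisher relation:
#   (1 + real) = (1 + nominal) / (1 + inflation)

def real_rate(nominal, inflation):
    return (1 + nominal) / (1 + inflation) - 1

print(f"{real_rate(0.02, 0.015):.3%}")   # mild inflation eats most of 2%
print(f"{real_rate(0.005, -0.01):.3%}")  # deflation boosts the real return
print(f"{real_rate(-0.005, 0.01):.3%}")  # negative nominal + inflation: negative real
```

The third case is the one lenders try hard to avoid: lending at a negative real rate means being paid back in money worth less than what was lent.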
If reports are correct, this is sort of an example of a transplant version of the Trolley problem in the wild: http://timesofindia.indiatimes.com/world/middle-east/Islamic-State-sanctioned-organ-harvesting-in-document-taken-in-US-raid/articleshow/50326036.cms
Not really. It’s not a trolley problem to divert the trolley into a group of ants, and ISIS thinks of non-Muslims as ants.
Additionally, it wants to scare its enemies by being brutal.
Where can I find The Browser’s Golden giraffes competition nominees? They have deleted the list and I don’t have an offline copy.
Can you find an archived copy on archive.org?
No, they don’t have it.
Thoughts this week, part 2
Sweat equity marketplaces
Does anyone know why online sweat-equity marketplaces never took off? Their websites are basically non-functional. I can see the potential for a sweat-equity marketplace in a surprising number of fields—say, cash-strapped writers looking for an editor.
Nuremberg principles
Love and subjective well-being
Love has too complex a relationship with happiness for me to want to try to make rational decisions about it.
Health prioritization
Wow. Did not expect that.
I read it in an article about the stigma of mental health issues among entrepreneurs. To try to find the original source, I googled the statistic and found this nicely designed page, an advertisement.
This led me to this lovely project, which led me to this initiative, which I foresee financial analysts using to make stock-market predictions based on people’s sentiment!
The article says:
The lack of existing term sheets seems to be a huge barrier to getting the project adopted.
The traditional model is that there are publishing houses with expertise in judging which books are likely to be successful. A cash-strapped writer can go to one of these, and the publishing house pays for all additional expenses.
On the other hand, the average editor doesn’t want to invest the time to see whether a book is viable before investing in it.
Why would an editor want to do work whose compensation is very risky (maybe you’re editing the next Harry Potter, but >80% of the time it’s going to lose money) instead of work whose compensation is certain?
This is compounded by adverse selection: the better my thing is, the less willing I will be to sell equity, and so the equity markets will be mostly flooded with garbage.
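The argument in the two comments above can be put as a back-of-the-envelope expected-value calculation (all probabilities and payouts here are invented for illustration, loosely following the “>80% of books lose money” figure):

```python
# Expected value of editing for equity vs. a certain flat fee,
# with invented numbers. Adverse selection makes the equity
# option even worse than this naive calculation suggests.

outcomes = [           # (probability, payout to the editor)
    (0.85,      0),    # book loses money; equity is worthless
    (0.13,  3_000),    # modest success
    (0.02, 50_000),    # rare hit
]
expected_equity = sum(p * payout for p, payout in outcomes)
flat_fee = 2_000       # hypothetical certain payment for the same work

print(f"expected equity payout: ${expected_equity:,.0f}")
print(f"certain flat fee:       ${flat_fee:,.0f}")
```

Even before accounting for risk aversion or adverse selection, the equity option here has a lower expected value than the flat fee; with adverse selection, the equity pool skews toward the books whose authors know they are weak, making it worse still.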