“And I don’t expect I will ever have to do that.”
You do not sound 100% certain.
It would be nice if the top-scoring all-time posts really reflected their impact. Right now there is some bias toward newer posts. Plus, Eliezer’s sequences appeared at OB first, which greatly reduced their LW upvotes.
Possible solution: every time a post is linked to from a new post, it gets an automatic upvote (perhaps we don’t count it if it’s linked to by the same author). I don’t know if it’s technically feasible.
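A rough sketch of the kind of logic I have in mind (all function and field names here are purely hypothetical, not the actual LW codebase):

```python
# Purely hypothetical sketch of the auto-upvote-on-link idea; the real
# site code and its data model may look nothing like this.
def handle_new_post(new_post, all_posts):
    for linked_id in new_post.outgoing_post_links:   # hypothetical field
        linked = all_posts.get(linked_id)
        if linked is None:
            continue
        # skip self-links so authors can't upvote themselves by linking
        if linked.author_id == new_post.author_id:
            continue
        linked.karma += 1   # the automatic upvote
```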
I’d be glad to discuss it.
good point
That would be great. I’d love to see the results.
In the first example, you couldn’t play unless you had at least $100M in assets. Why would someone with that much money risk $100M to win a measly $100K, when the expected payoff is so bad?
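To put rough numbers on it (assuming, hypothetically, that losing the bet means forfeiting the full $100M with probability $1-p$ and winning $100K with probability $p$):

$$E[\text{payoff}] = p \cdot 100\text{K} - (1 - p) \cdot 100\text{M} > 0 \iff p > \frac{100\text{M}}{100\text{M} + 100\text{K}} \approx 0.999$$

so the win probability would have to be better than about 999 in 1,000 just to break even.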
In cases where a scientist is using a software package that they are uncomfortable with, I think the output basically serves as the only error check. First, they copy some sample code and try to adapt it to their data (without really understanding what the program does). Then, they run the software. If the results are about what they expected, they think “well, we must have done it right.” If the results are different than they expected, they might try a few more times and eventually get someone involved who knows what they are doing.
Good find. Thanks.
Error finding: I strongly suspect that people are better at finding errors if they know there is an error.
For example, suppose we did an experiment where we randomized computer programmers into two groups. Both groups are given computer code and asked to try to find a mistake. The first group is told that there is definitely one coding error. The second group is told that there might be an error, but there also might not be one. My guess is that, even if you give both groups the same amount of time to look, group 1 would have a higher error-identification success rate.
Does anyone here know of a reference to a study that has looked at that issue? Is there a name for it?
Thanks
Yes, that’s a good point. That would be considered using a data augmentation prior (Sander Greenland has advocated such an approach).
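A minimal sketch of the general flavor (prior information entered as pseudo-data, here for a simple proportion; this only illustrates the idea, not Greenland’s full method):

```python
# Toy illustration of a data-augmentation prior for a proportion.
# A Beta(a, b) prior is encoded as a pseudo-heads and b pseudo-tails
# appended to the observed data; the ordinary sample proportion on the
# augmented data equals the Bayesian posterior mean.
a, b = 3, 3            # prior pseudo-counts (assumed for illustration)
heads, n = 7, 10       # observed data (assumed for illustration)

augmented_estimate = (heads + a) / (n + a + b)   # sample proportion on augmented data
posterior_mean = (a + heads) / (a + b + n)       # mean of Beta(a + heads, b + n - heads)
assert abs(augmented_estimate - posterior_mean) < 1e-12
```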
only if you keep specifying hyper-priors, which there is no reason to do
In the second example the person was speaking informally, but there is nothing wrong with specifying a probability distribution for an unknown parameter (and that parameter could be the probability of heads).
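For instance (just one standard illustration), you could put a Beta prior on the heads probability $\theta$:

$$\theta \sim \mathrm{Beta}(\alpha, \beta), \qquad \theta \mid k \text{ heads in } n \text{ flips} \sim \mathrm{Beta}(\alpha + k,\ \beta + n - k)$$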
Hm, good point. Since the usual value is .5, the claim should be the alternative hypothesis. I was thinking in terms of trying to reject their claim (which it wouldn’t take much data to do), but I do think my setup was non-standard. I’ll fix it later today.
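Sketching the standard setup generically (the exact alternative depends on the direction of their claim):

$$H_0\!: \theta = 0.5 \qquad \text{vs.} \qquad H_a\!: \theta \neq 0.5$$

with the claimed value sitting in the alternative rather than the null.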
Very good examples of perceptions driving self-selection.
It might be useful to discuss direct and indirect effects.
Suppose we want to compare fatality rates if everyone drove a Volvo versus if no one did. If the fatality rate were lower in the former scenario than in the latter, that would indicate that Volvos (causally) decrease fatality rates.
It’s possible that the effect operates entirely through an indirect pathway. For example, the decrease in the fatality rate might be entirely due to behavior changes (maybe when you get in a Volvo you think ‘safety’ and drive more slowly). On the DAG, we would have an arrow from volvo to behavior to fatality, and no arrow from volvo directly to fatality.
A total causal effect is much easier to estimate. We would need to assume ignorability (treatment assignment conditionally independent of the potential outcomes, given covariates). And even though safer drivers might tend to self-select into the Volvo group, the selection is never uniform. Safe drivers who choose other vehicles would be given a lot of weight in the analysis. We would just have to have good, detailed data on predictors of driver safety.
Estimating direct and indirect effects is much harder. Typically it requires assuming ignorability of the intervention and the mediator(s). It also typically involves indexing counterfactuals with non-manipulable variables.
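A small simulation of the sort of DAG described above (volvo to behavior to fatality, with no direct volvo-to-fatality arrow; all numbers are made up for illustration):

```python
# Simulate volvo -> behavior -> fatality with no direct effect of volvo
# on fatality. The total effect of volvo is nonzero, but once behavior
# (the mediator) is held fixed, the direct effect is approximately zero.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

volvo = rng.binomial(1, 0.5, n)                    # randomized for simplicity
# driving a Volvo makes cautious behavior more likely (made-up numbers)
behavior = rng.binomial(1, 0.3 + 0.4 * volvo, n)   # 1 = drives cautiously
# fatality probability depends only on behavior, not on volvo directly
p_fatal = 0.010 - 0.006 * behavior
fatal = rng.binomial(1, p_fatal, n)

total_effect = fatal[volvo == 1].mean() - fatal[volvo == 0].mean()
direct_effect = (fatal[(volvo == 1) & (behavior == 1)].mean()
                 - fatal[(volvo == 0) & (behavior == 1)].mean())
# (simple stratification on the mediator works here only because the
# simulation has no mediator-outcome confounding)
print(f"total effect:  {total_effect:+.4f}")   # noticeably negative
print(f"direct effect: {direct_effect:+.4f}")  # approximately zero
```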
As an aside: a machine learning graduate student worked with me last year, and in most of the simulated-data settings we explored, logistic regression outperformed the SVM.
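For what it’s worth, here is the skeleton of that kind of comparison (a minimal sketch assuming scikit-learn; this is not the actual code we used):

```python
# Minimal sketch of a logistic-regression-vs-SVM comparison on simulated
# data (not the actual simulations referred to above).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC()),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```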
In my opinion, the post doesn’t warrant −90 karma points. That’s pretty harsh. I think you have plenty to contribute to this site—I hope the negative karma doesn’t discourage you from participating, but rather, encourages you to refine your arguments (perhaps get feedback in the open thread first?)
How about spreading rationality?
This site, I suspect, mostly attracts high IQ analytical types who would have significantly higher levels of rationality than most people, even if they had never stumbled upon LessWrong.
It would be great if the community could come up with a plan (and implement it) to reach a wider audience. When I’ve sent LW/OB links to people who don’t seem to think much about these topics, they often react with one of several criticisms: the post was too hard to read (written at too high a level); the author was too arrogant (which I think women particularly dislike); or the topic was too obscure.
Some have tried to reach a wider audience. Richard Dawkins seems to want to spread the good word. Yet, I think sometimes he’s too condescending. Bill Maher took on religion in his movie Religulous, but again, I think he turned a lot of people off with his approach.
A lot has been written here about why people think what they think and what prevents people from changing their minds. Why not use that knowledge to come up with a plan to reach a wider audience? I think the marginal payoff could be large.
But: “You can be a virtue ethicist whose virtue is to do the consequentialist thing to do”
Please elaborate.
Perhaps a better title would be “Bayes’ Theorem Illustrated (My Ways)”
In the first example you use shapes of various colors and sizes to illustrate the ideas visually. In the second example, you use plain rectangles of approximately the same size. If I were a visual learner, I don’t know if your post would help me much.
I think you’re on the right track in example one. You might want to use shapes whose relative areas are easier to estimate. It’s hard to tell whether one triangle is twice as big as another (as measured by area), but it’s easier with rectangles of the same height (where you just vary the width). More importantly, I think it would help to show the math with shapes. For example, I would suggest that figure 18 show P(door 2) = the orange triangle from figure 17 divided by (the orange triangle plus the blue triangle from figure 17), but with the division shown using the shapes themselves. When I teach, I sometimes do this with Venn diagrams (showing division of chunks of circles and rectangles to illustrate conditional probability).
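In symbols, what I have in mind (using the figure numbers above) is roughly:

$$P(\text{door 2} \mid \text{evidence}) = \frac{\text{area of the orange triangle (fig. 17)}}{\text{area of the orange triangle} + \text{area of the blue triangle (fig. 17)}}$$

but drawn with the shapes themselves standing in for the areas.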
If you look at Table 2 in the paper, it shows doses of each vitamin for every study that is considered low risk for bias. I count 9 studies that have vitamin A <10,000 IU and vitamin E <300 IU, which is what PhilGoetz said are good dosage levels.
The point estimates from those 9 studies (see figure 2) are: 2.88, 0.18, 3.3, 2.11, 1.05, 1.02, 0.78, 0.87, 1.99 (values above 1 favor control).
Based on this quick look at the studies, I don’t see any reason to believe that a “hockey stick” model will show a benefit of supplements at lower dose levels.
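As a crude illustration only (unweighted and ignoring confidence intervals, so not a real meta-analysis), here is what those nine point estimates look like in aggregate:

```python
# Crude, unweighted summary of the nine point estimates listed above
# (ratios where values above 1 favor control). This ignores study sizes
# and confidence intervals, so it is illustrative only.
import math

estimates = [2.88, 0.18, 3.3, 2.11, 1.05, 1.02, 0.78, 0.87, 1.99]

above_1 = sum(e > 1 for e in estimates)
geo_mean = math.exp(sum(math.log(e) for e in estimates) / len(estimates))

print(f"{above_1} of {len(estimates)} estimates favor control (>1)")
print(f"unweighted geometric mean: {geo_mean:.2f}")   # comes out above 1
```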