The influence can only proceed via their actual treatment.
But the question is whether it’s safe to advise people to wait, knowing that they can have surgery later if needed.
Anyway my main question was whether I’d done the stats right.
Well, I was only going to post all the minutiae if there was any interest...
http://jama.ama-assn.org/cgi/reprint/295/3/285.pdf
The two groups are as follows:
Assigned to “watchful waiting”: 336 patients, of whom 17 had problems after 2 years.
Assigned to surgery: 317 patients, of whom 7 had problems after 2 years.
Some patients crossed between the two groups, but this does not matter, as they were testing the effects of the initial assignment.
They report p = 0.52, but they also give a 95% confidence interval for the difference in risk, which only just contains zero; that’s a dead giveaway that p should be around 0.05, right? Anyway, doing a chi-squared test on the above numbers, I got p = 0.053.
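For anyone who wants to check my arithmetic, here’s a sketch of that calculation. The uncorrected chi-squared test on a 2×2 table is equivalent to a two-proportion z-test with pooled variance (chi² = z²), so it can be done with nothing but the standard library:

```python
import math

# Counts from the paper: (events, n) for each arm
ww_events, ww_n = 17, 336      # watchful waiting
surg_events, surg_n = 7, 317   # surgical repair

# Two-proportion z-test with pooled variance; z**2 equals the
# uncorrected chi-squared statistic for the 2x2 table.
p_pool = (ww_events + surg_events) / (ww_n + surg_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / ww_n + 1 / surg_n))
z = (ww_events / ww_n - surg_events / surg_n) / se

# Two-sided p-value from the normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
print(f"z = {z:.3f}, chi2 = {z**2:.3f}, p = {p_value:.4f}")  # p comes out just above 0.05
```

Note this is the version without Yates’ continuity correction; with the correction the p-value comes out somewhat higher, but still nowhere near 0.52.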
The relevant bit is at the top of page 289 (page 6 of the PDF). Also relevant are the Results section of the abstract, and Figures 1 and 2. Essentially the entire problem is this statement:
At 2 years, intention-to-treat analyses showed that pain interfering with activities developed in similar proportions in both groups (5.1% for watchful waiting vs 2.2% for surgical repair; difference 2.86%; 95% confidence interval, −0.04% to 5.77%; P=.52)
I recently had to have some minor surgery. However, there’s a body of thought that says it’s safe to wait and watch for symptoms, and only have surgery later. There’s a peer reviewed (I assume) paper supporting this position.
Upon reading this paper I found what looked like a statistical error. Looking at outcomes between two groups, they report p = 0.52, but doing the sums myself I got p = 0.053. For this reason, I went and had the surgery.
Since I’m just a novice at statistics, I was wondering if I had in fact got it right—it’s disturbing to think that a peer reviewed paper stating an important conclusion would be wrong.
If any dan-level statistician here has the inclination, I’ll post a link to the paper here for your perusal...
In science, that step is already done.
Only in general, but not for specific questions like: does compound XYZ affect tumour growth?
why theologians never come up with arguments disproving the existence of God
Well if they do they get called philosophers of religion instead...
However, to state that it is “one in which the null hypothesis is always true” is to make a bold claim about your level of knowledge.
OK. But the point about what we can conclude about regular science stands even if this is only mostly correct.
I really like the idea of parapsychology as the control group for science; it deserves to be better known.
In a symmetric war
True, but these are pretty rare these days.
Of course, Kant distinguished between two different meanings of “should”: the hypothetical and the categorical.
If you want to be a better Go player, you should study the games of Honinbo Shusaku.
You should pull the baby off the rail track.
This seems useful here...
the only reason, as far as I can tell, why the MWI is being chosen as the source of the dilemma is because we’re already starting with the assumption that the MWI is correct and relevant here.
I think we’re starting with the assumption that it’s vastly more likely than the other possible explanations.
After all these experiments, all you know is that the LHC isn’t turning on. You don’t really have evidence of anything going in potential parallel universes.
Sure you do—the probability of you making the observation that the LHC persistently fails to turn on is something like 1 if MWI is true and if a functional LHC would destroy the world; it’s surely much lower otherwise.
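To put a toy number on that likelihood argument (every figure here is invented purely for illustration, including the prior and the per-attempt failure rate), a straight Bayes update looks like this:

```python
# Toy Bayes update for the LHC thought experiment. All numbers are
# made up for illustration; only the structure of the argument matters.
prior = 1e-6          # prior on "MWI is true AND a working LHC destroys the world"
n_failures = 10       # independent startup attempts, all of which failed
p_fail_normal = 0.1   # assumed chance any one attempt fails for mundane reasons

likelihood_h = 1.0                          # under the hypothesis we only ever see failures
likelihood_not = p_fail_normal ** n_failures  # mundane explanation: 0.1**10 = 1e-10

posterior = (likelihood_h * prior) / (
    likelihood_h * prior + likelihood_not * (1 - prior)
)
print(posterior)  # ends up near 1 despite the tiny prior
```

The point is just that a long enough run of failures swamps even a very small prior, because the mundane likelihood shrinks geometrically while the anthropic one doesn’t.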
“Statistically significant results” mean that there’s a 5% chance that results are wrong
Hmm. Assuming the experiment was run correctly, it means there’s a less than 5% chance that data at least this extreme would have been generated if the null hypothesis—that nothing interesting was happening—were true. The actual significance level can be set at e.g. 1%, 0.01%, or whatever.
Also, assuming everything was done correctly, it’s really the conclusions drawn from the results, rather than the results themselves, that might be wrong...
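A quick simulation makes the 5% threshold concrete (the setup here—two identical fair coins and a pooled two-proportion z-test—is arbitrary, chosen just for illustration): when the null really is true, about 5% of experiments come out “significant” anyway.

```python
import math
import random

# Two groups drawn from the SAME fair coin, so the null hypothesis is true
# by construction; count how often the z-test nonetheless rejects at 5%.
random.seed(0)
trials, n, hits = 10_000, 200, 0
for _ in range(trials):
    a = sum(random.random() < 0.5 for _ in range(n))
    b = sum(random.random() < 0.5 for _ in range(n))
    p_pool = (a + b) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = (a - b) / n / se
    if abs(z) > 1.96:  # two-sided 5% criterion
        hits += 1
print(hits / trials)  # close to 0.05
```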
a “ko” rule which says that the location of the last move played can make a difference
That information could however be considered part of the current position.
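One way to make that concrete (a sketch only—the names are mine, not from any actual Go library): a rules engine can fold the ko information into the position itself, so that two boards with identical stones but different ko states compare as different positions. One could equally store the last move played.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Position:
    """A Go position for rule purposes, including ko state."""
    stones: frozenset                    # e.g. frozenset of ((x, y), colour) pairs
    to_move: str                         # "black" or "white"
    ko_point: Optional[Tuple[int, int]]  # intersection forbidden by ko, if any

# Same stones, same player to move, but different ko states:
no_ko = Position(frozenset(), "black", None)
with_ko = Position(frozenset(), "black", (3, 3))
print(no_ko == with_ko)  # False: the ko state distinguishes the positions
```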
I’m perfectly happy with the idea that there could be stuff that we can’t know about simply because it’s too “distant” in some sense for us to experience it; it sends no signals or information our way. I’m not sure anyone here would deny this possibility.
But if that stuff interacts with our stuff then we certainly can know about it.
On the other hand, there is no proof that X is not dependent upon or manipulated in (scientifically) unfathomable ways by a larger X-prime
But is there any reason to favour this more complex hypothesis?
to avoid circularity, it is sufficient to take the MRCA of a few indisputable mammalian groups such as primates, rodents, carnivores, ungulates, etc. to include all mammals
But the MRCA of “indisputable” groups won’t be an ancestor of basal groups like the monotremes or marsupials.
However, there’s no dispute about including monotremes. The clade that excludes them is called the Theria. Likewise with the marsupials: the clade that excludes both them and the monotremes is the Eutheria. Every clade potentially has a name; Mammalia is just a particularly well known one.
Things get dicey if the evolutionary relationships are unclear, of course, or if some conventional group is recognised as not being a true clade.
Read the comments to this TED talk, and try not to kill yourself in despair.
Mmm. You can usually tell that something’s a celestial object, and thus not a flying object, without being able to classify it further...
Argh, how silly of me not to see that. I stop reading at the references! Honestly, though, it’s annoying that the abstract remains wrong.