The authors do say “non-negligible”, though. And it’s a simulation study. Besides, in the limitations section they acknowledge the absence of literature on many of the biological parameters.
My prior for fomite transmission of respiratory viruses is very low: a 1988 article on rhinovirus, a hamster SARS-CoV-2 study, human case series. I don’t have time to do a serious review, though.
As an aside (though it does address the actual question), I’m developing a model to do precisely what you ask: providing health guidance and dashboards tailored to your desire to know yourself, your risk propensity (or rather, risk aversion), and the data you can generate. I’m focusing on the PoC and business side for now, so I can’t discuss the specifics here.
As you said, it’s a cohort study. The cohorts are pretty different (glucosamine/chondroitin users were more white, educated, non-smoking, and physically active than non-users, though also older in comparison), and the adjustment could be skewed by the small number of users. Still, it’s mostly harmless and the effect size looks promising. Personally, I’d do a quick review of the literature to decide whether to commit to taking the supplement, and set up an alert on PubMed.
New paper on downstream viral load stratified by source and severity
The evidence on viral load is still poor: https://www.cebm.net/covid-19/sars-cov-2-viral-load-and-the-severity-of-covid-19/
Remdesevir (lopinavir + ritonavir) (HIV)
A little mistake with the parentheses: they’re different things.
I’m interested
Interesting question. I found this article https://arxiv.org/abs/1802.07740 together with the papers that cite it https://ui.adsabs.harvard.edu/abs/2018arXiv180207740R/citations as a good starting point.
Subscribe to / get notifications for new comments on a post. I already have enough tabs open in the browser to keep track of all the interesting posts :)
The title doesn’t seem to fit the question well: P-hacking detection does not map cleanly onto replicability, even though the presence of p-hacking usually means that the study will not replicate.
I’m interested in automatic summarization of papers’ key characteristics (PICO, sample size, methods), and I’ll be starting to build something soon.
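A minimal sketch of the kind of structured output I have in mind (the `PaperSummary` fields, the toy abstract, and the regex heuristic are purely illustrative assumptions on my part, not an existing tool):

```python
# Illustrative sketch only: a target data structure for a paper summary and a
# crude sample-size heuristic. Field names and patterns are assumptions, not a
# real extraction pipeline.
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class PaperSummary:
    population: Optional[str] = None    # P of PICO
    intervention: Optional[str] = None  # I
    comparator: Optional[str] = None    # C
    outcome: Optional[str] = None       # O
    sample_size: Optional[int] = None
    methods: Optional[str] = None       # e.g. "RCT", "prospective cohort"


def guess_sample_size(abstract: str) -> Optional[int]:
    """Rough heuristic: take the largest 'n = 123' or '123 participants' match."""
    patterns = [
        r"[nN]\s*=\s*([\d,]+)",
        r"([\d,]+)\s+(?:patients|participants|subjects)",
    ]
    candidates = []
    for pat in patterns:
        for m in re.finditer(pat, abstract):
            digits = m.group(1).replace(",", "").strip()
            if digits:
                candidates.append(int(digits))
    return max(candidates) if candidates else None


if __name__ == "__main__":
    # Toy abstract, invented purely for the example.
    abstract = ("We conducted a prospective cohort study of 1,234 participants "
                "comparing supplement users with non-users on all-cause mortality.")
    summary = PaperSummary(
        population="adults in a prospective cohort",
        intervention="supplement use",
        comparator="non-users",
        outcome="all-cause mortality",
        sample_size=guess_sample_size(abstract),
        methods="prospective cohort",
    )
    print(summary)
```

In practice the extraction would be a proper NLP step rather than regexes; the point here is just the shape of the summary I’d want per paper.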
You’re referring to the general population, I guess, so it could be a reusable device you can blow your nose into; a (manual?) vacuum system could then suck the remaining mucus from the nostrils. To avoid contact between the hands and the pathogens, the device would be pressed against the nasal base, maybe with the thumb and middle finger under the nostrils and the index finger on the bridge of the nose. A size that fits in a pocket would be similar to a vaporizer pen, with mini plastic bags to throw away when full, transparent so the contents can be examined for medical purposes, and the device should be reloadable with fresh bags.
I can see a lot of engineering problems with that, but the function would be performed efficiently, unless I’m missing something.
“That’s useful when you have many professionals who need a common language but which disagree about the causes of mental illnesses.”
In the proposed framework, this means that the field lacks Foundational Understanding. Thus I wouldn’t feel comfortable calling the DSM an ontology, though there is, e.g., the Mental Disease Ontology, which sometimes maps to the DSM.
This post is so good! I was just wondering whether this framework could be useful for the prediction business, where the Foundational Understanding is crowd-sourced through e.g. academic literature, open data, and manual curation. Ontologies might be created and curated by public consortia, and evaluation could be a public-private endeavour.
We don’t really have a metric for meaning or impact, though.
And even if we had decent metrics, they would only gain value over time, since the impact of a discovery becomes evident only after a while (think patents, landmark papers, new disciplines).
For the most part it seems to me that people are scared to work on problems that are actually meaningful.
It appears to me that the incentive system is the real issue here. UBI or some kind of basic job might release a lot of people from their publishing cages, allowing them to work on research fundamentals: gathering good data, working on theory and methodology, replicating studies.
Kudos for trying to address the issue; late is better than never. If you believe there are possible risks should the methods be used, the first thing to do would be a retraction, since an erratum/corrigendum presupposes that the conclusions still hold. Acting quickly could also prevent legal trouble.