Fascism is bad, Christian.
My response is we have fancy computers and lots of storage—there’s no need to do psychometric models of the brain with one parameter anymore, we can leave that to the poor folks in the early 1900s.
How many parameters does a good model of the game of Go have, again? And the human brain is still a lot more complicated than Go.
There are lots of ways to show single parameter models are silly, for example discussions of whether Trump is “stupid” or not that keep going around in circles.
“Well, suppose that factor analysis was a perfect model. Would that mean that we’re all born with some single number g that determines how good we are at thinking?”
“Determines” is a causal word. Factor analysis will not determine causality for you.
I agree with your conclusion, though: g is not a real thing that exists.
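Since this point keeps coming up, here is a minimal sketch of it, with made-up numbers and numpy/scikit-learn assumed: scores generated by a chain of mutual influences, with no single common cause anywhere, still yield a strong "general factor," so a good one-factor fit by itself says nothing about the causal structure.

```python
# Toy example (made-up data): scores generated by a chain of mutual
# influences, with no single common cause anywhere, still yield a strong
# "general factor". The fit tells you nothing about the causal structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 5000

# Four "test scores" produced by a causal chain x1 -> x2 -> x3 -> x4,
# each with its own noise, and no latent g anywhere in the model.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
x3 = 0.8 * x2 + rng.normal(scale=0.6, size=n)
x4 = 0.8 * x3 + rng.normal(scale=0.6, size=n)
X = np.column_stack([x1, x2, x3, x4])

# A one-factor model still extracts a factor with large positive loadings
# on every score, i.e. it "finds g" in data that has no g in it.
fa = FactorAnalysis(n_components=1).fit(X)
print("loadings:", fa.components_.round(2))
print("average log-likelihood:", round(float(fa.score(X)), 2))
```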
You should be doing stuff like this if you want to understand the effects of masks:
https://arxiv.org/pdf/2103.04472.pdf
https://auai.org/uai2021/pdf/uai2021.89.preliminary.pdf (this really is preliminary, e.g. they have not yet uploaded a newer version that incorporates peer review suggestions).
---
You can't do the kind of thing in the second paper without worrying about the issues in the first (unless your model is very simple).
Pretty interesting.
Since you are interested in policies that operate along some paths only, you might find these of interest:
https://pubmed.ncbi.nlm.nih.gov/31565035/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6330047/
We have some recent stuff on generalizing MDPs to have a causal model inside every state (‘path dependent structural equation models’, to appear in UAI this year).
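That paper isn't out yet, so the following is only a hypothetical toy sketch of the general idea, not the paper's actual construction: each MDP state carries its own small structural equation model, and a transition evaluates the mechanisms given the chosen action. All names here are made up.

```python
# Hypothetical toy sketch only (not the paper's construction): an MDP whose
# states each carry a small structural equation model, so a transition means
# evaluating causal mechanisms given the chosen action.
import random
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class CausalState:
    # Each variable has a mechanism: (current values, action) -> new value.
    # Mechanisms are assumed to be listed in topological order.
    mechanisms: Dict[str, Callable[[Dict[str, float], int], float]]
    values: Dict[str, float] = field(default_factory=dict)

    def step(self, action: int) -> "CausalState":
        new_values: Dict[str, float] = {}
        for var, f in self.mechanisms.items():
            # A mechanism sees last step's values plus this step's already
            # computed variables (its within-step causal parents).
            new_values[var] = f({**self.values, **new_values}, action)
        return CausalState(self.mechanisms, new_values)

# Toy mechanisms: U is exogenous noise, X responds to the action, Y to X.
mechs = {
    "U": lambda v, a: random.gauss(0, 1),
    "X": lambda v, a: 0.5 * v.get("X", 0.0) + a + v["U"],
    "Y": lambda v, a: 2.0 * v["X"] + random.gauss(0, 0.1),
}
state = CausalState(mechs, {"U": 0.0, "X": 0.0, "Y": 0.0})
for action in [0, 1, 1]:
    state = state.step(action)
    print(state.values)
```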
3: No, that will never work with DL by itself (e.g. as fancy regressions).
4: No, that will never work with DL by itself (e.g. as fancy regressions).
5: I don’t understand this question, but people already use DL for RL, so the “support” part is already true. If the question is asking whether DL can substitute for doing interventions, then the answer is a very qualified “yes,” but the secret sauce isn’t DL, it’s other things (e.g. causal inference) that use DL as a subroutine.
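To make the "DL as a subroutine" point concrete, here is a toy sketch with simulated data: the causal content is the adjustment (g-formula) step, and the flexible regression inside it could be a deep net. A small sklearn MLP stands in for the deep net here.

```python
# Toy sketch (simulated data): the causal content is the adjustment formula
# E_X[ E[Y | A=a, X] ]; the flexible regression inside it could be a deep
# net. Here a small sklearn MLP stands in for "DL as a subroutine".
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 1))                            # confounder
a = rng.binomial(1, 1.0 / (1.0 + np.exp(-x[:, 0])))    # treatment depends on X
y = 2.0 * a + x[:, 0] + rng.normal(scale=0.5, size=n)  # true effect is 2.0

# Step 1 (the regression subroutine): fit E[Y | A, X] flexibly.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(np.column_stack([a, x]), y)

# Step 2 (the causal part): average the fitted regression over X with A held
# fixed, i.e. the plug-in g-formula, and contrast A=1 against A=0.
mu1 = model.predict(np.column_stack([np.ones(n), x])).mean()
mu0 = model.predict(np.column_stack([np.zeros(n), x])).mean()
print("g-formula estimate of the effect:", round(float(mu1 - mu0), 2))
```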
---
The problem is, most folks who aren't doing data science for a living themselves only view data science advances through the lens of hype, fashion trends, and press releases, and so get an entirely wrong sense of what is truly groundbreaking and important.
If there is, I don’t know it.
There’s a ton of work on general sensitivity analysis in the semi-parametric stats literature.
If there is really both reverse causation and regular causation between Xr and Y, you have a cycle, and you have to explain what the semantics of that cycle are. (That's not a deal breaker, but it's not so simple to do. For example, if you think the cycle really represents mutual causation over time, what you really should do is unroll your causal diagram so it's a DAG over time, and redo the problem there.)
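Here is a toy illustration of the unrolling suggestion, with arbitrary coefficients: instead of a cyclic graph between Xr and Y, you write down X_t -> Y_t and Y_t -> X_{t+1}, which is an ordinary DAG indexed by time.

```python
# Toy illustration (arbitrary coefficients): instead of a cyclic graph
# between Xr and Y, unroll over time: X_t -> Y_t and Y_t -> X_{t+1}.
# The unrolled model is an ordinary DAG, and time-indexed causal questions
# ("effect of X_t on Y_{t+k}") are well-posed in it.
import numpy as np

rng = np.random.default_rng(0)
n, T = 10_000, 5
x = rng.normal(size=n)
y = rng.normal(size=n)
for t in range(T):
    y = 0.6 * x + rng.normal(scale=0.5, size=n)   # X_t -> Y_t
    x = 0.4 * y + rng.normal(scale=0.5, size=n)   # Y_t -> X_{t+1}

print("corr(X_T, Y_{T-1}):", round(float(np.corrcoef(x, y)[0, 1]), 2))
```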
You might be interested in this paper (https://arxiv.org/pdf/1611.09414.pdf) that splits the outcome rather than the treatment (although I don't really endorse that paper).

---
The real question is, why should Xc be unconfounded with Y? In an RCT you get lack of confounding by study design (but then we don’t need to split the treatment at all). But this is not really realistic in general—can you think of some practical examples where you would get lucky in this way?
Christian, I don’t usually post here anymore, but I am going to reiterate a point I made recently: advocating for a vaccine that isn’t adequately tested is coming close to health fraud.
Testing requirements are fairly onerous, but that is for a good reason.
Recommending this to others seems to be coming pretty close to health fraud.
The reasonably ponderous systems in place for checking if things work and aren’t too risky are there for a reason.
“That’s the test. Would you put it in your arm rather than do nothing? And if the answer here is no, then, please, show your work.”
It seems like an odd position to shift the burden of proof onto the vaccine taker rather than the scientist.

---
I think a lot of people, you included, are way overconfident about how transmissible B.1.1.7 is.
90% of the work ought to go into figuring out what fairness measure you want and why. Not so easy. Also not really a “math problem.” Most ML papers on fairness just solve math problems.
A whole paper, huh.
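To give a sense of why the choice of measure is the hard part, here is a toy example with made-up labels and predictions: the same classifier can satisfy one standard fairness criterion and violate another, and no amount of optimization settles which one you should have wanted.

```python
# Toy example (made-up labels and predictions): the same classifier can
# satisfy one standard fairness criterion and violate another, so the real
# work is arguing for a measure, not optimizing it afterwards.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical protected attribute

for g in (0, 1):
    m = group == g
    selection_rate = y_pred[m].mean()            # demographic parity compares this
    tpr = y_pred[m][y_true[m] == 1].mean()       # equalized odds compares this
    print(f"group {g}: selection rate = {selection_rate:.2f}, TPR = {tpr:.2f}")

# Here the selection rates are equal (demographic parity holds) while the
# true positive rates differ (equalized odds is violated). Neither criterion
# is "correct" without an argument about what fairness should mean in context.
```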
---
I am contesting the whole Extremely Online LessWrong Way™ of engaging with the world, whereby people post a lot and pontificate rather than spending all day reading the actual literature or doing actual work.
“Unless you’d put someone vulnerable at risk, why are you letting another day of your life go by not living it to its fullest? ”
As soon as you start advocating behavior changes based on associational evidence you leave the path of wisdom.
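A toy simulation of the problem, with all numbers made up: the crude association between a behavior A and an outcome Y has the opposite sign of the effect of actually changing A, because a confounder drives both.

```python
# Toy simulation (all numbers made up): the crude association between a
# behavior A and an outcome Y has the wrong sign because a confounder U
# drives both, so a behavior change urged on the association alone backfires.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.binomial(1, 0.5, size=n)                 # confounder, e.g. baseline risk
a = rng.binomial(1, 0.2 + 0.6 * u)               # high-risk people adopt A more
y = 2.0 * u - 0.5 * a + rng.normal(size=n)       # A actually lowers Y

naive = y[a == 1].mean() - y[a == 0].mean()
# Adjusting for the confounder (stratify on U, then average) recovers the
# sign of the effect you would see under an actual intervention on A.
adjusted = np.mean([
    y[(a == 1) & (u == 1)].mean() - y[(a == 0) & (u == 1)].mean(),
    y[(a == 1) & (u == 0)].mean() - y[(a == 0) & (u == 0)].mean(),
])
print(f"naive difference: {naive:+.2f}, adjusted: {adjusted:+.2f}")
```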
---
You sure seem to have a lot of opinions about statisticians being conservative about making claims, without having bothered to read up on the relevant history and why this conservatism might have developed in the field.
You can read Halpern’s stuff if you want an axiomatization of something like the responses to the do-operator.
Or you can try to understand the relationship of do() and counterfactual random variables, and try to formulate causality as a missing data problem (whereby a full data distribution on counterfactuals and an observed data distribution on factuals are related via a coarsening process).
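As a toy illustration of the missing-data view, with simulated data and a randomized treatment: the full data are the counterfactual pairs (Y(0), Y(1)); the observed data are the coarsened version Y = Y(A), with the other potential outcome missing for each unit.

```python
# Toy illustration (simulated, randomized treatment): the "full data" are the
# counterfactual pairs (Y(0), Y(1)); the observed data are the coarsening
# Y = Y(A), with the other potential outcome missing for each unit.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y0 = rng.normal(size=n)                          # counterfactual outcome under A=0
y1 = y0 + 1.0 + rng.normal(scale=0.2, size=n)    # under A=1; true effect is 1.0

a = rng.binomial(1, 0.5, size=n)                 # randomization: A independent of (Y(0), Y(1))
y_obs = np.where(a == 1, y1, y0)                 # coarsening: only Y(A) is observed

print("full-data effect:    ", round(float((y1 - y0).mean()), 3))
print("observed-data effect:", round(float(y_obs[a == 1].mean() - y_obs[a == 0].mean()), 3))
```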
What Cummings is proposing is formalism with a thin veneer of Silicon Valley jargon, like "startups" or whatever, designed to be palatable to people like the ones who frequent this website.
He couldn't be clearer about where his influences are coming from; he cites them at the end. It's Moldbug and Siskind (Siskind's email leaks show what his real opinions are; he's just being a bit coy).
The proposed system is not going to be more democratic, it is going to be more formalist.