Do vaccinated children have higher income as adults?
I replicate a paper on the 1963 measles vaccine, and find that it is unable to answer the question.
https://twitter.com/michael_wiebe/status/1750197740603367689
New replication: I find that the results in Moretti (AER 2021) are caused by coding errors. The paper studies agglomeration effects for innovation (do bigger cities cause technological progress?), but the results supporting a causal interpretation don’t hold up.
https://twitter.com/michael_wiebe/status/1749462957132759489
What was the effect of reservists joining the protests? This says: “Some 10,000 military reservists were so upset, they pledged to stop showing up for duty.” Does that mean they were actively ‘on strike’ from their duties? It looks like they’re now doing grassroots support (distributing aid).
Yeah, I do reanalysis of observational studies rather than rerunning experiments.
Do you have any specific papers in mind?
But isn’t it problematic to start the analysis at “superhuman AGI exists”? Then we need to make assumptions about how that AGI came into being. What are those assumptions, and how robust are they?
Why start the analysis at superhuman AGI? Why not solve the problem of aligning AI for the entire trajectory from current AI to superhuman AGI?
Also came here to say that ‘latter’ and ‘former’ are mixed up.
In particular, we should be interested in how long it will take for AGIs to proceed from human-level intelligence to superintelligence, which we’ll call the takeoff period.
Why is this the right framing? Why not focus on the duration between 50% human-level and superintelligence? (Or p% human-level for general p.)
So it seems very likely to me that eventually we will be able to create AIs that can generalise well enough to produce human-level performance on a wide range of tasks, including abstract low-data tasks like running a company.
Notice how unobjectionable this claim is: it’s consistent with AGI being developed in a million years.
If you’re loss averse, the expected value could easily be negative: cost(voting for wrong candidate) > benefit(voting for right candidate).
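A minimal sketch of this point, with illustrative numbers (the probabilities and the loss-aversion coefficient are my assumptions, not from the comment):

```python
def expected_value(p_right, benefit, cost, loss_aversion=2.0):
    """Expected value of voting when losses are weighted more heavily than gains.

    p_right: probability of voting for the right candidate
    benefit: gain from voting for the right candidate
    cost: loss from voting for the wrong candidate (before loss weighting)
    loss_aversion: multiplier on losses (Kahneman-Tversky estimates around 2)
    """
    return p_right * benefit - (1 - p_right) * loss_aversion * cost

# Even with a 60% chance of picking the right candidate and symmetric
# stakes, loss aversion can flip the expected value negative:
print(expected_value(p_right=0.6, benefit=1.0, cost=1.0))  # 0.6 - 0.4*2 = -0.2
```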
I was astonished to find myself having ascended to the pantheon of those who have made major contributions to human knowledge
Is this your own evaluation of your work?
If the “tear apart the stars” prophecy just refers to Harry harvesting the stars for resources, then Voldemort looks really stupid for misinterpreting it.
Now Hermione learns Patronus 2.0 and destroys Azkaban. So both the Boy-Who-Lived and the Girl-Who-Revived can kill dementors. Sounds like “surviving/defeating Voldemort” is a plausible cover for explaining the origin of the ability to destroy dementors.
Isn’t Harry saying this to Draco after Draco has been obliviated? Draco has no idea what Harry’s talking about.
Shouldn’t Harry have fallen to his knees twenty seconds earlier, if he originally heard/saw the explosion via Voldie-simulcast?
“Harry, let me verify that your Time-Turner hasn’t been used,” said Professor McGonagall.
“LOOK OVER THERE!” Harry screamed, already sprinting for the door.
Should you “trust literatures, not papers”?
I replicated the literature on meritocratic promotion in China, and found that the evidence is not robust.
https://twitter.com/michael_wiebe/status/1750572525439062384