I’m trying to see what makes those numbers so implausible, and as far as I understand (at least without looking into regional data), the most surprising/suspicious thing is that the number of new Delta cases is dropping too fast.
But why shouldn’t it be dropping fast? The odds of a person getting Omicron (as opposed to Delta) are growing quickly. If we assume those odds are (# of Omicron cases)/(# of Delta cases) times some coefficient (like their relative R_0), then Omicron’s fast doubling can take them from 1:2 to 4:1 in just a week. That would make new Delta cases, among the population for which Omicron and Delta compete (as in, people destined to get one or the other), drop from 66% to 20%: more than a threefold drop.
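To make that arithmetic concrete, here is a minimal sketch in Python. The numbers are made up for illustration (the 1:2 starting odds and the eightfold weekly growth of the odds are assumptions, not estimates from any dataset):

```python
def delta_share(omicron_to_delta_odds: float) -> float:
    """Fraction of new cases that are Delta, given Omicron:Delta odds."""
    return 1 / (1 + omicron_to_delta_odds)

odds_start = 1 / 2               # 1:2 odds -> Delta is 2/3 of new cases
odds_after_week = odds_start * 8  # odds roughly double every 2-3 days -> ~8x in a week, i.e. 4:1

print(delta_share(odds_start))        # ~0.67
print(delta_share(odds_after_week))   # 0.20, i.e. more than a threefold drop in Delta's share
```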
In the real world there are no people destined to get Covid. But there are unvaccinated people who go unmasked to a club with hundreds of other people like them, and who keep doing it until they get Covid. This and other similar modes of behavior seem like a close enough approximation of “people destined to get Covid”. Is it close enough? Are there enough people like that, compared to people for whom Omicron and Delta don’t compete that much? I don’t know; quite possibly not.
Does this mean that, in order to notice that the nowcast data is suspicious, I need some knowledge of how different variants compete with each other? Can someone ELI5 to me how this competition happens? Am I missing something else?
I’m no programmer, so I have no comment on the “how to develop” part. The “safe” part, though, seems extremely unsafe to me.
1) Your strategy relies on a human supervisor’s ability to recognize a threat that a superintelligence has disguised, which is doomed to fail almost by definition.
2) The supervisor himself is not protected from possible threats. He is also one of the main targets the AI would want to influence.
3) >Moreover, the artificial agent won’t be able to change the operational system of the computer, its own code or any offline task that could fundamentally change the system.
I don’t see what kind of manual supervision could possibly accomplish that, even if none of the other problems existed.
4) Human experts don’t have a “complete understanding” of any subject worth mentioning, certainly nothing involving biology. So your AI will just produce a text that convinces them the proposed solution is safe. Being superintelligent, it will be able to do that even if the solution is not in fact safe. Or it might produce other dangerous texts, such as texts that convince them to lie to you that the solution is safe.