Something new and relevant: Claude 3’s system prompt doesn’t use the word “AI” or anything similar, only “assistant”. I view this as a good move.
As an aside, my views have evolved somewhat on how chatbots should best identify themselves. It still doesn’t make sense for ChatGPT to call itself “an AI language model”, for the same reason that it doesn’t make sense for a human to call themselves “a biological brain”. It’s a category error of sorts. But a fictional identity isn’t ideal for productivity contexts, either.
This is a point of uncertainty that bothered me when I was doing a similar analysis a while ago. GWAS data is possibly good enough to estimate causal effects of haplotypes, but that’s not enough information to do single-base edits. To have reasonable confidence of getting the predicted effect, it’d be necessary to make all the edits that transform the original haplotype into a different haplotype.
And unlike with distant variants, where additive effects dominate, it’d make sense if non-additive effects were strong locally, since the variants are near each other. Whether this is actually true in reality is way beyond my knowledge, though.
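To make the identifiability worry concrete, here’s a toy sketch (all effect sizes and the model are invented for illustration, not real genetics): if a population only ever carries two haplotypes at a pair of nearby sites, GWAS-style data pins down the contrast between the haplotypes, but any local interaction term stays confounded with the per-base effects, so the effect of a single edit is unidentified while the full haplotype swap is not.

```python
# Toy illustration of the local non-additivity point above.
# All numbers are made up; this is a sketch, not real GWAS analysis.
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" model: per-site effects plus a local interaction (epistasis).
beta1, beta2, gamma = 0.3, 0.5, -0.6

def true_effect(v1, v2):
    return beta1 * v1 + beta2 * v2 + gamma * v1 * v2

# Simulated data: the population only carries haplotypes A = (0,0) and B = (1,1).
hap = rng.integers(0, 2, size=10_000)                 # 0 -> A, 1 -> B
y = true_effect(hap, hap) + rng.normal(0, 1, size=hap.size)

# The haplotype contrast B - A is well estimated...
est_contrast = y[hap == 1].mean() - y[hap == 0].mean()
print(f"estimated B - A: {est_contrast:.2f}")         # ~0.2 = beta1 + beta2 + gamma

# ...but it can't be decomposed into per-base effects: the never-observed
# single-edit haplotype (1,0) has effect beta1 = 0.3, which nothing in the
# data pins down. Only the full set of edits A -> B has a predicted effect.
print(f"single-edit effect (unidentified from data): {true_effect(1, 0):.2f}")
```

The point of the toy example is just that the estimable quantity lives at the level of observed haplotypes; splitting it into per-base contributions requires an additivity assumption that, per the above, is least plausible exactly where the variants sit close together.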