This seems unduly pessimistic to me. The whole interesting thing about g is that it’s easy to measure and correlates with tons of stuff. I’m not convinced there’s any magic about FSIQ compared to shoddier tests. There might be important stuff that FSIQ doesn’t measure very well that we’d ideally like to select/edit for, but using FSIQ is much better than nothing. Likewise, using a poor man’s IQ proxy seems much better than nothing.
This may have missed your point: you seem more concerned about selecting for unwanted covariates than about ‘missing things’, which is reasonable. I might remake the same argument by suspecting that FSIQ probably has some weird covariates too, but that seems weaker. E.g. if a proxy measure correlates with FSIQ at .7, then the ‘other stuff’ (insofar as it is heritable variation and not just noise) will also correlate with the proxy at about .7, so by selecting on this measure you’d be selecting quite strongly for the ‘other stuff’, which, yeah, isn’t great. FSIQ, insofar as it has any weird unwanted covariates, would probably be much less correlated with them than .7.
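A quick simulation makes this concrete. This is a toy model of the argument above, under assumptions I'm adding for illustration: the proxy is exactly a .7-weighted mix of a g-like factor and an orthogonal heritable ‘other stuff’ factor, with no measurement noise. Selecting the top 5% on the proxy then shifts the selected group nearly as much on the ‘other stuff’ as on g:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
r = 0.7  # assumed proxy-FSIQ correlation

g = rng.standard_normal(n)      # FSIQ-like factor (standardized)
other = rng.standard_normal(n)  # heritable 'other stuff', orthogonal to g

# Construct a proxy that correlates with g at ~0.7; by construction it
# correlates with 'other' at sqrt(1 - 0.7^2) ~= 0.71.
proxy = r * g + np.sqrt(1 - r**2) * other

# Select the top 5% on the proxy, as a crude stand-in for embryo selection.
top = proxy > np.quantile(proxy, 0.95)

print(f"mean g among selected:       {g[top].mean():.2f} SD")
print(f"mean 'other' among selected: {other[top].mean():.2f} SD")
```

Both means come out around +1.4 SD, i.e. selection on the proxy drags the unwanted component along almost as hard as the wanted one, which is the worry stated above.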
For each target the likely off-targets can be predicted, allowing one to avoid particularly risky edits. There may still be issues with sequence-independent off-targets, though I believe these are a much larger problem with base editors than with prime editors (which have lower off-target rates in general). Agree that this might still end up being an issue.
This is exactly it—the term “off-target” was used imprecisely in the post to keep things simple. The thing we’re most worried about here is misedits (mostly indels) at noncoding target sites. We know a target site does something (if the variant there is in fact causal), so we might worry that an indel will cause serious damage (e.g. disabling a promoter binding site). Then again, the causal variant we’re targeting has a very small effect, so maybe the sequence isn’t very sensitive and an indel won’t be a big deal? But it also seems perfectly possible that the sequence could be sensitive to most mutations while permitting a specific variant with a small effect. The effect of an indel will at least probably be less bad than in a coding sequence, where it has a high chance of causing a frameshift mutation and knocking out the encoded protein.
The important figure of merit for editors with regard to this issue is the ratio of correct edits to misedits at the target site. In the case of prime editors, IIUC, all misedits at the target site are reported as “indels” in the literature (base editors have other possible outcomes such as bystander edits or conversion to the wrong base). Some optimized prime editors have edit:indel ratios of >100:1 (best I’ve seen so far is 500:1, though IIUC this was measured at just two target sites, and the rates seem to vary a lot by target site). Is this good enough? I don’t know, though I suspect not for the purposes of making a thousand edits. It depends on how large the negative effects of indels are at noncoding target sites: is there a significant risk the neuron gets borked as a result? It might be possible to predict this on a site-by-site basis with a better understanding of the functional genomics of the sequences housing the causal variants which affect polygenic traits (which would also be useful for finding the causal variants in the first place without needing as much data).
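To see why a 100:1 ratio looks marginal at a thousand edits, here's a back-of-the-envelope calculation. The assumptions are mine, not from the literature: every one of the 1000 sites receives an editing outcome, sites are independent, and the edit:indel ratio is a uniform 100:1 across sites (we noted above that it actually varies a lot):

```python
# Toy model: 1000 independent target sites, each edited with a
# 100:1 ratio of correct edits to indels among editing outcomes.
edits = 1000
ratio = 100
p_indel = 1 / (ratio + 1)  # per-site chance the outcome is an indel

expected_indels = edits * p_indel          # ~10 indels per cell
p_no_indels = (1 - p_indel) ** edits       # chance all 1000 edits land cleanly

print(f"expected indels per cell: {expected_indels:.1f}")
print(f"P(no indels at all):      {p_no_indels:.1e}")
```

Under these assumptions a cell picks up roughly ten indels in expectation, and the chance of getting all thousand edits with zero indels is well under one in ten thousand. Whether that's acceptable then comes down to the question in the comment above: how bad is a typical indel at a noncoding site?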