I did consider the distinction between a model of humans in general vs. a model of you personally. But I can’t see any realistic way of stopping models from building better models of humans in general over time. So yeah, I agree with you that small pockets of sanity are currently the best we can hope for. I wrote this post mainly to spread that pocket of sanity from infosec into the alignment space, because I consider the minds of alignment researchers to be critical assets.
As for why predictive models of humans in general seem unstoppable: I thought it would be too much to ask people not to provide even anonymized data, because a lot of genuinely useful capabilities are enabled by it (e.g. better medical diagnoses). And even if withholding it weren’t too heavy a capability loss, most people would still provide data because they simply don’t care or remain unaware. That’s why I chose the wording “stem the flow of data and delay timelines” rather than “stop the flow.”