My earlier comment on this question, where I argue no, for precisely the same reasons (i.e., if the generated samples are indistinguishable from human samples and ‘pollute’ the dataset, then mission accomplished).
I tend to agree with you. But I am not sure that our way of distinguishing AI-generated from human-generated content will reach the perfection required for this to “work”. Assuming that the mechanism for distinguishing the two remains at least somewhat imperfect, a bit of a feedback loop will remain, which will slow down development.
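
To make the feedback-loop worry concrete, here is a minimal toy simulation of a training pool where each generation adds AI-generated text and an imperfect filter catches only part of it, so the undetected synthetic fraction accumulates. All names and parameter values (`filter_recall`, `synthetic_added_per_gen`, etc.) are hypothetical illustrations, not claims about any real system.

```python
def simulate_pool(generations=10, initial_human=1.0,
                  synthetic_added_per_gen=0.5, filter_recall=0.9):
    """Toy model: an imperfect filter removes only `filter_recall`
    of newly added AI-generated text each generation; the rest
    leaks into the pool that trains the next model."""
    human = initial_human   # mass of human-written data (held fixed here)
    synthetic = 0.0         # mass of undetected AI-generated data
    for gen in range(1, generations + 1):
        # Only the fraction missed by the filter enters the pool.
        synthetic += synthetic_added_per_gen * (1 - filter_recall)
        frac = synthetic / (human + synthetic)
        print(f"gen {gen:2d}: undetected synthetic fraction = {frac:.2%}")

if __name__ == "__main__":
    simulate_pool()
```

Even with a 90% detection rate in this sketch, the undetected synthetic share grows steadily, which is the sense in which an imperfect distinguishing mechanism leaves a residual feedback loop.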