One idea I was thinking about over the last few days: academic hoaxes have been used many times over the past few decades to reveal shoddy standards in journals/subfields. The Sokal affair is probably the most famous, but there's a whole list of others linked on its Wikipedia page. Thing is, that sort of hoax always took a fair bit of effort; writing bullshit that sounds good isn't trivial! So, as a method for policing scientific rigor, it was hard to scale up without a lot of resources.
But now we have GPT-2/3, which potentially changes the math dramatically.
I'd guess that a single small team, possibly even a single person, could generate and submit hundreds or even thousands of bullshit papers in parallel. That sort of sustained pressure could change journals' incentives in a way that the occasional sting doesn't. There'd probably be an arms race for a little while: journals/reviewers deploying cheap screening tricks in place of proper checks, bullshit-generators finding ways around those defenses. But I think there's a decent chance the end result would be proper rigor in reviews.
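To make the scale claim concrete, here's a minimal sketch of what mass generation might look like today, using the freely downloadable GPT-2 via the HuggingFace transformers library. The model choice, prompt, and sampling settings are illustrative assumptions on my part, not a recipe anyone has actually used:

```python
# Minimal sketch: mass-producing plausible-sounding "paper" text with GPT-2.
# Model choice, prompt, and sampling settings are illustrative assumptions.
from transformers import pipeline, set_seed

set_seed(42)  # reproducible sampling for the demo

# GPT-2 is freely downloadable; a GPT-3-class model would be called via an API instead.
generator = pipeline("text-generation", model="gpt2")

prompt = "Abstract: In this paper, we propose a novel framework for"

# One call yields many independent drafts; scaling up is just a bigger number here.
drafts = generator(
    prompt,
    max_new_tokens=200,       # length of each generated continuation
    num_return_sequences=5,   # how many "papers" to draft per call
    do_sample=True,           # sample rather than greedy-decode, for variety
    temperature=0.9,          # higher temperature -> more varied output
)

for i, draft in enumerate(drafts, start=1):
    print(f"--- Draft {i} ---")
    print(draft["generated_text"], "\n")
```

Each draft costs seconds of commodity compute, which is the point: the marginal cost of a hoax submission drops to roughly zero, and only the submission logistics remain.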
This would just greatly increase the amount of credentialism in academia.
I.e., unless you're affiliated with a highly elite institution or a renowned scholar, no one's even gonna look at your paper.
I agree this is a likely outcome, though I also think there’s at least a 30% chance that the blackhats could find ways around it. Journals can’t just lock it down to people the editors know personally without losing the large majority of their contributors.
This tries to solve the problem of ‘bad papers getting published’, but doesn’t seem to touch ‘good papers not getting published’.