I’m positive that as these language models become more accessible and powerful, their misuse will grow massively. However, I believe open sourcing is the best option here; having access to such models allows us to build accurate automatic classifiers that detect their outputs. Media websites (e.g. Wikipedia, Twitter) could include such a classifier in their submission pipeline for new content.
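For illustration, a submission gate along those lines might look roughly like the sketch below. This assumes the Hugging Face `transformers` library and the publicly released `roberta-base-openai-detector` checkpoint (trained to flag GPT-2 outputs); the threshold is made up, and a real deployment would need calibration.

```python
# Minimal sketch of a submission-pipeline check. Assumes the
# `transformers` library and the released GPT-2 output detector
# checkpoint; the 0.9 threshold is illustrative, not tuned.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def allow_submission(text: str, threshold: float = 0.9) -> bool:
    """Reject a submission if the detector is confident it is machine-generated."""
    result = detector(text, truncation=True)[0]
    # This detector labels suspected machine-generated text as "Fake".
    return not (result["label"] == "Fake" and result["score"] >= threshold)
```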
Making such technologies closed source leaves researchers in the dark; given the scaling-transformer hype, only a tiny fraction of the world’s population has the financial means to train a SOTA transformer model.
After some consideration, I agree with you. Still can’t say I’m happy about it, but it’s a better option than closed source, for sure.