Should we expect a future where most people use GPT-like tools to generate text, but 90% of people use the models trained by 2 or 3 large companies?
This could allow amazing thought control of the population. If you want to suppress some ideas, just train your model to be less likely to generate them. As a consequence, the ideas will disappear from many people’s articles, blogs, school essays.
Many people will publicly deny or downplay their use of GPT, so they will unknowingly provide cover for this manipulation. People will underestimate the degree of control if they e.g. keep believing that news articles are still written by human journalists, when in fact the journalist's job will consist of providing a prompt and then choosing the best of a few generated articles.
Similarly, bloggers who generate their texts will be much more productive than bloggers who actually write them. Yes, many readers will reward quality over quantity, but quality can be achieved in ways other than writing the articles, for example by figuring out interesting prompts (such as “explain the Fourier Transform using analogies from Game of Thrones”), or by using some other tricks to give the blog a unique flavor.
What the companies need (and I do not know how difficult this would be technically) is to reverse-engineer why GPT produced certain outputs. For example, you train a model on some inputs. You ask it some questions, and select the inconvenient answers. Then you ask which input texts contributed most strongly to generating the inconvenient answers. You remove those texts from the training set, and train a new model. This could even be fully automated, if you can write an algorithm that asks the questions and predicts which answers would be inconvenient.
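(Techniques in this direction already exist under the name "training data attribution", e.g. influence functions or TracIn-style gradient similarity. Below is a minimal sketch of what the filtering loop might look like, assuming a Hugging Face-style causal language model and tokenizer; all names and the choice of method are my illustrative assumptions, not a description of how any company actually does it.)

```python
# Hypothetical sketch: rank training texts by how strongly they push the model
# toward an "inconvenient" answer, using a TracIn-style approximation
# (influence ~ dot product of the loss gradients of the two texts).
import torch

def loss_gradient(model, tokenizer, text):
    """Flattened gradient of the language-modeling loss on a single text."""
    model.zero_grad()
    batch = tokenizer(text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])

def influence_scores(model, tokenizer, training_texts, inconvenient_answer):
    """Score each training text by its estimated influence on the answer."""
    g_test = loss_gradient(model, tokenizer, inconvenient_answer)
    scores = []
    for text in training_texts:
        g_train = loss_gradient(model, tokenizer, text)
        scores.append(torch.dot(g_train, g_test).item())
    return sorted(zip(scores, training_texts), reverse=True)

def filter_training_set(model, tokenizer, training_texts,
                        inconvenient_answers, k=100):
    """The loop from the paragraph above: drop the top-k most influential
    texts per inconvenient answer, then retrain on what remains (retraining
    itself is only gestured at here)."""
    to_remove = set()
    for answer in inconvenient_answers:
        ranked = influence_scores(model, tokenizer, training_texts, answer)
        to_remove.update(text for _, text in ranked[:k])
    return [t for t in training_texts if t not in to_remove]
```

The fully automated version would just wrap this in an outer loop: another model generates the probing questions, a classifier flags which answers count as inconvenient, and the training set shrinks accordingly on every iteration.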
Welcome to the glorious future where 99% of people support the party line on their social networks, because that is what their GPT-based comment-generating plugins have produced, and they were too lazy to change it.