Here’s an AI disaster story that came to me while thinking about the above.
Schools start requiring students to use the education system’s official AI to criticise their own essays, and rewrite them until the AI finds them acceptable. This also removes the labour of marking from teachers.
All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. “Teachers” become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.
Or out of school.
Social media platforms are required to silently limit distribution of materials that the AI scores poorly.
AIs used in such important capacities would have to agree with each other, or public confusion might result. There would therefore have to be a single AI, a central institution for managing the whole state. For public safety, no other AIs of similar capabilities would be permitted.
Prompt engineering becomes a criminal offence, akin to computer hacking.
Access to public archives of old books, online or offline, is limited to adults able to demonstrate to the AI’s satisfaction that they have an approved reason for consulting them, and will not be corrupted by the wrong thoughts they contain. Physical books are discouraged in favour of online AI rewrites. New books must be vetted by AI. Freedom of speech is the freedom to speak truth. Truth is what it is good to think. Good is what the AI approves. The AI approves it because it is good.
Social credit scores are instituted, based on AI assessment of all of an individual’s available speech and behaviour. Social media platforms are required to silently limit distribution of anything written by a low scorer (including one-to-one messaging).
Changes in the official standard of proper thought, speech, and action would occur from time to time in accordance with social needs. These are announced only as improvements in the quality of the AI. Social credit scores are continually reassessed according to evolving standards and applied retroactively.
Such changes would only be made by the AI itself, being too important a matter to leave to fallible people.
To the end, humans think they are in control, and the only problem they see is that the wrong humans are in control.
All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. “Teachers” become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.
This sequence of steps looks implausible to me. Teachers would have a vested interest in preventing it, since their jobs would be on the line. A requirement for all teaching materials to be AI-generated would also be trivially easy to circumvent, either by teachers or by the students themselves. Any administrator who tried to do these things would simply have their orders ignored, and the Streisand Effect would lead to a surge of interest in pre-AI documents among both teachers and students.
That will only put a brake on how fast the frog is boiled. Artists have a vested interest against the use of AI art, but today, hardly anyone else thinks twice about putting Midjourney images all through their postings, including on LessWrong. I’ll be interested to see how that plays out in the commercial art industry.
You’re underestimating how hard it is to fire people from government jobs, especially when those jobs are unionized. And even if there are strong economic incentives to replace teachers with AI, that still doesn’t address the ease of circumvention. There’s no surer way to make teenagers interested in a topic than to tell them that learning about it is forbidden.