If you ever wondered what 4chan was doing with ChatGPT, it was a combination of chatbot waifus and intentionally creating Shiri’s Scissors to troll/trick Reddit.
It doesn’t seem particularly difficult.
They also shared an apocalyptic vision of the internet in 2035, where the open internet is entirely ruined by LLM trolls/propaganda, and the only conversations to be had with real people are in real-ID-secured apps. This seems both scarily real and somewhat inevitable.
Ever read the various fiction by Peter Watts? In any of his work set after the mid-21st century, the descendant of the internet is called the Maelstrom: completely overrun with self-replicating spambots, advertisements, and ‘wild’ mutated descendants thereof, with actual messages making up <0.001% of the traffic and needing to be wrapped in endless layers of authentication and protection against modification.
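The “authentication and protection against modification” part maps onto real cryptographic primitives today. A minimal sketch of one such layer using Python’s standard `hmac` module — the key and message here are purely illustrative, and a real system would need key exchange, nonces, and more:

```python
import hmac
import hashlib

# Illustrative pre-shared key; in practice this would be negotiated securely.
SHARED_KEY = b"illustrative-shared-secret"

def wrap(message: bytes, key: bytes = SHARED_KEY) -> tuple[bytes, str]:
    """Attach an HMAC-SHA256 tag so any modification in transit is detectable."""
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    return message, tag

def verify(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = wrap(b"actual human message")
print(verify(msg, tag))            # intact message passes
print(verify(msg + b"spam", tag))  # any tampering fails
```

A tag like this only proves integrity and origin to someone holding the key — the Maelstrom scenario is one where every surviving human message needs many such layers just to be trusted at all.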
Yep, big fan of Watts and will +1 this recommendation to any other readers.
Curious if you’ve read much of Schroeder’s stuff? Lady of Mazes in particular explores, among many other things, an implementation of this tech that might-not-suck.
The quick version is a society in which everyone owns their GPT-Me, controls its distribution, and has access to all of its outputs. They use them as a social interface: can’t talk to someone? Send your GPT-Me to talk to them. They can’t talk to you? Your GPT-Me can talk to GPT-Them and you both get the results fed back to you. And so on.
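The mechanics above can be sketched as data flow — a toy model of the ownership and feed-back properties, with a stand-in for the actual model call (all class and function names here are illustrative, not from the book):

```python
from dataclasses import dataclass, field

@dataclass
class GPTMe:
    """Hypothetical personal agent: the owner controls its distribution
    and sees every output it produces via an owner-visible transcript."""
    owner: str
    transcript: list = field(default_factory=list)

    def respond(self, prompt: str) -> str:
        # Stand-in for a real model call.
        reply = f"{self.owner}'s agent, replying to: {prompt}"
        # Every exchange is logged, so the owner has access to all outputs.
        self.transcript.append((prompt, reply))
        return reply

def mediate(a: GPTMe, b: GPTMe, opening: str) -> str:
    """GPT-A talks to GPT-B; both owners get the exchange fed back
    through their own agents' transcripts."""
    msg = a.respond(opening)
    return b.respond(msg)

alice, bob = GPTMe("alice"), GPTMe("bob")
mediate(alice, bob, "can we reschedule?")
print(len(alice.transcript), len(bob.transcript))  # both owners see the exchange
```

The design point the novel explores is that the transcript and the distribution control live with the owner, not with whoever is querying the agent.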
I have not.
As a biologist, my mind goes elsewhere. Specifically, to how viruses are by far the most common biological entity on Earth, and how much of the sequence in your genome consists of selfish, self-replicating reverse-transcription machinery trying to shove copies of itself anywhere it can gain entry. Other examples abound. The spliceosome itself, at the heart of the eukaryotic gene-expression apparatus, seems to be a defense against invasion of the eukaryotic genome by reverse-transcribing introns — introns that natural selection could no longer purge once eukaryotic cell size rose and population size fell enough to weaken selection against mildly deleterious mutations. In the process, though, it entrenched those selfish elements into something that could no longer leave.
Tangential, but my intuitive sense is that the ideal long-term AI safety algorithm would need to help those who wish to retain biology give their genetic selfish elements a stern molecular talking-to, by constructing a complex but lightweight genetic immune system that reduces their tendency to cause new deleterious mutations. That is among a great many other extremely complex hotfixes needed to shore up error defenses and make a DNA-based biological organism truly information-immortal yet runtime-adaptable.