Fiction: My alternate earth story.

Stories of the alternate worlds people are from are getting pretty popular here.
The first deviation between this reality and the world I am from is, as far as I can tell, early robots in science fiction. Or rather, my world’s total lack thereof. I don’t know if Isaac Asimov never existed in that world, or if he existed and never got inspired to think about robots, and so wrote something else. Science fiction contained, as far as I know, no mention whatsoever of human-created artificial intelligence until around 1970.
Turing wrote about Turing machines, but never about a Turing test. I. J. Good never published his speculations on the intelligence explosion. There was some research on computer vision and handwriting recognition, but no concept of maybe one day making a generally intelligent machine.
The next difference seems to be nuclear power. In my world Chernobyl never happened, probably something to do with the USSR never really being a big thing. There wasn’t a Cold War. There was some anti-nuclear sentiment, especially from coal miners, who formed a small and radically anti-nuclear lobby. Nuclear reactors got built anyway. Most of the anti-nuclear crowd saw that their predictions of doom had failed and gave up, leaving only a small number of rabid activists. The economy grew on cheap nuclear energy. A general societal convention of ignoring the Luddites was strengthened.
The idea of generally smart AI was first introduced in the bad scifi of the 1970s. You may think this world’s scifi is bad. My world’s scifi was worse. In the first ever movie to feature an AGI, the AGI went rogue because its computer chips contained too many 1s and not enough 0s. The solution was to run around taking photos of anything that contained a 0 or looked like one. Street signs, door numbers, doughnuts, whatever. Load all that data onto a hard drive and chuck it at the AI. As the hard disk shatters against the metal surface of a glowing, red-eyed humanoid robot, the 0s magically flow into the circuits and the AI is fixed.
Or how kissing a computer chip seemed to magically fix any AI with the power of love, whether or not the AI had any way of knowing the chip was being kissed.
No really, the scifi was that bad. In this period, if someone went up to a top computer science expert and said they were worried about intelligent machines, they would probably get a condescending answer that computer chips couldn’t run out of 0s and it didn’t work like that.
This all changed with James Pembric, founder of the techno-luddite movement. James had worked as an extra in a film about AI. In one scene, the AI chases after humans, pulling out their brains and crushing them to make “brain juice”. The AI then drinks this brain juice to increase its intelligence, and shows how much smarter it is by randomly pulling out a blackboard in the middle of a battle and doing a really big long division.
Anyway, apparently James was haunted by this scene, seeing robots crush human skulls again and again in his dreams.
And so the techno-luddite movement was born. They were some blend of people put out of work by computers and people who had taken the bad scifi seriously. A fair proportion of them were creationists, flat-earthers, UFO conspiracy theorists, homeopaths, etc. You know the crowd.
Given the recent experience with nuclear power and the utter lack of plausible arguments, the techno-luddites were mocked and ignored. This lasted several years. Many techno-luddites gave up and went home. James committed the techno-luddites’ first act of terrorism, and was sentenced to life in a psychiatric institution. Copycat acts followed. There was a popular push to “not let the terrorists win”, i.e. to continue computer science research. Any text claiming AI might be dangerous and that we should do something about it was banned as terrorist propaganda in most countries.
It was around this time that a rise in more electrically efficient devices slowed the growth of energy use. The nuclear industry went to the government, lobbying it to buy the extra energy. The government listed various activities it thought it should subsidise, one of which was “publicly published scientific or computing research”. Lobbying by a large journal got an addendum that publishers could charge an access fee and the research would still count as public.
The techno-luddites had a problem. They couldn’t tell the people they imagined to be making evil killer robots from the people making harmless calculators. Their solution: phone random computer scientists out of nowhere and ask them if their work was dangerous. If they got a confident “No”, the person asked was safe. If they got any sort of “maybe”, that person was the next target. Of course, all the computer scientists quickly learned to just say no, so the techno-luddites’ questions had to get more indirect.
Academic culture shifted. Before, discussing the risks of AI would just get you mocked as someone who didn’t understand that computer chips couldn’t run out of zeros. Now it brought a threat of violence against you and your family.
That brings this story back to me. I was no longer part of traditional academia. I worked part time at a computer repair shop. A bit of soldering, multimeter testing, unscrewing, reinstalling software, that kind of thing. I had a fair bit of quiet time and a boss who would turn a blind eye to work on pet projects.
I would often think about the field you would call AI. It seemed like there might well be some risk; after all, lots of technologies carry risks. We understood the risks of car crashes and nuclear reactor leaks, and made seatbelts and containment domes. Yet any suggestion that computing might be unsafe was socially taboo.
I didn’t have access to the traditional academic channels, and couldn’t afford many journal articles. I tried sending e-mails to academics, anonymously of course, asking what their research was and whether it was dangerous. I didn’t receive any replies beyond the generic “All our research is totally safe …”. It was the exact same text almost every time; they must have been copying it from a website or something.
I tried just asking what they were working on, but coming from a non-academic using an anonymous email, almost all the replies were still nothing or that generic “totally safe” speech. One person did reply that they were working on Bayesian processes and gave a link to their paper, but they too clammed up when I asked further.
So in my spare time I tried to figure out what I could from press releases and abstracts. I tried to reimplement toy versions of the couple of algorithms I could figure out, and see if I could find some way they could go wrong. It’s not like I could implement bigger versions on my second-hand laptop; some of their experiments took a whole nuclear power station’s worth of power, because the government was paying their power bills. A couple of times a year, as a treat, I bought myself access to a paper that looked particularly interesting and perplexing.
I tried to find other like-minded people online. Most of my (anonymous) posts were quickly taken down. Most of the responses ridiculed the bad scifi they thought I believed, and the rest were from some techno-luddite nutter who thought computers were made by Satan.
So there I was. I worked fixing computers, and in my free time I sometimes fiddled with toy versions of algorithms gleaned from abstracts, with an eye on how they could fail. I treated this like some shameful secret and told no one (except anonymously online). For all I know, I may well have been my world’s foremost AI safety researcher, just because I was the only person who thought AI might possibly be dangerous and wasn’t a raving nutcase. Maybe not. Maybe there were a couple of other people like me, working quietly away on the topic.