Fighting For Our Lives—What Ordinary People Can Do
tl;dr: Get organized and start brainstorming
___________________________________________________________
If you’re reading this, you are the resistance.
I’ve always wanted to say that. But seriously, if you’re reading this, then you may be one of the few people who recognize the dangers we’re facing, and therefore one of the only people who can help.
Nothing Matters Other Than This
No, really. Climate change, cancer and war don’t matter anymore when AGI is about to arrive.
To oversimplify:
Aligned AGI = heaven, at some point
Misaligned AGI = we all die
Whether there’s a 1% chance of doom or a 99% chance of doom, I don’t think those scenarios should be treated that differently. Even a 1% chance of doom is far too much and calls for humanity’s every effort to reduce it. And it’s not 1%.
Even if you don’t think that doom is likely, even if you’re convinced that the right people are taking the safety issue seriously, even if you put a lot of stock in the idea of AI being helpful in solving alignment, you will either be dead in 20 years or death will be cured in 20 years. Are you sure that your day job still matters when you know that your time can help improve our chances?
And if you think that Yudkowsky is justified in his predictions… then what are you doing just sitting there?
This is the only important thing we will ever do.
The Problem
The problem can be broken down into two parts:
Slowing down capability research
Speeding up alignment research
(Though granted, there may be some crossover between these two elements.)
The sub-problems:
It’s very hard to get governments to do things
It’s very hard to motivate corporations to do anything other than make money
It’s very hard to get people with power or status to support an unpopular cause
It’s very hard to get AI capability researchers to sabotage their own work or switch professions
It’s very hard to paint a believable picture of doom for people (ironically, all the previous end-of-the-world cries have hurt us greatly here)
Other things that I haven’t thought of yet
The Solution—Raising Awareness
The main goal, in one sentence: scream about the problem as loudly and convincingly as we can. We want researchers to stop working on AI capabilities and to focus more on alignment.
We’re in a positive feedback loop with this. The more awareness we raise, the more people we have to help raise awareness.
The first idea that comes to mind is getting Yudkowsky out there. Ideally we could find people even more persuasive than he is, but in the meantime we should reach out to anyone with a platform suitable for him to speak on publicly. Anyone else with a voice who cares about doom should be helped to speak up as well.
Second, we should come up with better ways to convince the masses that this problem exists and that it matters. I’m sure that AI itself will be invaluable here over the next few years, as jobs start disappearing and deepfakes, AIs you can hold spoken conversations with, and chatbots that can simulate your friends all make their debut.
AI may or may not be helpful in alignment directly, but it will hopefully help us in other ways.
We should also focus on directly convincing anyone with power or a platform, as well as prominent AI researchers, that the possibility of doom matters. Recruiting authorities in the field of AI, along with people who are naturally charismatic and convincing, may be the best way to do this.
As I’m writing this, it occurs to me that we may wish to observe how cults have spread their own messages in the past.
No Individual Is Smarter than All of Us Together
If the alignment researchers need time and resources, then that’s what we’ll give them, and that’s something that anyone can help with.
Sadly, it will take some very, very creative solutions to get them what they need.
But fortunately...
When properly organized, large groups of people are often amazing at creative problem solving, far better than what any individual can do.
Whether in person or over the internet, people working together are great at coming up with ideas, dividing up to test them, promoting the ones that sound most promising and tossing out the ones that don’t work.
There are many stories of dedicated groups working together online (mostly through sites like 4Chan) to track down criminals, terrorist organizations, animal abusers, ordinary people who stirred up overblown outrage and even Shia LaBeouf, all faster than any government organization could manage. These were ordinary people with too much time on their hands who did amazing things.
Remember that time that Yudkowsky was outmatched by the HPMOR subreddit working together during the Final Exam?
If 4Chan can find terrorist groups better than the US military can, then we can be a lot more creative than that when our lives are on the line.
A Long Shot Idea—Can Non-Programmers Help in Solving Alignment?
Not being a programmer myself, I’ll need experts to chime in here.
It doesn’t seem likely, but that said… Foldit.
Foldit is an online game created by the University of Washington that turned protein folding into a competitive sport, designed for people with little to no scientific background. Players used a set of tools to manipulate a protein’s structure, trying to find the most stable and efficient way of folding it, and the game’s scoring system rewarded the players who did this best. People collaborated and competed as they would in any other game, and it went amazingly well.
Foldit players have been credited with numerous scientific discoveries, including solving the structure of a retroviral protease relevant to AIDS research (a problem that had stumped researchers for over a decade) and contributing to the design of an enzyme that breaks down plastic waste.
As I say, ordinary people working together are pretty much unbeatable.
If there is ANY possible way of doing this for alignment research, even in a small and strange way, it may be worth pursuing.
Getting Everybody Working Together
We’re currently not organized and not trying very hard to save our own lives, and no other online group seems to be either.
It’s frankly embarrassing that LessWrong collaborated so well on figuring out the ending to a Harry Potter fanfiction and yet we won’t work together to save the world. But hey, it might not be too late yet.
So would anyone like to get organized?
___________________________________________________________
Naturally, if anyone can think of any good additions or edits for this post, do let me know.