Convince programmers to refuse to work on risky AGI projects:
Please provide constructive criticism.
We’re in an era where the people needed to make AGI happen are in such demand that, if they refused to work on an AGI that wasn’t safe, they’d still have plenty of other jobs to choose from. You could convince programmers to adopt a policy of refusing to work on unsafe AGI. Making that work would require the following:
Make sure that programmers at all levels have a good way to determine whether the AGI they’re working on has proper safety mechanisms in place. Employees sometimes see such a small slice of a project, and are told such confident fluff by management, that they have no idea what is actually going on. I am not qualified to do this, but if someone reading this post is, it could be very important to write guidelines that help programmers tell, from within their position, whether the AGI they’re working on might be unsafe. A confidential hotline might be even more effective: things get complicated, both in the code and in corporate culture, and employees may need help sorting out what’s going on.
You could create resources to help programmers organize a strike or walkout. For example: an anonymous web interface where people interested in striking can post their intent, which would help momentum build (a rough sketch of such a service follows this list), and a place for people to post stories about how they took action against unsafe AI projects. People might not know how to organize otherwise (especially in large projects), or might need the inspiration to get moving.
If a union is formed around technological safety, it could demand that outside agencies be allowed to audit the project and that the company be forthcoming with all safety-related information.
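To make the anonymous-pledge idea concrete, here is a minimal sketch of what such a service could look like, assuming a simple Python/Flask web app. The endpoint names and the in-memory counter are illustrative assumptions, not a real design:

```python
# A minimal sketch (not a production design) of the anonymous strike-pledge
# idea: people POST an intent to strike, and anyone can read the running
# tally so momentum is visible. No identifying information is stored.
from flask import Flask, jsonify

app = Flask(__name__)
pledges = 0  # hypothetical in-memory counter; a real service would persist this

@app.route("/pledge", methods=["POST"])
def pledge():
    """Record one anonymous pledge and return the new total."""
    global pledges
    pledges += 1
    return jsonify(total=pledges)

@app.route("/total", methods=["GET"])
def total():
    """Expose the running tally publicly."""
    return jsonify(total=pledges)

if __name__ == "__main__":
    app.run()
```

A real version would also need rate-limiting or some proof-of-personhood check, since an anonymous counter is trivially inflated, and inflated numbers would undermine the credibility the tally is supposed to create.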
On the feasibility of getting through to the programmers

See also: “Sabotage would not work”

Gwern responded to my comment in his Moore’s Law thread. I’m not sure why he responded there rather than here, but it seemed more organized to relocate the conversation to the comment it is about, so I’ve put my response to him here. He wrote:
“Herding programmers is like herding cats, so this works only in proportion to how many key coders there are—if you need to convince more than, say, 100,000, I don’t think it would work.”
Do you have evidence, one way or the other, about what proportion of programmers get the existential risk posed by AGI? In any case, I don’t know how to tell whether you’re too pessimistic or I am too optimistic here.
*researches figures for this project*
According to 2010 US Bureau of Labor Statistics figures, there are between 1,200,000 and 1,450,000 programmers in the USA, depending on whether you count web developers (who are lumped into a combined category). That’s not the whole world, but getting American programmers on board would be major progress, and researching figures for all ~200 countries is outside the scope of this comment, so I’ll stick with the US for now.
LessWrong has over 13,000 users and over 10,000,000 visits. It isn’t clear what percentage of the American programmer population has been exposed to AI and existential risk this way (a bit over half the visits are from Americans), but since LessWrong has lots of programmers and roughly eight times as many visits as there are programmers in America, it’s possible that a majority of American programmers have at least heard of existential risk or SI. And this is just the beginning: LessWrong is growing quickly, and it could grow even faster if I (or someone) worked on web marketing, such as improving the SEO or the site’s presentation. I may do both, though I first want to address the risk of an endless September, and I’m letting that topic cool off for a while, at Luke’s advice, so that people don’t explode because I made too many meta threads.
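For transparency, here is the back-of-envelope arithmetic behind the “eight times” comparison, in Python. All inputs are just the rough figures quoted above, so the conclusion is only as good as those figures:

```python
# Back-of-envelope arithmetic for the visits-per-programmer comparison.
lw_visits = 10_000_000           # total LessWrong visits (approximate)
american_share = 0.5             # "a bit over half the visits are from Americans"
us_programmers_low = 1_200_000   # 2010 BLS, excluding web developers
us_programmers_high = 1_450_000  # 2010 BLS, including web developers

print(lw_visits / us_programmers_low)   # ~8.3 visits per programmer (low bound)
print(lw_visits / us_programmers_high)  # ~6.9 (high bound)

# Counting only the American visits, the ratio drops considerably:
print(lw_visits * american_share / us_programmers_low)  # ~4.2
```

Note that visits are not unique visitors, so even the American-only ratio is an upper bound on how many distinct programmers could have been reached this way.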
I don’t see any research on what percentage of programmers believe that AI poses significant risks… I doubt there is any right now, but maybe you know of some?
Either way, if someone can create a testable method of getting through to them that works, this stops being a straight “x percent of relevant programmers get it” problem. Teachers and salespeople do this for a living, so there is a whole body of knowledge about getting through to people that could be used. Eliezer is very successful at certain teaching skills that would be necessary here: the Sequences are rapidly gaining popularity, and he’s highly regarded by many people who have read them. Whether readers understand everything they read is questionable, but that he can build rapport and motivate people to learn is well supported. In addition, I worked as a salesperson for a while after the IT bubble burst and was pretty good at it. I would be completely willing to help figure out a way of using consultative, question-based selling techniques to convince programmers that AI poses existential risks. (These work without dark tactics: they encourage a person to consider each aspect of a decision and provide the information needed for each choice leading to the final decision.)
I think this is worth formally researching. If, say, 50% of American programmers already know about it and get it, which is possible given the figures above, then my idea is still plausible and it’s just a matter of organizing them. If not, Eliezer or somebody (me, maybe?) could figure out a method of convincing programmers and test it; then we’d know whether there was a viable solution. After that it’s a matter of scaling the method to the rest of the programmer population, and that is what the field of marketing is for, so there’s no need to despair there.
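To make “test it” concrete: one simple way to evaluate a persuasion method would be to compare the rate of programmers convinced by the new pitch against a control pitch, using a standard two-proportion z-test. The sketch below is a generic illustration with made-up numbers, not a claim about any actual study:

```python
# Two-proportion z-test: did pitch A convince a larger share than pitch B?
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return (z statistic, two-sided p-value) for a difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled success rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; p-value is the two-sided tail probability.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 40/100 convinced by the consultative pitch vs. 25/100
# by a control pitch.
print(two_proportion_z(40, 100, 25, 100))  # z ~ 2.26, p ~ 0.024
```

With numbers like these, a sample of a few hundred programmers per condition would be enough to tell a working method from a dud, which is well within the reach of an informal study.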
That would mean getting the word out to programmers around the world. It wouldn’t be a trivial effort, but if the message were getting through and most American programmers had been convinced, it would be worth investing in. Considering that programmers are well off pretty much everywhere and that technology-oriented folks tend to want internet access, communicating a message to all the programmers in the world is probably nowhere near as hard as it first seems, especially since LW is already growing fast and there is a web professional here who is willing to help it grow (me).
You know the people at SI better than I do. Do you think SI would be interested in finding out what percentage of programmers get it, testing methods of getting through to them, and determining which web marketing strategies work for getting the message out?