Apparently, I’d be drinking absinthe and playing Eastside Hockey Manager 2007 at 2am while browsing the LW discussion section and obsessively glancing at my inbox to see if the girl I met while walking my dog today has emailed me back and wondering how the hell I’m going to wake up in time for work later today.
What course of action would you recommend to a small group of smart people, assuming for the moment that the danger is real?
An entirely different question! I will attempt to answer below.
What’s your hiring policy?
Having a hiring policy under which only people who wouldn’t betray the project get hired is a hack, not dissimilar from securing the project’s prospective FAI in a sandbox from which it “can’t escape” by virtue of being disconnected from the internet, etc. People are not secure systems. Ideally, no one person would hold enough knowledge to betray the project, that is, enough knowledge to build the technically easier unfriendly general AI on their own.
To the extent this is not feasible, researchers who know the whole picture should monitor each other, the way Mormon missionaries do. The buddy system is what allows them to discuss their religion with thousands of people while few enough of them lose their faith.
How do you craft your message to the public?
You start from scratch with basic rationality. The alternative is to pull the levers of irrationality hard, resort to the dark arts, get called out on it, and lose prestige with the important people.
What would you want from the public besides money, anyway? Prestige, so that people go into FAI instead of, say, string theory? There are more effective ways to achieve that than cultivating a public belief that this research is important; notice that the public doesn’t think impractical-seeming research is important in other areas either.
Do you keep your research secret?
Would it be advantageous to announce research projects before undertaking them, so that you are obliged to share the results? Science ought to work on this model, with experiments precommitted to publication before they are run rather than completed experiments competing for journal space, but it doesn’t. If not, then ask: if you judged each individual piece of your knowledge and reviewed it for publication, would you end up too secretive? If so, fix that inconsistency of yours first, and then judge each piece of information individually as you acquire it.
Do you pursue alternate avenues like uploads, or focus only on FAI?
The most important thing is to figure out exactly how much easier GAI is than FAI. The less they differ in difficulty, the less important unrelated approaches are.
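To make that tradeoff concrete, here is a minimal sketch under an assumed toy model (the scalar difficulties and the proportionality are my own illustrative framing, not anything stated in the original comment):

```latex
% Illustrative toy model (assumption, not the author's formulation).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $d_G$ and $d_F$ be the expected difficulties (say, in research-years)
of general AI and Friendly AI, with gap
\[
  \Delta = d_F - d_G \ge 0 .
\]
Alternate avenues such as uploads only pay off during the window in which
a GAI is buildable but an FAI is not yet ready, so their value scales
roughly with that window:
\[
  V_{\text{alt}} \propto \Delta ,
  \qquad
  \lim_{\Delta \to 0} V_{\text{alt}} = 0 .
\]
\end{document}
```

On this reading, the smaller the gap, the more a small group should simply work on FAI directly rather than hedge with unrelated approaches.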