If the danger is close (<50 years), you probably can’t wait for biology to catch up enough to work on uploads.
Your goal is to build a friendly AI quickly. (There’s no way to enforce a requirement that everybody else’s AI be friendly.)
You probably should ignore the general public and focus on those who can do the most to help you: rich futurist donors, the government, and the scientific community. The donors are the easiest to reach, but you can’t stop there—you don’t just need money, you need good researchers, and almost all the right kinds of brains are in science. I think if you can’t connect with scientists, you’ll wind up hiring mostly people who aren’t quite good enough to be scientists. Getting mainstream scientific credibility is a drag on speed, but I really don’t see how you can succeed without it. Convince a PhD or two to join your institute, get some papers published, try to get into conferences. In the process, you’ll not only pick up top scientists to hire, you’ll get valuable criticism.
Some research probably must be secret, but there’s “basic research” that doesn’t have to be. A policy of total secrecy makes you very, very likely to be wrong, and in this scenario that means we all die.
Thanks, that’s a new point to me. But it’s not always true; remember the Manhattan Project.
The Manhattan Project is a very misleading example. Yes, it was “secret”, in that nothing was published for outside review. But the project had a sizeable fraction of all the physics talent in the Western world associated with it. Within the project there was a great deal of information sharing and discussion; the scientific leadership was strongly opposed to “need-to-know” policies.
At that scale, internal discussion largely substitutes for outside review. Nobody in AI research is contemplating an effort of that scale, so the objection to secrecy stands.
Also, the Manhattan Project did a poor job of maintaining total secrecy. Reliably secret projects are possible only on a much smaller scale, and the likelihood of information leaking out grows very rapidly once more than a handful of people are involved.
The Manhattan Project was facing a huge, coordinated enemy that could pay spies and so on. SingInst isn’t facing such an enemy yet, so secrecy should be easier for them.
Actually, most of the WW2-era Soviet spies were communists who spied out of genuine conviction, not as paid traitors. This makes the parallel even more interesting, considering that people engaged in a secret AI project might develop all sorts of qualms.
I don’t know about being wrong, but secrecy makes it less likely that people will trust you.