For SI, movement building is more directly progress than it is for, say, Oxfam, because a big part of its mission is to persuade people not to do the very dangerous thing.
Good point. But I don’t see any evidence that anyone who was likely to create an AI soon now won’t.
Those whose profession and status lie in approximating AI largely won’t change course for what must seem to them like sci-fi tropes. [1]
Or, put another way, there are working computer scientists who are religious—you can’t expect reason everywhere in someone’s life.
[1] But in the long run, perhaps SI and others can offer dangerously smart researchers a smooth transition into high-status alternatives such as FAI or other AI risk mitigation.
According to Luke, Moshe Looks (head of Google’s AGI team) is now quite safety conscious, and a Singularity Institute supporter.
Update: It’s not really correct to say that Google has “an AGI team.” Moshe Looks has been working on program induction, and this guy said that some people are working on AI “on a large scale,” but I’m not aware of any publicly-visible Google project which has the ambitions of, say, Novamente.
The plausible story in movement-building is not convincing existing AGI PIs to stop a long program of research, but instead convincing younger people who would otherwise eventually become AGI researchers to do something safer. The evidence to look for would be people who said “well, I was going to do AI research but instead I decided to get involved with SingInst type goals”—and I suspect someone who knows the community better might be able to cite quite a few people for whom this is true, though I don’t have any names myself.
I didn’t think of that. I expect current researchers to be dead or nearly senile by the time we have plentiful human substitutes/emulations, so I shouldn’t care that incumbents are unlikely to change careers (except for the left tail—I’m very vague in my expectation).