For example, should mankind vigorously pursue research on how to make Ron Fouchier’s alteration of the H5N1 bird flu virus even more dangerous and deadly to humans, because “higher safety can only be achieved by more research on all related topics”?
Yeah, I remember reading this argument and thinking that it does not hold water. The flu virus is a well-researched area. It may yet hold some surprises, sure, but we think we know quite a bit about it. We know enough to tell what is dangerous and what is not. AGI research is nowhere near this stage. My comparison would be someone screaming at Dmitri Ivanovsky in 1892 “do not research viruses until you know that this research is safe!”.
My answer is that much of the research in this outline of open problems doesn’t require us to know which AGI architecture will succeed first, for example the problem of representing human values coherently.
Do other AI researchers agree with your list of open problems worth researching? If you asked Dr. Wang about it, what was his reaction?
My comparison would be someone screaming at Dmitri Ivanovsky in 1892 “do not research viruses until you know that this research is safe!”.
I want to second that. Also, when reading through this (and feeling the probably imagined tension of both parties trying to stay polite), the viral point was the first one that triggered the “this is clearly an attack!” emotion in my head. I was feeling sad about that, and had hoped that Luke would find another ingenious example.
Well, bioengineered viruses are on the list of existential threats...
And there aren’t naturally occurring AIs scampering around killing millions of people… It’s a poor analogy.
“Natural AI” is an oxymoron. There are lots of NIs (natural intelligences) scampering around killing millions of people.
And we’re only a little over a hundred years into virus research, much less on intelligence. Give it another hundred.
Wouldn’t a “naturally occurring AI” be an “intelligence” like humans?