I just came across an old post of mine that asked a similar question:
BTW, I still remember the arguments between Eliezer and Ben about Friendliness and Novamente. As late as January 2005, Eliezer wrote: “And if Novamente should ever cross the finish line, we all die. That is what I believe or I would be working for Ben this instant.”
I’m curious how that debate was resolved?
From the reluctance of anyone at SIAI to answer this question, I conclude that Ben Goertzel being the Director of Research probably represents the outcome of some internal power struggle/compromise at SIAI, whose terms of resolution included the details of the conflict being kept secret.
What is the right thing to do here? Should we try to force an answer out of SIAI, for example by publicly accusing it of not taking existential risk seriously? That would almost certainly hurt SIAI as a whole, but might strengthen “our” side of this conflict. Does anyone have other suggestions for how to push SIAI in a direction that we would prefer?
The short answer is that Ben and I are both convinced the other is mostly harmless.
Have you updated that in light of the fact that Ben just convinced the Chinese government to start funding AGI? (See my article link earlier in this thread.)
Hugo de Garis is around two orders of magnitude more harmless than Ben.
Update for anyone who comes across this comment: Ben Goertzel recently tweeted that he will be taking over Hugo de Garis’s lab, pending paperwork approval.
http://twitter.com/bengoertzel/status/16646922609
http://twitter.com/bengoertzel/status/16647034503
What about all the other people Ben might help obtain funding for, partly due to his position at SIAI?
And what about the public relations/education aspect? Is it harmless that SIAI appears not to consider AI a serious existential risk?
This part was not answered. It may be a question to ask someone other than Eliezer. Or just ask really loudly. That sometimes works too.
The reverse seems far more likely.
I don’t know how to parse that. What do you mean by “the reverse”?
Ben’s position at SIAI may reduce the expected amount of funding he obtains for other existentially risky persons.
How much of this harmlessness is perceived impotence and how much is it an approximately sane way of thinking?
Wholly perceived impotence.
Do you believe the given answer? And if Ben is really that impotent, what do you think it reveals about the SIAI, or about whoever put Ben into a position within the SIAI?
I don’t know enough about his capabilities when it comes to contributing to unfriendly AI research to answer that. Being unable to think sanely about friendliness or risks may have little bearing on your capabilities with respect to AGI research; the two modes of thinking have little to do with each other.
That they may be more rational and less idealistic than I might otherwise have guessed. There are many potential benefits the SIAI could gain from an affiliation with those inside the higher-status AGI communities. Knowing who to know has many uses unrelated to knowing what to know.
Indeed. I read part of this post as implying that his position had at least a little bit to do with gaining status from affiliating with him (“It has similarly been a general rule with the Singularity Institute that, whatever it is we’re supposed to do to be more credible, when we actually do it, nothing much changes. ‘Do you do any sort of code development? I’m not interested in supporting an organization that doesn’t develop code’ → OpenCog → nothing changes. ‘Eliezer Yudkowsky lacks academic credentials’ → Professor Ben Goertzel installed as Director of Research → nothing changes.”).
That’s an impressive achievement! I wonder if they will be able to maintain it. I also wonder whether they will be able to distinguish the cases where the objections are solid from those that are merely PR concerns to be managed. There is a delicate balance to be found.
Does this suggest that founding a stealth AGI institute (to coordinate conferences and communication between researchers) might be a suitable way to oversee and influence undertakings that could lead to imminent high-risk situations?
By the way, I noticed from my server logs that the Institute for Defense Analyses seems to be reading LW. They visited my homepage, with my LW profile as the referrer. So one should think about the consequences of discussing such matters in public, or of not doing so.
Most likely, someone working there just happens to.
Fascinating.
Can we know how you came to that conclusion?
There is one ‘mostly harmless’ for people who you think will fail at AGI. There is an entirely different ‘mostly harmless’ for actually having a research director who tries to make AIs that could kill us all. Why would I not think the SIAI is itself an existential risk if the criteria for director recruitment are so lax? Being absolutely terrified of disaster is the kind of thing that helps ensure appropriate mechanisms to prevent defection are kept in place.
Yes. The SIAI has to convince us that they are mostly harmless.