I don’t know enough about his capabilities when it comes to contributing to unfriendly AI research to answer that. Being unable to think sanely about friendliness or risks may have little bearing on your capabilities with respect to AGI research; the two modes of thinking have little to do with each other.
And if Ben is really that impotent, what do you think it reveals about the SIAI, or about whoever put Ben into a position within the SIAI?
That they may be more rational and less idealistic than I would otherwise have guessed. There are many potential benefits the SIAI could gain from an affiliation with those inside the higher-status AGI communities. Knowing who to know has many uses unrelated to knowing what to know.
Indeed. I read part of this post as implying that his position had at least a little bit to do with gaining status from affiliating with him (“It has similarly been a general rule with the Singularity Institute that, whatever it is we’re supposed to do to be more credible, when we actually do it, nothing much changes. ‘Do you do any sort of code development? I’m not interested in supporting an organization that doesn’t develop code’ → OpenCog → nothing changes. ‘Eliezer Yudkowsky lacks academic credentials’ → Professor Ben Goertzel installed as Director of Research → nothing changes.”).
That’s an impressive achievement! I wonder whether they will be able to maintain it. I also wonder whether they will be able to distinguish the times when the objections are solid from those that are merely PR concerns. There is a delicate balance to be found.
Does this suggest that a stealth AGI institute (one founded to coordinate conferences and communication between researchers) might be well suited to oversee and influence potential undertakings that could lead to imminent high-risk situations?
By the way, I noticed from my server logs that the Institute for Defense Analyses seems to be reading LW: they visited my homepage, referred by my LW profile. So one should think about the consequences of discussing such matters in public, and of not doing so.
Most likely, someone working there just happens to read LW.
Fascinating.