As I see it, OpenCog is making practical progress towards an architecture for AGI, whereas SIAI is focused on the theory of Friendly AI.
I specifically added “consultation with SIAI” in the latter part of OpenCog’s roadmap to try to ensure the highest odds of OpenCog remaining friendly under self-improvement.
As far as I’m aware there is no software development going on at SIAI; it’s all theoretical and philosophical commentary on decision theory, etc. (This may have changed, but I haven’t heard anything about them launching an engineering or experimental effort.)
Indeed, that is another reason for me to conclude that the SIAI should seek cooperation with projects that follow an experimental approach.
At the moment, they seem more interested in lobbing stink bombs in the general direction of the rest of the AI community—perhaps in the hope of driving some of those people in its direction. Claiming that your opponents’ products may destroy the world is surely a classic piece of FUD marketing.
The “friendlier-than-thou” marketing battle seems to be starting out with some mud-slinging.