It seems plausible that a fully implemented OpenCog system might display human-level or greater intelligence on feasible computational resources, and might turn out benevolent if raised properly.
Is there a disagreement about this? Perhaps not as great as it seems.
The idea of superhuman software is generally accepted on LW. Whether OpenCog is the right platform is a technical detail, which we can skip for the moment.
Might this software turn out benevolent, if raised properly? Let's be more specific about that part. If "might" means only "there is a nonzero probability of this outcome", LW agrees.
So we should rather ask how high the probability is that a "properly raised" OpenCog system will turn out "benevolent", depending on the definitions of "benevolent" and "properly raised". That is the part which makes the difference.