What does “trans-sapient performance” mean?
Well, achieving better-than-human performance on a sufficiently wide benchmark. Preparing that benchmark is almost as hard as writing the code, it seems. Of course, any such estimates must be taken with a grain of salt, but I think that conceptually solid AGI projects (including OpenCog) have a significant chance by that time, although I have previously argued that neuromorphic approaches are likely to succeed by 2030 at the latest.
You understand that you just replaced some words with others without clarifying anything, right? “Sufficiently wide” doesn’t mean anything.
I cannot possibly disclose confidential research here, so you will have to be content with that.
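For concreteness, here is a minimal sketch of one way a "wide benchmark" comparison against human baselines might be operationalized. This is purely illustrative, not the confidential benchmark alluded to above; the task names, scores, and width threshold are all invented.

```python
# Hypothetical sketch: one strict reading of "better than human on a
# sufficiently wide benchmark" is beating the human baseline on every
# task in a suite of at least some minimum size. All numbers invented.

from dataclasses import dataclass

@dataclass
class TaskResult:
    name: str
    agent_score: float     # agent's score, normalized to [0, 1]
    human_baseline: float  # median human score, same normalization

def trans_sapient(results: list[TaskResult], min_tasks: int = 100) -> bool:
    """True if the suite is wide enough and the agent beats the
    human baseline on every task in it."""
    wide_enough = len(results) >= min_tasks
    beats_all = all(r.agent_score > r.human_baseline for r in results)
    return wide_enough and beats_all

suite = [
    TaskResult("theorem_proving", 0.81, 0.74),
    TaskResult("program_synthesis", 0.66, 0.59),
    TaskResult("language_comprehension", 0.92, 0.88),
]

print(trans_sapient(suite))  # False: only 3 tasks, far from "sufficiently wide"
```

Even in this toy form, the hard part is the one the answer gestures at: choosing the task list and the human baselines so that "wide" is defensible, which is why preparing the benchmark rivals writing the code.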
At any rate, believing that human-level AI is an extremely dangerous technology is pseudo-scientific.
Humans can be extremely dangerous. Why wouldn’t a human-level AI be?