Many people here really aren’t that concerned about whether Goertzel or Yudkowsky has a better understanding of uFAI risks.
I am somewhat more interested in understanding why Goertzel would say what he says about AI. Just saying "Goertzel's brain doesn't appear to work right" isn't interesting. But the Hansonian signalling motivations behind academic posturing are more so.