(For what it’s worth, the current state of things has me believing that any foom is likely to be much smaller than Yudkowsky worries, though nonzero. I don’t expect fully general, fully recursive self-improvement to be a large boost over the more coherent meta-learning techniques we’d need to deploy to get AGI in the first place.)