What evidence would convince you that AGI won’t go FOOM?
If Deep Learning people suddenly started working hard on models with dynamic, self-modifying architectures (i.e. a network that outputs its own weight and architecture updates for the next time-step) and they *don’t* see large improvements in task performance, I would take that as evidence against AGI going FOOM.
(For what it’s worth, the current state of things has me believing that FOOM is likely to be much smaller than Yudkowsky worries, but still nonzero. I don’t expect fully general, fully recursive self-improvement to be a large boost over the more coherent meta-learning techniques we’d need to deploy to even get AGI in the first place.)
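To make the mechanism concrete, here is a minimal toy sketch (my own construction, not something from this thread): a separate "meta" head reads the current input and emits a delta for the task head's weight matrix, which is applied before the next time-step. Real proposals would also modify architecture, not just weights; this only illustrates the weight-update half.

```python
import numpy as np

# Toy self-modifying network: the meta head proposes an update to the
# task head's own weights at every step. All names here are illustrative.
rng = np.random.default_rng(0)
dim = 4
W_task = rng.normal(scale=0.1, size=(dim, dim))         # ordinary forward weights
W_meta = rng.normal(scale=0.01, size=(dim * dim, dim))  # emits a flattened delta for W_task
lr = 0.1

def step(x, W_task):
    y = np.tanh(W_task @ x)                  # task output for this time-step
    delta = (W_meta @ x).reshape(dim, dim)   # network's proposed update to its own weights
    return y, W_task + lr * delta            # weights to be used at the next step

x = rng.normal(size=dim)
for t in range(3):
    y, W_task = step(x, W_task)
```

The question in the comment is whether loops like this, trained end to end, would actually beat ordinary outer-loop optimization on task performance.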
How do you draw a line between weight updates and architecture updates?