Hey Steve, I might be wrong here, but I don’t think Jon’s question was specifically about which architectures you’d be talking about. I think he was asking more specifically about how to classify something as Brain-like-AGI for the purposes of your upcoming series.
The way I read your answer makes it sound like the safety considerations you’ll be discussing depend more on whether the NTM is trained via SL or RL than on whether it neatly contains all your (soon-to-be-elucidated) Brain-like-AGI properties.
Though that might actually have been what you meant, so I probably should have asked for clarification before I presumptively answered Jon for you.
> the safety considerations you’ll be discussing depend more on whether the NTM is trained via SL or RL than on whether it neatly contains all your (soon-to-be-elucidated) Brain-like-AGI properties.
I’m confused; this statement makes it sound like “whether it’s trained via SL or RL” is NOT a possible candidate for a “Brain-like-AGI property”. Why can’t it be? Or maybe I’m reading too much into your wording.
Oops, strangely enough, I just wasn’t thinking about that possibility. It’s obvious now, but I had assumed that SL vs RL would be a minor consideration, despite the many words you’ve already written on reward.