I agree with this line of thought regarding iterative development of proto-AGI via careful bootstrapping. Humans will be inadequate for monitoring the progress of its skills. Hopefully, we’ll have a slew of diagnostic, narrowly focused neural networks whose sole purpose is to tease out relevant details of the proto-superhuman intellect. What I can’t wrap my head around is whether super- (or sub-) human-level intelligence requires consciousness. If consciousness is required, is the world worse or better for it? Is an agent with the rich experience of fears, hopes, and joys more or less likely to be built? Do reward functions reliably grow into feelings, which in turn lead to emotional experiences? If they do, then perhaps an evolving intelligence wouldn’t always be as alien as we currently imagine it.