All that is indeed possible, but not guaranteed. The reason I was speculating that better brain imaging wouldn't be especially useful for machine learning in the absence of better neuron models is that I'd assumed the optimization pressure that shaped the architecture of brains was heavily tailored to the specific behavior of the neurons those brains are made of, so copying that architecture wouldn't offer much advantage over the neural network design techniques humans come up with, once it's applied to artificial neurons that behave quite differently. But sure, I shouldn't be too confident of this. In particular, the idea of training ML systems to imitate brain activation patterns, rather than copying brain architecture directly, is a possible way around this that I hadn't considered.
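(For concreteness, here is a minimal sketch of what "training an ML system to imitate brain activation patterns" could look like: alongside an ordinary task loss, an auxiliary loss pushes a model's hidden layer toward recorded neural responses to the same stimuli. This is purely illustrative and not anything proposed above; the "brain recordings" are random synthetic stand-ins, and all dimensions, names, and the loss weighting are arbitrary assumptions.)

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Arbitrary illustrative sizes.
n_stimuli, input_dim, hidden_dim, n_classes, recorded_units = 256, 64, 128, 10, 32

# Synthetic stimuli, task labels, and stand-in "brain activations" per stimulus.
stimuli = torch.randn(n_stimuli, input_dim)
labels = torch.randint(0, n_classes, (n_stimuli,))
brain_activations = torch.randn(n_stimuli, recorded_units)  # hypothetical recordings

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.readout = nn.Linear(hidden_dim, n_classes)        # task head
        self.to_brain = nn.Linear(hidden_dim, recorded_units)  # maps hidden units to recorded units

    def forward(self, x):
        h = self.encoder(x)
        return self.readout(h), self.to_brain(h)

model = Model()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()
imitation_loss_fn = nn.MSELoss()
imitation_weight = 1.0  # arbitrary trade-off between task and imitation objectives

for step in range(200):
    optimizer.zero_grad()
    logits, predicted_activations = model(stimuli)
    # Task loss plus a penalty for hidden activations that can't be mapped onto the recordings.
    loss = task_loss_fn(logits, labels) + imitation_weight * imitation_loss_fn(
        predicted_activations, brain_activations
    )
    loss.backward()
    optimizer.step()
```

The point of the sketch is just that the imitation signal constrains what representations the model learns, without requiring that the model's architecture or neuron model resemble the brain's at all.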