This is just a way to take a bunch of humans and copy-paste them until current pressing problems are solvable. If public opinion doesn’t affect deployment, it doesn’t matter.
Models that can’t learn or change don’t go insane. Fine-tuning on later brain data, once subjects have learned a new capability, can substitute for in-model learning. Getting the em/model to learn in silicon is a problem to solve after there’s a working model.
I edited the TL;DR to better emphasize that the preferred implementation is using brain data to train whatever shape of model the data suggests, not necessarily transformers.
The key point is that using internal brain state to train an ML model to imitate a human is probably the fastest way to get a passable copy of that human, and that’s AGI solved.