Interesting idea, but I’m skeptical that LLMs are enough for us to accept these as EMs. I think it’s more likely that people will treat such trained models not as true EMs but as ghosts, frozen as who the person was when the model was trained.
The idea from fiction that came to mind is the people in portraits in Harry Potter.
Of course, such a thing would still be pretty useful! But I’m not sure LLMs are good enough at the online learning, ontological shifts, and other complex things we expect from people, and thus from EMs.
This is just a way to take a bunch of humans and copy-paste them until current pressing problems are solvable. Whether people accept them as true EMs doesn’t matter if public opinion doesn’t affect deployment.
Models that can’t learn or change don’t go insane. Fine-tuning on later brain data, collected once subjects have learned a new capability, can substitute for online learning. Getting the em/model to learn in silicon is a problem to solve after there’s a working model.
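As a hedged sketch of what that fine-tuning step might look like (the architecture, shapes, data, and snapshot path below are all hypothetical placeholders, not a proposed pipeline):

```python
import torch
import torch.nn as nn

# Stand-in for a previously trained snapshot-model of the person; in
# practice you'd load its earlier weights, e.g.:
# model.load_state_dict(torch.load("em_snapshot.pt"))  # hypothetical path
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 64))

# Placeholder recordings taken after the subject learned the new capability.
new_states = torch.randn(2_000, 512)    # later brain-state vectors
new_behaviors = torch.randn(2_000, 64)  # behavior recorded alongside them

opt = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR: nudge, don't overwrite
loss_fn = nn.MSELoss()
for i in range(0, len(new_states), 256):
    opt.zero_grad()
    loss = loss_fn(model(new_states[i:i + 256]), new_behaviors[i:i + 256])
    loss.backward()
    opt.step()
```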
I edited the TL;DR to better emphasize that the preferred implementation is to use brain data to train whatever shape of model the data suggests, not necessarily transformers.
The key point is that using internal brain state to train an ML model to imitate a human is probably the fastest way to get a passable copy of that human, and that’s AGI solved.
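For concreteness, a minimal sketch of the training setup I have in mind; everything here (the MLP, the dimensions, the random tensors standing in for actual recordings) is an illustrative placeholder, since the right model shape is whatever the data suggests:

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 512, 64  # hypothetical recording/behavior sizes

# Placeholders for real data: paired (brain state, behavior) samples.
brain_states = torch.randn(10_000, STATE_DIM)
behaviors = torch.randn(10_000, ACTION_DIM)

# "Whatever shape of model the data suggests" -- an MLP is just the
# simplest stand-in architecture.
model = nn.Sequential(
    nn.Linear(STATE_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, ACTION_DIM),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(10):
    for i in range(0, len(brain_states), 256):
        s, a = brain_states[i:i + 256], behaviors[i:i + 256]
        opt.zero_grad()
        loss = loss_fn(model(s), a)  # imitate recorded behavior from internal state
        loss.backward()
        opt.step()
```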