Hinton: “mortal” efficient analog hardware may be learned-in-place, uncopyable

This post comes in two forms: a 15-minute talk and a link to partway through forward-forward's initial paper (see also papers citing it).

Talk version (2x speed and captions recommended):

I don’t find the discussion of forward-forward itself to be the most interesting part; it’s a plausible learning algorithm, perhaps. What I’m really interested in is the impact he thinks it’ll have on how computers are designed: he claims they’re going to be chunks of trained matter that interface with the outside world only at their boundaries, and are otherwise inscrutable and dependent on the quirks and defects of the particular piece of hardware they were trained into.
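
For readers who haven't seen the paper, here is a minimal sketch of the layer-local objective forward-forward uses: each layer is trained on its own, with "goodness" defined as the sum of squared activities, pushed above a threshold for positive (real) data and below it for negative data, with no gradients flowing between layers. The layer sizes, threshold, learning rate, and random toy data below are illustrative assumptions, not values from the paper.

```python
# Minimal forward-forward sketch: layer-local goodness training, no backprop
# through the stack. Hyperparameters and data here are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only the *direction* of the previous layer's activity
        # is passed on, not its goodness.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness on real data
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness on negative data
        # Logistic loss: goodness should exceed the threshold for positives
        # and fall below it for negatives.
        loss = torch.cat([
            F.softplus(self.threshold - g_pos),
            F.softplus(g_neg - self.threshold),
        ]).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach so the next layer trains on this layer's output with no
        # gradient flowing backwards between layers.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Toy usage: two layers trained greedily, layer by layer, per batch.
layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos = torch.rand(32, 784)  # stand-ins for real / corrupted inputs
x_neg = torch.rand(32, 784)
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```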

What seems most relevant to me is what effect this would have on the shape of selfhood, if any, of an AI living in such a low-power silicon brain.


Sidenote: Amusingly, he opens with a claim that this implies we can’t do brain uploads, and yet describes exactly how to do them anyway: distillation training of a student, including copying of the teacher’s mistakes, which seems to me like the obvious way to do incremental hardware replacement of human brains as well. I also think he’s overestimating how much this will prevent exact copying. Exact copies won’t behave exactly the same way, but it seems likely to me that one could copy knowledge out of a chip more precisely than distillation alone allows by also using the kind of offline scanning hardware one would use to examine a CPU; making use of the resulting scan would require the help of a learned scan-to-hardware converter AI, though.
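
For concreteness, "distillation training of a student" here means something like the sketch below: a student network trained to match the teacher's softened output distribution, so it inherits the teacher's mistakes along with its knowledge, and needs no ground-truth labels. The model sizes, temperature, and probe data are illustrative placeholders, not anything from the talk.

```python
# Minimal distillation sketch: the student matches the teacher's soft targets,
# copying its behaviour -- errors included. Sizes and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softening temperature for the teacher's logits

def distill_step(x):
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=1)  # soft targets, mistakes and all
    student_log_probs = F.log_softmax(student(x) / T, dim=1)
    # KL divergence between teacher and student output distributions.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: unlabeled probe inputs are enough, which is what makes this a
# plausible route for reading behaviour out of otherwise inscrutable hardware.
for _ in range(100):
    x = torch.rand(32, 784)
    distill_step(x)
```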