Another item for the todo list:
Compile neural networks from fountains of autogenerated programs.
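As a rough sketch of what this compile step could look like, assuming a toy scalar DSL invented purely for illustration (mul/add/relu ops over a single register): every program folds exactly into a stack of affine layers with optional ReLUs, i.e. a degenerate MLP that implements the program.

```python
# A minimal sketch with a hypothetical toy DSL: programs are lists of
# ("mul", c), ("add", c), and ("relu",) ops on a scalar register, compiled
# exactly into (weight, bias, use_relu) layers of a degenerate MLP.
import random
import numpy as np

OPS = ["mul", "add", "relu"]

def random_program(length=4):
    prog = []
    for _ in range(length):
        op = random.choice(OPS)
        if op == "relu":
            prog.append(("relu",))
        else:
            prog.append((op, round(random.uniform(-2, 2), 2)))
    return prog

def compile_program(prog):
    """Fold consecutive affine ops into one layer; relu closes a layer."""
    layers = []
    w, b = 1.0, 0.0
    for op in prog:
        if op[0] == "mul":
            w, b = w * op[1], b * op[1]
        elif op[0] == "add":
            b = b + op[1]
        else:
            layers.append((w, b, True))
            w, b = 1.0, 0.0
    layers.append((w, b, False))
    return layers

def forward(layers, x):
    for w, b, use_relu in layers:
        x = w * x + b
        if use_relu:
            x = np.maximum(x, 0.0)
    return x

prog = random_program()
net = compile_program(prog)
print(prog, forward(net, 1.5))
```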
Generate additional permutations by variously scrambling compiled neural networks.
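One scramble family that provably preserves behavior is hidden-unit permutation; a minimal sketch on a one-hidden-layer ReLU MLP:

```python
# Permuting the rows of W1/b1 and the columns of W2 with the same
# permutation yields different weights but the identical function, so each
# compiled network fans out into many behaviorally equivalent samples.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 16, 2
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

perm = rng.permutation(d_hidden)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=d_in)
print(np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2)))  # True
```

Positive per-unit rescaling (scale a hidden unit's incoming weights by c > 0 and its outgoing weights by 1/c) is another exact ReLU symmetry, and injecting weight noise would give inexact scrambles.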
Generate more “natural” neural representations by training networks to predict the mapping implied by the original code.
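A minimal sketch of that distillation step, using a stand-in oracle function in place of a real interpreter or compiled network: sample the program's input domain and regress a fresh MLP onto the oracle's outputs.

```python
# Train a fresh network to match the input->output mapping the program
# defines. The oracle here is a hypothetical stand-in for interpreting the
# program (or running its compiled network).
import torch
import torch.nn as nn

def oracle(x):
    return torch.relu(2.0 * x - 1.0)

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1) * 4 - 2          # sample the input domain
    loss = nn.functional.mse_loss(model(x), oracle(x))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # the trained weights are a "natural" encoding of the program
```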
Train an interpreter to predict the original program from the neural network.
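One hypothetical setup for the interpreter (all shapes, the weight-chunking scheme, and the model itself are placeholder assumptions; a serious attempt would swap in a large pretrained decoder and a real program tokenizer): flatten the network's weights into fixed-size chunks, embed them as cross-attention memory, and train a decoder to emit the program's tokens.

```python
# A sketch with made-up dimensions: weight chunks become encoder "memory",
# and a small transformer decoder is trained to predict program tokens.
import torch
import torch.nn as nn

VOCAB, CHUNK, D = 32, 16, 128  # token vocab, weight-chunk size, model width

class WeightInterpreter(nn.Module):
    def __init__(self):
        super().__init__()
        self.chunk_proj = nn.Linear(CHUNK, D)      # weight chunks -> memory
        self.tok_emb = nn.Embedding(VOCAB, D)
        layer = nn.TransformerDecoderLayer(D, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(D, VOCAB)

    def forward(self, weights, tokens):
        # weights: (B, n_chunks, CHUNK); tokens: (B, T) teacher-forced prefix
        memory = self.chunk_proj(weights)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        h = self.decoder(self.tok_emb(tokens), memory, tgt_mask=mask)
        return self.head(h)  # (B, T, VOCAB) next-token logits

model = WeightInterpreter()
weights = torch.randn(8, 10, CHUNK)          # stand-in for flattened nets
tokens = torch.randint(0, VOCAB, (8, 12))    # stand-in for tokenized programs
logits = model(weights, tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward()
```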
A naive implementation likely requires a fairly big, CodeLlama-34b-Instruct-tier interpreter and can only operate on fairly limited programs, but it may produce something interesting. Applying the resulting interpreter to circuits embedded in larger networks probably won’t work, but… worth trying just to see what it does?
There might also be something interesting to learn in spanning the gap between ‘compiled’ networks and trained networks. How close do they come to being affine equivalents? If the relationship isn’t affine, what kind of transform is required (and how complicated is it)?
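A sketch of one crude probe of that question, using randomly initialized stand-ins for the two networks: fit a least-squares affine map from one network's hidden activations to the other's and check how much variance it explains.

```python
# Fit H_trained ~= H_compiled @ A + b by least squares on shared inputs.
# A near-perfect fit would suggest the two nets are close to affine-equivalent.
import numpy as np

rng = np.random.default_rng(0)

def hidden_acts(W, b, X):
    return np.maximum(X @ W.T + b, 0.0)

d_in, d_h, n = 4, 16, 2048
Wc, bc = rng.normal(size=(d_h, d_in)), rng.normal(size=d_h)   # "compiled" stand-in
Wt, bt = rng.normal(size=(d_h, d_in)), rng.normal(size=d_h)   # "trained" stand-in

X = rng.normal(size=(n, d_in))
Hc, Ht = hidden_acts(Wc, bc, X), hidden_acts(Wt, bt, X)

# Augment with a ones column so the intercept is fit along with the map.
Hc1 = np.hstack([Hc, np.ones((n, 1))])
A, *_ = np.linalg.lstsq(Hc1, Ht, rcond=None)
resid = Ht - Hc1 @ A
r2 = 1 - resid.var() / Ht.var()
print(f"affine fit R^2: {r2:.3f}")  # near 1.0 => nearly affine-equivalent
```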