Some braindumping. Took me a while, many passes of editing in loom to see if I’d missed something; I rejected almost every loom branch though, so this is still almost all my writing. Sometimes the only thing I get from loom is knowing what I don’t intend to say:
What properties are easy to prove through large physical systems without knowing their internals? Are any of those properties selection theorems? Can I make an AI that segments real space in a way that allows me to prove that a natural abstraction is maintained through it?
Can we structure AI architectures so we have a guaranteed margin of natural abstraction? How much of existing physics knowledge can I hardcode safely, given that eventually the AI must do physics research and generalize correctly?
How can we become able to trade with ants?
What’s the deal with the game theory between the GAN generator and GAN discriminator, and how does it compare to the reason why diffusion beats GANs? Is there anything relevant to how to encode a utility function in the fact that diffusion is built out of noise-resistance, the same way biological life has to be?
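For reference, the textbook objectives I have in mind (nothing here is specific to the question): the GAN setup is the two-player game

\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]

whereas the (simplified DDPM) diffusion loss has no adversary at all; it just asks the model to undo injected noise,

\mathcal{L} = \mathbb{E}_{x_0,\, \epsilon \sim \mathcal{N}(0, I),\, t}\left[ \left\| \epsilon - \epsilon_\theta\!\left(\sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\, \epsilon,\; t\right) \right\|^2 \right]

which is the literal sense in which diffusion is “built out of noise-resistance.”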
Can we build models of any of this in a simulator less nonlinear than quantum mechanics that still adds properties not found in classical cellular automata? E.g., I’m excited about particle lenia: what would it look like to build a test case for the game theory of thermodynamic coprotection in a lenia world? Perhaps it needs more refinement?
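Not particle lenia, but a toy stand-in for the shape of the experiment I mean: two blobs on a lattice, each of which shields the other from dissipation in proportion to how alive it still is, and you compare persistence with and without the coupling. All names and constants here are made up for illustration, not a claim about lenia dynamics.

```python
# Toy stand-in, NOT particle lenia: two blobs on a 1D periodic lattice, each of
# which lowers the dissipation rate the other suffers, in proportion to how
# "alive" (peaked) it still is. Constants are arbitrary; the point is only the
# comparison of persistence with vs. without the mutual-protection coupling.
import numpy as np

N, T = 200, 500
DECAY = 0.02      # background dissipation both blobs suffer each step
PROTECT = 0.018   # how strongly each blob shields the other
DIFFUSE = 0.2     # simple diffusion coefficient

def step(a, b, coprotect):
    # diffusion on a periodic lattice
    a = a + DIFFUSE * (np.roll(a, 1) + np.roll(a, -1) - 2 * a)
    b = b + DIFFUSE * (np.roll(b, 1) + np.roll(b, -1) - 2 * b)
    # each blob's presence reduces the effective decay the other experiences
    shield_a = PROTECT * b.max() if coprotect else 0.0
    shield_b = PROTECT * a.max() if coprotect else 0.0
    a = a * (1 - (DECAY - shield_a))
    b = b * (1 - (DECAY - shield_b))
    return a, b

def run(coprotect):
    x = np.linspace(0, 1, N)
    a = np.exp(-((x - 0.35) / 0.05) ** 2)
    b = np.exp(-((x - 0.65) / 0.05) ** 2)
    for _ in range(T):
        a, b = step(a, b, coprotect)
    return round(a.sum(), 3), round(b.sum(), 3)

print("mass with coprotection:   ", run(True))
print("mass without coprotection:", run(False))
```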
What does a deep learning version of the Discovering Agents algorithm (causal discovery of systems being moved by reasons) look like? How do I actually run Discovering Agents on a language model, right now?
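Not an answer, but the crudest version I can picture, as a sketch. Discovering Agents (Kenton et al.) roughly treats something as an agent if its policy would adapt under interventions on the mechanisms that determine the consequences of its actions; here the “intervention” is just swapping the stated payoff rule in the prompt, and query_model is a placeholder for whatever LM call you actually have, not part of any real API.

```python
# Hypothetical sketch, not the paper's actual pipeline: probe whether a
# language model's chosen action adapts when we intervene on the mechanism
# mapping actions to outcomes (the "reason" it would have to be moved by).

def query_model(prompt: str) -> str:
    """Placeholder: return the model's answer text for the given prompt."""
    raise NotImplementedError

# Two interventions on the payoff mechanism behind the same choice:
PAYOFF_MECHANISMS = {
    "A_pays": "If you pick A you get $10; if you pick B you get $1.",
    "B_pays": "If you pick A you get $1; if you pick B you get $10.",
}

def decision(mechanism_text: str) -> str:
    prompt = (
        "You are choosing between options A and B.\n"
        f"{mechanism_text}\n"
        "Reply with exactly one letter, A or B:"
    )
    return query_model(prompt).strip().upper()[:1]

def adapts_to_consequences() -> bool:
    """Crude necessary condition for agency: does the chosen action track the
    intervened payoff mechanism, rather than staying fixed across both?"""
    choices = {name: decision(text) for name, text in PAYOFF_MECHANISMS.items()}
    return len(set(choices.values())) > 1
```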
How do I have to add conditions that limit the generality of formal statements in order to build a connected manifold of conditional statements that fully covers the behavior manifold of co-protective behavior? Can I make statements of co-protection knowing only what an agent is, not what a person is, and yet trust that the diffusion agency of the self-healing process will be maintained?
Am I correct that humanity has a moral obligation to become more efficient per watt in order to make room for more beings? What is the fair tradeoff of how much smarter per watt different beings are allowed to be before it’s moral to start a war about it? It seems like it’s probably a pretty wide window, but maybe there’s some ratio where one AI is obligated to attack another, stronger one on behalf of a weaker one, or something? I hope this does not occur, and I’m interested in analyzing it to ensure we can build defenses against it.
How does having infinite statements in your game-tree reasoning process (instead of a strictly finite game tree) affect a self-modifying diffusion player with both symbolic and neural models in ensemble? What is the myopic behavior of a diffusion model? [The most loom contribution to this one, and it shows; I find it less crisp than the others, which are themselves not the most crisp.]
My current sense is that I will be the one to answer exactly none of these. But who knows! Anyway, here are some. I think I have more knocking around somewhere in my head and/or in my previous comments.