btw neural networks are super duper shardy right now. like, there are just shards everywhere. as I move in any one direction in hyperspace, the hyperplanes I keep bumping into read as lines but they're really walls, little shardy wall bits that slice and dice. if you illuminate them together, sometimes the light from the walls can talk to each other about an unexpected relationship between the edges! and oh man, if you're trying to confuse them, you can come up with some pretty nonsensical relationships.

they've got a lot of shattery confusing shardbits sharding everything up into little fragments, tiny flecks of shard, and they've got the surfaces of the shards. some of the shards are far away from each other, sharded off with max(), but sometimes they flip over an edge they didn't see coming and a different shard wakes up as some energy moves into the subdimensions that its decision boundaries shard.

language is funky because there are lots of different shards between most words, and yet a lot of the contextual shard selection is highly shared. but it's not really that different from how shardy the physical room around you is. I do notice some funky things about the shardiness of a room, though, compared to the shardiness of flat hyperplanes. and NeRF architectures agree with me: plain NeRF is weird and unnatural, but when you shape the computation dataflow so the shards are naturally cutting in spaces that already mostly fit the shape of the data, e.g. its 3d-ness, shardiness lets gradient descent via backprop discover interacting shards that summarize the data well.
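here's a tiny numpy sketch of what I mean by bumping into walls. it's a made-up toy, not anything specific: a random one-hidden-layer ReLU net (all sizes and names invented for illustration), where each hidden unit contributes one hyperplane, and walking a straight line through hyperspace flips the activation pattern every time you cross one. each distinct pattern is one shard, a region where the net is just affine.

```python
# toy illustration (all names/sizes invented): walk a straight line through
# the input space of a small random ReLU net and watch the activation
# pattern ("which shard am I standing on?") flip as we cross hyperplanes.
import numpy as np

rng = np.random.default_rng(0)

# one hidden ReLU layer: each unit is one hyperplane w.x + b = 0
W = rng.normal(size=(16, 4))   # 16 hyperplanes slicing a 4-d input space
b = rng.normal(size=16)

def activation_pattern(x):
    """boolean mask of which ReLU units are 'awake' at point x.
    each distinct mask is one shard: a region where the net is affine."""
    return (W @ x + b) > 0

# march in one direction in hyperspace and log every wall we bump into
x0 = rng.normal(size=4)
direction = rng.normal(size=4)
prev = activation_pattern(x0)
for t in np.linspace(0.0, 3.0, 3000):
    cur = activation_pattern(x0 + t * direction)
    if not np.array_equal(cur, prev):
        flipped = np.flatnonzero(cur != prev)
        print(f"t={t:.3f}: crossed hyperplane(s) {flipped}, a different shard wakes up")
        prev = cur
```

every printed line is one little shardy wall bit you just flipped over.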
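and for the NeRF bit, a minimal sketch of the "shape the dataflow to 3d-ness" idea: instead of feeding raw xyz into one big MLP (plain-NeRF style), look features up in a learnable 3d grid and let a tiny ReLU head shard the grid-feature space, which is already laid out along the scene's 3d structure. everything here (resolution, feature dim, names) is an invented toy standing in for the general grid-plus-tiny-MLP family, not any specific paper's code.

```python
# toy illustration (invented names/sizes): learnable 3d feature grid + tiny
# ReLU head, so the piecewise structure lives in actual 3d space.
import numpy as np

rng = np.random.default_rng(0)
R, F = 8, 4                          # grid resolution per axis, feature dim
grid = rng.normal(size=(R, R, R, F)) # learnable features, one per vertex

def grid_features(xyz):
    """trilinearly interpolate grid features at xyz in [0, 1)^3."""
    p = np.asarray(xyz) * (R - 1)
    i = np.floor(p).astype(int)
    f = p - i
    out = np.zeros(F)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * grid[min(i[0] + dx, R - 1),
                                min(i[1] + dy, R - 1),
                                min(i[2] + dz, R - 1)]
    return out

# tiny ReLU head: its shards cut through grid-feature space, and the grid
# lookup has already bent that space to fit the scene's 3d-ness
W1, b1 = rng.normal(size=(16, F)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def density(xyz):
    h = np.maximum(W1 @ grid_features(xyz) + b1, 0.0)  # max() = shard gate
    return (W2 @ h + b2)[0]

print(density([0.3, 0.5, 0.7]))
```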
I need to post those 3d fractal links on here.