Remember that you were only proposing discreet auditing systems to mollify the elves. They think of this as a privacy-preserving technology, because it is one, and that’s largely what we’re using it for.
Though it’s also going to cause tremendous decreases in transaction costs by allowing ledger state to be validated without requiring the validator to store a lot of data or replay ledger history. If most crypto investors could foresee how it’s going to make it harder to take rent on ledger systems, they might not be so happy about it.
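To make the shape of that concrete, here's a toy Python sketch. Nothing in it is a real proof system: `apply_tx`, `commit`, `prove`, and `verify` are names I made up, and the hash-based "proof" is neither zero-knowledge nor sound. The point is purely the interface: the verifier touches a constant-size commitment and a constant-size proof, never the transaction history itself.

```python
import hashlib

def apply_tx(state: int, tx: int) -> int:
    """Trivial toy transition function: state is a running total."""
    return state + tx

def commit(state: int) -> bytes:
    """Constant-size commitment to the current ledger state."""
    return hashlib.sha256(str(state).encode()).digest()

def prove(old_state: int, txs: list[int]) -> tuple[bytes, bytes]:
    """Prover replays the history and emits (new commitment, 'proof')."""
    state = old_state
    for tx in txs:
        state = apply_tx(state, tx)
    new_commitment = commit(state)
    # Fake "proof": just binds the old commitment to the new one.
    # A real SNARK would also attest that the transition was valid.
    proof = hashlib.sha256(commit(old_state) + new_commitment).digest()
    return new_commitment, proof

def verify(old_commitment: bytes, new_commitment: bytes, proof: bytes) -> bool:
    """Verifier does O(1) work regardless of how many txs occurred."""
    return proof == hashlib.sha256(old_commitment + new_commitment).digest()

# Prover does the heavy lifting; verifier stays cheap.
old = 0
new_c, pi = prove(old, txs=[5, 3, -2])
assert verify(commit(old), new_c, pi)
```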
Oh! “10x faster than RISC Zero”! We’re down to a 1000x slowdown then! Yay!
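The arithmetic behind the snark, for the record. The ~10,000x baseline proving overhead for RISC Zero is the assumption implied by the joke, not a measured benchmark:

```python
# Assumed baseline: zkVM proving overhead vs. native execution.
# The ~10,000x figure is a rough order-of-magnitude guess, not a benchmark.
risc_zero_slowdown = 10_000

# Jolt's claimed improvement over RISC Zero.
jolt_speedup = 10

jolt_slowdown = risc_zero_slowdown / jolt_speedup
print(f"Still ~{jolt_slowdown:,.0f}x slower than native")  # -> ~1,000x
```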
y’know, come to think of it… Training and inference differ massively in how much compute they consume. So after you’ve trained a massive system, you have a lot of compute free to do inference (modulo needing to use it to generate revenue, run your apps, etc.). Meaning that for large-scale, critical applications, it might in fact be feasible to tolerate a big, multiple-OOM hit to the compute cost of your inference, if that’s all it takes to get the zero-knowledge benefits, and if those benefits are crucial.
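Back-of-the-envelope, with entirely made-up placeholder numbers, just to show the shape of the argument:

```python
# All figures are hypothetical order-of-magnitude placeholders.
training_flops = 1e25             # total compute for one big training run
inference_flops_per_query = 1e12  # cost of a single forward pass
zk_overhead = 1_000               # assumed proving slowdown (see above)

# Daily throughput of the (now idle) training cluster, assuming the
# run took ~100 days of sustained compute.
cluster_flops_per_day = training_flops / 100

plain_queries_per_day = cluster_flops_per_day / inference_flops_per_query
zk_queries_per_day = plain_queries_per_day / zk_overhead

print(f"plain inference: ~{plain_queries_per_day:.0e} queries/day")
print(f"zk-proved inference: ~{zk_queries_per_day:.0e} queries/day")
# Even with a 1000x hit, ~1e8 proved queries/day is plenty for a small
# set of large-scale, critical calls.
```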
never thought I’d die fighting side by side with an elf...
https://www.coindesk.com/tech/2024/04/09/venture-firm-a16z-releases-jolt-a-zero-knowledge-virtual-machine/
Previous coverage btw.