Two problems.

First, each of us has a different mind that produces a different thought cache, and most of us probably won’t be able to find much of a trunk build that we can agree on. To avoid conflicts, we’ll have to transition from the current monolithic architecture to a Unix-like modular architecture. But that will take years, because we’ll have to figure out who’s running what modules, and which modules each entry in the thought cache comes from. (You can’t count on lsmod to give complete or accurate results. I’d been running several unnamed modules for years before I found out they were a reimplementation of something called Singularitarianism.)
Second, how much data will we have to transfer (allowing for authentication, error correction, and Byzantine fault-tolerance), and are you sure anyone has enough input and output bandwidth?
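To make the bandwidth worry concrete, here’s a back-of-the-envelope sketch. Every number in it (raw symbol rate, code rate, authentication overhead) is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope sketch: how much goodput survives once we pay
# for error correction and authentication on a raw channel.
# All numbers below are illustrative assumptions.

def effective_goodput(raw_bps: float, ecc_rate: float, auth_overhead: float) -> float:
    """Payload bits/sec after coding and authentication costs.

    ecc_rate: fraction of each transmitted bit that is payload
              (e.g. 0.5 for a rate-1/2 code).
    auth_overhead: fraction of the payload spent on tags/signatures.
    """
    return raw_bps * ecc_rate * (1.0 - auth_overhead)

# Suppose speech carries on the order of 39 bits/sec of information (a
# commonly cited ballpark), we use a rate-1/2 code, and spend 10% of the
# payload on authentication:
print(effective_goodput(39, 0.5, 0.10))  # roughly 17.5 bits/sec
```

Whatever the exact figures, the point stands: the overhead terms multiply, so a heavily armored protocol leaves only a fraction of an already narrow channel for actual content.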
“most of us probably won’t be able to find much of a trunk build that we can agree on”
I think you’re wrong on the facts, but I love the way you’ve expressed yourself.
It’s more like a non-monotonic DVCS; we may all have divergent head states, but almost every commit you have is replicated in millions of other people’s thought caches.
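That DVCS picture can be sketched as a toy model: treat each mind as a set of commits, give everyone a small divergent head state, and check how much of one cache shows up in another. All names and numbers here are invented for illustration, not a claim about real minds:

```python
# Toy model of the "non-monotonic DVCS" picture: each mind is a set of
# commits (ideas); head states diverge, yet most commits are replicated
# across many other thought caches. Pool sizes are arbitrary assumptions.
import random

random.seed(0)
shared_pool = list(range(1000))          # widely replicated commits
minds = []
for person in range(100):
    replicated = set(random.sample(shared_pool, 900))    # bulk of the cache
    head = {f"head-{person}-{i}" for i in range(5)}      # divergent head state
    minds.append(replicated | head)

a, b = minds[0], minds[1]
overlap = len(a & b) / len(a)
print(f"fraction of one cache replicated in another: {overlap:.2f}")
```

Even with purely random sampling, any two caches share most of their commits; only the small per-person head state is unique.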
Also, I don’t think the system needs to be Byzantine fault tolerant; indeed we may do well to leave out authentication and error correction in exchange for a higher raw data rate, relying on Release Early Release Often to quash bugs as soon as they arise.
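The trade-off above (drop the armor, ship raw bits, let many independent readers quash errors downstream) can also be sketched numerically. The error rate and the per-reviewer catch probability are both made-up assumptions:

```python
# Sketch of "release early, release often" as error correction: skip
# coding overhead, accept a raw bit-error rate, and rely on independent
# downstream reviewers to catch mistakes. Numbers are illustrative.

def surviving_error_rate(bit_error_rate: float, reviewers: int) -> float:
    """Chance an error slips past every reviewer, assuming each reviewer
    independently spots it with probability 0.9 (an assumed figure)."""
    catch = 0.9
    return bit_error_rate * (1 - catch) ** reviewers

# Uncoded channel with 1% errors, but three independent readers:
print(surviving_error_rate(0.01, 3))  # on the order of 1e-5
```

Under these (generous) independence assumptions, a handful of reviewers drives the surviving error rate down fast, which is the intuition behind trading reliability in the channel for redundancy in the community.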
(Rationality as software development; it’s an interesting model, but perhaps we shouldn’t stretch the analogy too far)