More importantly, I can’t imagine a colony or ship design (including submarines and deep-ocean vessels today, which do actually face these issues) that doesn’t have a local cluster for storage and a proxy to anything outside.
Submarines today don’t need to face all of these issues. It’s okay if a submarine can’t connect to npm, because there’s likely nobody on board who needs npm.
I expect a submarine to have a bunch of custom-built software for the tasks that are supposed to be done on a submarine, while all the other tasks you could do on an internet-connected computer simply fail.
If you take a cruise today on a deep-ocean vessel and want to update Windows on your laptop, I don’t think the average cruise vessel gives you access to a local storage cluster that lets you do that.
Given the NASA approach to ship design, NASA would likely contract out a bunch of custom software for the tasks they think should be done on a spaceship. That software works, and if someone wants to do something different, they’re out of luck.
When it comes to a colony, you want normal software like npm, software that wasn’t built specifically for the colony, to still work.
My company has its own servers for Windows updates because our IT department tests (or at least delays) them; I update my laptop from a local server, not from the internet. If I went on a cruise, I’d probably just avoid updating until I was in port. That’s economics, not technology: the ships aren’t on low-bandwidth links for long enough to justify the expense of local servers.
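For concreteness, pointing a Windows machine at a local update server is just policy configuration, a couple of registry values. A sketch, assuming a WSUS server at a hypothetical internal hostname (8530 is the default WSUS port):

```reg
Windows Registry Editor Version 5.00

; Point Windows Update at a local WSUS server instead of Microsoft's servers
; (the server name is hypothetical)
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://wsus.corp.example:8530"
"WUStatusServer"="http://wsus.corp.example:8530"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"UseWUServer"=dword:00000001
```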
A colony will have different economics, but similar fundamentals: there’ll be different metering and expectations for links to earth than for local links, and application-specific proxies and local caches/resources will be the rule, not the exception.
Mirroring of servers is old technology (common in the ’80s); content-distribution networks are newer, but not actually new. This topic is a major part of most scalable system designs. In fact, npm is a good example, because it’s near-trivial to use a local server for updates.
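A sketch of the npm case, assuming a local caching mirror such as Verdaccio running at a hypothetical colony-local address (the `registry` setting is standard npm configuration; only the URL is made up):

```sh
# Run a local registry that proxies and caches the public npm registry
npx verdaccio

# Point npm at it; from then on, installs resolve against the local server
npm config set registry http://npm-mirror.colony.local:4873/
npm install express
```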
A colony will have different economics, but similar fundamentals: there’ll be different metering and expectations for links to earth than for local links, and application-specific proxies…
Application-specific proxies mean that the end-user on Mars can’t simply use whatever software from earth they want; they have to specifically ask whoever manages the proxies to set up a proxy for each application they want to use.
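To make concrete what I mean: a proxy like the following (a minimal sketch using only Node’s built-in modules; hostnames and ports are hypothetical) only knows about one upstream service. Every additional application means asking the proxy operators to stand up another one of these.

```typescript
// Minimal application-specific caching proxy: it can reach exactly one
// upstream service and nothing else. Hostnames/ports are hypothetical.
import { createServer, type IncomingHttpHeaders } from "node:http";
import { get } from "node:https";

const UPSTREAM = "https://registry.npmjs.org"; // the one application this proxy serves
const cache = new Map<string, { headers: IncomingHttpHeaders; body: Buffer }>();

createServer((req, res) => {
  if (req.method !== "GET") {
    // Read-only mirror: enough for downloads, nothing more.
    res.writeHead(405);
    res.end();
    return;
  }
  const key = req.url ?? "/";
  const hit = cache.get(key);
  if (hit) {
    // Cache hit: answered locally, no traffic over the expensive link.
    res.writeHead(200, hit.headers);
    res.end(hit.body);
    return;
  }
  // Cache miss: one fetch over the interplanetary link, then remembered.
  get(UPSTREAM + key, (up) => {
    const chunks: Buffer[] = [];
    up.on("data", (c: Buffer) => chunks.push(c));
    up.on("end", () => {
      const body = Buffer.concat(chunks);
      if (up.statusCode === 200) cache.set(key, { headers: up.headers, body });
      res.writeHead(up.statusCode ?? 502, up.headers);
      res.end(body);
    });
  }).on("error", () => {
    res.writeHead(502);
    res.end("upstream unreachable");
  });
}).listen(8080); // clients would use http://proxy.colony.local:8080 (hypothetical)
```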
IPFS skips that and allows everything to work without setting up application-specific proxies.
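With IPFS, the request names the content itself rather than a server, so any peer that has the bytes can answer; there is no per-application setup. A sketch with the standard IPFS (Kubo) CLI, where `<cid>` stands in for a real content ID:

```sh
# Any node that holds the content can serve it: a colony-local peer if one
# has it, a peer on earth otherwise. <cid> is a placeholder content ID.
ipfs daemon &
ipfs cat <cid> > some-package.tgz
```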
To the extent that it’s old technology, we now also have more access-control technology that makes sure software is downloaded from the official server and not from any other provider.
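Concretely, that kind of check amounts to comparing a download against the digest published by the official source (npm does essentially this with the `integrity` field in package-lock.json). A sketch with Node’s built-in crypto module; the file name and expected digest are placeholders:

```typescript
// Verify that a downloaded artifact matches the digest published by the
// official source; file name and expected digest are placeholders.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const expected = "<sha256 digest published by the official source>";
const actual = createHash("sha256")
  .update(readFileSync("some-package.tgz"))
  .digest("hex");

if (actual !== expected) {
  throw new Error("artifact does not match the official digest");
}
```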