What makes this more economical than running computation on the ground? The only benefit I saw that looks like a benefit is that the cooling is done by black-body radiation, but cooling is a mostly solved problem, right? (I expect it will take more energy to put into orbit than the solar panels will accumulate over their lifetime.)
According to the wiki, the most likely technical show-stopper (which makes sense given the fact that m288 is outside of the inner Van Allen belt) is radiation damage. Proposed fixes include periodic annealing (heating the circuit with a heating element) to repair the damage, and the use of radiation-resistant materials for circuitry.
What about memory? Bits being flipped by cosmic radiation is an issue on Earth; I imagine it must be more significant in space, and annealing won’t fix that.
As well, periodic annealing eventually results in your circuit no longer being a circuit, as the wires and capacitors have diffused until there’s a short. You might be able to build these with a large enough heat budget that you can get a reasonable number of reheats out of it, but the lifespan is going to be fairly short.
(I expect it will take more energy to put into orbit than the solar panels will accumulate over their lifetime.)
This struck me as an interesting estimate so here’s my attempt at checking it:
Wikipedia quotes 300 W/kg for solar cells. Medium Earth Orbit ranges from over 2,000 km to 35,000 km above sea level, so let's pick a fairly low estimate of 6,000 km.
The gravitational potential energy for elevating a 1 kg mass to 6,000 km from sea level is about 3.32×10^7 J. So the solar panel must operate for 1.28 days to recoup the energy cost of elevating the object. This is, of course, a lower bound (assuming perfect launch mechanism, no kinetic energy of orbit, etc. etc.), but it seems unreasonable to assume that launching solar panels has no benefit given this tiny lower bound. Furthermore, the fact that solar panels are routinely launched into orbit suggests that they do have a net energy production.
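For concreteness, here is that calculation as a short script. The gravitational parameter and Earth radius are standard textbook values I am supplying; it comes out a little under the figure above but in the same ballpark:

```python
# Back-of-envelope check of the payback estimate above.
MU_EARTH = 3.986e14      # m^3/s^2, Earth's gravitational parameter GM (standard value)
R_EARTH = 6.371e6        # m, mean Earth radius (standard value)
ALTITUDE = 6.0e6         # m, the 6,000 km altitude picked above
SPECIFIC_POWER = 300.0   # W/kg, the Wikipedia solar-cell figure

# Potential energy per kilogram between sea level and altitude h:
# delta_U = GM * (1/R - 1/(R + h))
delta_u = MU_EARTH * (1.0 / R_EARTH - 1.0 / (R_EARTH + ALTITUDE))  # J/kg

payback_days = delta_u / SPECIFIC_POWER / 86400.0
print(f"Lift energy: {delta_u:.2e} J/kg")        # ~3.0e7 J/kg
print(f"Payback time: {payback_days:.2f} days")  # ~1.2 days
```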
What about memory? Bits being flipped by cosmic radiation is an issue on Earth; I imagine it must be more significant in space, and annealing won’t fix that.
What do existing computers-in-space do? Shielding of some sort?
So the solar panel must operate for 1.28 days to recoup the energy cost of elevating the object
The object, sure, but what about the rocket? (I also should have included the energy cost of making the solar panels in the first place, which tends to seriously reduce their attractiveness.)
Furthermore, the fact that solar panels are routinely launched into orbit suggests that they do have a net energy production.
Well, solar is cheap to get to space. (I know our recent Mars rover is using nuclear energy (powered by decay, not fission or fusion) rather than solar panels to reduce the impact of Mars dust, and that deep space probes used similar technology because solar irradiance decreases the further away you get.) Batteries in particular are pretty heavy, and so solar panels probably represent the most joules per kilogram in Earth's orbit.
But the comparison isn’t “solar in space” vs. “chemical in space”, it’s “solar in space” vs. “anything on earth”. The idea of “let’s put computers out in space, where the variable cost of running them is zero” misses that the fixed cost of putting them in space is massively high, probably to the point where it eats any benefit from zero variable cost.
That is, this technology looks cool but I don’t yet see the comparative advantage.
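To put a rough number on the joules-per-kilogram point above: assuming something like 0.6 MJ/kg for a lithium-ion battery (a ballpark figure I am supplying, not from the thread) against the 300 W/kg solar-cell figure used earlier, the panel matches the battery's entire stored energy in roughly half an hour of sunlight:

```python
# Rough specific-energy comparison behind the "most joules per kilogram" claim.
BATTERY_SPECIFIC_ENERGY = 0.6e6   # J/kg, assumed ballpark for lithium-ion cells
PANEL_SPECIFIC_POWER = 300.0      # W/kg, the solar-cell figure used earlier

break_even_minutes = BATTERY_SPECIFIC_ENERGY / PANEL_SPECIFIC_POWER / 60.0
print(f"{break_even_minutes:.0f} minutes of sunlight to match the battery")  # ~33 minutes
```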
What do existing computers-in-space do? Shielding of some sort?
Check out the wiki page on radiation hardening. I believe that the primary thing to do with cosmic rays is just noticing when they happen and fixing the flip. I think it’s a mostly solved problem, but that the hardware / software is slightly more expensive because of that. (Buying RAM with ECC appears to be difficult for general consumers, but I imagine it’s standard in the satellite industry.)
Isn’t decay a subset of fission? (Excluding things like lone protons that don’t technically have a nucleus or whatever.)
Yeah, that was sloppy of me. I meant to specify that it was spontaneous fission rather than chain reaction fission.
The term ‘fission’ is generally reserved for daughter species of vaguely similar mass. Decays are generally alpha (He-4) or beta (electron and neutrinos), maybe with some others mixed in.
ECC RAM is standard for servers, so it’s not especially hard to get. Fixing bit errors outside the memory (e.g. in CPU) is harder; I imagine something like http://en.wikipedia.org/wiki/Tandem_Computers, essentially running two computers in parallel and checking them against one another, would work. But all of this drives the cost up, which, as you note, is already a problem.
There are other clever things you can do, like including redundant hardware and error-checking within the CPU, but they all drive up the die area used. Some of this stuff might be able to actually drive down cost by increasing the manufacturing yield, but in general, it will probably be more expensive.
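As a toy illustration of the redundant-execution idea (majority voting over three copies, which extends the two-computer comparison so a single corrupted result can be outvoted rather than merely detected), here is a sketch; none of this is from the thread or from any real flight software:

```python
from collections import Counter

def vote(results):
    """Majority vote over redundant computations (toy triple modular redundancy).

    With two copies you can only detect a disagreement; with three you can
    also correct it, provided at most one copy was corrupted.
    """
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one copy corrupted")
    return value

# Example: one of three redundant runs suffers a single-event upset in its output.
clean = 0b1011_0010
flipped = clean ^ (1 << 4)            # bit 4 flipped by a cosmic ray
print(vote([clean, clean, flipped]))  # prints 178, the clean value
```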
You seem to have missed my sentence between the two that you quoted:
This is, of course, a lower bound (assuming perfect launch mechanism, no kinetic energy of orbit, etc. etc.), but it seems unreasonable to assume that launching solar panels has no benefit given this tiny lower bound.
My point was that even if the launch is only 0.1% efficient at moving solar cells into space, you're still looking at more than recouping the energy cost of the launch once the solar panel is up. If you think the launch is much less than 0.1% efficient, I'd be interested in hearing why you think that. Launches might actually be that inefficient, but I would be hesitant to assume so without having a reason to do so.
Now that lsparrish has posted a link to a better discussion of the subject, my post is more or less obsolete.
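For what it's worth, scaling the earlier 1.28-day ideal figure by that assumed 0.1% end-to-end efficiency still lands well inside the couple-of-decades service life usually quoted for solar panels (the lifetime comparison is my own ballpark, not from the thread):

```python
# The 1.28-day ideal payback scaled by the assumed 0.1% end-to-end launch efficiency.
ideal_payback_days = 1.28
launch_efficiency = 0.001
days = ideal_payback_days / launch_efficiency
print(f"{days:.0f} days, about {days / 365:.1f} years")  # 1280 days, ~3.5 years
```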
But the comparison isn’t “solar in space” vs. “chemical in space”, it’s “solar in space” vs. “anything on earth”.
I agree and was not trying to say that this plan was practical—I do not believe it is. I was just pointing out that something you stated as true doesn’t appear to be so from a very quick look at the numbers.
The idea of “let’s put computers out in space, where the variable cost of running them is zero” misses that the fixed cost of putting them in space is massively high
Sure. That’s why they would have to be very lightweight for this to work.
(I expect it will take more energy to put into orbit than the solar panels will accumulate over their lifetime.)
This is answered in the wiki: it takes roughly 2 months for a 3 gram thinsat to pay for the launch energy if it gets 4 watts, assuming 32% fuel manufacturing efficiency. The blackbody cooling is a significant reason as well. (Note: The 7 gram estimate given in the paper is slightly out of date—the wiki describes 3 grams as the current target.)
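Just unpacking those quoted figures (this is arithmetic on the wiki's numbers, not its actual energy accounting): 4 watts for two months works out to roughly 21 MJ per 3 gram thinsat, i.e. an effective launch-energy budget of a few GJ per kilogram:

```python
# Implied launch-energy budget from the wiki's payback claim quoted above.
THINSAT_MASS = 0.003          # kg, the 3 gram target
POWER = 4.0                   # W
PAYBACK_TIME = 60 * 86400.0   # s, "roughly 2 months"

energy_per_thinsat = POWER * PAYBACK_TIME        # J
energy_per_kg = energy_per_thinsat / THINSAT_MASS
print(f"{energy_per_thinsat/1e6:.0f} MJ per thinsat")  # ~21 MJ
print(f"{energy_per_kg/1e9:.1f} GJ per kg launched")   # ~6.9 GJ/kg
# That is a couple of hundred times the ~30 MJ/kg ideal lift energy from the
# earlier estimate, so the wiki is already charging for a very lossy launch.
```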
What about memory? Bits being flipped by cosmic radiation is an issue on Earth; I imagine it must be more significant in space, and annealing won’t fix that.
This is discussed as well, albeit briefly:
“The most radiation sensitive components are likely to be the flash memory. These incorporate error correction, but software error correction and frequent rewrites may be necessary to correct for radiation-induced charges. Some errors may need to be restored from caches on other thinsats partway around the orbit.”
So it looks like he is thinking of a combination of redundancy and memory-repair algorithms.
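The wiki passage doesn't spell out an algorithm, but a minimal sketch of "software error correction and frequent rewrites" combined with restoring from another thinsat's cache might look something like this; the fetch_replica interface is entirely hypothetical:

```python
import hashlib

def scrub(local_blocks, checksums, fetch_replica):
    """One toy scrubbing pass: verify each block, rewrite any that fail.

    fetch_replica(i) stands in for pulling a clean copy from a cache on
    another thinsat partway around the orbit (a hypothetical interface).
    """
    repaired = 0
    for i, block in enumerate(local_blocks):
        if hashlib.sha256(block).digest() != checksums[i]:
            local_blocks[i] = fetch_replica(i)   # frequent rewrite of the bad block
            repaired += 1
    return repaired

# Tiny demo: block 1 gets corrupted, the "remote cache" still has the original.
originals = [b"block-0 data", b"block-1 data"]
digests = [hashlib.sha256(b).digest() for b in originals]
local = [originals[0], b"block-1 dXta"]                 # bit rot in block 1
print(scrub(local, digests, lambda i: originals[i]))    # -> 1 block repaired
```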
As well, periodic annealing eventually results in your circuit no longer being a circuit, as the wires and capacitors have diffused until there’s a short. You might be able to build these with a large enough heat budget that you can get a reasonable number of reheats out of it, but the lifespan is going to be fairly short.
This is also somewhat mentioned in the manufacturing section, where the concern is that differentials in material thermal properties could cause damage.
“The vast bulk of the material, and the largest pieces of the thinsat, will be laminated engineering glass and metal. Since the thinsat undergoes wide temperature changes when it passes in and out of shadow, or undergoes thermal annealing, it will be more survivable if the glass can match silicon’s 2.6E-6/Kelvin coefficient of thermal expansion (CTE). Metals have very high CTEs, while SiO2 has a very low CTE, so slotted metal wires with SiO2 in the gaps is one way to make a ‘material’ that is both conductive and has the same CTE as silicon.”
Also, there is the fact that the wires and capacitors are going to be essentially two-dimensional. My guess is that not all of the assumptions that hold for three-dimensional wires and capacitors necessarily apply in this situation.
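To put a rough number on why the CTE matching in that quote matters: with ballpark values I am assuming (aluminium CTE around 23e-6/K and modulus around 70 GPa, the silicon CTE from the quote, and a 200 K swing for shadow passes or annealing), a metal trace rigidly bonded to silicon would see stress on the order of the yield strength of common aluminium alloys:

```python
# Thermal stress from CTE mismatch: sigma ~ E * (alpha_metal - alpha_Si) * delta_T.
# All values below are generic ballpark assumptions except the silicon CTE,
# which is the 2.6e-6/K figure quoted from the wiki.
E_ALUMINIUM = 70e9      # Pa, Young's modulus of aluminium
CTE_ALUMINIUM = 23e-6   # 1/K
CTE_SILICON = 2.6e-6    # 1/K (from the quote)
DELTA_T = 200.0         # K, assumed swing for shadow passes / annealing

sigma = E_ALUMINIUM * (CTE_ALUMINIUM - CTE_SILICON) * DELTA_T
print(f"~{sigma/1e6:.0f} MPa")  # ~286 MPa, around the yield strength of many Al alloys
```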
Thanks for the more direct links. I’m starting to update in favor of this working, but I’m still bothered by the amount of speculative tech involved. (If we’re going to use new RAM coming out in a few years that’ll be cheaper/faster/less error prone, our comparison needs to not be to current tech / costs, but to tech / costs after that new RAM has been integrated.)
I suspect it’ll be easier to replace silicon than to get the rest of the thinsat to match the thermal expansion of silicon, but that suspicion is rooted in professor friends who do semiconductor research, not industry, so the costs there might be way higher.
This page was the only thing I could find on the economics (he mentions elsewhere he wants to keep the business plan private).
Another thing to think about: have we sent stacked things like this into space before and successfully separated them from each other? I believe a number of solar sails have failed to unfold correctly, so there might be a similar problem here. Thankfully, thinsats don’t need to stay attached to one another the way solar sail panels do, but it becomes a problem if they do end up stuck together, and I don’t know which of those is the more difficult engineering problem. (The only description of separation I saw on the wiki was ‘peeling’ them apart.)