I still think this is great. Some minor updates, and an important note:
Minor updates: I’m a bit less concerned about AI-powered propaganda/persuasion than I was at the time, not sure why. Maybe I’m just in a more optimistic mood. See this critique for discussion. It’s too early to tell whether reality is diverging from expectation on this front. I had been feeling mildly bad about my chatbot-centered narrative, as of a month ago, but given how ChatGPT was received I think things are basically on trend.
Diplomacy happened faster than I expected, though in a less generalizable way than I expected, so whatever. My overall timelines have shortened somewhat since I wrote this story, but it’s still the thing I point people towards when they ask me what I think will happen. (Note that the bulk of my update was from publicly available info rather than from nonpublic stuff I saw at OpenAI.)
Important note: When I wrote this story, my AI timelines median was something like 2029. Based on how things shook out as the story developed it looked like AI takeover was about to happen, so in my unfinished draft of what 2027 looks like, AI takeover happens. (Also AI takeoff begins; I hadn’t written much about that part but probably it would reach singularity/Dyson swarms/etc. in around 2028 or 2029.) That’s why the story stopped: I found writing about takeover difficult and confusing & I wanted to get the rest of the story up online first. Alas, I never got around to finishing the 2027 story. I’m mentioning this because I think a lot of readers with 20+ year timelines read my story and were like “yep seems about right” not realizing that if you look closely at what’s happening in the story, and imagine it happening in real life, it would be pretty strong evidence that crazy shit was about to go down. Feel free to controvert that claim, but the point is, I want it on the record that when this original 2026 story was written, I envisioned the proper continuation of the story resulting in AI takeover in 2027 and singularity around 2027-2029. The underlying trends/models I was using as the skeleton of the story predicted this, and the story was flesh on those bones. If this surprises you, reread the story and ask yourself what AI abilities are crucial for AI R&D acceleration, and what AI abilities are crucial for AI takeover, that aren’t already being demonstrated in the story (at least in some weak but rapidly-strengthening form). If you find any, please comment and let me know, I am genuinely interested to hear what you’ve got & hopeful that you’ll find some blocker I haven’t paid enough attention to.
This is a good example of where I disagree. Dyson swarms in 8 years would require basically physics-breaking tech, plus a desire so strong that governments would spend significant fractions of GDP on it. I give this a 99.9999% chance of not happening, with the 0.0001% chance where it does happen being “Holographic wormholes can be used to build time machines, instantly obsoleting everything.”
My timeline for AGI is in the mid-2030s, with actual singularity effects more in the 2050s-2060s.
Thanks for putting your disagreements on the record!
Building Dyson swarms in 8 years does not require breaking any known laws of physics. I don’t know how long it’ll take to build Dyson swarms with mature technology; it depends on what the fastest possible doubling time of nanobots is. But less than a year seems plausible, as does a couple of years.
Also, it won’t cost a substantial fraction of GDP; thanks to exponential growth, all it takes is a seed. Also, governments probably won’t have much of a say in the matter.
Do you have any other disagreements, ideally about what’ll happen by 2026?
Yeah, this might be my big disagreement. I give it an 80% chance that nanobots capable of replicating fast enough to build a Dyson swarm cannot exist under known physics. I don’t know if you realize how much mass a Dyson swarm has. You’re asking for nanobots that dismantle planets like Mercury in several months at most.
My general disagreement is that the escalation is too fast and basically requires the plan going perfectly the first time, which is a bad sign. It only works, to my mind, because you think AI can plan so well the first time that it succeeds without any obstacles, like thermodynamics ruining that nanobot plan.
Have you read Eternity in Six Hours? I’d be interested to hear your thoughts on it, and also whether or not you had already read it before writing this comment. They calculate a 30-year Mercury disassembly time, but IIRC they use a 5-year doubling time for the miner-factory-launcher-satellite complexes. If instead it was, say, a 6-month doubling time, then maybe it’d be 3 years instead of 30. And if it was a one-month doubling time, 6 months to disassemble Mercury. IIRC ordinary grass has something like a one-month doubling time, and ordinary car factories produce something like their own weight in cars every year, so it’s plausible to me that with super-advanced technology some sort of one-month-doubling-time fully-automated industry can be created.
Why do you think what I’m saying requires a plan going perfectly the first time? I definitely don’t think it requires that.
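A minimal sketch of the doubling-time arithmetic in the comment above, assuming (as that comment does) that the schedule is dominated by a fixed number of doublings, so total time scales linearly with the doubling time; the ~6 doublings figure is inferred from the quoted 30-year / 5-year numbers:

```python
# Rough sketch of the doubling-time scaling described above (illustrative only).
# Assumption: the disassembly schedule is dominated by a fixed number of doublings
# of the miner-factory-launcher-satellite complexes, so total time scales linearly
# with the doubling time. The ~6 doublings is inferred from the 30-year schedule
# at a 5-year doubling time quoted above.

doublings_needed = 30 / 5  # ~6 doublings, inferred from the figures quoted above

for doubling_time_years in (5.0, 0.5, 1 / 12):  # 5 years, 6 months, 1 month
    total_years = doublings_needed * doubling_time_years
    print(f"doubling time {doubling_time_years:6.3f} yr -> "
          f"~{total_years:4.1f} yr to disassemble Mercury")

# Prints roughly 30 yr (the paper's estimate), 3 yr, and 0.5 yr respectively,
# matching the "3 years" and "6 months" figures in the comment above.
```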
I haven’t read that, and I must admit I underestimated just how much nanobots can do in real life.
I have read Eternity in Six Hours and I can say that it violates the second law of thermodynamics by violating the constant radiance theorem. The power density they deliver to Mercury exceeds the power density of radiation exiting the sun by 6 orders of magnitude!
I don’t follow. What does power density have to do with anything and how can any merely geometrical theorem matter? You are concentrating the power of the sun by the megaengineering (solar panels in this case), so the density can be whatever you want to pay for. (My CPU chip has much higher power density than the equivalent square inches of Earth’s atmosphere receiving sunlight, but no one says it ‘violates the laws of thermodynamics’.) Surely only the total power matters.
The sun emits light because it is hot. You can’t concentrate thermal emission to be brighter than the source. (If you could, you could build a perpetual motion machine.)
Eternity in Six Hours describes very large lightweight mirrors concentrating solar radiation onto planet Mercury.
The most power you could deliver from the sun to Mercury is the power of the sun times the square of the ratio of the radius of Mercury to the radius of the sun.
The total solar output is 4*10^26 watts. The ratio of the sun’s radius to that of Mercury is half a million. So you can focus about 10^15 watts onto Mercury at most.
Figure 2 of Eternity in Six Hours projects getting 10^24 Watts to do the job.
We do not assume mirrors. As you say, there are big limits due to conservation of étendue. We are assuming (if I remember right) photovoltaic conversion into electricity and/or microwave beams received by rectennas. Now, all that conversion back and forth induces losses, but they are not orders of magnitude large.
In the years since we wrote that paper I have become much more fond of solar thermal conversion (use the whole spectrum rather than just part of it), and lightweight statite-style foil Dyson swarms rather than heavier collectors. The solar thermal conversion doesn’t change things much (but allows for a more clean-cut analysis of entropy and efficiency; see Badescu’s work). The statite style, however, reduces the material requirements by many orders of magnitude: Mercury is safe; I only need the biggest asteroids.
Still, detailed modelling of the actual raw material conversion process would be nice. My main headache is not so much the energy input/waste heat removal (although they are by no means trivial and may slow things down for too concentrated mining operations—another reason to do it in the asteroid belt in many places), but how to solve the operations management problem of how many units of machine X to build at time t. Would love to do this in more detail!
The conservation of étendue is merely a particular version of the second law of thermodynamics. Now, you are trying to invoke a multistep photovoltaic/microwave/rectenna method of concentrating energy, but you are still violating the second law of thermodynamics.
If one could concentrate the energy as you propose, one could build a perpetual motion machine.
I don’t see how they are violating the second law of thermodynamics—“all that conversion back and forth induces losses.” They are concentrating some of the power of the Sun in one small point, at the expense of further dissipating the rest of the power. No?
DK> “I don’t see how they are violating the second law of thermodynamics”
Take a large body C, and a small body H. Collect the thermal radiation from C in some manner and deposit that energy on H. The power density emitted from C grows with temperature. The temperature of H grows with the power density deposited. If, without adding external energy, we concentrate the power density from the large body C to a higher power density on the small body H, H gets hotter than C. We may then use a heat engine between H and C to make free energy. This is not possible; therefore we cannot do the concentration.
The étendue argument is just a special case where the concentration is attempted with mirrors or lenses. Changing the method to involve photovoltaic/microwave/rectenna power concentration doesn’t fix the issue, because the argument from the second law is broader, and encompasses any method of concentrating the power density as shown above.
When we extrapolate exponential growth, we must take care to look for where the extrapolation fails. Nothing in real life grows exponentially without bound. “Eternity in Six Hours” relies on power which is 9 orders of magnitude greater than the limit set by fundamental physical law.
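A compact restatement of the heat-engine argument above, as a sketch (my notation, not from the thread: phi_H is the flux delivered to H, T_C and T_H the temperatures of C and H, sigma the Stefan–Boltzmann constant, with H treated as a blackbody in radiative equilibrium):

```latex
% If passive concentration could deliver to H a flux exceeding C's blackbody emission,
\phi_H > \sigma T_C^{4}
  \;\Longrightarrow\;
  T_H = \left(\frac{\phi_H}{\sigma}\right)^{1/4} > T_C
  \quad \text{(radiative equilibrium of H)},
% then a Carnot engine run between H (hot) and C (cold) would extract net work
W = Q\left(1 - \frac{T_C}{T_H}\right) > 0
% from heat that reached H passively from the colder body C -- a perpetual motion
% machine of the second kind. Hence any passive scheme must satisfy
\phi_H \;\le\; \sigma T_C^{4}.
```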
But in laboratory experiments, haven’t we produced temperatures greater than that of the surface of the sun? A quick google seems to confirm this. So, it is possible to take the power of the sun and concentrate it to a point H so as to make that point much hotter than the sun. (Since I assume that whatever experiment we ran could have been run powered by solar panels if we wanted to.)
I think the key idea here is that we can add external energy—specifically, we can lose energy. We collect X amount of energy from the sun, and use X/100 of it to heat our desired H, at the expense of the remaining 99X/100. If our scheme does something like this then no perpetual motion or infinite power generation is entailed.
How much extra external energy is required to get an energy flux on Mercury of a billion times that leaving the sun? I have an idea, but my statmech is rusty. (The fourth root of a billion?)
And do we have to receive the energy and convert it to useful work with 99.999999999% efficiency to avoid melting the apparatus on Mercury?
I have no idea, I never took the relevant physics classes.
For concreteness, suppose we do something like this: We have lots of solar panels orbiting the sun. They collect sunlight and convert it to electricity (producing plenty of waste heat etc. in the process, they aren’t 100% efficient) and then send it to lasers, which beam it at Mercury (producing plenty more waste heat etc. in the process, they aren’t 100% efficient either). Let’s suppose the efficiency is 10% in each case, for a total efficiency of 1%. So that means that if you completely surrounded the sun with a swarm of these things, you could get approximately 1% of the total power output of the sun concentrated down on Mercury in particular, in the form of laser beams.
What’s wrong with this plan? As far as I can tell it couldn’t be used to make infinite power, because of the aforementioned efficiency losses.
To answer your second question: Also an interesting objection! I agree melting the machinery is a problem & the authors should take that into account. I wonder what they’d say about it & hope they respond.
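Rough numbers for the hypothetical panels-plus-lasers scheme described two comments up (a back-of-the-envelope sketch; the 10% figures are the efficiencies assumed in that comment, and the 4*10^26 W solar output is the figure already quoted earlier in this thread):

```python
# Back-of-the-envelope for the hypothetical panels-plus-lasers scheme above.
# Assumptions (from that comment): 10% collection efficiency, 10% laser/beaming
# efficiency, and the ~4e26 W total solar output figure quoted earlier in the thread.

solar_output_w = 4e26     # total power output of the sun (W), as quoted above
collection_eff = 0.10     # hypothetical solar-panel efficiency
beaming_eff = 0.10        # hypothetical laser / transmission efficiency

delivered_w = solar_output_w * collection_eff * beaming_eff
print(f"Power delivered to Mercury: ~{delivered_w:.0e} W")  # ~4e24 W

# For comparison, Figure 2 of "Eternity in Six Hours" is cited above as needing
# ~1e24 W, so a full swarm at ~1% end-to-end efficiency is in the right ballpark;
# the open question in the thread is whether the receiving machinery survives it.
```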
A billion times the energy flux from the surface of the sun, over any extended area, is a lot to deal with. It is hard to take this proposal seriously.
Yeah, though not for the reason you originally said.
I think I’d like to see someone make a revised proposal that addresses the thermal management problem, which does indeed seem to be a tricky though perhaps not insoluble problem.
OK, I could be that someone. Here goes. You and the paper author suggest a heat engine. That needs a cold side and a hot side. We build a heat engine where the hot side is kept hot by the incoming energy as described in this paper. The cold side is a surface we have in radiative communication with the 3 kelvin temperature of deep space. In order to keep the cold side from melting, we need to keep it below a few thousand degrees, so we have to make it really large so that it can still radiate the energy.
From here, we can use the Stefan–Boltzmann law to show that we need to build a radiator much bigger than a billion times the surface area of Mercury. It goes as the fourth power of the ratio of temperatures in our heat engine.
The paper’s contribution is the suggestion of a self-replicating factory with exponential growth. That is cool. But the problem with all exponentials is that, in real life, they fail to grow indefinitely. Extrapolating an exponential a dozen orders of magnitude, without entertaining such limits, is just silly.
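For reference, the Stefan–Boltzmann relation behind that radiator estimate, stated in general form only (a sketch; P is whatever power must be rejected, T_cold the radiator temperature, emissivity taken as 1, and no position is taken here on the specific area figure claimed above):

```latex
% Flux radiated by a surface at temperature T (emissivity ~ 1):
j = \sigma T^{4}, \qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}
% Radiator area needed to reject power P at temperature T_{\mathrm{cold}}:
A_{\mathrm{rad}} = \frac{P}{\sigma T_{\mathrm{cold}}^{4}}
% If the absorbing side takes in flux ~ \sigma T_{\mathrm{hot}}^{4} over area
% A_{\mathrm{abs}} and all of it must be re-radiated at T_{\mathrm{cold}},
\frac{A_{\mathrm{rad}}}{A_{\mathrm{abs}}} \approx \left(\frac{T_{\mathrm{hot}}}{T_{\mathrm{cold}}}\right)^{4},
% which is the "fourth power of the ratio of temperatures" scaling mentioned above.
```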
I’m still interested in this question. I don’t think you really did what I asked—it seems like you were thinking ‘how can I convince him that this is impossible’ not ‘how can I find a way to build a Dyson swarm.’ I’m interested in both but was hoping to have someone with more engineering and physics background than me take a stab at the latter.
My current understanding of the situation is: There’s no reason why we can’t concentrate enough energy on the surface of Mercury, given enough orbiting solar panels and lasers; the problem instead seems to be that we need to avoid melting all the equipment on the surface. Or, in other words, the maximum amount of material we can launch off Mercury per second is limited by the maximum amount of heat that can be radiated outwards from Mercury (for a given operating temperature of the local equipment?) And you are claiming that this amount of heat radiation ability, for radiators only the size of Mercury’s surface, is OOMs too small to enable Dyson swarm construction. Is this right?
Awesome critique, thanks! I’m going to email the authors and ask what they think of this. I’ll credit you of course.
Ah, so you’re just bad at reading. I thought that was why you were wrong (it does not describe mirrors), but I didn’t want to say it upfront.
Interesting. I googled “eternity in six hours” and found http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf , which looks to be a preprint of the same paper (dated March 12, 2013); the preprint version does say “The lightest design would be to have very large lightweight mirrors concentrating solar radiation down on focal points” and contains the phrase “disassembly of Mercury” 3 times; while the published article Daniel Kokotajlo linked to lacks all of that. Indeed, in the published article, the entire 8-page “The launch phase” section has been cut down to one paragraph.
Perhaps weverka read the preprint.
Thanks for showing that Gwern’s statement that I am “bad at reading” is misplaced.
Maybe you should read the preprint too. I’ll excuse him for reading the wrong obsolete preprint even though that search would also show him that it was published at #3 and so he should be checking his preprint criticisms against the published version (I don’t always bother to jailbreak a published version either), but you are still failing to read the next sentence after the one you quoted, which you left out. In full (and emphasis added):

The lightest design would be to have very large lightweight mirrors concentrating solar radiation down on focal points, where it would be transformed into useful work (and possibly beamed across space for use elsewhere). The focal point would most likely some sort of heat engine, possibly combined with solar cells (to extract work from the low entropy solar radiation).
If he read that version, personally, I think that reading error is even more embarrassing, so I’m happy to agree with you that that’s the version weverka misread in his attempt to dunk on the paper… Even worse than the time weverka accused me of not reading a paper published 2 years later, IMO.
(And it should be no surprise that you screwed up the reading in a different way when the preprint was different, because either way, you are claiming Sandberg, a physicist who works with thermodynamic stuff all the time, made a trivial error of physics; however, it is more likely you made a trivial error of reading than he made a trivial error of physics, so the only question is what specific reading error you made… cf. Muphry’s law.)
So, to reiterate: his geometric point is irrelevant and relies on him (and you) being bad at reading and attacking a strawman, because he ignored the fact that the solar mirrors are merely harvesting energy before concentrating it with ordinary losses, and aren’t some giant magnifying glass to magically losslessly melt Mercury. There are doubtless problems with the mega-engineering proposal, which may even bump the time required materially from 6 hours to, say, 600 hours instead—but you’re going to need to do more work than that.
For the record, I find that scientists make such errors routinely. In public conferences, when optical scientists propose systems that violate the constant radiance theorem, I have no trouble standing up and saying so. It happens often enough that when I see a scientist propose such a system, it does not diminish my opinion of that scientist. I have fallen into this trap myself at times. Making this error should not be a source of embarrassment.
I did not expect this to revert to credentialism. If you were to find out that my credentials exceed this other guy’s, would you change your position? If not, why appeal to credentials in your argument?
I think weverka is referring to the phenomenon explained here: https://what-if.xkcd.com/145/
Basically, no amount of mirrors and lenses can result in the energy beaming down on Mercury being denser per square meter than the energy beaming out of a square meter of Sun surface. The best you can do is make it so that Mercury is effectively surrounded entirely by Sun. And if that’s not good enough, then you are out of luck… I notice I’m a bit confused, because surely that is good enough. Wouldn’t that be enough to melt, and then evaporate, the entirety of Mercury within a few hours? After all isn’t that what would happen if you dropped Mercury into the Sun?
weverka, care to elaborate further?
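For reference, the limit being described here is the standard blackbody-brightness (étendue) bound, stated generally as a sketch, without taking a position on the specific wattage figures elsewhere in the thread:

```latex
% Passive optics cannot deliver to a target a flux brighter than the source's surface:
\phi_{\mathrm{target}} \;\le\; \sigma T_{\mathrm{source}}^{4}
% i.e. at best the target's entire sky looks like the source's surface, which is the
% "effectively surrounded entirely by Sun" situation described above. For the Sun,
% \sigma T^{4} \approx 6 \times 10^{7}\ \mathrm{W\,m^{-2}} at T \approx 5800\ \mathrm{K}.
```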
>Kokotajlo writes: Wouldn’t that be enough to melt, and then evaporate, the entirety of Mercury within a few hours? After all isn’t that what would happen if you dropped Mercury into the Sun?
How do you get hours?
I didn’t do any calculation at all, I just visualized Mercury falling into the sun lol. Not the most scientific method.
Yeah, that’s where you got things wrong.
I have sinned! I repent and learn my lesson.
Specifically, you can focus 10^15 watts on Mercury, but Eternity in Six Hours proposes using 10^24 watts. It’s a 9-order-of-magnitude difference.
It would cause a severe heat dissipation problem. All that energy is going to be radiated as waste heat and, in equilibrium, will be radiated as fast as it comes in. The temperature required to radiate at the requisite power level would be in excess of the temperature at the surface of the sun; any harvesting machinery on the surface of the planet would melt unless it is built from something unknown to modern chemistry.
Seems like a good point. I’d be interested to hear what the authors have to say about that.
I feel like your predictions for 2022 are just a touch over the mark, no? GPT-3 isn’t really ‘obsolete’ yet or is that wrong?
I’m sure it will be in a minute, but I’d probably update that benchmark to occurring mid-2023, or potentially whenever GPT-4 gets released.
I really feel like you should be updating slightly longer, but maybe I misunderstand where we’re at right now with chatbots. I would love to hear otherwise.
In some sense it’s definitely obsolete: namely, there’s pretty much no reason to use the original GPT-3 anymore. Also, up until recently there was public confusion because a lot of the stuff people attributed to GPT-3 was really GPT-3.5, so original GPT-3 is probably a bit worse than you think. Idk, play around with the models and then decide for yourself whether the difference is big enough to count as obsolete.
I do think it’s reasonable to interpret my original prediction as being more bullish on this matter than what actually transpired. In fact I’ll just come out and admit that when I wrote the story I expected the models of December 2022 to be somewhat better than what’s actually publicly available now.
I think that yes it is reasonable to say that GPT-3 is obsolete.
Also, you mentioned loads of AGI startups being created in 2023, while a lot of that already happened in 2022. How many more AGI startups do you expect in 2023?