Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz
Jackson Wagner
Maybe other people have a very different image of meditation than I do, such that they imagine it as something much more delusional and hyperreligious? Eg, some religious people do stuff like chanting mantras, or visualizing specific images of Buddhist deities, which indeed seems pretty crazy to me.
But the kind of meditation taught by popular secular sources like Sam Harris’s Waking Up app (or that I talk about in my “Examining The Witness” youtube series about the videogame The Witness) seems to me obviously much closer to basic psychology or rationality techniques than to religious practices. Compare Sam Harris’s instructions about paying attention to the contents of one’s experiences, to Gendlin’s technique of “Focusing”, or Yudkowsky’s concept of “sit down and actually try to think of solutions for five minutes”, or the art of “noticing confusion”, or the original Feynman essay where he describes holding off on proposing solutions. So it’s weird to me when people seem really skeptical of meditation and set a very high burden of proof that they wouldn’t apply to other mental habits like, say, CFAR techniques.
I’m not like a meditation fanatic—personally I don’t even meditate these days, although I feel bad about not doing it, since it does make my life better. (Just like how I don’t exercise much anymore despite exercise making my day go better, and I feel bad about that too...) But once upon a time I just tried it for a few weeks, learned a lot of interesting stuff, etc. I would say I got some mundane life benefits out of it—some, like exercise or good sleep, only lasted as long as I kept up the habit, and other benefits were more like mental skills that I’ve retained to today. I also got some very worthwhile philosophical insights, which I talk about, albeit in a rambly way mixed in with lots of other stuff, in my aforementioned video series. I certainly wouldn’t say the philosophical insights were the most important thing in my whole life, or anything like that! But maybe more skilled, deeper meditation = bigger insights, hence my agnosticism on whether the more bombastic meditation-related claims are true.
So I think people should just download the Waking Up app and try meditating for like 10 mins a day for 2-3 weeks or whatever—way less of a time commitment than watching a TV show or playing most videogames—and see for themselves if it’s useful or not, instead of debating.
Anyways. For what it’s worth, I googled “billionaires who pray”. I found this article (https://www.beliefnet.com/entertainment/5-christian-billionaires-you-didnt-know-about.aspx), which ironically also cites Bill Gates, plus the Walton Family and some other conservative CEOs. But IMO, if you read the article you’ll notice that only one of them actually mentions a daily practice of prayer. The one that does, Do Won Chang, doesn’t credit it for their business success… seems like they’re successful and then they just also pray a lot. For the rest, it’s all vaguer stuff about how their religion gives them a general moral foundation of knowing what’s right and wrong, or how God inspires them to give back to their local community, or whatever.
So, personally I’d consider this duel of first-page-google-results to be a win for meditation versus prayer, since the meditators are describing a more direct relationship between scheduling time to regularly meditate and the assorted benefits they say it brings, while the prayer people are more describing how they think it’s valuable to be Christian in an overall cultural sense. Although I’m sure with more effort you could find lots of assorted conservatives claiming that prayer specifically helps them with their business in some concrete way. (I’m sure there are many people who “pray” in ways that resemble meditation, or resemble Yudkowsky’s sitting-down-and-trying-to-think-of-solutions-for-five-minutes-by-the-clock, and find these techniques helpful!)
IMO, probably more convincing than dueling dubious claims of business titans, is testimony from rationalist-community members who write in detail about their experiences and reasoning. Alexey Guzey’s post here is interesting, as he’s swung from being vocally anti-meditation, to being way more into it than I ever was. He seems to still generally have his head on straight (ie hasn’t become a religious fanatic or something), and says that meditation seems to have been helpful for him in terms of getting more things done: https://guzey.com/2022-lessons/
I think there are many cases of reasonably successful people who often cite either some variety of meditation, or other self-improvement regimes / habits, as having a big impact on their success. This random article I googled cites the billionaires Ray Dalio, Marc Benioff, and Bill Gates, among others. (https://trytwello.com/ceos-that-meditate/)
Similarly you could find people (like Arnold Schwarzenegger, if I recall?) citing that adopting a more mature, stoic mindset about life was helpful to them—Ray Dalio has this whole series of videos on “life principles” that he likes. And you could find others endorsing the importance of exercise and good sleep, or of using note-taking apps to stay organized.
I think the problem is not that meditation is ineffective, but that it’s not usually a multiple-standard-deviations gamechanger (and when it is, it’s probably usually a case of “counting up to zero from negative”, as TsviBT calls it), and it’s already a known technique. If nobody else in the world meditated or took notes or got enough sleep, you could probably stack those techniques and have a big advantage. But alas, a lot of CEOs and other top performers already know to do this stuff.
(Separately from the mundane life-improvement aspects, some meditators claim that the right kind of deep meditation can give you insight into deep philosophical problems, or the fundamental nature of conscious experience, and that this is so valuable that achieving this goal is basically the most important thing you could do in life. This might possibly even be true! But that’s different from saying that meditation will give you +50 IQ points, which it won’t. Kinda like how having an experience of sublime beauty while contemplating a work of art, might be life-changing, but won’t give you +50 IQ points.)
It feels sorta understandable to me (albeit frustrating) that OpenPhil faces these assorted political constraints. In my view this seems to create a big unfilled niche in the rationalist ecosystem: a new, more right-coded, EA-adjacent funding organization could optimize itself for being able to enter many of those blacklisted areas with enthusiasm.
If I was a billionaire, I would love to put together a kind of “completion portfolio” to complement some of OP’s work. Rationality community building, macrostrategy stuff, AI-related advocacy to try and influence Republican politicians, plus a big biotechnology emphasis focused on intelligence enhancement, reproductive technologies, slowing aging, cryonics, gene drives for eradicating diseases, etc. Basically it seems like there is enough edgy-but-promising stuff out there (like studying geoengineering for climate, or advocating for charter cities, or just funding oddball substack intellectuals to do their thing) that you could hope to create a kind of “alt-EA” (obviously IRL it shouldn’t have EA in the name) where you batten down the hatches, accept that the media will call you an evil villain mastermind forever, and hope to create a kind of protective umbrella for all the work that can’t get done elsewhere. As a bonus, you could engage more in actual politics (like having some hot takes on the US budget deficit, or on how to increase marriage & fertility rates, or whatever), in some areas that OP in its quest for center-left non-polarization can’t do.
Peter Thiel already lives this life, kinda? But his model seems 1. much more secretive, and 2. less directly EA-adjacent, than what I’d try if I was a billionaire.
Dustin himself talks about how he is really focused on getting more “multipolarity” to the EA landscape, by bringing in other high-net-worth funders. For all the reasons discussed, he obviously can’t say “hi, somebody please start an edgier right-wing offshoot of EA!!” But it seems like a major goal that the movement should have, nonetheless.
Seems like you could potentially also run this play with a more fully-left-coded organization. The gains there would probably be smaller, since there’s less “room” to OP’s left than to their right. But maybe you could group together wild animal welfare, invertebrate welfare, digital minds, perhaps some David Pearce / Project Far Out-style “suffering abolition” transhumanist stuff, other mental-wellbeing stuff like the Organization for the Prevention of Intense Suffering, S-risk work, etc. Toss in some more aggressive political activism on AI (like PauseAI) and other issues (like Georgist land value taxation), and maybe some forward-looking political stuff on avoiding stable totalitarianism, regulation of future AI-enabled technologies, and how to distribute the gains from a positive / successful singularity (akin to Sam Altman’s vision of UBI supported by Georgist/Pigouvian taxes, but more thought-through and detailed and up-to-date.)
Finding some funders to fill these niches seems like it should be a very high priority of the rationalist / EA movement. Even if the funders were relatively small at first (like say they have $10M - $100M in crypto that they are preparing to give away), I think there could be a lot of value in being “out and proud” (publicising much of their research and philosophy and grantmaking like OP, rather than being super-secretive like Peter Thiel). If a small funder manages to build a small successful “alt-EA” ecosystem on either the left or right, that might attract larger funders in time.
There are actually a number of ways that you might see a permanently stable totalitarian government arise, in addition to the simplest idea that maybe the leader never dies:
https://80000hours.org/problem-profiles/risks-of-stable-totalitarianism/
I and perhaps other LessWrongers would appreciate reading your review (of any length) of the book, since lots of us loved HPMOR, the Sequences, etc, but are collectively skeptical / on the fence about whether to dive into Project Lawful. (What’s the best way to read the bizarre glowfic format? What are the main themes of the book and which did you like best? etc)
This is nice! I like seeing all the different subfields of research listed and compared; as a non-medical person I often just hear about one at a time in any given news story, which makes things confusing.
Some other things I hear about in longevity spaces:
- Senescent-cell-based theories and medicines—what’s up with these? This seems like something people were actually trying in humans; any progress, or is this a dud?
- Repurposing essentially random drugs that might have some effect on longevity—most famously the diabetes drug metformin (although people aren’t expecting a very large increase in lifespan from this, rather at best a kind of proof-of-concept), also the immunosuppressant rapamycin. Anything promising here, or is this all small potatoes compared to more ambitious approaches like cellular reprogramming?
I enjoyed this other LessWrong post trying to investigate root causes of aging, which focuses more on macro-scale problems like atherosclerosis (although many of these must ultimately be driven by some kind of cellular-level problems like proteins getting messed up via oxidization).
Fellow Thiel fans may be interested in this post of mine called “X-Risk, Anthropics, & Peter Thiel’s Investment Thesis”, analyzing Thiel’s old essay “The Optimistic Thought Experiment”, and trying to figure out how he thinks about the intersection of markets and existential risk.
“Americans eat more fats and oils, more sugars and sweets, more grains, and more red meat; all four items that grew the most in price since 2003.”
Nice to know that you can eat healthy—fish, veggies, beans/nuts, eggs, fresh fruit, etc—and beat inflation at the same time! (Albeit these healthier foods still probably have a higher baseline price. But maybe not for much longer!)
The linked chart actually makes red meat look fine (beef has middling inflation, and pork has actually experienced deflation), but beverages, another generally unhealthy food, are near the top: https://www.ers.usda.gov/data-products/chart-gallery/gallery/chart-detail/?chartId=76961
As to the actual subject of the post, I have to imagine that:
- housing inflation feels so much worse in superstar cities than everywhere else, so for us cosmopolitan types it’s hard to believe that the national average (brought lower by cheap housing across the Rust Belt, etc) isn’t way higher.
- housing inflation is being measured in a way that doesn’t indicate the true severity of the economic distortion. Like you say, housing prices cause migration—SF is not just more expensive but also much smaller, less productive, etc, than it would be with better zoning laws. So only part of the tragedy caused by restrictive housing policy actually shows up as high housing prices. (You could say the same for health and other things—healthcare gets more expensive, but surely that also means people forgo certain expensive-but-beneficial treatments? But maybe housing just sees more of this effect than healthcare or education.)
A thoughtful post! I think about this kind of stuff a lot, and wonder what the implications are. If we’re more pessimistic about saving lives in sub-saharan africa, should we:
- promote things like lead removal (similar evidence-backed, scalable intervention as bednets, but aimed more directly at human capital)?
- promote things like charter cities (untested crazy longshot megaproject, but aimed squarely at transformative political / societal improvements)?
- switch to bednet-style lifesaving charities in South Asia, like you mention?
- keep on trucking with our original Givewell-style africa-based lifesaving charities, because even after considering all the above, the original charities still look better than any of the three ideas above?
I would love it if you cross-posted this to the EA Forum (I’m sure you’d get some more criticism there vs Lesswrong, but I think it would nevertheless be a good conversation for them to have!) https://forum.effectivealtruism.org/
Re: your point #2, there is another potential spiral where abstract concepts of “greatness” are increasingly defined in a hostile and negative way by partisans of slave morality. This might make it harder to have that “aspirational dialogue about what counts as greatness”, as it gets increasingly difficult for ordinary people to even conceptualize a good version of greatness worth aspiring to. (“Why would I want to become an entrepreneur and found a company? Wouldn’t that make me an evil big-corporation CEO, which has a whiff of the same flavor as stories about the violent, insatiable conquistador villains of the 1500s?”)
Of course, there are also downsides when culture paints a too-rosy picture of greatness—once upon a time, conquistadors were in fact considered admirable!
Feynman is imagining lots of components being made with “hand tools”, in order to cut down on the amount of specialized machinery we need. So you’d want sophisticated manipulators to use the tools, move the components, clean up bits of waste, etc. Plus of course for gathering raw resources and navigating Canadian tundra. And you’d need video cameras for the system to look at what it’s doing (otherwise you’d only have feed-forward controls in many situations, which would probably cause lots of cascading errors).
I don’t know how big a Raspberry Pi would be if it had to be hand-assembled from transistors big enough to pick up individually. So maybe it’s doable!
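To put very rough numbers on this (every figure below is an order-of-magnitude guess, not a measured spec): a Raspberry-Pi-class SoC has transistors numbering in the billions, while a minimal 1970s-style CPU like the MOS 6502 needed only around 3,500. A quick Fermi estimate of the hand-assembled volume:

```python
# Fermi estimate: volume of a computer hand-assembled from discrete transistors.
# All numbers are rough order-of-magnitude assumptions, not measured specs.

VOL_PER_TRANSISTOR_M3 = 1e-7  # ~0.1 cm^3 for a transistor big enough to pick up

def assembled_volume_m3(transistor_count: float) -> float:
    """Volume if every transistor is a discrete, individually placed part."""
    return transistor_count * VOL_PER_TRANSISTOR_M3

modern_soc = assembled_volume_m3(2e9)     # Raspberry-Pi-class chip: ~200 m^3
minimal_cpu = assembled_volume_m3(3.5e3)  # 6502-class CPU: well under a liter
```

So under these assumptions, a full Pi equivalent is hopeless at hand-assembly scales, but a 1970s-class controller could plausibly fit in the cube—the real question is how much computation the autofac actually needs on board.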
idk, you still have to fit video cameras and complex robotic arms and wifi equipment into that 1m^3 box, even if you are doing all the AI inference somewhere else! I have a much longer comment replying to the top-level post, where I try to analyze the concept of an autofac and what an optimized autofac design would really look like. Imagining a 100% self-contained design is a pretty cool intellectual exercise, but it’s hard to imagine a situation where it doesn’t make sense to import the most complex components from somewhere else (at least initially, until you can make computers that don’t take up 90% of your manufacturing output).
This was a very interesting post. A few scattered thoughts, as I try to take a step back and take a big-picture economic view of this idea:
What is an autofac? It is a vastly simplified economy, in the hopes that enough simplification will unlock various big gains (like gains from “automation”). Let’s interpolate between the existing global economy, and Feynman’s proposed 1-meter cube. It’s not true that “the smallest technological system capable of physical self-reproduction is the entire economy,” since I can imagine many potential simplifications of the economy. Imagine a human economy with everything the same, but no pianos, piano manufacturers, piano instructors, etc… the world would be a little sadder without pianos, but eliminating everything piano-related would slightly simplify the economy and probably boost overall productivity. The dream of the Autofac involves many more such simplifications, of several types:
- Eliminate luxuries (like pianos) and unnecessary complexity (do we really need 1000 types of car, instead of say, 5? The existence of so many different car manufacturers and car models is an artifact of capitalist competition and consumer preferences, not a physical necessity. Similarly, do we really need more than 5 different colors of paint / types of food / etc...).
- Give up on internal production of certain highly complex products, like microchips, in order to further simplify the economy. Keep giving up on more and more categories of complex products until your remaining internal economy is simple enough that you can automate the entire thing. Hopefully, this remaining automated economy will still account for most of the mass/energy being manipulated, with only a small amount of imports (lubricants, electronics, etc) required.
Why make such a fuss about disentangling a “fully automatable” simplified subset of the economy from a distant “home base” that exports microchips and lubricant? I don’t think a self-sufficient autofac plan would ever make sense in the middle of, say, the city of Shenzhen in China, when you are already surrounded by an incredibly complex manufacturing ecosystem that can easily provide whatever inputs you need. I can think of two reasons why you might want to cleave an economy in half like this, rather than just cluster everything together in a normal Shenzhen-style agglomeration mishmash:
- If you want to industrialize a distant, undeveloped location, and the cost of shipping goods there is very high, then it makes sense to focus on producing the heaviest materials locally and importing the smallest / most complex / most value-per-kg stuff.
- If you can cut humans entirely out of the loop of the simplified half of the economy, then you don’t have to import or produce any of the stuff humans need (food, housing, healthcare, etc), which is a big efficiency win. This looks especially attractive if you want to industrialize a harsh, uninhabitable location (like Baffin Island, Antarctica, the Sahara desert, the bottom of the ocean, the moon, Mars, etc), where the costs of supporting humans are higher than normal.
Take an efficiency hit in order to eliminate efficiency-boosting complexity. Perhaps instead of myriad types of alloy, we could get by with just a handful. Perhaps instead of myriad types of fastener, we could just use four standard sizes of screw. Perhaps instead of lots of specialized machines, we could make many of our tools “by hand” using generalized machine-shop tools.
But wait—I thought we were trying to maximize economic growth? Why give up things like carbide cutting tools in favor of inferior hardened-steel? Well, the hope is that if we simplify the economy enough, it will be possible to “automate” this simplified economy, and the benefits of this automation will make up for the efficiency losses.
Okay then, why does efficiency-impairing simplification help with automation? Couldn’t our autofac machine shop just as easily produce 10,000 types of fasteners, as 4 standard screws? Especially since the autofacs are making so many things “by hand”? Feynman seems very interested in an Autofac economy based almost entirely around steel—what’s the benefit of ditching useful materials like plastic, concrete, carbide, glass, rubber, etc? I see a few potential benefits to efficiency-impairing simplifications:
- It reduces the size/cost/complexity of the initial self-replicating system. (I think this motivation is misplaced, and we should be shooting for a much larger initial size than 1 meter cubed.)
- It reduces the engineering effort needed to design the initial self-replicating system. (This motivation is reasonable, but it interacts in interesting ways with AI.)
- By trying to minimize the use of inputs like rubber and plastic, we reduce our reliance on rare natural resources like rubber trees and oil wells, neither of which exist on Baffin Island, or the moon, or etc. (This motivation is reasonable, but it only applies to a few of the proposed simplifications.)
To me, it seems that the Autofac dream comes from a particular context—mid-20th-century visions of space exploration—that has unduly influenced Feynman’s current concept.
Why the emphasis on creating a very small, 1 meter cubed package size?? This is a great size for something that we are shipping to the moon on a Saturn V rocket, or landing on Mars via skycrane, or perhaps sending to a distant star system as a Von Neumann probe. But for colonizing Baffin Island or the Sahara Desert or anywhere else on earth, we can use giant container ships to easily move a much larger amount of stuff. By increasing the minimum size of our self-replicating system, we can include lots of efficiency-boosting nice-to-haves (like different types of alloys, carbide cutting tools, lubricant factories, etc). Feynman imagines initially releasing 1000 one-meter-cubed autofacs (and then supporting them with a continual stream of inputs), but I think we should instead design a single, 1000x-size autofac (it doesn’t have to be one giant structure—rather a network of factories, resource-gathering drones, steel mills, power plants, etc), since that would allow for more efficiency-boosting complexity.
The remaining argument for 1000 one-meter-cubed autofacs is that it would be easier to design this much-smaller, much-simpler product. This is true! I’ll get back to this in a bit.
In general, I suspect that the ideal size of the autofac system should be proportional to the amount of transportation throughput you can support to Baffin Island / Mars / wherever. Design effort aside, it would be ideal to design the largest and most complex possible autofac which would fit into your transportation budget (eg, if you can afford five container ships to Baffin Island per year, then your autofac system should be large enough to fit into five container ships).
Cutting humans entirely out of the loop is very appealing for deep-space exploration, but less appealing for places like Baffin Island. As long as you are only relying on relatively unskilled labor (such that you aren’t worried about running out of humans to import, during the final stages of the industrialization of the island when millions and millions of windmills / steel mills / etc are going up), then importing a bunch of humans to handle a small percentage of high-value, hard-to-automate tasks, is probably worth it (even though it means you now have to provide housing, food, entertainment, law enforcement, etc).
As others have mentioned, this “compromise” vision seems similar to Tesla’s dreams of robotic factory workers (in large, container-ship-sized factories that still employ some human workers) and SpaceX’s Mars colonization plans (where you still have a few humans assembling a mostly-mechanical system of nuclear power plants and solar panels, habitable spaces, greenhouses for food, etc—but no 1-meter cubes to be seen, since Starship can carry 100 tons at a time to Mars).
But again, I admit that re-complexifying the economy by introducing humans, does greatly increase the design complexity and thus the design effort required at the beginning.
The one-meter-cubed autofac seems so pleasingly universal, like maybe once we’ve designed it, we could deploy it in all kinds of situations! But I think it is a lot less universal than it looks.
A Baffin-Island-Plan autofac wouldn’t fare well in the Sahara desert, where you’d want to manufacture solar panels (which rely more on chemistry and unique materials) instead of simple mechanical windmills that could be built almost entirely from steel. In the Sahara, you’d also have less access to iron ore in the form of exposed rock; by contrast you’d have a lot of silica that you could use to make glass. On the moon, you’d have no atmosphere at all for wind, and extreme temperatures + vacuum conditions would probably break a lot of the machine-shop tools (eg, liquid lubricants would freeze or sublimate). Etc.
The above point isn’t a fatal problem—just having one autofac system for deserts and another for tundra would cover plenty of use cases for industrializing the unused portions of the earth. But you’d also run into problems when you finished replicating and wanted to use all those Baffin Island autofacs to contribute back to the external, human economy. Probably it would be fine to just have the Baffin Island autofacs build wind turbines and export steel + electricity, while the desert autofacs build solar panels and export glass + electricity. But if you decided that you wanted your Baffin Island autofacs to start growing food, or manufacturing textiles, you would have a big problem. The autofacs would in some ways be more flexible than a human manufacturing economy (eg, because they are doing more things “by hand”, thus could switch to producing other types of steel products very quickly), but in other ways they would be much more rigid than a human manufacturing economy (if you want anything not based on steel, it might be pretty difficult for all the autofacs to reconfigure themselves).
Design effort & AI—if AI is good enough to replace machinists, won’t it be good enough to help design an autofac?
This post reminds me of Carl Shulman’s calculations (eg, on his recent 80,000 Hours podcast appearance) about the world economy’s doubling times, and how fast they could possibly get, based on analogies to biological systems.
Feynman says that, after many years, nowadays the dream of the Autofac is finally coming within reach, because AI is now good enough to operate robotics, navigate the world, use tools, and essentially replace the human machinist in a machine shop. This seems pretty likely to come true, maybe in a few years.
But creating such a small, self-contained, simplified autofac seems like it is motivated by the desire to minimize the up-front design effort needed. If AI ever gets good enough to become a drop-in remote worker not just for machinists, but also for engineers/designers, then design effort is no longer such a big obstacle, and many of the conclusions flip.
Consider how a paperclip-maximising superintelligence would colonize Baffin Island:
- The first image that jumps to mind is one of automated factories tesselated across the terrain. I think this is correct insofar as there would be lots of repetition (the basic idea of industrial production is that you can get economies of scale, cheaply churning out many copies of the same product, when you optimize a factory for producing that product). But I don’t think these would be self-replicating factories.
- A superintelligent AI could do lots of design work very quickly, and wouldn’t mind handling an extremely complex economy. I would expect the overall complexity of the global economy to go way up, and the minimum size of a self-replicating system to stay very large (ie, nearly the size of the entire planetary economy), and we just end up shipping lots of stuff to Baffin Island.
- If we say that the superintelligence has to start “from scratch” on Baffin Island alone, with only a limited budget for imports, then I’d expect it to start with something that looks like the 1-meter-cubed autofac, but then continually scale up in the size and complexity of its creations over time, rather than creating thousands of identical copies of one optimized design.
- A superintelligence-run economy might actually feature much less repetition than a human industrial economy, since the AI can juggle constant design changes and iteration and customization for local conditions, rather than needing to standardize things (as humans do to allow interoperability and reduce the costs of communicating between different humans).
Okay, I will try to sum up these scattered thoughts...
I think that the ideal autofac design for a given situation will vary a lot based on factors like:
- what resources are locally available (wind vs solar, etc)
- how expensive it is to support humans as part of the design, vs going fully automated
- how much it costs to ship things to the location (the more you can ship, the bigger and more complex your autofac should be, other things equal)
- the ultimate scale you’re aspiring to industrialize, relative to the size of your initial shipments (if you want to generate maximum energy on 1 acre of land using 100 tons of payload, you should probably just import a 100-ton nuclear reactor and call it a day, rather than waste a bunch of money trying to design a factory to build a factory to build a nuclear reactor. Whereas if you are trying to generate power over the entire surface of Mars with a 100-ton payload, it is much more important to first create a self-replicating industrial base before you eventually turn towards creating lots of power plants.)
- how much it costs to design a given autofac system
  - a larger, more complex autofac will cost more to design, but will be more efficient
  - a more-completely-self-sufficient system (eg, including lubricant factories, or eliminating the need for humans on Baffin Island) will cost more to design, but will save on shipping costs later
  - if you can use lots of already-existing designs, that will lower design costs (but it will increase complexity elsewhere, since now you have to manufacture all the 10,000 types of fasteners and alloys and etc used by today’s random equipment designs)
  - advanced AI might be able to help greatly reduce design costs
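The payload-vs-target-scale factor above can be made concrete with a toy calculation (every parameter here is an illustrative assumption, not an engineering estimate):

```python
# Toy model: direct import of finished equipment vs bootstrapping a
# self-replicating industrial base. All parameters are illustrative.

def direct_import_mass(payload_tons: float) -> float:
    """Productive mass if the payload itself is the final equipment."""
    return payload_tons

def bootstrap_mass(payload_tons: float, doubling_time_yr: float,
                   years: float) -> float:
    """Productive mass if the payload is seed industry that doubles itself."""
    return payload_tons * 2 ** (years / doubling_time_yr)

# A 100-ton payload with an assumed 1-year doubling time:
print(direct_import_mass(100))       # 100 tons, available immediately
print(bootstrap_mass(100, 1.0, 10))  # ~102,400 tons after a decade
```

For a 1-acre site, the 100 directly-imported tons already saturate demand, so bootstrapping is pure overhead; for a planetary surface, the exponential term dominates everything else.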
A fully-automated “autofac” design wins over a more traditional human-led industrialization effort when the upfront costs of designing the mostly-self-sufficient autofac system manage to pay for themselves by lowering the recurring costs (importing stuff, paying employees, etc) of the industrialization effort.
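That break-even condition can be sketched as a toy cost comparison (the dollar figures are made-up placeholders, not estimates):

```python
# Toy break-even: an autofac approach wins when its extra upfront design cost
# is repaid by lower recurring costs. All figures are made-up placeholders.

def total_cost(upfront: float, recurring_per_yr: float, years: float) -> float:
    """Total cost of a strategy over a given operating horizon."""
    return upfront + recurring_per_yr * years

def breakeven_years(extra_upfront: float,
                    recurring_savings_per_yr: float) -> float:
    """Years of operation before the autofac's design cost pays for itself."""
    return extra_upfront / recurring_savings_per_yr

# e.g. $2B of extra design cost vs $500M/yr saved on salaries and shipping:
print(breakeven_years(2e9, 5e8))  # -> 4.0 years
```

The interesting empirical question is which term dominates the savings: avoided salaries, avoided shipping of heavy materials, or something else entirely.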
Whoever decides to start the cool autofac startup, should probably spend a bunch of time considering these big-picture economic tradeoffs, trying to figure out what environment (Baffin Island, the Sahara Desert, the oceans, the moon, Mars, Alpha Centauri, etc) offers the most upside from an autofac-style approach (Baffin-Island-like tundra might indeed be the best and most practical spot), and what tradeoffs to make in terms of autofac size/complexity, how much and where to incorporate humans into the system, etc.
I would personally love to get a better sense of where the efficiencies are really coming from, that help an autofac strategy win vs a human-led industrialization strategy. Contrast the autofac plan with a more traditional effort to have workers build roads and a few factories and erect windmills all over Baffin Island to export electricity—where are the autofac wins coming from? The autofac would seem to have some big disadvantages, like that its windmill blades will be made of heavy steel instead of efficient fiberglass. Are the gains mostly from the fact that we’re not paying as many worker salaries? Or is it mostly from the fact that we’re producing all our heavy materials on-site rather than having to ship them in? Or somewhere else?
1950s-era computers likely couldn’t handle the complex AI tasks imagined here (doing image recognition, navigating rough Baffin Island terrain, finishing parts with hand tools, etc.) without taking up much more than one cubic meter.
Socialism / communism is about equally abstract as Georgism, and it certainly inspired a lot of people to fight! Similarly, Republican campaigns to lower corporate tax rates, cut regulations, reduce entitlement spending, etc, are pretty abstract (and often actively unpopular when people do understand them!), but have achieved some notable victories over the years. Georgism is similar to YIMBYism, which has lots of victories these days, even though YIMBYism also suffers from being more abstract than conspiracy theories with obvious villains about people “hoarding” vacant housing or chinese investors bidding up prices or whatever. Finally, Georgism itself was extremely popular once, so it clearly has the potential!! Overall, I don’t think being abstract is fatal for a mass movement.
But I also don’t think that we need to have some kind of epic Georgist popular revolution in order to get Georgist policies—we can do it just by making small incremental technocratic reforms to local property tax laws—getting local governments to use tools like ValueBase (developed by Georgist Lars Doucet) to do their property value assessments, getting reforms in a few places and then hopefully seeing success and pointing to that success to build more momentum elsewhere, etc.
As Lars Doucet tells it, the main problem with historical Georgism wasn’t unpopularity (it was extremely popular then!), but just the technical infeasibility of assessing land value separate from the value of the buildings on the land. But nowadays we have machine learning tools, GIS mapping systems, satellite imagery, successful home-value-estimation companies like Zillow and Redfin, etc. So nowadays we can finally implement Georgism on a technical level, which wasn’t possible in the 1890s. For more on this, see the final part of Lars’s epic series of georgism posts on Astral Codex Ten: https://www.astralcodexten.com/p/does-georgism-work-part-3-can-unimproved?utm_source=url
Future readers of this post might be interested in this other lesswrong post about the current state of multiplex gene editing: https://www.lesswrong.com/posts/oSy5vHvwSfnjmC7Tf/multiplex-gene-editing-where-are-we-now
Future readers of this blog post may be interested in this book-review entry at ACX, which is much more suspicious/wary/pessimistic about prion disease generally:
They dispute the idea that having M/V or V/V genes reduces the odds of getting CJD / mad cow disease / etc.
They imply that Britain’s mad cow disease problem maybe never really went away, in the sense that “spontaneous” cases of CJD have quadrupled since the 80s, so it seems CJD is being passed around somehow?
https://www.astralcodexten.com/p/your-book-review-the-family-that
What kinds of space resources are like “mice & cheese”? I am picturing civilizations expanding to new star systems mostly for the matter and energy (turn asteroids & planets into a dyson swarm of orbiting solar panels and supercomputers on which to run trillions of emulated minds, plus constructing new probes to send onwards to new star systems).
re: the Three Body Problem books—I think the book series imagines that alien life is much, much more common (ie, many civilizations per galaxy) than Robin Hanson imagines in his Grabby Aliens hypothesis, such that there are often new, not-yet-technologically-mature civilizations popping up nearby each other, around the same time as each other. Versus an important part of the Grabby Aliens model is the idea that the evolution of complex life is actually spectacularly rare (which makes humans seem to have evolved extremely early relative to when you might expect, which is odd, but which is then explained by some anthropic reasoning related to the expanding grabby civilizations—all new civilizations arise “early”, because by the mid-game, everything has been colonized already). If you think that the evolution of complex life on other planets is actually a very common occurrence, then there is no particular reason to put much weight on the Grabby Aliens hypothesis.
In The Three Body Problem, Earth would be wise to keep quiet so that the Trisolarans don’t overhear our radio transmissions and try to come and take our nice temperate planet, with its nice regular pattern of seasons. But there is nothing Earth could do about an oncoming “grabby” civilization—the grabby civilization is already speeding towards Earth at near-lightspeed, and wants to colonize every solar system (inhabited and uninhabited, temperate planets with regular seasons or no, etc.), since it doesn’t care about temperate continents, just raw matter that it can use to create dyson swarms. The grabby civilizations are already expanding as fast as possible in every direction, coming for every star—so there is no point trying to “hide” from them.
Energy balance situation:
- the sun continually emits around 10^26 watts of light/heat/radiation/etc.
- per some relativity math at this forum comment, it takes around 10^18 joules to accelerate 1kg to 0.99c
- so, using just one second of the sun’s energy emissions, you could afford to accelerate around 10^8 kg (roughly the mass of a very large cargo ship, such as the RMS Titanic) to 0.99c. Or if you spend 100 days’ worth of solar energy instead of one second, you could accelerate about 10^15 kg, the mass of Mt. Everest, to 0.99c.
- of course then you have to slow down on the other end, which will take a lot of energy, so the final size of the von neumann probe that you can deliver to the target solar system will have to be much smaller than the Titanic or Mt Everest or whatever.
- if you go slower, at 0.8c, you can launch 10x as much mass with the same energy (and you don’t have to slow down as much on the other end, so maybe your final probe is 100x bigger), but of course you arrive later—if you’re travelling 10 light years, you show up about 2.4 years later than the 0.99c probe. If you’re travelling 100 light years, you show up about 24 years later.
- which can colonize the solar system and build a dyson swarm faster—a tiny probe that arrives as soon as possible, or a 100x larger probe that arrives with a couple years’ delay? This is an open question that depends on how fast your von Neumann machine can construct solar panels, automated factories, etc. Carl Shulman in a recent 80K podcast figures that a fully-automated economy pushing up against physical limits could double itself at least as quickly as once per year. So maybe the 0.99c probe would do better over the 100 light-year distance (arriving ~24 years early gives time for ~24 doublings!), but not for the 10 light-year distance (the 0.99c probe would only have doubled itself about twice, to ~4x its initial mass, by the time the 0.8c probe shows up with 100x as much mass)
- IMO, if you are trying to rapaciously grab the universe as fast as possible (for the ultimate purpose of maximizing paperclips or whatever), probably you don’t hop from nearby star to nearby star at efficient speeds like 0.8c, waiting to set up a whole new dyson sphere (which probably takes many years) at each stop. Rather, your already-completed dyson swarms are kept busy launching new probes all the time, targeting ever-more-distant stars. By the time a new dyson swarm gets finished, all the nearby stars have also been visited by probes, and are already constructing dyson swarms of their own. So you have to fire your probes not at the nearest stars, but at stars some distance further away. My intuition is that the optimal way to grab the most energy would end up favoring very fast expansion speeds, but I’m not sure. (Maybe the edge of your cosmic empire expands at 0.99c, and then you “mop up” some interior stars at more efficient speeds? But every second that you delay in capturing a star, that’s a whopping 10^26 joules of energy lost!)
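The arithmetic in the bullets above is easy to recompute directly from the relativistic kinetic energy formula. A quick Python sketch (the solar luminosity constant here is the real ~4e26 W value, which the bullets round down to 10^26):

```python
import math

C = 299_792_458        # speed of light, m/s
SUN_WATTS = 3.8e26     # solar luminosity, rounded to 10^26 W in the bullets above

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy, (gamma - 1) * m * c^2, for speed beta*c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

# Energy to get 1 kg to 0.99c -- about 5.5e17 J, i.e. the ~10^18 J quoted above:
print(f"1 kg to 0.99c: {kinetic_energy(1.0, 0.99):.2e} J")

# Mass you could accelerate to 0.99c with one second of the sun's output:
print(f"one second of sunlight: {SUN_WATTS / kinetic_energy(1.0, 0.99):.2e} kg")

# Energy ratio between a 0.99c launch and a 0.8c launch (the "10x as much mass"):
ratio = kinetic_energy(1.0, 0.99) / kinetic_energy(1.0, 0.8)
print(f"0.99c / 0.8c energy ratio: {ratio:.1f}")

# Arrival-time gap (in the rest frame) between 0.8c and 0.99c probes:
for dist_ly in (10, 100):
    gap_years = dist_ly / 0.8 - dist_ly / 0.99
    print(f"{dist_ly} ly: the 0.8c probe arrives {gap_years:.1f} years later")
```

(None of this accounts for the deceleration burn at the destination, propellant mass, or drag from the interstellar medium, all of which make the real payload numbers worse.)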
Yes, it does have to be fast IMO, but I think fast expansion (at least among civilizations that decide to expand much at all) is very likely.
Of course the first few starships that a civilization sends to colonize the nearest stars will probably not be going anywhere near the speed of light. (Unless it really is a paperclips-style superintelligence, perhaps.) But within a million years or so, even with relatively slow-moving ships, you have colonized thousands of solar systems, built dyson swarms around every star, have a total population in the bajillions, and have probably developed about all the technology that it is physically possible to develop. So, at some point it’s plausible that you start going very close to the speed of light, because you’ll certainly have enough energy + technology to do so, and because it might be desirable for a variety of reasons:
- Maybe we are trying to maximize some maximizable utility function, be that paperclips or some more human notion, and want to minimize what Nick Bostrom calls “astronomical waste”.
- Maybe we fail to coordinate (via a strong central government or etc), and the race to colonize the galaxy becomes a free-for-all, rewarding the fastest and most rapacious settlers, a la Robin Hanson’s “Burning the cosmic commons”.
Per your own comment—if you only colonize at 0.8c so your ships can conserve energy, you are probably actually missing out on lots and lots of energy, since you will only be able to harvest resources from about half the volume that you could grab if you traveled at closer to lightspeed!
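For what it’s worth, that “about half the volume” figure checks out: after any given time, a frontier expanding at 0.8c has swept out a sphere whose volume is (0.8/0.99)^3 of the 0.99c frontier’s. A one-liner to confirm:

```python
# Sphere volume scales as radius cubed, so the slower frontier's share is:
frac = (0.8 / 0.99) ** 3
print(f"{frac:.2f}")  # ~0.53, i.e. roughly half the volume
```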
Satellites were also plausibly a very important military technology. Since the 1960s, some applications have panned out, while others haven’t. Some of the things that have worked out:
GPS satellites were designed by the air force in the 1980s for guiding precision weapons like JDAMs, and only later incidentally became integral to the world economy. They still do a great job guiding JDAMs, powering the style of “precision warfare” that has given the USA a decisive military advantage ever since 1991’s first Iraq war.
Spy satellites were very important for gathering information on enemy superpowers, tracking army movements, etc. They were especially good for helping both nations feel more confident that their counterpart was complying with arms agreements about the number of missile silos, etc. The Cuban Missile Crisis was kicked off by U-2 spy-plane flights photographing partially-assembled missiles in Cuba. For a while, planes and satellites were both in contention as the most useful spy-photography tool, but eventually even the U-2’s successor, the incredible SR-71 blackbird, lost out to the greater utility of spy satellites.
Systems for instantly detecting the characteristic gamma-ray flashes of nuclear detonations that go off anywhere in the world (I think such sensors are carried on GPS satellites), and for giving early warning by tracking ballistic missile launches during their boost phase, are obviously a critical part of nuclear deterrence / nuclear war-fighting. (The Soviet version of the early-warning system famously misfired and almost caused a nuclear war in 1983, which was fortunately forestalled by one Lieutenant Colonel Stanislav Petrov.)
Some of the stuff that hasn’t:
The air force initially had dreams of sending soldiers into orbit, maybe even operating a military base on the moon, but could never figure out a good use for this. The Soviets even test-fired a machine-gun built into one of their Salyut space stations: “Due to the potential shaking of the station, in-orbit tests of the weapon with cosmonauts in the station were ruled out. The gun was fixed to the station in such a way that the only way to aim would have been to change the orientation of the entire station. Following the last crewed mission to the station, the gun was commanded by the ground to be fired; some sources say it was fired to depletion”.
Despite some effort in the 1980s, we were unable to figure out how to make “Star Wars” missile defense systems work anywhere near well enough to defend us against a full-scale nuclear attack.
Fortunately we’ve never found out if in-orbit nuclear weapons, including fractional orbit bombardment weapons, are any use, because they were banned by the Outer Space Treaty. But nowadays maybe Russia is developing a modern space-based nuclear weapon as a tool to destroy satellites in low-earth orbit.
Overall, lots of NASA activities that developed satellite / spacecraft technology seem like they had a dual-use effect advancing various military capabilities. So it wasn’t just the missiles. Of course, in retrospect, the entire human-spaceflight component of the Apollo program (spacesuits, life support systems, etc) turned out to be pretty useless from a military perspective. But even that wouldn’t have been clear at the time!