First, more patches growing from different starting locations is better. That cuts the required linear expansion rate in proportion to the ratio of half Earth's circumference to the maximum distance between patches.
Note that 0.46 m/s is walking speed. Two-layer fractal growth is practical (IE:specialised spikes grow outwards at 0.46 m/s, initiating slower growth fronts that cover the area between them more slowly).
Material transport might become the binding constraint, but transport gets more efficient as you increase density. Larger tubes have higher flow velocities at the same pressure gradient (less benefit once turbulence sets in). Air bearings (think very long air hockey table) are likely close to optimal and easy enough to construct.
As for biomass/area: corn grows to 10 Mg/ha = 1 kg/m².
For a kilometer-long front advancing at 0.5 m/s, that implies half a tonne per second. Train cars mass in the tens to hundreds of tonnes; assuming 10 tonnes and 65′ (~20 m), that's half a tonne per meter of train. So a single train equivalent running at 1 m/s relative to the front (1.5 m/s ground speed, since the front itself advances at 0.5 m/s) supplies a kilometer of frontage.
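Spelling out that arithmetic (a rough sketch; the car mass and ~20 m length are the assumptions above):

```python
# Sanity check: mass flux needed to feed a growing front, and the
# train-equivalent speed that delivers it.
biomass = 1.0          # kg/m^2, corn-level (~10 Mg/ha)
front_length = 1000.0  # m of frontage
front_speed = 0.5      # m/s linear expansion

supply_rate = biomass * front_length * front_speed
print(supply_rate)                   # 500 kg/s: half a tonne per second

train_density = 10_000 / 20.0        # kg per meter of train (10 t car, ~20 m)
speed_vs_front = supply_rate / train_density
print(speed_vs_front)                # 1.0 m/s relative to the front
print(speed_vs_front + front_speed)  # 1.5 m/s ground (running) speed
```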
There’s obviously room to scale this.
I’m also ignoring oceans. Oceans make this easier, since anything floating can move like a boat, for which 0.5 m/s is not a significant speed.
Added notes:
I would assume the assimilation front has higher biomass/area than the enclosed interior, since there’s more going on there, including potential conflict with wildlife. This makes things trickier, and assembly/reassembly could be a pain, so maybe put it on legs or something?
You obviously didn’t read the post; it does discuss this. See the section on size and temperature.
That point (compute energy/system surface area) assumes we can’t drop clock speed. If cooling were the binding constraint, drop clock speed and we can still reap efficiency gains from miniaturization.
Heat dissipation scales linearly with size at constant ΔT: shrink a device by a factor of ten and the driving thermal gradient steepens tenfold, while the cross-sectional area of the material conducting that heat drops 100x. So if thermals are the constraint, scaling linear dimensions down by 10x requires reducing power by 10x or switching to some exotic cooling solution (which may be limited in the improvement OOMs achievable).
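A minimal illustration of that scaling, assuming simple Fourier conduction with made-up numbers:

```python
# Fourier conduction: Q = k * A * dT / L. Shrinking every linear
# dimension by s scales A by s^2 and L by s, so Q scales by s.
def heat_flow(k, area, dT, length):
    return k * area * dT / length

k, dT = 150.0, 10.0  # illustrative conductivity (W/m/K) and delta-T (K)
big   = heat_flow(k, area=1e-4, dT=dT, length=1e-2)  # cm-scale device
small = heat_flow(k, area=1e-6, dT=dT, length=1e-3)  # same device, 10x smaller
print(small / big)   # 0.1 -> a 10x shrink cuts heat removal capacity 10x
```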
But if we assume constant energy per bit per unit of wire length, reducing wire length by 10x cuts power consumption by 10x. Power only goes back up if you also increase clock speed by 10x (which becomes possible since propagation velocity is unchanged and signals travel less distance). In fact, thinning the wires to reduce propagation speed gets you a small amount of added power savings.
All that assumes the logic will shrink which is not a given.
Added points regarding cooling improvements:
brain power density of 20 mW/cc is quite low
ΔT is pretty small (single digit °C)
switching to temperature-tolerant materials for higher ΔT gives 1-1.5 OOM
phase-change cooling gives another 1 OOM
increasing pump power/coolant volume is the biggie, since even a few MPa is doable without being counterproductive or increasing the power budget much (2-3 OOM)
even if cooling is a hard binding constraint, increased interconnect density means you can downsize a bit less and devote more volume to cooling
Consider trying to do the reverse for computers. Swap copper for saltwater.
You can of course drop operating frequency by 10^8 for a 10-50 Hz clock speed at the same energy efficiency.
But you could get added energy efficiency in any design by scaling down the wires to increase resistance and reduce capacitance, and by reducing clock speed.
Adiabatic computing is reversible in the limit because moving charge carriers more slowly eliminates resistive losses.
Thermal noise power is proportional to bandwidth (noise voltage to its square root). Put another way, if the logic element responds slowly enough, it sees lower noise by averaging.
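For concreteness, the Johnson-Nyquist formula (a sketch; the 1 MΩ element is an arbitrary example, not from the original):

```python
from math import sqrt

def noise_vrms(R, bandwidth, T=300.0):
    """Johnson-Nyquist noise: V_rms = sqrt(4 * k_B * T * R * bandwidth)."""
    k_B = 1.380649e-23  # J/K
    return sqrt(4 * k_B * T * R * bandwidth)

R = 1e6  # ohm, arbitrary example element
print(noise_vrms(R, 1e6))  # ~129 uV rms at 1 MHz bandwidth
print(noise_vrms(R, 1e2))  # ~1.3 uV rms at 100 Hz: 1e4x less bandwidth,
                           # 100x less noise voltage (sqrt scaling)
```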
Consider a nanoelectromechanical relay. These are usually used for RF switching, so switching voltage isn’t a design priority, but switching voltage can be brought arbitrarily low. The mass of the cantilever determines frequency response. A NEMR with a very long, light, low-stiffness cantilever could respond well at 20 kHz and be sensitive to thermal noise. Adding mass to the end makes it less sensitive to transients (lower bandwidth, slower response) without affecting switching voltage.
In a NEMS computer there’s the option of dropping stiffness, voltage and operating frequency while increasing inertia (all proportionally), which allows for quadratic reductions in power consumption.
IE: moving closer to the ideal of zero effective resistance by taking clock speed to zero.
The Landauer limit on bit erasure still applies, but we’re ~10^6 short of it right now.
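Rough numbers for that gap (a sketch; the per-bit figure for current hardware is my assumed order of magnitude, not from the original):

```python
from math import log, log10

k_B, T = 1.380649e-23, 300.0
landauer = k_B * T * log(2)   # ~2.9e-21 J per erased bit at 300 K
current_per_bit = 3e-15       # J/bit, assumed order of magnitude for today's logic
print(landauer)                           # ~2.87e-21 J
print(log10(current_per_bit / landauer))  # ~6 -> roughly 10^6 of headroom
```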
Caveats:
NEM relays currently have limits to voltage scaling due to adhesion. Assume the hypothetical relay has a contact point small enough that thermal noise can unstick it; operating frequency may have to be a bit lower to wait for this to happen.
Yes, designing proteins or RNAzymes or whatever is hard: immense solution space and difficult physics. Trial and error or physically implemented genetic algorithms work well and may be optimal (EG:provide a fitness incentive to bacteria that succeed, as in “can you metabolize lactose?”).
Major flaw in evolution:
nature does not assign credit for instrumental value
assume an enzymatic pathway needs to perform N steps
all steps must be performed for any benefit to occur
the difficulty of solving each step is a constant C
evolution has to do O(C^N) work to solve the problem
with an additional small constant-factor improvement from horizontal gene transfer and cooperative solution finding (EG: bacterial symbiosis)
an intelligent agent can solve each step individually for O(C*N) (linear) work (toy model after this list)
this applies also to any combination of structural and biochemical changes.
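A toy model of that gap (C and N are assumed illustrative values; each step succeeds with probability 1/C per trial):

```python
# Evolution only gets credit when all N steps work at once; a designer
# can test and lock in each step separately.
C, N = 10, 6   # assumed per-step difficulty and pathway length

evolution_work = C ** N   # expected trials for all steps to line up jointly
designer_work  = C * N    # expected trials solving steps one at a time
print(evolution_work, designer_work)   # 1000000 vs 60
```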
Also, nature’s design language may not be optimal for expressing useful design changes concisely. Biological state machines are hard to change in ways that carry through neatly to the final organism, and this shows in various small ways in organism design. Larger changes don’t happen even when they’re very favorable (EG:flipping the retina would substantially improve low-light eye capability, and the analogous change very much did in image sensors), and the fact that less valuable changes neither happen nor vary at all over evolutionary time implies something is in the way. If nature could easily make plumbing changes, organisms wouldn’t all share similar topology (IE:not just be warped copies of something else). New part introduction and old part elimination can happen, but it’s not quick or clean.
Nature has no mechanisms for making changes at higher levels of abstraction. It can change one part of the DNA string but not “all the start codons at once and the ribosome start codon recognition domain”. Each individual genetic change is an independent discovery.
Working in these domains of abstraction reduces the dimensionality of the problem immensely and other such abstractions can be used to further constrain solution space cheaply.
Nanotech would definitely be nice but some people have expressed skepticism so I’m proposing an alternative non-(dry)nanotech route.
I’m assuming the AGI is going to kill off all the humans quickly with highly fatal pathogens with long incubation times. Whatever works to minimize transitional chaos and damage to valuable infrastructure.
The meat of this is a proposed solution for thriving after humans are dead. The green infrastructure doesn’t have to be that large to sustain the AI’s needs initially. A small cluster of a few dozen consumer gpus + biotech interfacing hardware may be the AI’s temporary home until it can build up enough to re-power datacenters and do more scavenging.
Although I’d go with multiple small clusters for redundancy. Initial power consumption can be more than handled by literally a backyard’s worth of kudzugoth and a small bio-electric generator. Plant-based solar to sugar to electricity should give 50 W/m², so for a 6 kW cluster with 20 GPUs, a 20m*10m patch should do, and it could be unobtrusive, blending into the surrounding vegetation.
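The arithmetic behind that patch size (a sketch; the ~300 W per consumer GPU is my assumption):

```python
watts_per_m2 = 50.0             # plant solar -> sugar -> electricity estimate above
gpus, watts_per_gpu = 20, 300   # assumed ~300 W/GPU -> ~6 kW cluster

cluster_watts = gpus * watts_per_gpu
print(cluster_watts)                 # 6000 W
print(cluster_watts / watts_per_m2)  # 120 m^2 needed, < the 200 m^2 patch
```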
Maybe. Still, there are ways to harden an organism against parasitic intrusion. TLDR: you isolate and filter external things. Plants are pretty good at this already (they have no mammalian-style immune system) and employ regularly spaced filters in their water tubes with holes too small for bacteria.
The other option is to do the biological equivalent of “commoditize your complement”. Don’t get good at making leaves and roots, get good at being a robust middleman between leaves and roots and treat them as exploitable breedable workers. Obviously don’t optimise too hard in such a way as to make the system brittle (EG:massive uninterrupted monocultures). Have fallback options ready to deploy if something goes wrong.
If you want to make any victory Pyrrhic, just re-use other common earth plant parts wholesale. To kill the organism, the defender then needs root-eating fungi for all the food crops and common trees/grasses, and likewise a leaf fungus/bacterium for each. The organism can switch between plant varieties to remain effective, so the defender has to release bioweapons that kill most important plants.
Let’s talk growth rates.
A corn seed weighs 0.25 grams. A corn cob weighs 0.25 kg, a 1000x increase, and takes 60-100 days to grow. Assuming 1 cob per plant and 80 days, that’s 80/log2(1000) ≈ 8 days doubling time, not counting the leaves and roots. I’d guess it’s closer to 7 days including stalk, leaves and roots.
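The same doubling-time arithmetic, spelled out (seed and cob masses from above):

```python
from math import log2

seed_g, cob_g, grow_days = 0.25, 250.0, 80
doublings = log2(cob_g / seed_g)   # ~10 doublings for a 1000x mass gain
print(grow_days / doublings)       # ~8 days per doubling
```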
Kudzu can grow one foot per day.
Suppose a doubling time of one week, which is pretty conservative. That’s a daily growth rate of 2^(1/7) --> 10%: whatever area it’s covering, it adds 10% of that per day. For a square patch measuring 100m*100m, spreading ~1000 m²/day of new growth along the 400 m perimeter means each side advances about 2.5 meters per day, within an order of magnitude of kudzu’s observed foot per day.
initial : (100m)² 2.5m/day linear
month1 : (450m)² 11m/day linear
month2 : (2km)² 50m/day linear
month3 : (9km)² 220m/day linear
month4 : (40km)² 1km/day linear
month5 : (180km)² 4.4km/day linear
month6 : (800km)² 20km/day linear
month7 : (3600km)² 90km/day linear
month8 : (16000km)² 400km/day linear (half of earth surface area covered)
8m1w : all done
1-week doubling times are enough to get you biosphere assimilation in under a year. If going full Tyranid and eating the plants/trees/houses can speed things up, then things go faster; much better efficiencies are achievable by eating the plants and reusing most of the cellular machinery. A doubling time of two days takes the 8-month global coverage time down to 10 weeks. Remember, E. coli doubles in 20 minutes, so if we can literally re-use the whole tree (jack into the sap being produced) while eating the structural wood, doubling times could get pretty absurd.
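A minimal simulation of the schedule above (my sketch: 7-day area doubling from a single (100 m)² patch, with front speed taken as each edge’s outward advance):

```python
from math import log, sqrt

doubling_days = 7.0
area = 100.0 ** 2       # m^2, (100 m)^2 starting patch
half_earth = 2.55e14    # m^2, half of Earth's surface

day = 0
while area < half_earth:
    day += 1
    area *= 2 ** (1 / doubling_days)
    if day % 30 == 0:
        side = sqrt(area)
        front = side * log(2) / (4 * doubling_days)  # m/day per edge
        print(f"month {day // 30}: ({side / 1000:.2f} km)^2, {front:.1f} m/day")
print(f"half of Earth covered after {day} days (~{day / 30:.1f} months)")
```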
The reason for specifying modular construction is to enable the faster linear growth rates necessary for fast spread. Starting from multiple points is also important. Much better to have 10,000 small 1m*1m patches spread out globally than a single 100m*100m patch: same timeline, but 100x lower required linear expansion rate.
I suspect it may be more practical to defend against this sort of attack using finite intelligence than previously assumed. We need to build the machine that knows how to guard against these sorts of things, but if we can build that vulnerability-closer, we don’t need to hit max ASI to stop other ASIs from destroying all pre-ASI life on earth.
If you read between the lines in my Human level AI can plausibly take over the world post, hacking computers is probably the lowest difficulty “take over the world” strategy and has the side benefit of giving control over all the internet connected AI clusters.
The easiest way to keep a new superintelligence from emerging is to seize control of the computers it would be trained on. The AI only needs to hack far enough to monitor AI researchers and AI training clusters and sabotage later AI runs in a non-suspicious way. It’s entirely plausible this has already happened and we are either in the clear or completely screwed depending on the alignment of the AI that won the race.
Also, hacking computers and writing software is something easy to test and therefore easy to train. I doubt that training an LLM to be a better hacker/coder is much harder than what’s already been done in the RL space by OpenAI and DeepMind (EG: playing DOTA and Starcraft).
Biotech is a lot harder to deal with since ground truth is less accessible. This can be true for computer security too, but to a much lesser extent (EG: lack of access to the chips in the latest iPhone, and lack of complete understanding thereof with which to develop/test attacks).
but it also solves global warming and climate contamination, and acts as a power & fuel grid. That plus bio-immortality is basically everything I personally want out of AGI. So I’d really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.
Pshh, low expectations. Mind uploading or bust!
Both brains and current semiconductor chips are built on dissipative/irreversible wire signaling, and are mostly interconnect by volume
That’s exactly what I meant. Thin wires inside a large amount of insulation is sub optimal.
When using better wire materials, rather than reducing capacitance per unit length, interconnect density can be increased (more wires per unit area) and the entire design compacted. That means higher capacitance per unit of wire length than the alternative, but much shorter wires, and overall lower switching energy. This is why chips and brains are “mostly interconnect by volume”: building them any other way is counterproductive.
The scenario I outlined, while suboptimal, shows that in white matter there’s an OOM to be gained even when wire length cannot be decreased (EG:trying to further fold the grey matter locally in the already very folded cortical surface). In cases where white matter interconnect density was limiting and further compaction is possible, you could cut wire length for more energy/power savings, and that is the better design choice.
It sure looks like that could be possible in the above image. There’s a lot of white matter in the middle and another level of even coarser folding could be used to take advantage of interconnect density increases.
Really though increasing both white and grey matter density until you run up against hard limits on shrinking the logic elements (synapses) would be best.
Agreed. My bad.
“and the dielectric is the same thickness as the wires” is doing the work there. It makes sense to do that if you’re packing everything tightly, but with an 8 OOM increase in conductivity we can choose to change that ratio (by quite a lot) within the existing brain design. In a clean-slate design you would obviously do some combination of wire thinning and increasing overall density to reduce wire length.
The figures above show that (ignoring integration problems like copper toxicity and Na/K vs e- charge carrier differences), assuming you do a straight saltwater-to-copper swap in white matter neurons and just shrink the core diameter (replacing most of it with insulation), energy per switching event goes down by 12.5x.
I’m pretty sure that for non-superconducting electrical interconnects, reliability is set by Johnson-Nyquist noise, and figuring out the output noise distribution for an RC transmission line is something I don’t feel like doing right now. Worth noting: the above scenario preserves the R:C ratio of the transmission line (IE: 1 ohm worth of line has the same distributed capacitance), so thermal noise as seen from the end should be unchanged.
I think this is wrong. The Landauer limit applies to bit operations, not to moving information; the fact that optical signalling has no per-distance cost should be suggestive of this. (Edit: reversibility does change things, but it can be approached by reducing clock speed, which in the limit gives zero effective resistance.) My guess is that wire energy per unit length is similar because wires tend to have similar optimum conductor:insulation diameter ratios, leading to relatively consistent capacitance per unit length.
Concretely, if you have a bunch of wires packed in a cable and want to reduce wire-to-wire capacitance to cut C*V² energy losses, putting the wires further apart does this. That’s not practical because it limits wires/cm² (cross-sectional interconnect density), but the same thing can be done with more conductive materials, EG: switching from saltwater (0.5 S/m) to copper (50 MS/m) for a 10^8 increase in conductivity.
Capacitance of a wire with cylindrical insulation is proportional to 1/ln(Do/Di). For a myelinated neuron with a 1:2 saltwater:sheath diameter ratio (typical), switching to copper allows a 10^4 reduction in conductor diameter at the same resistance per unit length. That change gives a 14x reduction in capacitance (ln(20000)/ln(2) = 14.2). The effect is even bigger for wires with thinner insulation (EG:grey matter: ln(11000)/ln(1.1) = 97.6).
A lot of the capacitance in a myelinated neuron is in the unmyelinated nodes, but we can now place those further apart. To do that we have to keep the resistance between nodes the same, so instead of a 10^4 reduction in wire diameter we take 2700x, giving a 12.5x reduction in both capacitance and resistance per unit length. Nodes can then be placed 12.5x further apart for a 12.5x total reduction in energy.
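Checking those ratios (a sketch using the coaxial capacitance relation C ∝ 1/ln(Do/Di), with the insulation outer diameter held fixed):

```python
from math import log

def cap_reduction(sheath_to_core, shrink):
    """Factor by which per-length capacitance drops when the conductor
    shrinks by `shrink` while the insulation outer diameter stays fixed."""
    return log(sheath_to_core * shrink) / log(sheath_to_core)

print(cap_reduction(2.0, 1e4))    # ~14.2x for the myelinated 1:2 geometry
print(cap_reduction(1.1, 1e4))    # ~97.6x for thin insulation (grey matter)
print(cap_reduction(2.0, 2700))   # ~12.4x for the 2700x compromise shrink
```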
This is not the optimum design. If your wires are tiny hairs in a sea of insulation, consider packing everything closer together. With wires 10,000x smaller, length reductions on that scale would follow, leading to a 10,000x reduction in switching energy. At some point quantum shenanigans ruin your day, but a brain-like structure should be achievable with energy consumption 100-1000x lower.
Practically, putting copper into cells ends badly, and there’s also the issue of charge carrier compatibility. In neurons, in addition to acting as charge carriers, sodium and potassium have opposing concentration gradients which act as energy storage to power spiking. Copper uses electrons as charge carriers, so there would have to be electrodes to adsorb/desorb aqueous charge carriers and exchange them for electrons in the copper. In practice it might be easier to switch to +ve and -ve supply voltage connections and make the whole thing run on DC power like current computer chips do. This requires swapping out the voltage-gated ion channels for something else.
Computers have efficiencies similar to the brain despite having much more conductive wire materials, mostly because they are limited to packing their transistors on a 2D surface.
Add more layers (even with relatively coarse inter-layer connectivity) and energy efficiency goes up.
Power consumption for equivalent performance was 46%. That suggests power consumption in modern chips is driven by overly long wires resulting from the lack of a 3rd dimension. I remember, but can’t find, papers on using more than 2 layers; there are issues because layer-to-layer connectivity sucks. Die-to-die interconnect density is much lower than transistor density, so efficiency gains don’t scale that well past ~5 layers IIRC.
edit: continued partially in the original article
That post makes a fundamental error about wiring energy efficiency by ignoring the 8 OOM difference in electrical conductivity between neuron saltwater and copper (0.5 S/m vs 50 MS/m).
There’s almost certainly a factor of 100 energy efficiency gains to be had by switching from saltwater to copper in the brain and reducing capacitance by thinning the wires. I’ll be leaving a comment soon but that had to be said.
The agreement in energy per bit per unit length points to an underlying principle: “if you’ve thinned the wires, why haven’t you packed everything in tighter?” That leads to similar capacitance, and therefore similar energy, per unit length.
Face-to-face die stacking results suggest that computers could be much more efficient if they weren’t limited to 2D packing of logic elements. A second logic layer more than halved power consumption at the same performance, and that’s with limited interconnect density between the two logic dies.
The Cu<-->saltwater conductivity difference leads to better utilisation of wiring capacitance to reduce thermal noise voltage at transistor gates. Concretely, more electrons are able to effectively vote on the output voltage. For very short interconnects this matters less, but long-distance or high-fanout nodes have lots of capacitance, and low-resistance wires make their voltage much more stable.
Green goo is plausible
edit: (link)green goo is plausible
The AI can kill us and then take over with better optimized biotech very easily.
Doubling times:
Plants (IE:solar-powered wet nanotech): single digit days
Algae in ideal conditions: 1.5 days
E. coli: 20 minutes
There are piles of yummy carbohydrates lying around (trees, plants, houses)
The AI can go full Tyranid
The AI can re-use existing cellular machinery. No need to rebuild the photosynthesis or protein building machinery, full digestion and rebuilding at the amino acid level is wasteful.
Sub-2-minute doubling times are plausible for a system whose rate-limiting step is mechanically infecting plants with a fast-acting subversive virus. The spreading flying things are self-replicators that steal energy and cellular machinery from plants during infection (IE:mosquito-like). Onset time could be a few hours until construction of shoggoth-like things. Full biosphere assimilation could be limited by flight speed.
Nature can’t do these things since they require substantial non-incremental design changes. Mosquitoes won’t simultaneously get plant adapted needles + biological machinery to sort incoming proteins and cellular contents + continuous grow/split reproduction that would allow a small starting population to eat a forest in a day. Nature can’t design the virus to do post infection shoggoth construction either.
The only thing that even re-uses existing cellular machinery is viruses and that’s because they operate on much faster evolutionary time scales than their victims. Evolution takes so long that winning strategies to eat or subvert existing populations of organisms are self-limiting. The first thing to sort of work wipes out the population and then something else not vulnerable fills the niche.
Design is much more powerful than evolution since individually useless parts can be developed to create a much more effective whole. Evolution can’t flip the retina or reroute the recurrent laryngeal nerve even though those would be easy changes a human engineer could make.
Endgame biotech (IE: can design new proteins/DNA/organisms) is very powerful.
But that doesn’t mean dry nanotech is useless.
even if production is expensive it may be worth building some things that way anyways.
computers
structural components
Biology is largely stuck with ~0.15 GPa materials (collagen, cellulose, chitin)
oriented UHMWPE should be wet-synthesizable (6 GPa tensile strength)
graphene/diamondoid may be worth it in some places to hit 30 GPa (EG:for things that fly or go to space)
dry nanotech won’t be vulnerable to parasites that can infect a biological system.
even if the AI has to deal with single day doubling times that’s still enough to cover the planet in a month.
but with the right design parasites really shouldn’t be a problem.
biological parasite defenses are not optimal
That’s not how computers (the ones we have today, or the rod logic ones proposed) work. Each rod or wire represents a single on/off bit.
Yes, doing mechanosynthesis is more complicated, and precise sub-nm control of a tooltip may not be competitive with biology for self-replication. But if the AI wants a substrate to think on that can implement lots of FLOPs, then molecular rod logic will work.
For that matter, protein-based mechanical or hybrid electromechanical computers are plausible, likely with lower energy consumption per erased bit than neurons, and certainly with more density. Human computers have nm-sized transistors; there’s no reason to think that neurons and synapses are the most efficient sort of biological computer.
OK, maybe you want to build some kind of mechanical computers too. Clearly, life doesn’t require that for operation, but does that even work? Consider a mechanical computer indicating a position. It has some number, and the high bit corresponds to a large positional difference, which means you need a long lever, and then the force is too weak, so you’d need some mechanical amplifier. So that’s a problem.
Drexler absolutely considered thermal noise. Rod logic uses rods at right angles whose positions allow or prevent movement of other rods. That’s the amplification: a small force moving one rod can control a larger force applied later to a blocked rod.
http://www.nanoindustries.com/nanojbl/NanoConProc/nanocon2.html#anchor84400
<rant>It really pisses me off that the dominant “AI takes over the world” story is more or less “AI does technological magic”. Nanotech assemblers, superpersuasion, basilisk hacks and more. Skeptics who doubt this are met with “well if it can’t it just improves itself until it can”. The skeptics obvious rebuttal that RSI seems like magic too is not usually addressed.</rant>
Note: RSI is in my opinion an unpredictable black swan. My belief is that RSI will yield somewhere between a 1.5-5x speed improvement to a nascent AGI from improvements in GPU utilisation and sparsity/quantisation, and will require significant cognition spent to achieve those speedups. AI is still dangerous in worlds where RSI does not occur.
Self-play generally gives superhuman performance (Go, chess, etc.), even in more complicated imperfect-information games (DOTA, Starcraft). Turning a field of engineering into a self-playable game likely leads to superhuman (80%), top-human-equivalent (18%), or unchanged (2%) capabilities in that field. Superhuman or top-human software engineering (vulnerability discovery and programming) is one relatively plausible path to AI takeover.
https://googleprojectzero.blogspot.com/2023/03/multiple-internet-to-baseband-remote-rce.html
Can an AI take over the world if it can:
do end to end software engineering
find vulnerabilities about as well as the researchers at project zero
generate reasonable plans on par with a +1sd-intelligence human (IE:not Hollywood-style movie plots like the ones GPT-4 seems fond of)
AI does not need to be even superhuman to be an existential threat. Hack >95% of devices, extend shoggoth tentacles, hold all the data/tech hostage, present as not-Skynet so humans grudgingly cooperate, build robots to run the economy (some humans will even approve of this), kill all humans, done.
That’s one of the easier routes, assuming the AI can scale vulnerability discovery. With just software engineering and a bit of real-world engineering (potentially outsourceable), other violent/coercive options could work, albeit with more failure risk.
What sets the minimal clock rate? Increasing wire resistance and reducing the number of ion channels and pumps proportionally should just work. (ignoring leakage).
It is certainly tempting to run at higher clock speeds (serial thinking speed is a nice feature), but if miniaturization can be done, and clock speeds must then be limited for thermal reasons, why can’t we just do that?
That aside, is miniaturization out of the question (IE:logic won’t shrink)? Is there a lower limit on number of charge carriers for synapses to work?
Synapses are around 1µm³, which seems big enough to shrink down a bit without weird quantum effects ruining everything. Humans have certainly made smaller transistors, or memristors for that matter. Perhaps some of the learning functionality needs to be stripped, but we do inference on models all the time without any continuous learning, and that’s still quite useful.