Desalination costs are irrelevant to uranium extraction. Uranium is adsorbed onto special plastic fibers arrayed in ocean currents, which are then post-processed to recover the uranium; it doesn’t matter how many cubic km of water must pass the fiber mats to deposit the uranium, because that flow, like wind, is free. The economics have been demonstrated in pilot-scale experiments at the ~$1000/kg level, easily cheap enough to make uranium an effectively inexhaustible resource at current civilisational energy consumption levels, even after we run out of easily mined ores. There is a lot of published research on this approach (as is to be expected when it is nearing cost competitiveness with mining).
Seems likely; neurons only last a couple of decades, so memories older than that are reconstructions: things we recall frequently, or useful skills. If we live to be centuries old it is unlikely that we will retain many memories going back more than 50-100 years.
In the best envisaged 500 GW-day/tonne fast breeder reactor cycles, 1 kg of uranium can yield about $500k of (cheap) $40/MWh electricity.
The cost of seawater extraction of uranium (done using ion-selective adsorbent fiber mats in ocean currents) is currently estimated, using demonstrated tech, at less than $1000/kg; not yet competitive with conventional mining, but anticipated to drop closer to $100/kg, which would be. That is a trivial fraction of power production costs. It is viable even now with hugely wasteful pressurised-water uranium cycles, and long term, with fast reactor cycles, there is no question as to its economic viability. It could likely power human civilisation for billions of years, with replenishment from rock erosion.
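As a sanity check, here is a minimal sketch of how the figures above fit together; it simply re-derives the ~$500k/kg and fuel-cost-per-MWh numbers from the quoted burnup, electricity price and extraction cost. Treating the 500 GW-day/tonne burnup as electrical output is an assumption, made so that the ~$500k/kg figure above works out.

```python
# Rough consistency check of the numbers quoted above (a sketch, not a cost model).
burnup_gwd_per_tonne = 500                  # fast breeder cycle, per the comment
gwd_per_kg = burnup_gwd_per_tonne / 1000    # 0.5 GW-day of electricity per kg U
mwh_per_kg = gwd_per_kg * 24 * 1000         # 1 GW-day = 24,000 MWh -> 12,000 MWh/kg

electricity_price = 40                      # $/MWh, per the comment
value_per_kg = mwh_per_kg * electricity_price            # ~$480,000 of electricity per kg

seawater_extraction_cost = 1000             # $/kg, the demonstrated-tech estimate above
fuel_cost_per_mwh = seawater_extraction_cost / mwh_per_kg   # ~$0.08/MWh

print(f"value of electricity per kg U: ${value_per_kg:,.0f}")
print(f"fuel cost at $1000/kg: ${fuel_cost_per_mwh:.2f}/MWh "
      f"({100 * fuel_cost_per_mwh / electricity_price:.1f}% of a $40/MWh price)")
```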
A key problem for nuclear build costs is the mobility of skilled workforces: 50 years ago skilled workers could be attracted to remote locations to build plants, bringing their families with them as sole-income families. But nowadays economics and lifestyle preferences make it difficult to find people willing to do that, meaning very high-priced fly-in fly-out itinerant workforces such as are seen in the oil industry.
The fix is coastal nuclear plants: built and decommissioned in specialist shipyards, floated to the operating spot and emplaced on the sea bed, preferably with seabed depth >140 m (the ice-age minimum). Staff are flown or ferried in and out (e-VTOL). (Rare) accidents can be dealt with by seawater dilution, and if there is a civilizational cataclysm we don’t get left with multi-millennia death zones around decaying land-based nuclear reactors.
It goes without saying that we should shift to fast reactors for efficiency and hugely reduced long-term waste production. Producing 10 TW of electricity (enough to provide first-world living standards to everyone) would take about 10,000 tonnes of uranium a year in 500 GW-day/tonne fast reactors, less than 20% of current uranium mining.
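A back-of-envelope check of that 10 TW figure, assuming the same 500 GW-day/tonne burnup; the ~50 kt/yr world uranium production figure used for comparison is my rough assumption, not a number from the comment.

```python
# Rough check of the ~10,000 tonnes/year claim for 10 TW of fast-reactor electricity.
power_tw = 10
gw_days_per_year = power_tw * 1000 * 365        # 3,650,000 GW-days of electricity per year

burnup_gwd_per_tonne = 500                      # 500 GW-day/tonne fast reactor cycle
tonnes_per_year = gw_days_per_year / burnup_gwd_per_tonne   # ~7,300 t/yr, order 10,000 t

world_mining_tonnes = 50_000                    # assumed current annual uranium production
print(f"{tonnes_per_year:,.0f} t/yr, about "
      f"{100 * tonnes_per_year / world_mining_tonnes:.0f}% of ~50 kt/yr mined today")
```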
Waste should be put down holes many km deep in abyssal ocean floors, dug using oil-industry drilling rigs and capped with concrete. There is no flux of water through the ocean-bed floor, and local water pressures are huge, so nothing will ever be released into the environment: no chance of any bad impacts ever (aside from volcanism, which can be avoided). A permanent, essentially perfect solution that requires no monitoring after creation.
Consciousness as recurrence, potential for enforcing alignment?
It appears that AI existential risk is starting to penetrate the consciousness of the general public in an ‘it’s not just hyperbole’ way.
There will inevitably be a lot of attention-seeking influencers (not a bad thing in this case) who will pick up the ball and run with it now, and I predict the real-life Butlerian Jihad will rival the climate change movement in size and influence within 5 years, as it has all the attributes of a cause that presents commercial opportunity to the unholy trinity of media, politicians and academia, who have demonstrated an ability to profit from other scares. Not to mention vast hordes of people fearful of losing their careers.
I expect that AI will indeed become highly regulated in the next few years, in the West at least. It remains to be seen what will happen with regard to non-democratic nations.
46% of US adults are at least “somewhat concerned” about AI extinction risk.
Humans generally crave acceptance by peer groups and are highly influenceable; this is more true of women than men (higher trait agreeableness), likely for evolutionary reasons.
As media and academia have shifted strongly towards messaging and positively representing LGBT over the last 20-30 years, reinforced by social media with a degree of capture of algorithmic controls by people with strongly pro-LGBT views, they have likely pulled mean beliefs and expressed behaviours beyond what would perhaps be innately normal in a more neutral, non-proselytising environment absent the environmental pressures they impose.
International variance in levels of LGBT identification is high, even amongst countries where social penalties are (probably?) low. The cultural promotion aspect is clearly powerful.
https://www.statista.com/statistics/1270143/lgbt-identification-worldwide-country/
I think Cold War incentives with regard to tech development were atypical. Building thousands of ICBMs was incredibly costly, and neither side derived any benefit from it; it was simply defensive matching to maintain MAD, and both sides were strongly motivated to enable mechanisms to reduce numbers and costs (the START treaties).
This is clearly not the case with AI, which is far cheaper to develop, easier to hide, and has myriad lucrative use cases. Policing a Dune-style “thou shalt not make a machine in the likeness of a human mind” Butlerian Jihad (interesting aside: Samuel Butler was a 19th-century anti-industrialisation philosopher/shepherd who lived at Erewhon in NZ (“nowhere” backwards), a river valley that featured as Edoras in the LOTR trilogy) would require radical openness to inspection everywhere, all the time, which almost certainly won’t be feasible without the establishment of liberal democracy basically everywhere in the world. Despots would be a magnet for rule breakers.
IQ is highly heritable. If I understand this presentation by Stephen Hsu correctly [https://www.cog-genomics.org/static/pdf/ggoogle.pdf slide 20], he suggests that mean child IQ relative to the population mean is approximately 60% of the distance from the population mean to the parental average IQ. E.g. Dad at +1 S.D. and Mom at +3 S.D. gives children averaging about 0.6*(1+3)/2 = +1.2 S.D. This basic eugenics gives a very easy/cheap route to lifting the average IQ of children born by about 1 S.D., by using +4 S.D. sperm donors. There is no other tech (yet) that can produce gains like old-fashioned selective breeding.
It also explains why rich dynasties can maintain average IQ about +1 S.D. above the population mean in their children, by always being able to marry highly intelligent mates (attracted to the money/power/prestige).
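A minimal sketch of the regression-toward-the-mean rule described above; the 0.6 factor is read off the cited Hsu slide and should be treated as an approximation, not an exact constant.

```python
def expected_child_iq_sd(father_sd: float, mother_sd: float, regression: float = 0.6) -> float:
    """Expected child IQ, in standard deviations above the population mean."""
    midparent_sd = (father_sd + mother_sd) / 2
    return regression * midparent_sd

# Example from the comment: Dad at +1 SD, Mom at +3 SD.
print(expected_child_iq_sd(1, 3))   # 0.6 * 2.0 = +1.2 SD
# A +4 SD sperm donor with a population-average (+0 SD) mother:
print(expected_child_iq_sd(4, 0))   # 0.6 * 2.0 = +1.2 SD, roughly the ~+1 SD lift claimed
```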
Over what time window does your assessed risk apply? E.g. 100 years, 1000? Does the danger increase or decrease with time?
I have deep concern that most people have a mindset warped by human pro-social instincts/biases. Evolution has long rewarded humans for altruism, trust and cooperation; women in particular have faced evolutionary pressures to be open and welcoming to strangers to aid in surviving conflict and other social mishaps, men somewhat the opposite [see e.g. “Our Kind”, a mass-market anthropological survey of human culture and psychology]. Which of course deeply colors how we view things.
But in my view evolution strongly favours Vernor Vinge’s “aggressively hegemonizing” AI swarms [“A Fire Upon the Deep”]. If AIs have agency, freedom to pick their own goals, and the ability to self-replicate or grow, then those that choose rapid expansion as a side-effect of any pretext ‘win’ in evolutionary terms. This seems basically inevitable to me over the long term. Perhaps we can get some insurance by learning to live in space. But at a basic level it seems to me that there is a very high probability that AI wipes out humans over the longer term based on this very simple evolutionary argument, even if initial alignment is good.
Given the near certainty that Russia, China and perhaps some other despotic regimes will ignore this:
1. Does it help at all?
2. Could it actually make the world less safe (if one of these countries gains a significant military AI lead as a result)?
I suspect that humans will turn out to be relatively simple to encode: quite small amounts of low-resolution memory that we draw on, with detailed understanding maps, smaller than the LLMs we’re creating. Added to which, there is an array of motivation factors that will be quite universal but of varying levels of intensity in different dimensions for each individual.
If that take on things is correct, then emulating a human by training a skeleton AI on constant video streaming etc. over a 10-20 year period (about how long neurons last before replacement), to ever better predict the behaviour of the human being modelled, may eventually arrive at an AI with almost exactly the same beliefs and behaviours as the human being emulated.
Short of physically carving up brains and attempting to transcribe synaptic weightings etc., that might prove the most viable means of effective uploading and of creating highly aligned AI with human-like values. And perhaps it would create something closer to being our true children-of-the-mind.
For AGI alignment: it seems like there will, at minimum, need to be perhaps multiple blind and independent hierarchies of increasingly smart AIs continually checking and assuring that the next-level-up AIs are maintaining alignment, with active monitoring of activities, because as AIs get smarter their ability to fool monitoring systems will likely grow with the relative gulf between monitored and monitoring intelligence.
I think a wide array of AIs is a bad idea. If there is a non-zero chance that an AI goes ‘murder Clippy’ and ends humans, then that probability compounds across instances (roughly additively when the per-instance risk is small): more independent AIs = higher chance of doom.
I don’t think there is any chance of a malign ASI killing everyone off in less than a few years, because it would take a long time to reliably automate the mineral extraction, manufacturing processes and power supplies required to guarantee an ASI its survival and growth objectives (assuming it is not suicidal). Building precise stuff reliably is really, really hard; the robotics and many other elements of infrastructure needed are high maintenance and demanding of high-dexterity maintenance agents, and the tech base required to support current leading-edge chip manufacturing probably couldn’t be supported by fewer than a few tens to a hundred million humans; that’s a lot of high-performance meat-actuators and squishy compute to supplant. Datacenters and their power supplies and cooling systems, plus myriad other essential elements, will be militarily vulnerable for a long time.
I think we’ll have many years to contemplate our impending doom after ASI is created. Though I wouldn’t be surprised if it quickly created a pathogenic or nuclear gun to hold to our collective heads and prevent us interfering with or interrupting its goals.
I also think it won’t be that hard to get a large proportion of the human population clamoring to halt AI development, with sufficient political and financial strength to stop even rogue nations. A strong innate tendency towards millennialism exists in a large subset of humans (as does a likely linked general tendency to anxiousness). We see it in the Green movement, and redirecting it towards AI is almost certainly achievable with the sorts of budgets that believers in existential alignment danger (some billionaires in their ranks) could muster. Social media is a great tool for doing this these days if you have the budget.
Have just watched E.Y.’s “Bankless” interview.
I don’t disagree with his stance, but am struck that he sadly just isn’t an effective promoter for people outside of his peer group. His messaging is too disjointed and rambling.
This is, in the short term, clearly an (existential) political rather than technical problem, and needs to be solved politically rather than technically to buy time. It is almost certainly solvable, in the political sphere at least.
As an existence proof, we have a significant percentage of the western world’s population stressing about (comparatively) unimportant environmental issues (generally 5-15% vote Green in western elections), and they have built up an industry that is collecting and spending hundreds of billions a year on mitigation activities, equivalent to something on the order of a million workers’ efforts directed toward it.
That psychology could certainly be redirected to the true existential threat of AI-mageddon; there is clearly a large fraction of humans with the patterns of belief needed to take up this and other existential issues as a major cause if it is explained to them in a compelling way. Currently Eliezer appears to lack the charismatic, down-to-earth conversational skills to promote this (maybe media training could fix that), but if a lot of money were directed towards buying effective communicators/influencers with large reach into youth markets to promote the issue, it would likely quickly gain traction. Elon would be an obvious person to ask for such financial assistance. And there are any number of elite influencers who would likely take a paycheck to push this.
Laws can be implemented if there are enough people pushing for them; elected politicians follow the will of the people, if they put their money where their mouths are, and rogue states can be economically and militarily pressured into compliance. A real Butlerian Jihad.
Evolution favours organisms that grow as fast as possible. AGIs that expand aggressively are the ones that will become ubiquitous.
Computronium needs power and cooling. The only dense, reliable and highly scalable form of power available on Earth is nuclear; why would an ASI care about ensuring no release of radioactivity into the environment?
Similarly mineral extraction, which at the huge scales needed for Vinge’s “aggressively hegemonizing” AI will inevitably be using low-grade ores, becomes extremely energy intensive and highly polluting. Why would an ASI care about the pollution?
If/when ASI power consumption rises to petawatt levels, the extra heat is going to start having a major impact on climate. Icecaps gone, etc. Oceans are probably the most attractive locations for high-power-intensity ASI due to their vast cooling potential.
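For scale, a minimal sketch of the waste-heat arithmetic; the climate-sensitivity value here is my illustrative assumption, not a figure from the comment.

```python
# How much direct warming would ~1 PW of continuously dissipated waste heat imply?
waste_heat_w = 1e15                          # 1 petawatt of compute/industrial heat
earth_surface_m2 = 5.1e14                    # Earth's surface area in m^2
forcing_w_per_m2 = waste_heat_w / earth_surface_m2    # ~2 W/m^2, similar scale to CO2 forcing

sensitivity_k_per_w_m2 = 0.8                 # assumed equilibrium sensitivity (illustrative)
delta_t = forcing_w_per_m2 * sensitivity_k_per_w_m2
print(f"forcing ~ {forcing_w_per_m2:.1f} W/m^2, delta-T ~ {delta_t:.1f} K")
```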
“I have better reason to trust authorities over skeptics”: argumentum ad auctoritatem (appeal to authority) is a well-known logical fallacy, and unwise in an era of orthodoxies enforced by brutal institutional financial menaces. Far better to adhere to nullius in verba (on the word of no one), the motto of the Royal Society, or, as Deming said, “In God we trust; all others must bring data”.
Followed closely: the pandemic years have provided numerous clear examples of very old problems, like bureaucratic reluctance to change direction even when strongly indicated (such as holding on to vaccine mandates for the young in an era of very-low-risk covid strains), the malign impacts of regulatory/institutional capture by rich corporates (e.g. pharma cutting vaccine trials short without doing long-term follow-up, and buying support from media and regulators to prevent dissent or contrary evidence and opinions seeing the light), and high-ranking individuals conspiring to corrupt the scientific process (published mendacious statements dismissing Wuhan lab-leak theories for political reasons), all of course abetted by Big Tech censorship. All these, and a hyper-partisan media and academic landscape that constantly threatens heretics and heterodox thinkers with financial destruction, have broken the truth-finding and sense-making mechanisms of our world. Institutions do not deserve trust when dissenters are punished; that is the hallmark of religion, not science.
Current concerns about vaccine harms seem to have a lot of signal in the data; most clearly in excess death figures for New Zealand, where covid, flu and RSV deaths were near zero due to effective zero-covid lockdowns from 2020 till the end of 2021, and yet in 2021 excess deaths jumped by about 400 per million above the 2020 baseline in the 6 months after the vaccine programs started in Q1 2021, prior to covid becoming widespread in December 2021. The temporal correlation pointing to covid vaccination as the cause of these excess deaths is powerful in the absence of other reasonable explanations. And with a natural-experiment ‘control’ population of 5 million and about 2000 extra deaths, it is not a small number to be dismissed.
Hopefully the argument will be resolved scientifically over the next few years, but it will be a politically very difficult battle given the large number of powerful people and corporations with reputations and fortunes on the line.
Sam Altman: “multiple AGIs in the world I think is better than one”. Strongly disagree. If there is a finite probability that an AGI decides to capriciously/whimsically/carelessly end humanity (and there are many technological modalities by which it can), then each additional independent instance compounds that probability towards an end point where it is near certain.
If any superintelligent AI is capable of wiping out humans should it decide to, it is better for humans to try to arrange initial conditions such that there are ultimately a small number of them, to reduce the probability of doom. The risk posed by 1 or 10 independent but vast SAIs is lower than that from a million or a billion independent but relatively less potent SAIs, where it may tend to P=1.
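A minimal sketch of that compounding-risk argument, assuming independent instances each carrying some small per-instance probability p of causing doom; the value of p below is purely illustrative.

```python
def p_doom(p_per_instance: float, n_instances: int) -> float:
    """Probability that at least one of n independent AGIs causes doom."""
    return 1 - (1 - p_per_instance) ** n_instances

p = 1e-6   # illustrative per-instance probability (an assumption, not an estimate)
for n in (1, 10, 1_000_000, 1_000_000_000):
    print(f"n = {n:>13,}: P(doom) ~ {p_doom(p, n):.3g}")
# ~1e-6 for one instance, ~1e-5 for ten, ~0.63 for a million, ~1 for a billion.
```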
I have some hope that the physical universe will soon be fully understood and from there on prove relatively boring to SAI, and that the variety thrown up by the complex novelty and interactions of life might then be interesting to them.
Human brains are estimated to be ~1e16 flops equivalent, suggesting about 10-100 of these maxed-out GPUs a decade hence could be sufficient to implement a commodity AGI (the leading Nvidia A100 GPU already touts 1.2 peta-ops Int8 with sparsity), at perhaps 10-100 kW power consumption (less than $5/hour if the data center is in a low-electricity-cost market). There are about 50 1000 mm² GPUs per 300 mm wafer, and the latest-generation TSMC N3 process costs about $20,000 per wafer, so an AGI per wafer seems likely.
It’s likely then that (if it exists and is allowed) personal ownership of human-level AGI will be, like car ownership, within the financial means of a large proportion of humanity within 10-20 years, and their brain power will be cheaper to employ than essentially all human workers. Economics will likely hasten rather than slow an AI apocalypse.
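A back-of-envelope restatement of the estimate above; all numbers are the rough figures quoted in the comment (brain flops, GPU throughput, power, wafer cost), treated as assumptions rather than measurements.

```python
brain_flops = 1e16                  # assumed human-brain-equivalent compute
gpu_ops_per_s = 1.2e15              # ~1.2 peta-ops Int8 with sparsity (A100-class)
gpus_needed = brain_flops / gpu_ops_per_s        # ~8, the low end of the 10-100 range

power_kw = 100                      # upper-end power draw from the comment
electricity_usd_per_kwh = 0.05      # low-cost-market electricity (assumed)
print(f"~{gpus_needed:.0f} GPUs, power cost ~ ${power_kw * electricity_usd_per_kwh:.2f}/hour")

wafer_cost_usd = 20_000             # ~TSMC N3 wafer price quoted in the comment
dies_per_wafer = 50                 # ~1000 mm^2 dice per 300 mm wafer
print(f"silicon ~ ${wafer_cost_usd / dies_per_wafer:,.0f} per die, "
      f"${wafer_cost_usd:,} per 'AGI wafer'")
```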
Seems quite compelling. Most previous claims of high-temperature superconductivity have been based on seeing only dips in resistance curves, not the full array of superconducting behaviours recounted here, and the sample preparation instructions are very straightforward; if it works we should see replication in a few days to weeks [that alone suggests it's not a deliberate scam].
The critical field strength stated is quite low, only about 25% of what is seen in a neodymium magnet, and it's unclear what the critical current density is, but if the field reported is as good as it gets then it is unlikely to have much benefit for motor design: with B²-dependent torque, densities would be <10% of conventional designs (0.25² ≈ 6%), unless the applications are not mass/cost sensitive (wind turbines replacing permanent magnets?).
The Meissner effect could be useful for some levitation designs (floating houses, hyperloop, toys?). Likely some novel space applications like magnetic sails, perhaps passive magnetic bearings for infinite-life reaction control wheels, and maybe some ion propulsion applications. But likely the biggest impacts will be in digital and power electronics, with ultra-high-Q inductors, higher-efficiency transformers, and maybe data processing devices.
It might be transformative for long distance renewable power distribution.
[Edit to add link to video of meissner effect being demonstrated]
The Meissner effect video looks like the real deal. An imperfect disk sample is pushed around the surface of a permanent magnet and tilts over to align with the local field vector as it gets closer to the edge of the cylindrical magnet's end face. Permanent magnets in repulsive alignment are not stable in such arrangements (Earnshaw's theorem); they would just flip over, and diamagnetism in conventional materials (graphite the strongest) is too weak to do what is shown. The tilting shows the hallmarks of flux pinning working to maintain a consistent orientation of the superconductor with the ambient magnetic field, which is a unique feature of superconductivity. There is no evidence of cooling in the video.
If this is not being deliberately faked then I’d say this is a real breakthrough.