I think Elon will bring strong concern about AI to the fore in the current executive. He was an early voice for AI safety, and though he seems to have updated to a more optimistic view (and is pushing development through xAI), he still generally states P(doom) ~10-20%. His antipathy towards Altman and the Google founders is likely of benefit for AI regulation too, though it offers no answer to the China-et-al AGI development problem.
Foyle
The era of AGI means humans can no longer afford to live in a world of militarily competing nations. Whatever slim hope there might be for alignment and AI not-kill-everyone is sunk by militaries trying to out-compete each other in development of creatively malevolent and at least somewhat unaligned martial AI. At minimum we can’t afford non-democratic or theocratically ruled nations, or even nations with unaccountable power-unto-themselves military, intelligence or science bureaucracies to control nukes, pathogen building biolabs or AGI. It will be necessary to enforce this even at the cost of war.
Humans as social animals have a strong instinctual bias towards trusting conspecifics in prosperous times, which makes sense from a game-theoretic strengthen-the-tribe perspective. But I think that leaves us, as a collectively dumb mob of naked apes, entirely lacking a sensible level of paranoia in building an ASI that has no existential need for pro-social behavior.
The one salve I have for hopelessness is that perhaps the Universe will be boringly deterministic and ‘samey’ enough that ASI will find it entertaining to have agentic humans wandering around doing their mildly unpredictable thing. Although maybe it will prefer to manufacture higher levels of drama (not good for our happiness).
It was a very frustrating conversation to listen to, because Wolfram really hasn’t engaged his curiosity and done the reading on AI-kill-everyoneism. So we just got a torturous number of unnecessary and oblique diversions from Wolfram, who didn’t provide any substantive foil to Eliezer.
I’d really like to find Yudkowsky debates with better-prepared AI optimists ready to counter his points. Do any exist?
It seems unlikely to me that there is potential for large brain-based intelligence advances beyond the current best humans using evolved human biology. There will be distance-scaling limitations linked to neural signal speeds.
Then there is Jeff Hawkins’ ‘Thousand Brains’ theory of human intelligence: that our brains are made up of thousands of parallel-processing cortical columns, each a few mm across and a few mm thick, with cross-communication, recursion, etc. But that fundamental processing core probably isn’t scalable in complexity, only in total number. Your brain could perhaps be expanded to handle thinking about more things in parallel at once, but not at much higher levels of sophistication without paying a large coordination-speed price (and evolution places a premium on reaction speed for animals that encounter violence).
I look at whales and other mammals with much larger than human brains and wonder why they are not smarter. Probably some combination of no evolutionary driver, and perhaps a lot of their neurons being dedicated to the delay-line processing needed for sonar and for controlling large bodies with long signaling delays.
If AI is a dominant part of our future, then whether that future is human utopia or dystopia, it seems likely to me that non-transhuman humans will not exist in significant numbers in a few hundred years. Neural biology, and perhaps all biology, is going to be superseded as maladapted to the technological future.
Are any of the socio-economic-political-demographic problems of the world actually fixable or improvable in the time before the imminent singularity renders them all moot anyway? It all feels like bread-and-circuses to me.
The pressing political issues of today are unlikely to even be in the top-10 in a decade.
It’s a fantastic life skill to be able to sleep in a noisy environment on a hard floor. Most Chinese can do it easily, and I would frequently see kids anywhere up to 4-5 years old being carried sleeping down the road by guardians.
I think it’s super valuable when it comes to adulthood and sharing a bed: one less potential source of difficulties if adaptation to sleeping in noisy environments makes snoring a non-issue.
It is the literary, TV and movie references, a lot of stuff tied to the technology and social developments of the ’80s-’00s (particularly the Ankh-Morpork-situated stories), and a lot of classical allusions. ‘Education’ used to lean on common knowledge of a relatively narrow corpus of literature and history (Shakespeare, chivalry, European history, the classics, etc.) for the social advantage those common references gave, and was thus fed to boomers and gen X/Y, but I think it’s now rapidly slipping into obscurity as fewer younger people read and schools shift away from teaching it in the face of all that’s new in the world. I guess there are a lot of jokes that pre-teens will get, but so many that they will miss. Seems a waste of such delightful prose.
Yeah, powering through it. I’ve tried adult fiction and sci-fi but he’s not interested in it yet (not grokking adult motivations, attitudes and behaviors), so I’m feeding him stuff that he enjoys to foster a habit of reading.
I’ve just started my 11-year-old tech-minded son reading the Worm web serial by John McCrae (free and online, longer than the Harry Potter series). It’s a bit grimdark and violent, but an amazing and compelling sci-fi meditation on superheroes and personal struggles. A more brutal and sophisticated world-build along the lines of the popular ‘My Hero Academia’ anime that my boys watched compulsively. Thousands of fanfics too.
Stories from Larry Niven’s “Known Space” universe. Lots of fun overcoming-challenges short stories and novellas that revolve around interesting physics, problems or ideas. And the follow-up Man-Kzin Wars series by various invited authors has some really great stories too, with a strong martial bent that will likely appeal to most boys.
At that age I read and loved Dune, The Stars My Destination (aka Tiger! Tiger!, a sci-fi riff on The Count of Monte Cristo), and Ender’s Game. I think Terry Pratchett’s humor needs a more sophisticated adult knowledge base, with cultural references that are dating badly.
My 11-year-old loved The Expanse TV series, though I haven’t given them the books to read yet, and I can’t recommend the transhumanism anime Pantheon on Amazon highly enough: it’s one of the best sci-fi series of all time.
All good for introducing more adult problems and thinking to kids in an exciting context.
We definitely want our kids involved in at times painful activities as a means of increasing confidence, fortitude and resilience against future periods of discomfort to steel them against the trials of later life. A lot of boys will seek it out as a matter of course in hobby pursuits including martial arts.
I think there is also value in mostly not interceding in conflicts unless there is an established or establishing pattern of physical abuse. Kids learn greater social skills and develop greater emotional strength when they have to deal with knocks and unfairness themselves, and rewarding tattletale-type behavior with the exercise of parental power (or even attention) over the reported perpetrator creates some probably-not-good crutch-like dynamics in children’s play, stunting their learning of social skills.
I think it’s generally not good for kids to have power over others even if that power is borrowed, as it often enables maliciousness in kids that are (let’s face it) frequently little sociopaths trying to figure out how to gain power over others until they start developing more empathy in their teens. Their play interactions should be negotiated between them, not imposed by outside agents. Feign disinterest in their conflicts unless you see toxic dynamics forming. They should sort things out amongst themselves as much as possible.
For my boys (9 and 11) I’ll only intercede if they are getting to the point of physical harm or danger, or if there is a violent response to an accidental harm (they must learn to control violent/vengeful impulses). But they frequently wrestle with each other in play. It is a challenge to balance with my 7-year-old daughter though: lacking the physical strength of her older brothers, she works much harder to use parents as proxies to fight her conflicts.
Less cotton wool and helicopter parenting is mostly good.
“In many cases, however, evolution actually reduces our native empathic capacity—for instance, we can contextualize our natural empathy to exclude outgroup members and rivals.”
Exactly as it should be.
Empathy is valuable in close community settings: a ‘safety net’ adaptation that makes the community stronger, operating among people we keep track of so we can ensure we are not being exploited by those not making a concomitant effort to help themselves. But it seems to me that it is destructive at the wider social scales enabled by social media, where we don’t or can’t have effective reputation tracking to ensure that we are not being ‘played’ for resource extraction by people making dishonest or exaggerated representations.
In essence, at larger scales the instinct towards empathy rewards dishonest, exploitative, sociopathic and narcissistic behavior in individuals, and is perhaps responsible for a lot of the deleterious aspects of social media, particularly among more naturally empathic-by-default women. E.g. ‘influencers’ (and before them exploitative televangelists) cashing in on follower empathy. It also rewards misrepresentations of victimhood/suffering for attention and approval, again in the absence of the more in-depth knowledge of the person that would exist in a smaller community. That may be a source of the rapid increase in ‘social contagion’ mental-health pathologies among young women instinctually desirous of attention, which is most easily attained by inventing or exaggerating issues in the absence of other attributes that might garner attention.
In short, the empathic, charitable instinct that works so well in families and small groups is socially destructive and dysfunctional at scales beyond the community level.
I read some years ago that the average IQ of kids is approximately 0.25*(Mom IQ + Dad IQ + 2x population mean IQ). So the simplest and cheapest means to lift population average IQ by 1 standard deviation is just to use +4sd sperm (around 1 in 30,000), and high-IQ ova if you can convince enough genius women to donate (or clone, given the recent demonstration of male and female gamete production from stem cells). +4sd mom and dad = +2sd kids on average. This is the reality that allows ultra-wealthy dynasties to maintain a ~1.3sd average IQ advantage over the general population by selecting (attractive/exciting) +4sd mates.
Probably the simplest and cheapest thing you can do to lift population IQ over the long term is to explain this IQ-heritability reality to every female under the age of 40; make it common knowledge and a lot of them will choose genius sperm for themselves.
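The arithmetic of the regression-to-the-mean formula above can be sketched directly; the 0.25 coefficient is the comment's own rough heritability assumption, not an established figure:

```python
# Expected child IQ under the comment's rough model:
#   child ≈ 0.25 * (mom + dad + 2 * population_mean)
# which is equivalent to regressing the midparent IQ halfway back
# to the population mean.

POP_MEAN = 100  # population mean IQ
SD = 15         # IQ points per standard deviation

def expected_child_iq(mom_iq: float, dad_iq: float) -> float:
    return 0.25 * (mom_iq + dad_iq + 2 * POP_MEAN)

# Two +4sd parents (IQ 160) -> +2sd children on average (IQ 130)
print(expected_child_iq(160, 160))  # 130.0
```

The halving of the parental advantage is why two +4sd donors are needed to lift the offspring average by a full +2sd.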
Beyond that intervention, which can happen immediately, there is little point in trying to do anything. In 20 years, when ASIs straddle the earth like colossi and we are all their slaves or pets, they will (likely even in best-case scenarios) be dictating our breeding and culling, or casually ignoring/exterminating us. In the optimistic event of a Banksian Culture-like post-singularity utopia, magic ASI tech will be developed to near-universally optimize human neural function via nootropics or genetic editing to reach a peak baseline (or domesticate us into compliant meat-bots). I think even a well-aligned ASI is likely to push this on us.
I think there is far too much focus on technical approaches, when what is needed is a more socio-political focus: raising money, convincing deep pockets of the risks to leverage smaller sums, and buying politicians, influencers and perhaps other groups that can be co-opted and convinced of the existential risk, in order to put a halt to AI dev.
It amazes me that there are huge, well-financed and well-coordinated campaigns for climate, social and environmental concerns, trivial issues next to AI risk, and yet AI risk remains strictly academic/fringe. What is on paper a very smart community, embedded in perhaps the richest metropolitan area the world has ever seen, has not been able to create the political movement needed to slow things down. I think that is precisely because they are pitching to the wrong crowd.
Dumb it down. Identify large, easily influenceable demographics with a strong tendency to anxiety that can be most readily converted (most obviously teenagers, particularly girls) and focus on convincing them of the dangers; perhaps also teachers as a community, with their huge influence, and maybe the elderly, the other stalwart group we see so heavily involved in environmental causes. It would have orders of magnitude more impact than the current cerebral elite focus, and history is replete with revolutions borne out of targeting the conversion of teenagers to drive them.
They cannot just add an OOM of parameters, much less three.
How about 2 OOMs?
HW2.5: 21 Tflops. HW3: 2x72 Tflops, but the pair is redundant, so 72 Tflops effective. HW4: 3x72 = 216 Tflops (not sure about redundancy). And Elon said in June that the next-gen AI5 chip for FSD would be about 10x faster, say ~2 Pflops.
By rough approximation to brain processing power you get about 0.1 Pflop per gram of brain, so HW2.5 might have been a 0.2g baby mouse brain, HW3 a ~1g baby rat brain, HW4 perhaps an adult rat (~2g), and the upcoming AI5 a 20g small cat brain.
As a real-world analogue, cat to dog (25-100g brain) seems to me the minimum necessary range of complexity, based on behavioral capabilities, to do a decent job of driving: you need some ability to anticipate and predict the motivations and behavior of other road users, and something beyond dumb reactive handling (i.e. somewhat predictive) to understand the anomalous objects that exist on and around roads.
Nvidia’s Blackwell B200 can do up to about 10 Pflops of FP8, which is getting into the large dog/wolf brain processing range, and wouldn’t be unreasonable to package in a self-driving car once it’s down closer to manufacturing cost in a few years, at around 1kW peak power consumption.
I don’t think the rat-brain HW4 is going to cut it, and I suspect that internally Tesla knows it too, but it’s going to be crazy expensive to own up to it; better to keep kicking the can down the road with promises until they can deliver the real thing. AI5 might just do it, but it wouldn’t be surprising to need a further OOM to Nvidia Blackwell equivalent, and maybe $10k extra cost to get there.
There has been a lot of interest in this going back to at least early this year and the 1.58-bit (ternary) LLM paper https://arxiv.org/abs/2402.17764 so I expect there has been a research gold rush, with a lot of design effort going into producing custom hardware almost immediately after that was revealed.
With Nvidia’s dual-chip GB200 Grace Blackwell offering (sparse) 40 Pflops of FP4 at ~1kW, there has already been something close to optimal hardware available. That FP4 performance may be the reason the latest-generation Nvidia GPUs are in such high demand; previous generations haven’t offered it as far as I am aware. For comparison, a human brain is likely equivalent to 10-100 Pflops, though estimates vary.
Being able to up the performance significantly from a single AI chip has huge system cost benefits.
All suggesting that the costs for AI are going to drop yet again, and that human-level AGI operating costs are going to be measured in cents per hour when it arrives in a few years’ time.
The implications for autonomous robotics are likely tremendous, with potential OOM power savings likely to bring far more capable systems to smaller platforms: home robotics, FSD cars, and (scarily) military murderbots. Tesla has (according to Elon’s comments) a new HW5 autonomy chip coming out next year that is ~50x faster than their current FSD development baseline, the HW3 2x72 Tflop chipset, but it needs closer to 1kW of power, so they will be extremely keen on implementing anything that could save so much power.
AI safety desperately needs to buy in or persuade some high-profile talent to raise public awareness. The business-as-usual approach of the last decade is clearly not working; we are sleepwalking towards the cliff. Given how timelines are collapsing, the problem to be solved has morphed from a technical one into a pressing social one: we have to get enough people clamouring for a halt that politicians will start to prioritise appeasing them ahead of their big tech donors.
It probably wouldn’t be expensive to rent a few high profile influencers with major reach amongst impressionable youth. A demographic that is easily convinced to buy into and campaign against end of the world causes.
Current Nvidia GPU prices are highly distorted by scarcity, with profit margins that are reportedly in the 80-90% of sale price range: https://www.tomshardware.com/news/nvidia-makes-1000-profit-on-h100-gpus-report
If these were commodified to the point that scarcity didn’t influence price, then that $/flop point would seemingly leap up by an order of magnitude to above 1e15 flops per $1,000, scraping the top of that curve, i.e. near brain-equivalent computational power at ~$3.5k manufactured hardware cost; and the latest Blackwell GPU has lifted that performance by another 2.5x with little extra manufacturing cost. Humans as useful economic contributors are so screwed; even with successful alignment the socioeconomic implications are beyond cataclysmic.
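The margin-stripping arithmetic behind that leap can be sketched as follows; the sale price, margin and throughput figures are order-of-magnitude assumptions drawn from the linked report and the surrounding discussion, not vendor numbers:

```python
# Sketch of the cost-per-flop argument: if an H100-class GPU sells for
# ~$35k at a ~90% margin, the manufactured cost is ~$3.5k, and at a few
# Pflops of (sparse) FP8 that is roughly 1e15 flops per $1,000 at cost.
# All inputs are rough assumptions, not vendor figures.

sale_price = 35_000   # assumed street price, USD
margin = 0.90         # reported profit margin as a fraction of sale price
flops = 3.5e15        # ~3.5 Pflops sparse FP8, order-of-magnitude guess

manufactured_cost = sale_price * (1 - margin)      # ~$3,500
flops_per_kusd = flops / (manufactured_cost / 1_000)

print(f"cost ~${manufactured_cost:,.0f}, "
      f"~{flops_per_kusd:.1e} flops per $1,000")
```

The order-of-magnitude jump comes almost entirely from removing the scarcity margin, before any hardware improvement is counted.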
I’m going through this too with my kids. I don’t think there is anything I can do educationally to better ensure they thrive as adults other than making sure I teach them practical/physical build and repair skills (likely to be the area where humans with a combination of brains and dexterity retain useful value longer than any other).
Outside of that, the other thing I can do is try to ensure that they have social status and a financial/asset nest egg from me, because there is a good chance that the egalitarian ability to lift oneself through effort is going to largely evaporate as human labour becomes less and less valuable, and I can’t help but wonder how we are going to decide who gets the nice beach-house. If humans are still in control of an increasingly non-egalitarian world, then society will almost certainly slide towards its corrupt old aristocratic/rentier ways, and it becomes all about being part of the Nomenklatura (the communist elite).
A very large amount of human problem-solving/innovation in challenging areas is creating and evaluating potential solutions; it is a stochastic rather than deterministic process. My understanding is that our brains are highly parallelized, evaluating ideas in thousands of ‘cortical columns’ a few mm across (Jeff Hawkins’ Thousand Brains formulation), with an attention mechanism that promotes the filtered best outputs of those myriad processes, forming our ‘consciousness’.
So generating and discarding large numbers of solutions within simpler ‘sub-brains’, via iterative or parallelized operation, is very much how I would expect to see AGI and SI develop.