Idea A (for “Alright”): Humanity should develop hardware-destroying capabilities — e.g., broadly and rapidly deployable non-nuclear EMPs — to be used in emergencies to shut down potentially-out-of-control AGI situations, such as an AGI that has leaked onto the internet, or an irresponsible nation developing AGI unsafely.
Sounds obviously impossible in real life, so how about you go do that and then I’ll doff my hat in amazement and change how I speak of pivotal acts. Go get gain-of-function banned, even, that should be vastly simpler. Then we can talk about doing the much more difficult thing. Otherwise it seems to me like this is just a fairytale about what you wouldn’t need to do in a brighter world than this.
I’m surprised that there’s not more pushback in this community on the idea of a “pivotal act” even being feasible on any reasonable timeline without giving up the game (i.e. revealing that you have AGI and immediately getting seized by the nearest state power), in the same way that there’s pushback on regulation as a feasible approach.
SUMMARY: Pivotal acts as described here are not constrained by intelligence, they’re constrained by resources and time. Intelligence may provide novel solutions, but it does not immediately reduce the time needed for implementation of hardware/software systems in a meaningful way. Novel supply chains, factories, and machinery must first be designed by the AGI and then created by supporters before the AGI will have the influence on the world that is expected by proponents of the “pivotal act” philosophy.
I’m going to structure this post in two parts.
First, I will go through the different ideas posed in this thread as example “pivotal acts” and point out why each won’t work, specifically by looking at exceptions that would keep the “pivotal act” from reliably eliminating 100% of adversaries.
Then, I’ll look at the more general statement that my complaints in part 1 are irrelevant because a superhuman AGI is by definition smarter than me, and therefore it’ll do something that I can’t think of, etc.
Part 1. Pivotal act <x> is not feasible.
social sabotage: auto-generating mass media campaigns to shut down competitor companies by legal means, or
This has the same issues as banning gain-of-function research. It’s difficult to imagine a mass media campaign in the US having the ability to persuade or shut down state-sponsored AI research in another country, e.g. China.
threats: demonstrating powerful cyber or physical or social threats, and bargaining with competitors to shut down “or else”.
Nation states (and companies) don’t generally negotiate with terrorists. First, you’d have to be able to convince the states you can follow through on your threat (see arguments below). Second, you’d need to be able to make your threat from a position of security such that you’re not susceptible to a preemptive strike, either in the form of military or law enforcement personnel showing up at your facilities with search warrants, or, depending on what exactly you threatened and where in the world you are located, a missile instead.
Destroying all competing AI projects might mean that the AI took a month to find a few bugs in linux and tensorflow and create something that’s basically the next stuxnet. This doesn’t sound like that fast a takeoff to me.
Stuxnet worked by targeting hardware/software systems. Specifically, it targeted the hardware (the centrifuges) controlled via software (PLCs) and sent commands that would break the hardware by exceeding the design spec. The Iranian network would have been air-gapped, so the virus had to be delivered on site, either by a human technician performing the deployment or via some form of social engineering like leaving a booby-trapped USB device on the premises and hoping that an Iranian engineer would plug it into the network. Even that last vector can be circumvented by not allowing USB storage devices on computers attached to the network, which is absolutely a thing that certain networks do for security. By “not allow”, I don’t mean a piece of paper that says “don’t do this”; I mean that the software stack running on the computers doesn’t allow USB storage devices and/or the ports are physically inaccessible.
Let’s assume for the sake of argument that the bug already exists and we just need to exploit it. How are you delivering your exploit? Are you assuming the other AI projects are connected to the public internet? Or do you first have to assume that you can break into their VPN, or worse, somehow get into an air-gapped network? When does your exploit take effect? Immediately? Or when someone tries to run some type of experiment? If it’s immediate, then you risk early discovery while the exploit is still spreading, and you give your adversaries time to pull themselves off the network. If it’s triggered on some external command, then you risk the command not being received and certain exploited systems failing to be destroyed as intended. If it’s triggered by the exploited system itself, e.g. in response to some GPU usage threshold, then you run into the same issue where people will start posting online “my GPU melted after I ran a neural network, what’s going on?”
Even the above discussion ignores the fact that Linux is not a monolithic entity and neither is Windows, or MacOS, so you probably need bugs for each OS or distro, and probably separate versions, and you’re soon looking at hundreds of different exploits all of which need to be orchestrated at the same time to avoid early detection / avoidance by your adversaries. Add in the need to target specific libraries and you’ve got even more exploits to deal with, but that’s still assuming that your adversaries use the public versions of libraries, vs using internal forks or private code entirely.
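To make the combinatorics concrete, here is a toy count; every number in it is an assumption I made up for illustration, not a survey of real lab environments:

```python
# Toy estimate of how many distinct exploit variants a "hit every
# competing AI project at once" attack might need. All counts below are
# invented assumptions, purely for illustration.
os_families = 3            # e.g. Linux distros, Windows, macOS
versions_per_family = 6    # supported releases / kernel lines per family
ml_stack_variants = 8      # driver + CUDA + framework combinations in use
private_fork_factor = 1.5  # fudge factor for internal forks and patched builds

baseline = os_families * versions_per_family * ml_stack_variants
print(baseline)                             # 144 combinations
print(int(baseline * private_fork_factor))  # ~216 once forks are counted
```

Even with generous assumptions you land in the hundreds, and every one of them has to fire at roughly the same time to avoid tipping anyone off.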
This isn’t even getting into the most pressing problem of this hypothetical. Stuxnet could destroy the hardware (not just break it until the bug was removed, but actually destroy it) because it was targeting centrifuges, which spin at super high rates (think 1000 revolutions per second for a big piece of machinery) and were thus susceptible to oscillations that would cause the system to physically tear itself apart. How are you going to destroy the GPUs in this hypothetical? Exploit some type of bug that bricks them? That isn’t unheard of: the Amazon MMO “New World” reportedly bricked some high-end graphics cards. That particular example, though, was traced to a hardware fault on less than 1% of the RTX 3090s created by the company EVGA, so you’d need a different exploit for the other 99%, plus the other graphics cards, plus the other manufacturers of those cards. If you can’t identify a way for your software exploit to physically break the hardware, actually break it, then at best this is just a minor speed bump. Even if you can nuke the attached graphics card in a computer 100% of the time, companies have stocks of extra computers and components in storage. You aren’t destroying those, because they’re not even in a computer right now.
Compare all of the above to Stuxnet: a single worm, designed to destroy a single hardware/software system, with the exact target environment (the hardware and the software) known to the creators, and it still took two (?) nation states to pull it off, and, crucially for our discussion of pivotal acts, it was not 100% effective. The best estimate is that maybe 10% of the Iranian centrifuges were destroyed by Stuxnet.
cyber sabotage: hacking into competitors’ computer systems and destroy their data;
See statements above about air-gaps / VPNs / etc. Pointing to anecdotal hacks of big companies doesn’t work, because for your pivotal act you need to hit 100% of adversaries. You also need to deal with backups, including backups that are off-site or otherwise unconnected to the network, which is standard practice for corporations that don’t want to worry about ransomware.
Part 2. Magic isn’t real.
physical sabotage: deploying tiny robotic systems that locate and destroy AI-critical hardware without (directly) harming any humans;
“[AI lab] should just melt all the chip fabs as soon as they get AGI”
Alternatively, an AGI could produce nanomachines which destroy GPUs, are extremely hard to eradicate, but otherwise don’t do much of anything.
upload aligned people who think hard for 1000 subjective years and hopefully figure something out.
It seems relatively plausible that you could use a Limited AGI to build a nanotech system capable of uploading a diverse assortment of (non-brain, or maybe only very small brains) living tissue without damaging them, and that this system would learn how to upload tissue in a general way. Then you could use the system (not the AGI) to upload humans (tested on increasingly complex animals). It would be a relatively inefficient emulation, but it doesn’t seem obviously doomed to me.
Lightspeed bubble of hedonium. All humans are uploaded into a virtual utopia by femtobots. The sun is fully disassembled for raw materials within 10 minutes of you giving the order.
Subtly break their AI. A cyberattack that stops their AI from doing anything, and otherwise has no effect.
These attacks all fall under a bucket I’ll call “wizardry”. In these attacks, we assume that a superintelligence can do things that defy our current understanding of physics, supply chains, factories, chip design, yield rates, etc. I don’t mean that the superintelligence is able to solve problems faster or better than a human, because that trivially follows from the definition of “superintelligence”. What I mean is that the superintelligence in these attacks is able to skip the part of the process that follows “think of a solution”—implementation. For all hardware/software systems that I’ve worked on, coming up with a solution to a problem was probably less than 1% of the total engineering effort spent on bringing that solution into reality. The rest of the time is on implementation. Specifically, the rest of the time is spent on iterative loops of problem discovery and problem solving until you’ve designed a system that actually works in the real world.
Let’s look at nanotechnology to start with, since it’s a popular example.
Ok, so the first thing you need to do is develop working nanobots, because nanobots don’t exist. And to do that, the superhuman AGI is going to think really hard, and design the following at a minimum:
schematics for nanobots, including novel power supplies
verilog / vhdl / equivalent for processors or other integrated circuits on the nanobots
source code for those custom processors (or are you assuming that you’ve got an x86-compatible processor and an environment to run some embedded Linux distro like Yocto or Buildroot?)
machinery to create the processors, custom ICs, and power supplies used by the nanobots
machinery to assemble the nanobots themselves
machinery to test the processors and nanobots
machinery for deploying nanobots, assuming they aren’t capable of just flying over the world to wherever they’re needed
The nanobots need to be designed to the constraints that make “melt the GPU factory” a realistic goal, so the superhuman AGI needs to be considering things like: how are they programmed, what are their actuators (for doing the melting), how do they sense the world (for seeing what to melt), what is their power supply (externally powered? if so, by what? what’s that device? how is it not a failure point in this plan? if they’re internally powered, how is that battery sized, or where is the energy derived?), how are they controlled, and what is their expected lifetime? When you’re answering these questions, you need to reason about how much power is needed to melt a GPU factory, and then work backwards from that based on the number of nanobots you think you can get into the factory, so that you’ve got the right power output and energy requirements per nanobot for the melting.
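To show the flavor of that back-of-the-envelope budgeting, here is a minimal sketch. The copper material constants are real; the tonnage to melt and the swarm size are numbers I invented for illustration:

```python
# Rough energy budget for "melt the critical tooling in a GPU fab".
# Copper constants are real; mass and nanobot count are assumptions.
specific_heat_cu = 385          # J/(kg*K)
melting_point_cu = 1085         # deg C
latent_heat_fusion_cu = 2.05e5  # J/kg
ambient_temp_c = 25

mass_to_melt_kg = 50_000    # assumed: tens of tonnes of critical tooling
nanobot_count = 1_000_000   # assumed swarm size inside the facility

energy_per_kg = specific_heat_cu * (melting_point_cu - ambient_temp_c) + latent_heat_fusion_cu
total_energy_j = energy_per_kg * mass_to_melt_kg
per_bot_j = total_energy_j / nanobot_count

print(f"{total_energy_j / 1e9:.1f} GJ total")   # ~30.7 GJ
print(f"{per_bot_j / 1e3:.1f} kJ per nanobot")  # ~30.7 kJ each
```

For scale, ~30 kJ is roughly what tens of grams of lithium-ion battery stores, which is an absurd amount of energy to carry or harvest at the nanometer scale; that is exactly the kind of constraint the design has to close before “melt the factory” is more than a slogan.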
Now, you need to actually build that machinery. You can’t use existing fabs, because that would give away your novel designs; plus, these aren’t going to be like any design we have in existence, since the scale you’re working with here isn’t something we’ve gotten working in our current reality. So you need to get permits to build a factory, and pay for the factory, and bring in-house all of the knowledge needed to create these types of machines. These aren’t things you can buy. Each one of the machines you need is going to be a custom design. It’s not enough to “think” of the designs; you’ll still need an army of contractors and sub-contractors and engineers and technicians to actually build them. It’s also not sufficient to try to avoid the dependency on human support by using robots or drones instead; that’s not how industrial robots or drones work either. You’ll have a different, easier bootstrap problem first, but still a bootstrap problem nonetheless.
If you’re really lucky, the AGI was able to design machinery using standard ICs, and you just need to get them in stock so you can put them together in-house. Even then, you’re looking at 12-15 week lead times for those components, and that was prior to the chip shortage; now it’s as high as 52+ weeks for certain components. This ignores the time it takes to build the labs and clean rooms you’ll need for high-tech electronics work, and the time to stock those labs with equipment, where certain pieces like room-sized CNC machines are effectively one-off builds from a handful of suppliers in the world, with similar lead times to match.
If you’re unlucky, the AGI had to invent novel ICs just for the machinery for the assembly itself, and now we get to play a game of Factorio in real life as we ask the AGI to please develop a chain of production lines starting from the standard ICs that we can acquire, up to those we need for our actual production line for the nanobots. Remember that we’ve still got 12-15 week lead times on the standard ICs.
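Here is a minimal sketch of why chained dependencies hurt: each stage can only be ordered or built once its predecessor exists, so the lead times add instead of overlapping. The stage names and durations are invented for illustration:

```python
# Sequential lead times for a hypothetical bootstrap chain.
# Stages and week counts are illustrative assumptions, not a real plan.
stages_weeks = {
    "standard ICs (distributor lead time)": 14,
    "custom assembly machinery built from those ICs": 20,
    "nanobot production line built by that machinery": 26,
    "first usable nanobot batch": 6,
}
total_weeks = sum(stages_weeks.values())
print(f"{total_weeks} weeks (~{total_weeks / 52:.1f} years), if nothing has to be redone")
# 66 weeks (~1.3 years), if nothing has to be redone
```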
Tangent: You might look at Amazon and think, well, I can buy a processor and have it here next day, why can’t I get my electronic components that quickly? In a word: scale. You’re not going to need a single processor or a single IC to build this factory. You’re going to need tens of thousands. If you care about quality control on this hardware you’re developing, you might even buy an entire run of a particular hardware component to guarantee that everything you’re using was developed by the same set of machines and processes at a known point in time from a known factory.
The next problem is that you’ll build all of the above, and then it won’t work. You’ll do a root cause analysis to figure out why, discover something you hadn’t realized about how physics works in that environment, or a flaw in some of your components (bad manufacturer, bad lot, etc), update your designs, and go through the process all over again. This is going to take time. Not weeks, but months, or worse, years. If you have to buy new components, it’s back to that 12-15 week lead time. If you want to try and avoid buying new components by cleverly desoldering the ones you have and reusing them, that is very difficult.
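To put a rough number on that, here is a toy schedule model; the iteration count and debug time are assumptions, and the 14-week lead time is the pre-shortage figure from above:

```python
# Every real-world iteration pays the component lead time again, plus
# debug and rebuild time. Iteration count and debug time are assumptions.
iterations = 3              # optimistic for a genuinely novel system
component_lead_weeks = 14   # pre-shortage lead time from above
build_and_debug_weeks = 10  # assumed time to assemble, test, and diagnose

total_weeks = iterations * (component_lead_weeks + build_and_debug_weeks)
print(f"{total_weeks} weeks, roughly {total_weeks / 4.3:.0f} months")  # 72 weeks, ~17 months
```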
You can’t avoid this process by just reading up on the existing literature of how Intel or Nvidia or some other company designs chips because that information 1.) isn’t public, and 2.) isn’t the whole story. The dirty secret of engineering documentation is that engineers hate writing documentation and all big hardware/software engineering projects are carried by the collective knowledge of the organization more so than whatever token documentation got produced as needed for interfacing with an external supplier. Example: Rocketdyne F-1 rocket engines.
A typical design document for something like the F-1, though, was produced under intense deadline pressure and lacked even the barest forms of computerized design aids. Such a document simply cannot tell the entire story of the hardware. Each F-1 engine was uniquely built by hand, and each has its own undocumented quirks. In addition, the design process used in the 1960s was necessarily iterative: engineers would design a component, fabricate it, test it, and see how it performed. Then they would modify the design, build the new version, and test it again. This would continue until the design was “good enough.” [1]
During the time that you spend iterating in the real world to get to working designs, people are going to wonder about your massive factory, and your technicians are going to get drunk at a bar and brag about how they’re developing cutting edge nanobots, and your competitors will realize you’re doing something very novel, very odd, and very difficult to explain.
This is going to cost you literal billions (with a B) of dollars in hardware costs between the supplies you need to buy, the ICs you’ll have on order, the custom machinery you’ll be purchasing to build your own machines, etc. So there’s another prerequisite here of either 1.) you are Apple / Google / Microsoft, or 2.) you’re going to ask the AGI to make a bunch of money on the stock market. I actually believe option 2 is fairly achievable, e.g. see RenTech and the Medallion fund. That fund has averaged annual returns of 70% from 1994 through 2014. However, that’s still a timescale of years and significant seed money (millions of dollars) before you’ll have enough cash on hand to bankroll all of this R&D, unless you get external investment, but to get this much external investment you’ll have to 1.) find someone with billions of dollars, and 2.) convince them that you have AGI, 3.) swear them to secrecy, and 4.) hope that they don’t do some shenanigans like poaching your employees or having the government rain down regulation on you as a stall tactic while they develop their own AGI or try to steal yours.
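For a sense of the timescale on option 2, here is the compounding arithmetic; the seed capital and the target are assumptions, and 70%/year is the oft-quoted gross figure for Medallion:

```python
import math

# Years of Medallion-style compounding to turn seed money into a
# fab-scale war chest. Seed and target are assumed; 70%/yr is the
# commonly cited gross return figure.
seed_usd = 10e6        # assumed starting capital
target_usd = 2e9       # assumed cost of factories, equipment, and payroll
annual_return = 0.70

years = math.log(target_usd / seed_usd) / math.log(1 + annual_return)
print(f"{years:.1f} years")  # ~10 years
```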
The likely result here is an arms race where your competitors try to poach your employees ($2 million / year?) or run other “normal” corporate espionage to understand what’s going on. Example: when Google’s Waymo sued Uber over one of its top self-driving car engineers, who left and allegedly took trade secrets with him.
Tangent: If you want the AGI to be robust to government-sponsored intervention like turning off the grid at the connection to your factory, then you’ll need to invest in power sources at the factory itself, e.g. solar / wind / geothermal / whatever. All of these have permit requirements and you’ll get mired in bureaucratic red tape, especially if you try to install a stable on-site power source like oil, natural gas, or, worse, nuclear. Energy storage isn’t that great at the moment, so maybe that’s another sub-problem for the AGI to solve first as a precursor to all of these other problems, so that it can run on solar power alone.
You might think that the superhuman AGI is going to avoid that iterative loop by getting it right the very first time. Maybe we’ll say it can simulate reality perfectly, so it can prove that the designs will work before they’re built, and then only a single iteration is needed.
Let’s pretend the AI only needs one attempt to figure out working designs: Ok, the AGI simulates reality perfectly. It still doesn’t work the first time, because your human contractors miswired some pins during assembly, and you still need to spend X months debugging and troubleshooting and rebuilding things until all of the problems are found and fixed. If you want to avoid this, you need to add a “perfect QA plan for all sub-components and auditing performed at all integration points” to the list of things that the AGI needs to design in advance, and pair it with “humans that can follow a QA plan perfectly without making human mistakes”.
On the other hand: The AGI can only simulate reality perfectly if there is a theory of physics capable of doing so, which we don’t have. The AGI can develop its own theory, just like you and I could, but at some point the theorizing is going to hit a wall where there are multiple possible solutions, and the only way to see which solution is valid in our reality is to run a test, and in our current understanding of physics, the tests we know how to run involve constructing increasingly elaborate colliders and smashing together particles to see what pops out. While it is possible that there exists another path that does not have a prerequisite of “run test on collider”, you need to add that to your list of assumptions, and you might as well add “magic is real”. Engineering is about tradeoffs and constraints: constraints like mass requirements given some locomotion system, or energy usage given some battery storage density and allowed mass, or the maximum force an actuator can provide given the allowed size for it to fit inside the chassis, etc. If you assume that a superhuman AGI is not subject to constraints anymore, just by virtue of that superhuman intelligence, then you’re living in a world just as fictional as HPMOR.
Are you criticizing the idea that a single superintelligence could ever get to take over the world under any circumstances, or just this strategy of “achieving aligned AI by forcefully dismantling unsafe AI programs with the assistance of a pet AI”?
The latter. I don’t see any reason why a superintelligent entity would not be able to take over the world or destroy it or dismantle it into a Dyson swarm. The point I am trying to make is that the tooling and structures that a superintelligent AGI would need to act autonomously in that way do not actually exist in our current world, so before we can be made into paperclips, there is a necessary period of bootstrapping where the superintelligent AGI designs and manufactures new machinery using our current machinery. Whether it’s an unsafe AGI that is trying to go rogue, or an aligned AGI that is trying to execute a “pivotal act”, the same bootstrapping must occur first.
Case study: a common idea I’ve seen while lurking on LessWrong and SSC/ACT for the past N years is that an AGI will “just” hack a factory and get it to produce whatever designs it wants. This is not how factories work. There is no 100% autonomous factory on Earth that an AGI could just take over to make some other widget instead. Even highly automated factories are 1.) highly automated to produce a specific set of widgets, 2.) require physical adjustments to make different widgets, and 3.) rely on humans for things like input of raw materials, transferring in-work products between automated lines, and the testing or final assembly of completed products. 3D printers are one of the worst offenders in this regard. The public perception is that a 3D printer can produce anything and everything, but they actually have pretty strong constraints on what types of shapes they can make and what materials they can use, and usually require multi-step processes to avoid those constraints, or post-processing to clean up residual pieces that aren’t intended to be part of the final design, and almost always a 3D printer is producing sub-parts of a larger design that still must be assembled together with bolts or screws or welds or some other fasteners.
So if an AGI wants unilateral control where it can do whatever it wants, the very first prerequisite is that it needs to make a futuristic, fully automated, fully configurable, network-controlled factory exist, which then needs to be built with what we have now, and that’s where you’ll hit the supply constraints I’m describing above for things like lead times on part acquisition. The only way to reduce this bootstrapping time is to have this stuff designed in advance of the AGI, but that’s backwards from how modern product development actually works. We design products, and then we design the automated tooling to build those products. If you asked me to design a factory that would be immediately usable by a future AGI, I wouldn’t know where to even start with that request. I need the AGI to tell me what it wants; then I can build that, and then the AGI can take over and do its own thing.
A related point that I think gets missed is that our automated factories aren’t necessarily “fast” in the way you’d expect. There are long lead times for complex products. Even if you have the specialized machinery for creating new chips, you’re still looking at ~14-24 weeks from when raw materials are introduced to when the final products roll off the line. We hide that delay by constantly building the same things all of the time, but it’s very visible when there’s a sudden demand spike; that’s why it takes so long for supply to match demand for products like processors or GPUs. I have no trouble imagining a superintelligent entity that could optimize this and knock down the cycle time, but there are going to be physical limits to these processes, and the question is whether it can knock it down to 10 weeks or to 1 week. And when I’m talking about optimization, this isn’t just uploading new software, because that isn’t how these machines work. It’s designing new, faster machines, or redesigning the assembly line and replacing the existing machines, so there’s a minimum time required for that too before you can benefit from the faster cycle time on actually making things. Once you hit practical limits on cycle time, the only way to get more stuff faster is to scale wide by building more factories or making your current factories even larger.
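One way to see why long cycle times are invisible in normal times but brutal during a demand spike is Little’s law: work in progress equals throughput times cycle time, and any change at the input takes a full cycle time to show up at the output. The numbers below are illustrative:

```python
# Little's law: WIP = throughput * cycle_time. Numbers are illustrative.
cycle_time_weeks = 20           # raw wafer in -> finished chip out
throughput_per_week = 100_000   # chips shipped per week at steady state

wip = throughput_per_week * cycle_time_weeks
print(f"{wip:,} chips in flight at any moment")  # 2,000,000

# If demand doubles today, the first *extra* chip still takes a full
# cycle time to appear, no matter how clever the scheduling is.
print(f"{cycle_time_weeks} weeks before the first additional chip can ship")
```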
If we want to try and avoid the above problems by suggesting that the AGI doesn’t actually hack existing factories, but instead it convinces the factory owners to build the things it wants instead, there’s not a huge difference—instead of the prerequisite here being “build your own factory”, it’s “hostile takeover of existing factory”, where that hostile takeover is either done by manipulation, on the public market, as a private sale, or by outbidding existing customers (e.g. have enough money to convince TSMC to make your stuff instead of Apple’s), or with actual arms and violence. There’s still the other lead times I’ve mentioned for retooling assembly lines and actually building a complete, physical system from one or more automated lines.
You should stop thinking about AI designed nanotechnology like human technology and start thinking about it like actual nanotechnology, i.e. life. There is no reason to believe you can’t come up with a design for self-replicating nanorobots that can also self-assemble into larger useful machines, all from very simple and abundant ingredients—life does exactly that.
Tangent: I don’t think I understand the distinction you’ve made between “AI designed nanotechnology” and “human technology”. Human technology already includes “actual nanotechnology”, e.g. nanolithography in semiconductor production.
I agree that if the AGI gives us a blueprint for the smallest self-replicating nanobot that we’ll need to bootstrap the rest of the nanobot swarm, all we have to do is assemble that first nanobot, and the rest follows. It’s very elegant.
We still need to build that very first self-replicating nanobot though.
We can either do so atom-by-atom with some type of molecular assembler like the ones discussed in Nanosystems, or we can synthesize DNA and use clever tricks to get some existing biology to build things we want for us, or maybe we can build it from a process that the AGI gives us that only uses chemical reactions or lab/industrial production techniques.
If we go with the molecular assembler approach, we need to build one of those first, so that we can build the first self-replicating nanobot. This is effectively the same argument I made above, so I’m going to skip it.
If we go with the DNA approach, then the AGI needs to give us that DNA sequence, and we have to hope that we can create it in a reasonable time despite our poor yield rates and long turnaround times for DNA synthesis of longer sequences. If the sequence is too long, we might first need to ask the AGI to design new DNA synthesis machines, otherwise we’ll be stuck; in that world, we return to my arguments above. In the world where the AGI gives us a reasonable-length DNA sequence, say the size of a very small cell or something, we can continue. The COVID-19 vaccine provides an example of how this goes. We have an intelligent entity (humans) writing code in DNA, synthesizing that DNA, converting it to mRNA, and getting a biological system (human cells) to read that code and produce proteins. Humanity has these tools. I am not sure why we would assume that the company that develops AGI has them. At multiple steps in the chain of what Pfizer and Moderna did to bring mRNA vaccines to market, there are single-vendor gatekeepers who hold the only tooling or processes for industrial production. Even if we assume that you have all of the tooling and processes, we still need to talk about cycle times. I believe Pfizer aimed to get the cycle time (raw materials → synthesized vaccines) for a batch of vaccine down from 15 weeks to 8 weeks. This is an incredibly complex, amazing achievement: we literally wrote a program in DNA, created a way to deliver it to the human body, and it executed successfully in that environment. However, it’s also an example of the current limitations we have. Synthesizing from scratch the mRNA needed to generate a single protein takes >8 weeks, even if you have the full assembly line figured out. This will get faster with time, and we’ll get better at doing it, but I don’t see any reason to think that we’ll have some type of universal / programmable assembly line for an AGI to use anytime soon.
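The “poor yield rates ... on longer sequences” point can be made quantitative. Chemical DNA synthesis has a per-base coupling efficiency somewhere around 99-99.5% (a rough literature figure, not a spec for any particular machine), so the fraction of full-length product decays exponentially with length, which is why long constructs get assembled from short fragments:

```python
# Full-length yield of chemically synthesized DNA versus length, assuming
# ~99.5% per-base coupling efficiency (rough figure, machine-dependent).
coupling_efficiency = 0.995

for bases in (100, 1_000, 10_000):
    full_length_fraction = coupling_efficiency ** bases
    print(f"{bases:>6} bases -> {full_length_fraction:.2e} full-length fraction")
# roughly 0.61, 0.0067, and 1.7e-22 respectively
```

So “the size of a very small cell” (hundreds of kilobases at minimum) is far beyond what you synthesize in one shot; it means many rounds of fragment synthesis and assembly, each with its own time and failure rate.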
If we go with a series of chemical reactions / lab / industrial production techniques, we need to build clean rooms and labs and vacuum chambers and whatever else is used to implement whatever process the AGI gives us for synthesizing the nanobots. Conceptually this is the simplest idea for how you could get something to work quickly. If the AGI gave you a list of chemicals, metals, and biological samples, and a step-by-step process of how to mix, drain, heat, sift, repeat, and at the end of this process you had self-replicating nanobots, that would be pretty cool. This is basically taking evolution’s random walk from a planetary petri dish to the life we see today and asking, “could an AGI shorten the duration from a billion years of random iterative development into mere weeks of some predetermined process to get the first self-replicating nanobots?” The problem with programming is that interpreting code is hard. Anything that can interpret the nanobot equivalent of machine code, like instructions for how and where to melt GPU factories, is going to be vastly more complex than the current state-of-the-art R&D being done by any human lab today. I don’t see a way where this doesn’t reduce to the same Factorio problem I’ve been describing. We’ll first need to synthesize A, so that we can synthesize B, so that we can synthesize C, so that we can synthesize D, and each step will require novel setups and production lines and time, and at the end of it we’ll have a sequence of steps that looks an awful lot like a molecular assembly line for the creation of the very first self-replicating nanobots.
The hypothetical world(s) where these types of constraints aren’t problems for a “pivotal act” are world(s) where the AGI can give us a recipe for the self-replicating nanobots that we can build in our living room at home with a pair of tweezers and materials from Amazon. The progression of human technology over the past ~60 years in the fields of nano-scale engineering and synthetic biology has been toward increasingly elaborate, complex, time-consuming, and low-yield processes and lab equipment just to replicate the simplest structures that life produces ad hoc. I am certain this limitation will be conquered, and I’m equally certain that AI/ML systems will be instrumental in doing so, but I have no evidence to rationally conclude that there’s not a mountain of prerequisite tools still remaining for humanity to build before something like “design anything at any scale” capabilities are generally available in a way that an AGI could make use of them.
Tangent: If we’re concerned about destroying the world, deliberately building self-replicating nanobots that start simple but rapidly assemble into something arbitrarily complex from the whims of an AGI seems like a bad idea, which is why my original post was focused on a top-down hardware/software systems engineering process where the humans involved could presumably understand the plans, schematics, and programming that the AGI handed to them prior to the construction and deployment of those nanobots.
Sorry, I did not mean to violate any established norms.
I posted as a reply to Eliezer’s comment because they said that the “hardware-destroying capabilities” suggested by the OP is “obviously impossible in real life”. I did not expect that my reply would be considered off-topic or irrelevant in that context.
It’s squarely relevant to the post, but it is mostly irrelevant to Eliezer’s comment specifically, and I think the actual drives underlying the decision to make it a reply to Eliezer are probably not in good faith, like, you have to at least entertain the hypothesis that they pretty much realized it wasn’t relevant and they just wanted eliezer’s attention or they wanted the prominence of being a reply to his comment. Personally I hope they receive eliezer’s attention, but piggybacking messes up the reply structure and makes it harder to navigate discussions, to make sense of the pragmatics or find what you’re looking for, which is pretty harmful. I don’t think we should have a lot of patience for that.
(Eliezer/that paragraph he was quoting was about the actions of large states, or of a large international alliance. The reply is pretty much entirely about why it’s impractical to hide your activities from your host state, which is all inapplicable to scenarios where you are/have a state.)
Eliezer, from outside the universe I might take your side of this bet. But I don’t think it’s productive to give up on getting mainstream institutions to engage in cooperative efforts to reduce x-risk.
Sounds obviously impossible in real life, so how about you go do that and then I’ll doff my hat in amazement and change how I speak of pivotal acts. Go get gain-of-function banned, even, that should be vastly simpler. Then we can talk about doing the much more difficult thing. Otherwise it seems to me like this is just a fairytale about what you wouldn’t need to do in a brighter world than this.
I’m surprised that there’s not more push back in this community on the idea of a “pivotal act” even being feasible on any reasonable timeline that wouldn’t give up the game (i.e.: reveal that you have AGI and immediately get seized by the nearest state power), in the same way that there’s push back on regulation as a feasible approach.
SUMMARY: Pivotal acts as described here are not constrained by intelligence, they’re constrained by resources and time. Intelligence may provide novel solutions, but it does not immediately reduce the time needed for implementation of hardware/software systems in a meaningful way. Novel supply chains, factories, and machinery must first be designed by the AGI and then created by supporters before the AGI will have the influence on the world that is expected by proponents of the “pivotal act” philosophy.
I’m going to structure this post as 2 parts.
First, I will go through the different ideas posed in this thread as example “pivotal acts” and point out why it won’t work, specifically by looking at exceptions that would cause the “pivotal act” to not reliably eliminate 100% of adversaries.
Then, I’ll look at the more general statement that my complaints in part 1 are irrelevant because a superhuman AGI is by definition smarter than me, and therefore it’ll do something that I can’t think of, etc.
Part 1. Pivotal act <x> is not feasible.
This has the same issues as banning gain-of-function research. It’s difficult to imagine a mass media campaign in the US having the ability to persuade or shutdown state-sponsored AI research in another country, e.g. China.
Nation states (and companies) don’t generally negotiate with terrorists. First, you’d have to be able to convince the states you can follow through on your threat (see arguments below). Second, you’d need to be able to make your threat from a position of security such that you’re not susceptible to a preemptive strike, either delivered in the form of military or law enforcement personal showing up at your facilities with search warrants OR depending on what exactly you threatened and where you are located in the world, a missile instead.
Stuxnet worked by targeting hardware/software systems. Specifically, it targeted the hardware (the centrifuges) controlled via software (PLCs) and sent commands that would break the hardware by exceeding the design spec. The Iran network would have been air-gapped, so the virus had to be delivered on site either via a human technician performing the deployment, or via some form of social engineering like leaving a trapped USB device on the premises and hoping that an Iranian engineer would plug it into their network. Even that last vector can be circumvented by not allowing USB storage devices on computers attached to the network which is absolutely a thing that certain networks do for security. By “not allow”, I don’t mean it’s a piece of paper that says “don’t do this”, I mean that the software stack running on the computers don’t allow USB storage devices and/or the ports are physically inaccessible.
Let’s assume for the sake of argument that the bug already exists and we just need to exploit it. How are you delivering your exploit? Are you assuming the other AI projects are connected to the public internet? Or do you first have to assume that you can break into their VPN, or worse, somehow get into an air-gapped network? When does your exploit take effect? Is it immediately? Or is it when someone tries to run some type of experiment? If it’s immediately, then you risk early discovery when the exploit is being spread and you give your adversaries time to pull themselves from a network. If it’s triggered on some external command, then you risk the command not being received and certain exploited systems failing to be destroyed as intended. If it’s triggered by the system it has exploited, e.g. in response to some GPU usage threshold, then you run into the same issue where people will start posting online “my GPU melted after I ran a neural network, what’s going on?”
Even the above discussion ignores the fact that Linux is not a monolithic entity and neither is Windows, or MacOS, so you probably need bugs for each OS or distro, and probably separate versions, and you’re soon looking at hundreds of different exploits all of which need to be orchestrated at the same time to avoid early detection / avoidance by your adversaries. Add in the need to target specific libraries and you’ve got even more exploits to deal with, but that’s still assuming that your adversaries use the public versions of libraries, vs using internal forks or private code entirely.
This isn’t even getting into the most pressing problem of this hypothetical. Stuxnet could destroy the hardware—not just break it until the bug was removed, but actually destroy it—because it was targeting centrifuges which are things that spin at super high rates (think 1000 revolutions per second for a big piece of machinery) and were thus susceptible to oscillations that would cause the system to physically tear itself apart. How are you going to destroy the GPUs in this hypothetical? Exploit some type of bug that bricks them? That isn’t unheard of. The Amazon MMO “New World” reportedly bricked some high end graphics cards. That particular example though was traced to a hardware fault on less than 1% of RTX 3090′s created by the company EVGA, so you’d need a different exploit for the other 99%, plus the other graphics cards, plus the other manufacturers of those cards. If you can’t identity a way for your software exploit to physically break the hardware, actually break it, then at best this is just a minor speed bump. Even if you can 100% of the time nuke the attached graphics card in a computer, companies have stock of extra computers and components in storage. You aren’t destroying those, because they’re not even in a computer right now.
Compare all of the above to Stuxnet: a single worm, designed to destroy a single hardware/software system, with the exact target environment (the hardware, and the software) known to the creators, and it still took 2 (?) nation states to pull it off, and crucial to our discussion of pivotal acts, it was not 100% effective. The best estimate is that maybe 10% of the Iran centrifuges were destroyed by Stuxnet.
See statements above about air-gaps / VPNs / etc. Pointing to anecdotal hacks of big companies doesn’t work, because for your pivotal act you need to hit 100% of adversaries. You also need to deal with backups, including backups that are off-site or otherwise unconnected to the network, which is standard practice for corporations that don’t want to care about ransomware.
Part 2. Magic isn’t real.
These attacks all fall under a bucket I’ll call “wizardry”. In these attacks, we assume that a superintelligence can do things that defy our current understanding of physics, supply chains, factories, chip design, yield rates, etc. I don’t mean that the superintelligence is able to solve problems faster or better than a human, because that trivially follows from the definition of “superintelligence”. What I mean is that the superintelligence in these attacks is able to skip the part of the process that follows “think of a solution”—implementation. For all hardware/software systems that I’ve worked on, coming up with a solution to a problem was probably less than 1% of the total engineering effort spent on bringing that solution into reality. The rest of the time is on implementation. Specifically, the rest of the time is spent on iterative loops of problem discovery and problem solving until you’ve designed a system that actually works in the real world.
Let’s look at nanotechnology to start with, since it’s a popular example.
Ok, so the first thing you need to do is develop working nanobots, because nanobots don’t exist. And to do that, the superhuman AGI is going to think really hard, and design the following at a minimum:
schematics for nanobots, including novel power supplies
verilog / vhdl / equivalent for processors or other integrated circuits on the nanobots
source code for those custom processors (or are you assuming that you’ve got an x86 compatible processor and an environment to run some embedded Linux distro like yocto or buildroot()
machinery to create the processors, custom ICs, and power supplies used by the nanobots
machinery to assemble the nanobots themselves
machinery to processors and nanobots
machinery for deploying nanobots, assuming they aren’t capable of just flying over the world to wherever they’re needed
The nanobots need to be designed to the constraints that make “melt the GPU factory” a realistic goal, so this means the superhuman AGI needs to be considering things like: how are they programmed, what are their actuators (for doing the melting), how do they sense the world (for seeing what to melt), what is their power supply (externally powered? if so, by what? what’s that device? how is it not a failure point in this plan? if they’re internally powered, how is that battery sized or where is the energy derived?), how are they controlled, what is their expected lifetime? When you’re answering these questions, you need to reason about how much power is needed to melt a GPU factory, and then work backwards from that based on the number of nanobots you think you get into the factory, so that you’ve got the right power output + energy requirements per nanobot for the melting.
Now, you need to actually build that machinery. You can’t use existing fabs because that would give away your novel designs, plus, these aren’t going to be like any design we have in existence since the scale you’re working with here isn’t something we’ve gotten working in our current reality. So you need to get some permits to build a factory, and pay for the factory, and bring in-house all of the knowledge needed to create these types of machines. These aren’t things you can buy. Each one of the machines you need is going to a custom design. It’s not enough to “think” of the designs, you’ll still need an army of contractors and sub-contractors and engineers and technicians to actually build them. It’s also not sufficient to try and avoid the dependency on human support by using robots or drones instead that’s not how industrial robots or drones work either. You’ll have a different, easier bootstrap problem first, but still a bootstrap problem nonetheless.
If you’re really lucky, the AGI was able to design machinery using standard ICs and you just need to get them in stock so you can put them together in house. Under that timeline, you’re looking at 12-15 week lead times for those components, prior to the chip shortage. Now it’s as high as 52+ week lead times for certain components. This is ignoring the time that it took to build the labs and clean rooms you’ll need to do high-tech electronics work, and the time to stock those labs with equipment, where certain pieces of equipment like room-sized CNC equipment are effectively one-off builds from a handful of suppliers in the world with similar lead times to match.
If you’re unlucky, the AGI had to invent novel ICs just for the machinery for the assembly itself, and now we get to play a game of Factorio in real life as we ask the AGI to please develop a chain of production lines starting from the standard ICs that we can acquire, up to those we need for our actual production line for the nanobots. Remember that we’ve still got 12-15 week lead times on the standard ICs.
Tangent: You might look at Amazon and think, well, I can buy a processor and have it here next day, why can’t I get my electronic components that quickly? In a word: scale. You’re not going to need a single processor or a single IC to build this factory. You’re going to need tens of thousands. If you care about quality control on this hardware you’re developing, you might even buy an entire run of a particular hardware component to guarantee that everything you’re using was developed by the same set of machines and processes at a known point in time from a known factory.
The next problem is that you’ll build all of the above, and then it won’t work. You’ll do a root cause analysis to figure out why, discover something you hadn’t realized about how physics works in that environment, or a flaw in some of your components (bad manufacturer, bad lot, etc), update your designs, and go through the process all over again. This is going to take time. Not weeks, but months, or worse, years. If you have to buy new components, it’s back to that 12-15 week lead time. If you want to try and avoid buying new components by cleverly desoldering the ones you have and reusing them, that is very difficult.
You can’t avoid this process by just reading up on the existing literature of how Intel or Nvidia or some other company designs chips because that information 1.) isn’t public, and 2.) isn’t the whole story. The dirty secret of engineering documentation is that engineers hate writing documentation and all big hardware/software engineering projects are carried by the collective knowledge of the organization more so than whatever token documentation got produced as needed for interfacing with an external supplier. Example: Rocketdyne F-1 rocket engines.
During the time that you spend iterating in the real world to get to working designs, people are going to wonder about your massive factory, and your technicians are going to get drunk at a bar and brag about how they’re developing cutting edge nanobots, and your competitors will realize you’re doing something very novel, very odd, and very difficult to explain.
This is going to cost you literal billions (with a B) of dollars in hardware costs between the supplies you need to buy, the ICs you’ll have on order, the custom machinery you’ll be purchasing to build your own machines, etc. So there’s another prerequisite here of either 1.) you are Apple / Google / Microsoft, or 2.) you’re going to ask the AGI to make a bunch of money on the stock market. I actually believe option 2 is fairly achievable, e.g. see RenTech and the Medallion fund. That fund has averaged annual returns of 70% from 1994 through 2014. However, that’s still a timescale of years and significant seed money (millions of dollars) before you’ll have enough cash on hand to bankroll all of this R&D, unless you get external investment, but to get this much external investment you’ll have to 1.) find someone with billions of dollars, and 2.) convince them that you have AGI, 3.) swear them to secrecy, and 4.) hope that they don’t do some shenanigans like poaching your employees or having the government rain down regulation on you as a stall tactic while they develop their own AGI or try to steal yours.
The likely result here is an arms race where your competitors try to poach your employees ($2 million / year?) or other “normal” corporate espionage to understand what’s going on. Example: When Google sued Uber for poaching one of their top self-driving car engineers.
Tangent: If you want the AGI to be robust to government-sponsored intervention like turning off the grid at the connection to your factory, then you’ll need to invest in power sources at the factory itself, e.g. solar / wind / geothermal / whatever. All of these have permit requirements and you’ll get mired in bureaucratic red tape, especially if you try to do a stable on-site power source like oil, natural gas, or worse nuclear. Energy storage isn’t that great at the moment, so maybe that’s another sub-problem or the AGI to solve first as a precursor to all of these other problems, so that it can run on solar power alone.
You might think that the superhuman AGI is going to avoid that iterative loop by getting it right on the very first time. Maybe we’ll say it’ll simulate reality perfectly, so it can prove that the designs will work before they’re built, and then there’s only a single iteration needed.
Let’s pretend the AI only needs one attempt to figure out working designs: Ok, the AGI perfectly simulates reality perfectly. It still doesn’t work the first time because your human contractors miswired some pins during assembly, and you still need to spend X many months debugging and troubleshooting and rebuilding things until all of the problems are found and fixed. If you want to avoid this, you need to add a “perfect QA plan for all sub-components and auditing performed at all integration points” to the list of things that the AGI needs to design in advance, and pair it with “humans that can follow a QA plan perfectly without making human mistakes”.
On the other hand: The AGI can only simulate reality perfectly if we had a theory of physics that could do so, which we don’t. The AGI can develop their own theory, just like you and I could do so, but at some point the theorizing is going to hit a wall where there are multiple possible solutions, and the only way to see which solution is valid in our reality is to run a test, and in our current understanding of physics, the tests we know how to run involve constructing increasing elaborate colliders and smashing together particles to see what pops out. While it is possible that there exists another path that does not have a prerequisite of “run test on collider”, you need to add that to your list of assumptions, and you might as well add “magic is real”. Engineering is about tradeoffs or constraints. Constraints like mass requirements given some locomotion system, or energy usage given some battery storage density and allowed mass, or maximum force an actuator can provide given the allowed size of it to fit inside of the chassis, etc. If you assume that a superhuman AGI is not susceptible to constraints anymore, just by virtue of that superhuman intelligence, then you’re living in a world just as fictional as HPMOR.
Are you criticizing the idea that a single superintelligence could ever get to take over the world under any circumstances, or just this strategy of “achieving aligned AI by forcefully dismantling unsafe AI programs with the assistance of a pet AI”?
The latter. I don’t see any reason why a superintelligent entity would not be able to take over the world or destroy it or dismantle it into a Dyson swarm. The point I am trying to make is that the tooling and structures that a superintelligent AGI would need to act autonomously in that way do not actually exist in our current world, so before we can be made into paperclips, there is a necessary period of bootstrapping where the superintelligent AGI designs and manufactures new machinery using our current machinery. Whether it’s an unsafe AGI that is trying to go rogue, or an aligned AGI that is trying to execute a “pivotal act”, the same bootstrapping must occur first.
Case study: a common idea I’ve seen while lurking on LessWrong and SSC/ACT for the past N years is that an AGI will “just” hack a factory and get it to produce whatever designs it wants. This is not how factories work. There is no 100% autonomous factory on Earth that an AGI could just take over to make some other widget instead. Even highly automated factories are 1.) highly automated to produce a specific set of widgets, 2.) require physical adjustments to make different widgets, and 3.) rely on humans for things like input of raw materials, transferring in-work products between automated lines, and the testing or final assembly of completed products. 3D printers are one of the worst offenders in this regard. The public perception is that a 3D printer can produce anything and everything, but they actually have pretty strong constraints on what types of shapes they can make and what materials they can use, and usually require multi-step processes to avoid those constraints, or post-processing to clean up residual pieces that aren’t intended to be part of the final design, and almost always a 3D printer is producing sub-parts of a larger design that still must be assembled together with bolts or screws or welds or some other fasteners.
So if an AGI wants to have unilateral control where it can do whatever it wants, the very first prerequisite is that it needs to make a futuristic, fully automated, fully configurable, network-controlled factory exist—which then needs to be built with what we have now, and that’s where you’ll hit the supply constraints I’m describing above for things like lead times on part acquisition. The only way to reduce this bootstrapping time is to have this stuff designed in advance of the AGI, but that’s backwards from how modern product development actually works. We design products, and then we design the automated tooling to build those products. If you asked me to design a factory that would be immediately usable by a future AGI, I wouldn’t know where to even start with that request. I need the AGI to tell me what it wants, and then I can build that, and then the AGI can takeover and do their own thing.
A related point that I think gets missed is that our automated factories aren’t necessarily “fast” in a way you’d expect. There’s long lead times for complex products. If you have the specialized machinery for creating new chips, you’re still looking at ~14-24 weeks from when raw materials are introduced to when the final products roll off the line. We hide that delay by constantly building the same things all of the time, but it’s very visibly when there’s a sudden demand spike—that’s why it takes so long before the supply can match the demand for products like processors or GPUs. I have no trouble with imagining a superintelligent entity that could optimize this and knock down the cycle time, but there’s going to be physical limits to these processes and the question is can it knock it down to 10 weeks or to 1 week? And when I’m talking about optimization, this isn’t just uploading new software because that isn’t how these machines work. It’s designing new, faster machines or redesigning the assembly line and replacing the existing machines, so there’s a minimum time required for that too before you can benefit from the faster cycle time on actually making things. Once you hit practical limits on cycle time, the only way to get more stuff faster is to scale wide by building more factories or making your current factories even larger.
If we try to avoid the above problems by suggesting that the AGI doesn’t actually hack existing factories, but instead convinces the factory owners to build the things it wants, there’s not a huge difference. Instead of the prerequisite being “build your own factory”, it’s “hostile takeover of existing factory”, where that takeover is done by manipulation, on the public market, as a private sale, by outbidding existing customers (e.g. having enough money to convince TSMC to make your stuff instead of Apple’s), or with actual arms and violence. There are still the other lead times I’ve mentioned for retooling assembly lines and actually building a complete, physical system from one or more automated lines.
You should stop thinking about AI designed nanotechnology like human technology and start thinking about it like actual nanotechnology, i.e. life. There is no reason to believe you can’t come up with a design for self-replicating nanorobots that can also self-assemble into larger useful machines, all from very simple and abundant ingredients—life does exactly that.
Tangent: I don’t think I understand the distinction you’ve made between “AI designed nanotechnology” and “human technology”. Human technology already includes “actual nanotechnology”, e.g. nanolithography in semiconductor production.
I agree that if the AGI gives us a blueprint for the smallest self-replicating nanobot that we’ll need to bootstrap the rest of the nanobot swarm, all we have to do is assemble that first nanobot, and the rest follows. It’s very elegant.
We still need to build that very first self-replicating nanobot though.
We can either do so atom-by-atom with some type of molecular assembler like the ones discussed in Nanosystems, or we can synthesize DNA and use clever tricks to get some existing biology to build things we want for us, or maybe we can build it from a process that the AGI gives us that only uses chemical reactions or lab/industrial production techniques.
If we go with the molecular assembler approach, we need to build one of those first, so that we can build the first self-replicating nanobot. This is effectively the same argument I made above, so I’m going to skip it.
If we go with the DNA approach, then the AGI needs to give us that DNA sequence, and we have to hope that we can synthesize it in a reasonable time despite the poor yield rates and long turnaround times of DNA synthesis on longer sequences. If the sequence is too long, we might be in a place where we first need to ask the AGI to design new DNA synthesis machines, otherwise we’ll be stuck, and in that world we return to my arguments above. In the world where the AGI gives us a reasonably sized DNA sequence, say the size of a very small cell or something, we can continue.

The COVID-19 vaccine provides an example of how this goes. We have an intelligent entity (humans) writing code in DNA, synthesizing that DNA, converting it to mRNA, and getting a biological system (human cells) to read that code and produce proteins. Humanity has these tools. I am not sure why we would assume that the company that develops AGI has them. At multiple steps in the chain of what Pfizer and Moderna did to bring mRNA vaccines to market, there are single-vendor gatekeepers who hold the only tooling or processes for industrial production. Even if we assume that you have all of the tooling and processes, we still need to talk about cycle times. I believe Pfizer aimed to get the cycle time (raw materials → synthesized vaccines) for a batch of vaccine down from 15 weeks to 8 weeks. This is an incredibly complex, amazing achievement: we literally wrote a program in DNA, created a way to deliver it to the human body, and it executed successfully in that environment. But it’s also an example of our current limitations. Synthesizing from scratch the mRNA needed to produce a single protein takes more than 8 weeks, even if you have the full assembly line figured out. This will get faster over time, and we’ll get better at doing it, but I don’t see any reason to think that we’ll have some type of universal, programmable assembly line for an AGI to use anytime soon.
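For a rough sense of why sequence length is the sticking point, here is a minimal sketch. The per-base coupling efficiency is an assumption in the range commonly quoted for phosphoramidite synthesis, not a vendor spec, and the model ignores error correction and fragment assembly entirely:

```python
# Why long de-novo DNA sequences are hard: per-base coupling efficiency compounds.
# 0.995 per base is an assumed value in the commonly quoted range for
# phosphoramidite synthesis; real numbers vary by vendor and chemistry.

def full_length_yield(bases, coupling_efficiency=0.995):
    """Fraction of strands that come out full-length with no missed couplings."""
    return coupling_efficiency ** bases

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7,} bases -> {full_length_yield(n):.1e} full-length fraction")
# ~6.1e-01 at 100 bases, ~6.7e-03 at 1,000, ~1.7e-22 at 10,000,
# effectively zero at 100,000: long constructs have to be made as short
# fragments and then assembled, which means more machinery and more time.
```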
If we go with a series of chemical reactions or lab/industrial production techniques, we need to build the clean rooms, labs, vacuum chambers, and whatever else is used to implement whatever process the AGI gives us for synthesizing the nanobots. Conceptually this is the simplest idea for how you could get something to work quickly. If the AGI gave you a list of chemicals, metals, and biological samples, plus a step-by-step process of how to mix, drain, heat, sift, repeat, and at the end of this process you had self-replicating nanobots, that would be pretty cool. This is basically taking evolution’s random walk from a planetary petri dish to the life we see today and asking, “could an AGI compress a billion years of random iterative development into mere weeks of some predetermined process to get the first self-replicating nanobots?” The problem is that interpreting code is hard. Anything that can interpret the nanobot equivalent of machine code, like instructions for how and where to melt GPU factories, is going to be vastly more complex than the current state-of-the-art R&D being done by any human lab today. I don’t see a way where this doesn’t reduce to the same Factorio problem I’ve been describing: we’ll first need to synthesize A, so that we can synthesize B, so that we can synthesize C, so that we can synthesize D, and each step will require novel setups and production lines and time, and at the end of it we’ll have a sequence of steps that looks an awful lot like a molecular assembly line for the creation of the very first self-replicating nanobots.
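As a toy illustration of that serial dependency (every number here is an invented placeholder, not an estimate of any real process):

```python
# Toy illustration of the serial bootstrap chain: A must exist before B can be
# built, and so on. All durations are invented placeholders, not estimates.
stages_weeks = {
    "A: precursor synthesis rig": 8,
    "B: intermediate production line": 12,
    "C: assembler prototype": 16,
    "D: first self-replicating unit": 6,
}
total = sum(stages_weeks.values())
print(total, "weeks before the first nanobot exists")   # 42 weeks, serially
# Only after stage D does exponential self-replication start doing the work;
# speeding up any single stage still leaves the others on the critical path.
```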
The hypothetical world(s) where these types of constraints aren’t problems for a “pivotal act” are world(s) where the AGI can give us a recipe for the self-replicating nanobots that we can build in our living room with a pair of tweezers and materials from Amazon. The progression of human technology over the past ~60 years in nano-scale engineering and synthetic biology has been toward increasingly elaborate, complex, time-consuming, and low-yield processes and lab equipment just to replicate the simplest structures that life produces ad hoc. I am certain this limitation will be conquered, and I’m equally certain that AI/ML systems will be instrumental in doing so, but I have no evidence from which to conclude that there isn’t a mountain of prerequisite tools still remaining for humanity to build before something like “design anything at any scale” capabilities are generally available in a way that an AGI could make use of them.
Tangent: If we’re concerned about destroying the world, deliberately building self-replicating nanobots that start simple but rapidly assemble into something arbitrarily complex from the whims of an AGI seems like a bad idea, which is why my original post was focused on a top-down hardware/software systems engineering process where the humans involved could presumably understand the plans, schematics, and programming that the AGI handed to them prior to the construction and deployment of those nanobots.
This is an inappropriate place to put this.
Sorry, I did not mean to violate any established norms.
I posted as a reply to Eliezer’s comment because they said that the “hardware-destroying capabilities” suggested by the OP are “obviously impossible in real life”. I did not expect that my reply would be considered off-topic or irrelevant in that context.
It seems to me that it is squarely on-topic in this thread, and I do not understand MakoYass’s reaction.
(fwiw I found it a bit weird as a reply to Eliezer-in-particular, but found it a reasonable comment in general)
It’s squarely relevant to the post, but it is mostly irrelevant to Eliezer’s comment specifically, and I think the actual drives underlying the decision to make it a reply to Eliezer were probably not in good faith. You have to at least entertain the hypothesis that they pretty much realized it wasn’t relevant and just wanted Eliezer’s attention, or wanted the prominence of being a reply to his comment.
Personally I hope they receive Eliezer’s attention, but piggybacking messes up the reply structure and makes it harder to navigate discussions, make sense of the pragmatics, or find what you’re looking for, which is pretty harmful. I don’t think we should have a lot of patience for that.
(Eliezer/that paragraph he was quoting was about the actions of large states, or of a large international alliance. The reply is pretty much entirely about why it’s impractical to hide your activities from your host state, which is all inapplicable to scenarios where you are/have a state.)
Eliezer, from outside the universe I might take your side of this bet. But I don’t think it’s productive to give up on getting mainstream institutions to engage in cooperative efforts to reduce x-risk.
A propos, I wrote the following post in reaction to positions-like-yours-on-this-issue, but FYI it’s not just you (maybe 10% you though?):
https://www.lesswrong.com/posts/5hkXeCnzojjESJ4eB
The link doesn’t work. I think you are linking to a draft version of the post or something.