This was run on davinci via the OpenAI API. First completion.
ML starts running factories, warehouses, shipping, and construction. ML assistants help write code and integrate ML into new domains. ML designers help build factories and the robots that go in them. ML finance systems invest in companies on the basis of complicated forecasts and (ML-generated) audits. Tons of new factories, warehouses, power plants, trucks and roads are being built. Things are happening quickly, investors have super strong FOMO, no one really knows whether it’s a bubble but they can tell that e.g. huge solar farms are getting built and something is happening that they want a piece of. Defense contractors are using ML systems to design new drones, and ML is helping the DoD decide what to buy and how to deploy it. The expectation is that automated systems will manage drones during high-speed ML-on-ML conflicts because humans won’t be able to understand what’s going on. ML systems are designing new ML systems, testing variations, commissioning giant clusters. The financing is coming from automated systems, the clusters are built by robots. A new generation of fabs is being built with unprecedented speed using new automation.
At this point everything kind of makes sense to humans. It feels like we are living at the most exciting time in history. People are making tons of money. The US defense establishment is scared because it has no idea what a war is going to look like right now, but in terms of policy their top priority is making sure the boom proceeds as quickly in the US as it does in China because it now seems plausible that being even a few years behind would result in national irrelevance.
Things are moving very quickly and getting increasingly hard for humans to evaluate.
Discussion:
(I’m not sure I’m totally on board with this, but it’s a fun thought experiment.)
One of the things that I find fascinating about this thought experiment is that it’s basically the story of the internet. It’s a story about how humans build something, and then it takes on a life of its own. This is a pretty common narrative in technology, but it’s weird because it happens without anyone explicitly trying to build something that will get out of control. You don’t actually need to program a self-replicating ML system to make it happen. You just need to build something that looks like it might be self-replicating.
You can think of an ML system as a super-powerful AI that is really good at building things. It can do a lot of things humans can’t, but it can’t think about how its creations will be used. This is the same way that humans can build nuclear weapons, but we can’t predict the long-term consequences of our actions. We have no idea what the long-term consequences of an ML system building more ML systems will be.
There are lots of potential paths forward for this thought experiment. I think we can basically divide them into two categories:
1. The “ML is the new electricity” path.
If you were a super-intelligent alien and you landed on Earth in the 19th century, you would be pretty confused about what electricity was. You’d probably be pretty confused about what was going on in general. You’d probably figure out that humans are pretty smart, but you’d be kind of puzzled about why they were so obsessed with their new electricity thing. You’d wonder why they were building so many electric trains. You’d wonder why they were building factories. You’d wonder why they were building all these new cities. You’d wonder why they were fighting so many wars.
This is basically the path we’re on.
In this path, we’re building lots of new infrastructure and new things, and we’re getting better at making all sorts of stuff really fast. We’re just using electricity as a cheap, convenient way to get work done.
This path will basically continue until we run out of things to build. Then it will probably end in a lot of wars, but we don’t know exactly how that will happen.
2. The “ML is the new solar panels” path.
In this path, we basically stop building new stuff, and we start focusing on maximizing the efficiency of the things we’ve already built.
The main thing that humans are doing in this path is running simulations. We run simulations of the economy and the solar system and the galaxy. We run simulations of ML systems running simulations of the economy and the solar system and the galaxy. We figure out what’s going on in the simulations and how to optimize them.
I think this path is pretty likely.
In this path, we will run out of things to build, but we will be able to build more things out of the things we’ve already built. We will be able to build more factories, more solar panels, more computers, more software, more cloud services, more networking, more wireless connectivity, more energy transmission, more roads, more trucks, more drones, more factories, more warehouses, more shipping, more construction, more factories, more drones, more trucks, more drones, more shipping, more drones, more trucks, more drones, more construction, more factories, more solar panels, more software, more networks, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones, more drones … more drones, more
I used nucleus sampling with p=.76, but apparently that was insufficient to prevent degeneration.
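For reference, nucleus (top-p) sampling keeps only the smallest set of tokens whose cumulative probability exceeds p, renormalizes over that set, and samples from it. A minimal NumPy sketch of the idea (function name and interface are my own, not the OpenAI API's):

```python
import numpy as np

def nucleus_sample(logits, p=0.76, rng=None):
    """Top-p (nucleus) sampling: restrict to the smallest set of tokens
    whose cumulative probability exceeds p, renormalize, and sample."""
    rng = rng or np.random.default_rng()
    # softmax over the logits
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # sort tokens by probability, most probable first
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    # keep tokens up to (and including) the first one that pushes
    # cumulative mass past p
    cutoff = int(np.searchsorted(cumulative, p)) + 1
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=nucleus_probs))
```

With a low p, a single dominant token (here, "drones") can repeatedly capture the whole nucleus, which is one way repetition loops like the one above can persist once they start.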
Another (outer) alignment failure story
Apparently we’ll be able to build lots of drones.