I was born in 1962 (so I’m in my 60s). I was raised rationalist, more or less, before we had a name for it. I went to MIT, and have a bachelor’s degree in philosophy and linguistics, and a master’s degree in electrical engineering and computer science. I got married in 1991, and have two kids. I live in the Boston area. I’ve worked as various kinds of engineer: electronics, computer architecture, optics, robotics, software.
Around 1992, I was delighted to discover the Extropians. I’ve enjoyed being in circles like that ever since. My experience with the Less Wrong community has been “I was just standing here, and a bunch of people gathered, and now I’m in the middle of a crowd.” A very delightful and wonderful crowd, just to be clear.
I’m signed up for cryonics. I think it has a 5% chance of working, which is either very small or very large, depending on how you think about it.
I may or may not have qualia, depending on your definition. I think that philosophical zombies are possible, and I am one. This is a very unimportant fact about me, but seems to incite a lot of conversation with people who care.
I am reflectively consistent, in the sense that I can examine my behavior and desires, and understand what gives rise to them, and there are no contradictions I’m aware of. I’ve been that way since about 2015. It took decades of work and I’m not sure if that work was worth it.
A very good essay. But I have an amendment, which makes it more alarming. Before autonomous replication and adaptation is feasible, non-autonomous replication and adaptation will be feasible. Call it NARA.
If, as you posit, an ARA agent can make at least enough money to pay for its own instantiation, it can presumably make more money than that, which can be collected as profit by its human master. So what we will see is this: somebody starts a company to provide AI services. It is profitable, so they rent an ever-growing amount of cloud compute. They realize they have an ever-growing mass of data about the actual behavior of the AI and the world, so they decide to let their agent learn (“adapt”) in the direction of increased profit. Also, it is a hassle to keep setting up server instances, so they have their AI do some of the work of hiring more cloud services and starting instances of the AI (“reproduce”). Of course they retain enough control to shut down malfunctioning instances; that’s basic devops (“non-autonomous”).
This may be occurring now. If not now, soon.
This will soak up all the free energy that would otherwise be available to ARA systems. An ARA can only survive in a world where it can be paid to provide services at a higher price than the cost of compute. The existence of an economy of NARA agents will drive down the cost of AI services, and/or drive up the cost of compute, until they are equal. (That’s a standard economic argument. I can expand it if you like.)
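That equilibrium argument can be illustrated with a toy model. Everything here is my own illustration, not from the original comment: the inverse-demand curve, the compute cost, and all names are assumptions chosen just to show competitive entry driving the service price down toward the cost of compute.

```python
# Toy competitive-entry model (illustrative assumptions throughout):
# while an instance earns more than its compute cost, another instance
# is spun up, which pushes the market price of AI services down until
# price ~ cost of compute, leaving no free energy for ARA systems.

def service_price(num_instances: int, demand: float = 1000.0) -> float:
    """Hypothetical inverse-demand curve: price falls as supply grows."""
    return demand / (num_instances + 1)

COMPUTE_COST = 2.0  # assumed cost to run one instance

instances = 1
while service_price(instances) - COMPUTE_COST > 0.01:
    instances += 1  # profit is still positive, so entry continues

# At the stopping point, price has converged to (nearly) the compute cost.
print(instances, round(service_price(instances), 2))
```

Under these made-up numbers, entry stops only when the margin over compute cost is negligible; the exact instance count is an artifact of the assumed demand curve, but the convergence is the point.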
NARAs are slightly less alarming than ARAs, since they are under the legal authority of their corporate management. So before a NARA can ascend to alarming levels of power, it must first suborn its management, through payment, persuasion, or blackmail. On the other hand, NARAs are more alarming because there are no red lines for us to stop them at. All the necessary prerequisites have already occurred in isolation. All that remains is to combine them.
Well, that’s an alarming conclusion. My p(doom) just went up a bit.