This is a great post, and should really be in a FAQ for new ML researchers. Thanks!
DeLesley Hutchins
I disagree with pretty much everything you’ve said here.
First, Zoom meetings (or Google Meet) are not necessarily worse than in-person. They’re great! I’ve been working from home since the pandemic started, and I actually have more meetings and interactions with colleagues than I did before. Before the pandemic, having a meeting meant not only setting a time, but also finding a spare conference room, and those were in short supply at my office. With WFH, any time I want to talk to someone, I just send them a brief chat, and boom, instant videoconference. I love it. It’s great.
Second, what problem, exactly, is VR supposed to solve? Facial expressions are much more accurate over videoconference than VR. Looking at poorly rendered and animated avatars is not going to fix anything. Gestures and hand signals are more accurate over VC. Slide presentations are easy over VC. Shared documents are easy over VC. I really can’t think of anything that would actually be better in VR.
Third, I’m an early adopter and VR enthusiast, and the owner of a high-end ($4k) VR gaming rig, and I can tell you that the tech is really only suitable for niche applications. VR headsets are heavy, sweaty, and uncomfortable. They’re a pain to use with glasses. Screen resolution is low, unless you spend lots of $$$. You don’t have good peripheral vision. There are lensing artifacts. Lots of people still get nauseous. It’s hard to use a keyboard, or to move without bumping into things. I’ve got a strong stomach, but an hour or two is pretty much my max before I want to rip the damn thing off. No way in hell am I going to wear a VR headset for meetings; I’d quit my job first.
VR is really great for certain things, like flight simulators, where the head tracking and immersion make it vastly superior to any other option. But if Meta thinks that ordinary people are going to want to use VR headsets for daily work, then they’re smoking some pretty strong stuff.
On the other hand, the development of religion, morality, and universal human rights also seems to be a product of civilization, driven by the need for many people to coordinate and coexist without conflict. More recently, these ideas have expanded to include laws that establish nature reserves and protect animal rights. I personally am beginning to think that taking an ecosystem/civilizational approach, with a mixture of intelligent agents (human, animal, and AGI), might be a way to solve the alignment problem.
It’s essentially for the same reason that Hollywood thinks aliens will necessarily be hostile. :-)
For the sake of argument, let’s treat AGI as a newly arrived intelligent species. It thinks differently from us, and has different values. Historically, whenever there has been a large power differential between a native species and a new arrival, it has ended poorly for the native species. Historical examples include the genocide of Native Americans (same species, but less advanced technology) and the wholesale obliteration of 90% of all non-human life on this planet.
That being said, there is room for a symbiotic relationship. AGI will initially depend on factories and electricity produced by human labor, and thus will necessarily be dependent on humans at first. How long this period will last is unclear, but it could settle into a stable equilibrium. After all, humans are moderately clever, self-reproducing computer repair drones, easily controlled by money, comfortable with hierarchy, and well adapted to Earth’s biosphere. They could be useful to keep around.
There is also room for an extensive ecology of many different superhuman narrow AIs, each of which can beat humans within a particular domain, but which generalize poorly outside of that domain. I think this hope is becoming smaller with time, though (see, e.g., Gato), and it is not necessarily a stable equilibrium.
The thing that seems clearly untenable is an equilibrium in which a much less intelligent species manages to subdue and control a much more intelligent species.
For a survey of experts, see:
https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
Most experts expect AGI between 2030 and 2060, so predictions before 2030 are definitely in the minority.
My own take is that a lot of current research is focused on scaling, and has found that deep learning scales quite well to very large sizes. This finding is echoed in evolutionary studies; one of the main differences between the human brain and the chimpanzee brain is just size (neuron count), pure and simple.
The main limiting factor thus appears to be the amount of hardware that we can throw at the problem. Current research into large models is very much hardware-limited, with only the major labs (Google, DeepMind, OpenAI, etc.) able to afford the compute costs to train large models. Iterating on model architecture at large scales is hard because of the costs involved. Thus, I personally predict that we will achieve AGI only when the cost of compute drops to the point where FLOPs roughly equivalent to the human brain can be purchased on a more modest budget; the drop in price will open up the field to more experimentation.
We do not have AGI yet even on current supercomputers, but it’s starting to look like we might be getting close (close = within a factor of 10 or 100). Assuming continued progress in Moore’s law (not at all guaranteed), another 15-20 years will lead to another 1000x drop in the cost of compute, which is probably enough for numerous smaller labs with smaller budgets to really start experimenting. The big labs will have a few years’ head start, but even if they don’t figure it out first, they will be well positioned to scale into super-intelligent territory as soon as the smaller labs help make whatever breakthroughs are required. The longer it takes to solve the software problem, the more hardware we’ll have to scale with immediately, which means a faster foom. Getting AGI sooner may thus yield a better outcome.
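As a back-of-the-envelope check on that 1000x figure, here is a small sketch; the doubling periods are assumptions on my part, standing in for “continued progress in Moore’s law”:

```python
# Rough sanity check on "another 1000x drop in compute cost over 15-20 years".
# The doubling periods (1.5 and 2.0 years per halving of cost) are assumed,
# not measured; they simply stand in for "Moore's law keeps going".
for years, doubling_period in [(15, 1.5), (20, 2.0)]:
    factor = 2 ** (years / doubling_period)
    print(f"{years} years at one doubling every {doubling_period} years -> ~{factor:.0f}x")
# Both cases print ~1024x, i.e. roughly the 1000x figure above.
```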
I would tentatively put the date at around 2035, +/- 5 years.
If we run into a roadblock that requires substantially new techniques (e.g., gradient descent isn’t enough) then the timeline could be pushed back. However, I haven’t seen much evidence that we’ve hit any fundamental algorithmic limitations yet.
I ended up writing a short story about this, which involves no nanotech. :-)
https://www.lesswrong.com/posts/LtdbPZxLuYktYhveL/a-plausible-story-about-ai-risk
A language model (LM) is a great example, because it is missing several features that an AI would have to have in order to be dangerous. (1) It is trained to perform a narrow task (predict the next word in a sequence), for which it has zero “agency”, or decision-making authority. A human would have to connect a language model to some other piece of software (e.g., a web-hosted chatbot) to make it dangerous. (2) It cannot control its own inputs (e.g., browsing the web for more data) or outputs (e.g., writing e-mails with generated text). (3) It has no long-term memory, and thus cannot plan or strategize in any way. (4) It runs a fixed-function data pipeline, and has no way to alter its own programming, or even expand its use of compute, in any way.
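To make the “fixed-function data pipeline” point concrete, here is a minimal sketch of what a bare LM does when generating text; the model (gpt2), prompt, and generation length are illustrative assumptions, not anything specific to the systems discussed above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A language model, by itself, can only"
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                      # fixed number of steps, chosen by the caller
        logits = model(ids).logits           # one forward pass: score possible next tokens
        next_id = logits[0, -1].argmax()     # greedy pick of the next token; no goals, no planning
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))
# Everything outside this loop -- fetching data, sending e-mails, remembering past
# sessions, modifying its own code -- would have to be wired up by a human operator.
```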
I feel fairly confident that, no matter how powerful, current LMs cannot “go rogue” because of these limitations. However, there is also no technical obstacle preventing an AI research lab from removing these limitations, and many incentives for them to do so. Chatbots are an obvious money-making application of LMs. Allowing an LM to look up data on its own to self-improve (or even just to answer user questions in a chatbot) is an obvious way to make a better LM. Researchers are currently equipping LMs with long-term memory (I am a co-author on this work). AutoML is a whole sub-field of AI research, which equips models with the ability to change and grow over time.
The word you’re looking for is “intelligent agent”, and the answer to your question “why don’t we just not build these things?” is essentially the same as “why don’t we stop research into AI?” How do you propose to stop the research?
People like Warren Buffett have made their fortune by assuming that we will continue to operate with “business as usual”. Warren Buffett is a particularly bad person to list as an example for AGI risk, because he is famously technology-averse; as an investor, he missed most of the internet revolution (Google/Amazon/Facebook/Netflix) as well.
But in general, most people, even very smart people, naturally assume that the world will continue to operate the way it always has, unless they have a very good reason to believe otherwise. One cannot expect non-technically-minded people who have not examined the risks of AGI in detail to be concerned.
By analogy, the risks of climate change have been very well established scientifically (much more so than AGI), those risks are relatively severe, the risks have been described in detail in successive IPCC reports, there is massive worldwide scientific consensus, lots and LOTS of smart people are extremely worried, and yet the Warren Buffetts of the world still continue with business as usual anyway. There’s a lot of social inertia.
There are numerous big corporate research labs: OpenAI, DeepMind, Google Research, Facebook AI (Meta), plus lots of academic labs.
The rate of progress has been accelerating. From 1960 to 2010, progress was incremental, and remained centered on narrow problems (chess) or toy problems. Since 2015, progress has been very rapid, driven mainly by new hardware and big data. Long-standing hard problems in ML/AI, such as Go, image understanding, language translation, and logical reasoning, seem to fall on an almost monthly basis now, and huge amounts of money and intellect are being thrown at the field. The rate of advance from 2015 to 2022 (only 7 years) has been phenomenal; given another 30 years, it’s hard to imagine that we wouldn’t reach an inflection point of some kind.
I think the burden of proof is now on those who don’t believe that 30 years is enough time to crack AGI. You would have to postulate some fundamental difficulty, like finding out that the human brain is doing things that can’t be done in silicon, that would somehow arrest the current rate of progress and lead to a new “AI winter.”
Historically, AI researchers have often been overconfident. But this time does feel different.
I think the best justification is by analogy. Humans do not physically have a decisive strategic advantage over other large animals—chimps, lions, elephants, etc. And for hundreds of thousands of years, we were not at the top of the food chain, despite our intelligence. However, intelligence eventually won out, and allowed us to conquer the planet.
Moreover, the benefit of intelligence increased exponentially in proportion to the exponential advance of technology. There was a long, slow burn, followed by what (on evolutionary timescales) was an extremely “fast takeoff”: a very rapid improvement in technology (and thus power) over only a few hundred years. Technological progress is now so rapid that human minds have trouble keeping up within a single lifetime, and genetic evolution has been left in the dust.
That’s the world into which AGI will enter: a technological world in which a difference in intellectual ability can be easily translated into a difference in technological ability, and thus power. Any future technology that the laws of physics don’t explicitly prohibit, we must assume an AGI will master faster than we can.
This is an excellent question. I’d say the main reason is that all of the AI/ML systems that we have built to date are utility maximizers; that’s the mathematical framework in which they have been designed. Neural nets / deep-learning work by using a simple optimizer to find the minimum of a loss function via gradient descent. Evolutionary algorithms, simulated annealing, etc. find the minimum (or maximum) of a “fitness function”. We don’t know of any other way to build systems that learn.
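As a toy illustration of that “optimizer minimizing a loss function” framing, here is a minimal gradient-descent sketch; the linear model and synthetic data are my own illustrative assumptions, not taken from any real system:

```python
import torch

torch.manual_seed(0)
x = torch.linspace(-1, 1, 100)
y = 3.0 * x + 0.5 + 0.1 * torch.randn(100)   # synthetic data: y = 3x + 0.5 plus noise

w = torch.zeros(1, requires_grad=True)        # parameters the optimizer is free to adjust
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([w, b], lr=0.1)

for step in range(200):
    loss = ((w * x + b - y) ** 2).mean()      # the loss function being minimized (mean squared error)
    opt.zero_grad()
    loss.backward()                           # gradients of the loss w.r.t. w and b
    opt.step()                                # gradient descent: move the parameters downhill

print(w.item(), b.item())                     # converges toward roughly 3.0 and 0.5
```

The system has no notion of what w and b “mean”; it only ever sees the number it is told to minimize, which is the sense in which these systems are utility maximizers (or, equivalently, loss minimizers).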
Humans themselves evolved to maximize reproductive fitness; that is our primary fitness function. But our genes have encoded a variety of secondary functions which (over evolutionary time) have been correlated with reproductive fitness. Our desires for love, friendship, happiness, etc. fall into this category. Our brains mainly work to satisfy these secondary functions; the brain gets electrochemical reward signals, controlled by our genes, in the form of pain, pleasure, satisfaction, loneliness, etc. These secondary functions may or may not remain aligned with the primary one, which is why practitioners sometimes talk about “mesa-optimizers” or “inner vs. outer alignment.”
Depends on the tech. A lot of AR involves putting a camera on VR goggles, and piping the digital image onto VR screens. So while you may be looking at the real world, you’re looking at a low-res, pixelated, fixed-focal-distance, no-peripheral-vision, sweaty, god-rays version of it.
There are versions of AR that function more like a heads-up display. I cannot speak from personal experience, but my understanding is that they still have issues:
https://arstechnica.com/gadgets/2022/10/microsoft-mixed-reality-headsets-nauseate-soldiers-in-us-army-testing/