Google could build a conscious AI in three months

Summary: Theories of consciousness do not present significant technical hurdles to building conscious AI systems. Recent advances in AI relate to capacities that aren’t obviously relevant to consciousness. Satisfying major theories has long been possible with existing technology and will remain so. The potentially short timelines to plausible digital consciousness mean that issues relating to digital minds are more pressing than they might at first seem. Key ideas are bolded. You can get the main points just by skimming these.

Viability

Claim 1: Google could build a conscious AI in as little as three months if it wanted to.

Claim 2: Microsoft[1] could have done the same in 1990.

I’m skeptical of both of these claims, but I think something in the ballpark is true. Google could assemble a small team of engineers to quickly prototype a system that existing theories, straightforwardly applied, would predict is conscious. The same is true for just about any tech company, today or in 1990.[2]

Philosophers and neuroscientists have little to say about why a digital system that implemented fairly simple patterns of information processing would not be conscious. Even if individual theorists might have a story to tell about what was missing, they would probably not agree on what that story was.

The most prominent theories of consciousness lay out relatively vague requirements for mental states to be conscious. The requirements for consciousness (at least for the more plausible theories[3]) generally have to do with patterns of information storage, access, and processing. Theorists typically want to accommodate our uncertainty about the low-level functioning of the human brain and also allow for consciousness in species with brains rather different from ours. This means that their theories involve big picture brain architectures, not specific cellular structures.

Take the Global Workspace Theory: roughly put, conscious experiences result from stored representations in a centralized cognitive workspace. That workspace is characterized by its ability to broadcast its contents to a variety of (partially) modularized subsystems, which can in turn submit future contents to the workspace. According to the theory, any system that uses such an architecture to route information is conscious.

A software system that included a global workspace would be easy to build. All you have to do is set up some modules with the right rules for access to a global repository. To be convincing, these modules should resemble the modules of human cognition, but it isn’t obvious which kinds of faculties matter. Perhaps some modules for memory, perception, motor control, introspection, etc. You need these modules to be able to feed information into the global workspace and receive information from it in turn. These modules need to be able to make use of the information they receive, which requires that the workspace's contents be in a format the different subsystems can use.

Critically for my point, complexity and competence aren’t desiderata for consciousness. The modules with access to the workspace don’t need to perform their assigned duties particularly well. Given no significant requirements on complexity or competence, a global workspace architecture could be achieved in a crude way quite quickly by a small team of programmers. It doesn’t rely on any genius, or any of the technological advances of the past 30 years.
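To make the point concrete, here is a minimal sketch in Python of the kind of crude global-workspace loop I have in mind. All of the names (Module, GlobalWorkspace, the salience-based competition for access) are my own illustrative choices rather than part of any published theory; the point is only how little machinery the basic submit-and-broadcast pattern requires.

```python
# A toy global-workspace-style loop: modules propose contents, the
# workspace picks a winner each cycle and broadcasts it to every module.
# Purely illustrative; not an implementation of any specific published model.
import random


class Module:
    """A crude subsystem (e.g. 'memory' or 'perception') that can submit
    content to the workspace and receive whatever gets broadcast back."""

    def __init__(self, name):
        self.name = name
        self.received = []  # contents broadcast to this module so far

    def propose(self):
        # A real module would compute something here; this toy one just
        # emits a labelled token with a random salience score.
        return {"source": self.name,
                "content": f"{self.name}-signal",
                "salience": random.random()}

    def receive(self, content):
        self.received.append(content)


class GlobalWorkspace:
    """Runs a competition for workspace access each cycle and broadcasts
    the winning content to every module."""

    def __init__(self, modules):
        self.modules = modules

    def cycle(self):
        proposals = [m.propose() for m in self.modules]
        winner = max(proposals, key=lambda p: p["salience"])  # crude competition
        for m in self.modules:                                # global broadcast
            m.receive(winner)
        return winner


if __name__ == "__main__":
    modules = [Module(n) for n in ("perception", "memory", "motor", "introspection")]
    workspace = GlobalWorkspace(modules)
    for _ in range(3):
        print(workspace.cycle())
```

Obviously no one would be tempted to call this toy loop conscious, which is precisely the tension the rest of this post is about: something not much more sophisticated could satisfy the letter of the architectural requirement while doing nothing impressive.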

More generally:

1.) Consciousness does not depend on general intelligence, mental flexibility, organic unpredictability, or creativity.

These traits distinguish humans from current computer programs. There are no programs that can produce creative insights outside of very constrained contexts. Perhaps because of this, we may use these traits as a heuristic guide to consciousness when in doubt. In science fiction, for instance, we often implicitly assess the status of alien and artificial creatures without knowing anything about their internal structures. We naturally regard artificial systems that exhibit the same sorts of creativity and flexibility we do as having conscious minds. However, these heuristics are not grounded in theory.

There are no obvious reasons why these traits should have to be correlated with consciousness in artificial systems. Nothing about consciousness obviously requires intelligence or mental flexibility. In contrast, there might be good reasons why you’d expect systems that evolved by natural selection to be conscious if and only if they were intelligent, flexible, and creative. For instance, it might be that the architectures that allow for consciousness are most useful with intelligent systems, or help to generate creativity. But even if this is so, it doesn’t show that such traits have to travel together in structures that we design and build ourselves. Compare: since legs are generally used to get around, we should expect most animals with legs to be pretty good at using them to walk or run. But we could easily build robots that had legs but that would fall over whenever they tried to go anywhere. The fact that they are clumsy doesn’t mean they lack legs.

2.) Consciousness does not require neural networks and is not made easier by them.

Neural networks are exciting because they resemble human brains and because they allow for artificial cognition that resembles human cognition in being flexible and creative. However, most theorists accept that consciousness is multiply realizable, meaning that consciousness can be produced in many different kinds of systems, including systems that don’t use neurons or anything like neurons.

There is no obvious reason why neural networks should be better able to produce the kinds of information architectures thought to be characteristic of consciousness. Most of the plausible major theories of consciousness have nothing to say about neurons or what they might contribute, so it is unclear why neural networks should be any more likely to lead to consciousness.

Reception

Even though I think a tech company could build a system that checked all the boxes of current theories, I doubt it could convince people that its AI was really conscious (though not for particularly good reasons). If so, this gives us reason to think no company will try any time soon. Plausibly, a company would only set out to make a conscious system if it could convince its audience that it might have succeeded.

We can divide the question of reception into two parts: How would the public respond and how would experts respond?

Tech companies may soon be able to satisfy the letter of most of the current major non-biological theories of consciousness, but any AI developed soon would probably still remind us more of a computer than an animal. It might make simple mistakes suggestive of imperfect computer algorithms. It might be limited to a very specific domain of behaviors. If it controlled a robot body, its movements might be jerky and conspicuously mechanical. Consider the biases people have about animals, like fish, whose physiologies differ from ours. It seems likely that people would be even more biased against crude AIs.

The AI wouldn’t necessarily have language skills capable of expressing its feelings. If it did, it might talk about its consciousness in a way that mimics us rather than as the result of organic introspection[4]. This might lead to the same sorts of mistakes that make LaMDA’s claims to consciousness so implausible. (E.g. by talking about how delicious ice cream is despite never having tried it.) The fact that a system is just mimicking us when talking about its conscious experiences doesn’t mean it lacks them (human actors, e.g. in movies, still have feelings, even if you can’t trust their reports), but it seems to me that this would make claims about its consciousness a tough sell to the general public.

The Public

The candidate system I’m imagining would probably not, by itself, convince the general public that artificial consciousness had arrived. People have ways of thinking about minds and machines and use various simple and potentially misleading heuristics for differentiating them. On these heuristics, crude systems that passed consciousness hurdles would still, I expect, be grouped with the machines, because of their computer-like behavioral quirks and because people aren’t used to thinking about computers as conscious.

On the other hand, systems that presented the right behavioral profile might be regarded by the public as conscious regardless of theoretical support. If a system does manage to hook into the right heuristics, or if it reminds us more of an animal than a computer, people might be generally inclined to regard it as conscious, particularly if they can interact with it and if experts aren’t generally dismissive. People are primed to anthropomorphize. We do it with our pets, with the weather, even with dots moving on a screen.

The Experts

I suspect that most experts who have endorsed theories of consciousness wouldn’t be inclined to attribute consciousness to a crude system that satisfied the letter of their theories. It is reputationally safer (in terms of both public perceptions and academic credibility) not to endorse consciousness in systems that give off a computer vibe. There is a large kooky side to consciousness research that the more conservative mainstream likes to distinguish itself from. Many theorists will therefore want some grounds on which to deny, or at least suspend judgement about, consciousness in crude implementations of their favored architectures. On the other hand, the threat of kookiness may lose its bite if the public is receptive to an AI being conscious.

Current theories are missing important criteria that might be relevant to artificial consciousness because they’re designed primarily to distinguish conscious from unconscious states of human brains (or possibly conscious human brains from unconscious animals or household objects). They aren’t built to distinguish humans from crude implementations of human architectures. It is open to most theorists to expand their theories to exclude digital minds. Alternatively, they may simply profess not to know what to make of apparent digital minds (e.g. level-headed mysterianism). This is perhaps safer and more honest, but, if widely adopted, it means the public would be on its own in assessing consciousness.

Implications

The possibility that a tech company could soon develop a system that was plausibly conscious according to popular theories should be somewhat unsettling. The main barriers seem to have more to do with whether companies want to build conscious systems than with technical limitations. The skeptical reception such systems are likely to receive is good: it provides a disincentive that buys us more time. However, these thoughts are very tentative. There might be ways of taking advantage of our imperfect heuristics to encourage people to accept AI systems as conscious.

The overall point is that timelines for apparent digital consciousness may be very short. While there are presently no large groups interested in producing digital consciousness, the situation could quickly change if consciousness becomes a selling point and companies think harder about how to market their products as conscious, such as for chatbot friends or artificial pets. There is no clear technological hurdle to creating digital consciousness. Whether we think we have succeeded may have more to do with our imperfect heuristics than with the underlying technology.

We’re not ready, legally or socially, for potentially sentient artificial creatures to be created and destroyed at will for commercial purposes. Given the current lack of attention to digital consciousness, we’re not in a good position to even agree about which systems might need our protection or what protections are appropriate. This is a problem.

In the short run, worries about sentient artificial systems are dwarfed by the problems faced by humans and animals. However, there are longtermist considerations that suggest we should care more about digital minds now than we currently do. How we decide to incorporate artificial systems into our society may have a major impact on the shape of the future. That decision is likely to be highly sensitive to the initial paths we take.

Because of the long-term importance of digital minds, the people who propose and evaluate theories of consciousness need to think harder about applications to artificial systems. Three months (or three years) will not be nearly enough time to develop better theories about consciousness or to work out what policies we should put in place given our lack of certainty.


  1. ↩︎

    Brian Tomasik makes the case that Microsoft may have done so unintentionally.

  2. ↩︎

    Theories of consciousness have come along further since 1990 than the technology relevant to implementing them. Developers in 1990 would have had a much less clear idea about what to try to build.

  3. ↩︎

    I include among the more plausible theories the Global Workspace Theory, the various mid-level representationalist theories (e.g. Prinz’s AIR, Tye’s PANIC), first-order representationalist theories, and higher-order theories that require metarepresentation (attention tracking theories, HOT theory, dual content theory, etc.). I don’t find IIT plausible, despite its popularity, and am not sure what effect its inclusion would have on the present arguments. Error theories and indeterminacy theories are plausible, but introduce a range of complications beyond the scope of this post. Some philosophers have maintained that biological aspects of the brain are necessary for consciousness, but this view generally doesn’t include a specific story about exactly what critical element is missing.

  4. ↩︎

    Human beings don’t come to talk about our conscious experiences in the customary ways unprompted. We acquire ways of framing consciousness and mental states from our culture as children, so much of what we do is mimicry. Nevertheless, the frames we have acquired were developed by people with brains like ours, so the fact that we’re mimicking others (to whatever extent we are) isn’t problematic in the way that it is for an AI.

Crossposted from EA Forum.