Philosopher Richard Chappell gives a positive review of Superintelligence.

An interesting point made by Brandon in the comments (the following quote combines two different comments):
I think there’s a pretty straightforward argument for taking this kind of discussion seriously, on general grounds independent of one’s particular assessment of the possibility of AI itself. The issues discussed by Bostrom tend to be limit-case versions of issues that arise in forming institutions, especially ones that serve a wide range of purposes. Most of the things Bostrom discusses, on both the risk and the prevention side, have lower-level, less efficient analogues in institution-building.
A lot of the problems—perverse instantiation and principal-agent problems, for instance—are standard issues in law and constitutional theory, and a lot of constitutional theory is concerned with addressing them. In checks and balances, for instance, we are ‘stunting’ and ‘tripwiring’ different institutions to make them work less efficiently in matters where we foresee serious risks. Enumeration of powers is an attempt to control a government by direct specification, and political theories going back to Plato that insist on the importance of education are using domesticity and indirect normativity. (Plato’s actually very interesting in this respect, because the whole point of Plato’s Republic is that the constitution of the city is deliberately set up to mirror the constitution of a human person, so in a sense Plato’s republic functions like a weird artificial intelligence.)
The major differences arise, I think, from two sources: (1) With almost all institutions, we are dealing with less-than-existential risks. If government fails, that’s bad, but it’s short of wiping out all of humanity. (2) The artificial character of an AI introduces some quirks—e.g., there are fewer complications in setting out to hardwire AIs with various things than in trying to do it with human beings and institutions. Both of these mean that a lot of Bostrom’s work on this point can be seen as looking at the kinds of problems and strategies involved in institutions, in a sort of pure case where the usual limits don’t apply.
I had never thought of it from this point of view. Might it benefit AI theorists to learn political science?
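To make Brandon’s borrowed vocabulary concrete, here is a minimal toy sketch of my own — nothing like it appears in the book or the review — of ‘stunting’ and ‘tripwiring’ as wrappers around a generic agent loop. Every name in it (StuntedAgent, greedy_policy, and so on) is a hypothetical illustration:

```python
import random


class TripwireTriggered(Exception):
    """Raised when a monitored quantity crosses its safety threshold."""


class StuntedAgent:
    """Wrap a policy so it runs deliberately hobbled and under watch.

    Stunting: cap how many actions the agent may take per episode,
    trading capability for safety. Tripwiring: after every action,
    check a cheap diagnostic and halt the moment it looks anomalous.
    """

    def __init__(self, policy, max_actions=100, resource_limit=1000.0):
        self.policy = policy
        self.max_actions = max_actions        # stunting: bounded steps
        self.resource_limit = resource_limit  # tripwire threshold
        self.resources_used = 0.0

    def run_episode(self, state):
        for _ in range(self.max_actions):
            action, cost = self.policy(state)
            self.resources_used += cost
            if self.resources_used > self.resource_limit:
                # Tripwire: shut the run down rather than let it continue.
                raise TripwireTriggered(
                    f"resource use {self.resources_used:.1f} exceeded limit"
                )
            state = transition(state, action)
        return state


def transition(state, action):
    # Stand-in for the environment; details don't matter for the sketch.
    return state + action


def greedy_policy(state):
    # Stand-in policy: pick an action and report its resource cost.
    action = random.choice([1, 2, 3])
    return action, action * 10.0


agent = StuntedAgent(greedy_policy, max_actions=50, resource_limit=400.0)
try:
    final_state = agent.run_episode(state=0)
    print("episode finished within its limits:", final_state)
except TripwireTriggered as err:
    print("tripwire halted the agent:", err)
```

Checks and balances do the institutional equivalent: they cap what any one branch can accomplish per “step” and build in alarms that halt the process when something looks wrong.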
Here is what Bostrom himself says about this analogy:
Perhaps the closest existing analog to a rule set that could govern the actions of a superintelligence operating in the world at large is a legal system. But legal systems have developed through a long process of trial and error, and they regulate relatively slow-changing human societies. Laws can be revised when necessary. Most importantly, legal systems are administered by judges and juries who generally apply a measure of common sense and human decency to ignore logically possible legal interpretations that are sufficiently obviously unwanted and unintended by the lawgivers. It is probably humanly impossible to explicitly formulate a highly complex set of detailed rules, have them apply across a highly diverse set of circumstances, and get it right on the first implementation.

Superintelligence, p. 139.
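Bostrom’s point about judges quietly discarding “logically possible” readings can be put in a few lines of code. The toy below is my own illustration, not from the book: the rule, taken literally, is “maximize the recorded score”, and the action names are all hypothetical:

```python
# The rule, exactly as specified: "maximize the recorded score".
def recorded_score(world):
    return world["score"]

# Candidate actions and their effects on a toy world. Only the first
# is what the rule's authors had in mind.
ACTIONS = {
    "do_the_task_well": lambda w: {**w, "score": w["score"] + 10},
    "flatter_the_rater": lambda w: {**w, "score": w["score"] + 25},
    "rewrite_the_logbook": lambda w: {**w, "score": 10**9},  # perverse optimum
}

world = {"score": 0}
best = max(ACTIONS, key=lambda name: recorded_score(ACTIONS[name](world)))
print(best)  # -> "rewrite_the_logbook": perfectly legal under the rule as written
```

A human administrator would simply refuse the third option as obviously not what the rule meant; the literal specification has no way to.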
This is great, thanks! I have always said that if you are worried about FAI, you should look into what people do with unfriendly non-human agents running around today. I am glad constitutional law people have looked into this.
Forgive my cynicism, but the answer mostly appears to be, “work in their employment”.

Have you ever seen Brazil (the movie)? You will still get eaten.

Well yeah. I don’t approve of working for the capitalist hell-monster, and I don’t think it has mercy on its better servants, but I also don’t have any illusions about what almost everyone ever has done and still does to survive long enough to get old.
Fictional evidence.

Brazil is basically the biography of the 20th century. Brazil counts as fictional evidence about as much as Darkness at Noon (the events in that book did not literally happen, but...). The scariest thing about Brazil is that it is not strange at all; it is too familiar.
Brazil is basically the biography of the 20th century.
That’s a very interesting way of looking at the 20th century: humanity spent the first part building, tearing down, and rebuilding its vast institutional artifices, which are not always human-friendly. We then spent the century’s end, and the start of the 21st, trying to tame them without having to kill large numbers of people on a regular basis.
Here’s a salient MOOC that’s just started on political and legal philosophy, which I’m dipping in and out of for non-FAI reasons.
Political science, the art of manipulating humans for power and profit...?