Many good thoughts here.
One thing I think you underappreciate is that our society has already evolved solutions (imperfect-but-pretty-good ones, like most solutions) to some of these problems. Mostly, these evolved through distributed trial and error over long time periods (much the way biological evolution works).
Most actors in society (businesses, governments, corporations, even families) aren’t monolithic entities with a single hierarchy of goals. They’re composed of many individuals, each with their own diverse goals.
We use this as a lever to prevent some of the pathologies you describe from getting too extreme—by letting organizations die, while the constituent individuals live on.
Early on you said, “The only reason we haven’t died of [hiding problems] yet is that it is hard to wipe out the human species with only 20th-century human capabilities.”
I think instead that long before these problems get serious enough to threaten those outside the organization, the organization itself dies. The company goes out of business; the government loses an election, suffers a revolution, or is conquered by a neighbor; the family breaks up. The individual members of the organization scatter and re-join other, healthier, organizations.
This works because virtually all organizations in modern societies face some kind of competition—if they become too dysfunctional, they lose business, lose support, lose members, and eventually die.
We also have formal institutions such as the law, which is empowered to intervene from the outside when organizational behavior gets too perverse, and concepts like “human rights” to help delineate exactly what counts as “too” perverse. To take your concluding examples:
“Corporations will deliver value to consumers as measured by profit. Eventually this mostly means manipulating consumers, capturing regulators, extortion and theft.”
There’s always some of that, but it’s limited by the need to continue to obtain revenue from customers. And by competing corporations which try to redirect that revenue to themselves, by offering better deals. And in extremis, by law.
“Investors will ‘own’ shares of increasingly profitable corporations, and will sometimes try to use their profits to affect the world. Eventually instead of actually having an impact they will be surrounded by advisors who manipulate them into thinking they’ve had an impact.”
Investors vary in tolerance and susceptibility to manipulation. Every increase in manipulation will drive some investors away (to other advisors or other investments) at the margin.
“Law enforcement will drive down complaints and increase reported sense of security. Eventually this will be driven by creating a false sense of security, hiding information about law enforcement failures, suppressing complaints, and coercing and manipulating citizens.”
Law enforcement competes for funding with other government expenses, and its success in obtaining resources is partly based on citizen satisfaction. In situations where citizens are free to leave the locality (“voting with their feet”), poorly secured areas depopulate themselves (see: Detroit). The exiting citizens take their resources with them.
“Legislation may be optimized to seem like it is addressing real problems and helping constituents. Eventually that will be achieved by undermining our ability to actually perceive problems and constructing increasingly convincing narratives about where the world is going and what’s important.”
For a while, and up to a point. When citizens feel their living conditions trail behind those of their neighbors, they withdraw support from the existing government. If able, they physically leave (recall the exodus from East Germany in 1989).
These are all examples of a general feedback mechanism, which appears to work pretty well:
There are many organizations of any given type (and new ones are easy to start)
Each requires resources to continue
Resources come from individuals who, if dissatisfied, withhold them or redirect them to different (competing) organizations
These conditions limit how much perversity and poor performance an organization can exhibit and still survive.
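This selection dynamic is simple enough to simulate. Below is a minimal sketch in Python (all names, parameters, and numbers are hypothetical illustrations, not anything from the discussion above): organizations vary in how perverse they are, dissatisfied individuals redirect themselves and their resources toward less perverse competitors, and organizations that fall below a resource threshold die while their members scatter to the survivors.

```python
import random

random.seed(0)

N_ORGS = 20
N_PEOPLE = 1000
SURVIVAL_THRESHOLD = 10   # minimum members an org needs to survive
ROUNDS = 50
DEFECT_RATE = 0.2         # max per-round chance a dissatisfied member leaves

# Each organization gets a fixed "perversity" score in [0, 1].
perversity = [random.random() for _ in range(N_ORGS)]
membership = [random.randrange(N_ORGS) for _ in range(N_PEOPLE)]

for _ in range(ROUNDS):
    alive = set(membership)
    for person, org in enumerate(membership):
        # Members defect with probability proportional to perversity,
        # redirecting their resources to a less perverse competitor.
        if random.random() < DEFECT_RATE * perversity[org]:
            rivals = [o for o in alive if perversity[o] < perversity[org]]
            if rivals:
                membership[person] = random.choice(rivals)
    # Organizations below the resource threshold die; their members
    # scatter and re-join the surviving organizations.
    counts = {o: membership.count(o) for o in set(membership)}
    survivors = [o for o, n in counts.items() if n >= SURVIVAL_THRESHOLD]
    for person, org in enumerate(membership):
        if org not in survivors:
            membership[person] = random.choice(survivors)

# Membership ends up concentrated in the least perverse organizations.
for org in sorted(set(membership), key=lambda o: perversity[o]):
    print(f"org {org:2d}  perversity {perversity[org]:.2f}  "
          f"members {membership.count(org)}")
```

Over the course of the run, the most perverse organizations empty out and die while the least perverse ones absorb their members, which is exactly the limiting behavior described above.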
The failure of an organization is rarely a cause for great concern—there are others to take up the load, and failures are usually well-deserved. Individual members/employees/citizens continue even as orgs die.
“Most actors in society (businesses, governments, corporations, even families) aren’t monolithic entities with a single hierarchy of goals. They’re composed of many individuals, each with their own diverse goals.”
The diversity of goals of the component entities is good protection to have. In the case of an AI, do we still have the same diversity? Is there a reason why a monolithic AI with a single hierarchy of goals cannot operate at the level of a many-human collective actor?
I’m not sure how the solutions our society has evolved apply to an AI, because an AI isn’t necessarily a diverse collective of individually motivated actors.
Even more importantly, the biggest reason our world is stable is that humans have a very narrow range of capabilities. This applies in particular to intelligence, which is roughly normally distributed, so societies can usually defeat outlier humans. AI capabilities will not be nearly this constrained, and the variance is worrying: there is a real chance that one AI will be far more intelligent than any human who has ever lived, and crossing the entire human range is relatively easy, as happened with Go and StarCraft. It’s for a similar reason that superpowers in the real world would doom us by default.
EDIT: I no longer think superpowers would doom us by default.
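To put rough numbers on how tightly a normal distribution constrains outliers, here is a quick back-of-the-envelope calculation (the IQ-style scaling of mean 100 and standard deviation 15, and the population figure, are illustrative assumptions, not claims from the comment above):

```python
from math import erfc, sqrt

def normal_tail(z: float) -> float:
    """Upper-tail probability of a standard normal: P(Z > z)."""
    return erfc(z / sqrt(2)) / 2

# With intelligence scaled to mean 100 and SD 15 (conventional IQ
# scaling), how rare are outliers, and how many would we expect in
# a population of 8 billion?
POPULATION = 8_000_000_000
for z in (3, 4, 5, 6):
    p = normal_tail(z)
    print(f"{z} SD above the mean (score {100 + 15 * z}): "
          f"P = {p:.2e}, expected count ~ {p * POPULATION:,.0f}")
```

The tail collapses so fast that even a planet-sized population is expected to contain only a handful of people six standard deviations above the mean and essentially no one beyond that, which is why societies can usually defeat outlier humans. An AI’s capabilities need not be drawn from this distribution at all.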