“It’s hard to beat signaling equilibria—because they’re ‘multi-factor markets’—which are special cases of coordination problems that create ‘inferior Nash equilibria’—which are so stuck in place that market controllers can seek rent on the value generated by captive participants.”
It’s things like this that make me puzzled as to why the author is still libertarian, even in spite of this perfectly succinct description of the awful state of affairs arrived at in the wild west, when a Leviathan could try to transcend/straddle these fruits/niches and force them upward into a more Pareto-optimal condition, maybe even into non-Nash-equilibrium states if we’re extra lucky. Or are Regulatory Capture and the Law of Unintended Consequences generally so strong that any attempt can be assumed futile, and we’ll always do more harm than good in some lateral, unexpected way as the market circumvents our efforts? (The Germans have a wonderful word for this, Verschlimmbessern: when you try to help or improve something with good intentions, but end up breaking it worse.) Is it only thus today because all regulators also come with the undesideratum of human error? Would an AI, too, be doomed and unable to remedy these inferior equilibria? Or does the libertarian view only apply pragmatically, now, in the human era?
Spooky-Inadequacy-at-a-Distance?
All these depictions of multi-tiered stages through which action & incentives have to pass (venture capital going through rounds A → B → C; the recursion of “it’s not just what you think, but also what you think others think”; “what will the journalists say about what the politicians say outside the Overton window”; etc.) seem to me to share an underlying theme: the further away you are, the more screwed up and misaligned the effects become, and the more we’re locked into weird states of inadequate equilibria. And this reminds me of an analogy in physics. (And I HOPE this is just a casual analogy, and not indicative of some deeper ‘spooky inadequacy at a distance’ that suggests we’re screwed in this matter all the way down to physics itself!) Like a chair: when you drag it across the kitchen floor, the legs vibrate jerkily against the floor and screech. Or earthquakes: the plates move continuously, but the quakes occur in jerks, pressure building up until it finally jumps over a threshold. Or thermal expansion: as your heater warms up, it ticks loudly because the metal expands in jerks. All of this is underlain by one physics phenomenon: static friction happens to be greater than kinetic friction. So even when you move that chair perfectly continuously and gradually at the top, the further out you go along the chain of steps (in this case, atoms), the jerkier things get at the endpoint. Kind of like the gradual winds of change in public opinion only getting realized in jerks, revolutions, and rapid phase transitions over a threshold. Anyway, that’s the thought I couldn’t shake while reading this chapter. I cling to the hope that some of these meta-level dynamics to our problems depicted by EY here are remediable somehow… and that we’re not predeterminedly screwed on a physical level, unable to confront them.
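To make the stick-slip mechanism concrete, here’s a toy spring-block simulation of it (the standard intro-physics picture of dragging that chair: a block pulled through a spring whose far end moves at constant speed). Every constant in it is a made-up illustrative value, not anything from the book; the point is just that with static friction above kinetic friction, perfectly smooth input at one end comes out as discrete jerks at the other.

```python
# Toy stick-slip sketch: a block dragged via a spring whose far end moves
# at constant velocity. Because static friction > kinetic friction, the
# smooth pull at one end turns into jerky motion at the block.
# All parameter values below are illustrative assumptions.

M, G = 1.0, 9.8          # block mass (kg), gravity (m/s^2)
K = 50.0                 # spring stiffness (N/m)
MU_S, MU_K = 0.6, 0.3    # static coefficient > kinetic coefficient
V_PULL = 0.01            # constant speed of the pulled spring end (m/s)
DT = 1e-3                # integration time step (s)

def simulate(steps=200_000):
    x = v = 0.0          # block position and velocity
    puller = 0.0         # position of the smoothly moving spring end
    slips = 0            # number of stick -> slip transitions (the "jerks")
    sticking = True
    for _ in range(steps):
        puller += V_PULL * DT            # the driver moves continuously
        f_spring = K * (puller - x)
        if sticking and abs(f_spring) > MU_S * M * G:
            sticking = False             # static friction finally overcome
            slips += 1
        if not sticking:
            f_fric = -MU_K * M * G * (1.0 if v >= 0 else -1.0)
            v += (f_spring + f_fric) / M * DT
            x += v * DT
            if v <= 0:                   # block slows to a halt: re-stick
                v = 0.0
                sticking = True
    return x, puller, slips

if __name__ == "__main__":
    x, puller, slips = simulate()
    print(f"puller moved {puller:.2f} m; block covered ~the same distance "
          f"in {slips} discrete jerks")
```

The block ends up roughly where the puller is, but it gets there in a handful of sudden slips separated by long stuck phases, which is the “gradual winds of change realized in jerks” picture in miniature.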
Clearly we need to kick the shit out of these inadequacies! Maybe a rough rule of thumb could be to get up close and personal, right up next to them, not out at a distance of three recursive middlemen away (IF they can be changed at all), to lower entry barriers and have a better chance at the ‘just-snap-out-of-it’? Proposal: 1. Get a Friendly, Aligned AI in power. 2. Go through the whole tree of fruits, systematically minimizing the inadequacy-distance. 3. If you got past step 1 and are both still alive and also not a paperclip, you’re in the clear, by the Anthropic Principle. (Also, don’t worry about step 2.) So, uh… just step 1, then.