This was somewhat enlightening, but also frustrating because EY kept sort-of-but-not-quite answering what I perceived as the most pressing or most interesting questions, and a lot of what he did answer was met with a vague “Well we’re just better at coordinating so it just works” or “Well we’re just better at economics so it just works”, without giving the nuts and bolts to actually understand how it works or how it fits together.
Part of the problem is that it’s easier to describe what the results of a complex system should be than to describe how to achieve them. The shoddy economic and legal systems we are stuck with in our reality persist partly because they are at least somewhat robust to forms of cheating that “improved” systems are not. Take the non-expert jury system for criminal trials. A seemingly reasonable patch would be to replace juries with expert detectives serving as judges. In practice this has major problems and would probably send many innocent people to prison, because individual detectives overfit: they “learn” that insufficient, weak evidence means one of the suspects in front of them is guilty, and they have no access to ground-truth data that would let them unlearn these false policies.
When you look at these problems you end up with “OK, we need smarter humans or a superintelligence to design a better system, because this is too complex.” Which just pushes the problem into another corner: you can’t trust the more intelligent system or people not to rig the new system in their favor.
it’s easier to describe what the results of a complex system should be than to describe how to do it.
Sure, but I’m almost tempted to ask what the point of the AMA was, if he wasn’t going to explain how dath ilan actually accomplishes things. (I’m not going to actually ask that, because questions merely asking what dath ilan is like, without asking why or how, are also valuable to ask and answer.)
Many questions were “How does dath ilan avoid and/or solve such-and-such problem?”, and often the response was essentially, “We’re good at [economics/coordination/etc.] so that doesn’t happen in the first place”, or “If this problem ever happened in dath ilan everyone would wonder how we could possibly have gotten into that position”, or “If this problem started happening everyone would notice and then fix it.” And like, that’s great for dath ilan, but that doesn’t explain how they solve(d) the problem. It not only doesn’t answer the question literally at all, it almost feels like a weird form of bragging or showing off. These are genuinely hard problems, that’s why they still exist. You can’t just reframe them in a way that makes them sound easy and trivial, without actually providing a solution, and expect anyone to be convinced or impressed.
I’m not saying EY should’ve known the answers to these questions. Like I said, these are hard problems; I don’t expect EY to have unique insights. I just feel like it would’ve been a lot more honest, and less braggy or show-offy, to either not respond to those questions, or to just say “I have no idea how dath ilan managed to achieve these things, because [I am not as smart as dath ilan/I don’t know our history/etc.].” (Or at least prepend that to the responses he actually gave.)
There’s perhaps more detail in Project Lawful and in some nearby stories (“for no laid course prepare”, “aviation is the most dangerous routine activity”).
There are still adversarial equilibria even if every person on the planet is as smart as you. Greater intelligence makes people more tryhard in their roles.
It is possible that one of the reasons things work at all today is that regulators get tired and let people do things, cops don’t remember all the laws and so let people break them, scientists do something illogical and accidentally make a major discovery, and so on.
But doctors and mortuary workers couldn’t scam people into opposing cryonics, because the average person is smart.
The FDA couldn’t scam people into thinking slow drug approvals protect their lives, because average people are smart.
Local housing authorities couldn’t scam people into supporting affordable-housing requirements, because average people understand supply and demand.
Huh. I think you might be correct. Too bad evolution didn’t have enough incentive to make humans that smart.
I suspect that if the average citizen understands confirmation bias, economics 101, the prisoner’s dilemma, decision theory, coordinated action, the scientific method and the Pareto frontier… most of Moloch goes away or never arises in the first place.
You can still have adversarial equilibria, sure, but if everyone is smart, aware of hidden consequences, and properly understands the difference between zero-sum and non-zero-sum games, you don’t have many adversarial equilibria that destroy net value.
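As a toy illustration of the kind of value-destroying equilibrium being discussed, here is a minimal prisoner’s dilemma sketch. The payoff numbers are invented for illustration; the point is only that mutual defection is the equilibrium even though it destroys net value relative to cooperation.

```python
# Toy prisoner's dilemma: mutual defection is the Nash equilibrium
# even though mutual cooperation yields more total value.

# Payoffs as (row player's payoff, column player's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate: best total welfare (6)
    ("C", "D"): (0, 5),  # lone cooperator is exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: equilibrium, worst total (2)
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Defecting is the best response no matter what the other player does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...so two rational players land on (D, D), destroying net value:
assert sum(PAYOFFS[("D", "D")]) < sum(PAYOFFS[("C", "C")])  # 2 < 6
```

The “smart citizens” claim above amounts to saying that people who recognize this structure can coordinate to move off the (D, D) outcome, or redesign the game so defection stops being the best response.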
So dath ilan, as I understand it, is the thought experiment of “every human has about as much intelligence as Eliezer Yudkowsky”.
Starting with that assumption, the flaw is that I think a lot of the issues with current civilization stem not from people being stupid, but from Moloch.
The rules of the adversarial game create situations where every actor is stuck in an inadequate equilibrium. No one has the power to fix anything, because each actor is just playing their own role and acting in their own interests.
Making the actors smarter doesn’t help; they just tryhard their jobs even more. This might even make the situation worse.
For example: the FDA doesn’t exist to help human beings live longer, healthier lives. It exists to ensure every drug is “safe and effective”. Making its reviewers smarter means they tolerate even fewer errors in drug applications. But drug-company workers are also smarter: they make fewer obvious errors and better cover up any lies in their clinical-trial reports, since their role is to get a drug approved so their parent company doesn’t go bankrupt.
So you are stuck in the same inadequate equilibrium where everyone is doing their role, and the people the system should actually serve, the human patients, suffer.
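The “smarter on both sides cancels out” intuition can be made concrete with a deliberately simple toy model. All numbers and the functional form here are invented for illustration, not drawn from any real regulatory data: the idea is just that if outcomes depend only on the *gap* between concealment skill and scrutiny skill, raising both equally changes nothing.

```python
import math

def flaw_slips_through(concealment: float, scrutiny: float) -> float:
    """Toy probability that a hidden flaw survives review.

    Modeled as a logistic function of the skill gap between the
    applicant's concealment and the regulator's scrutiny.
    """
    return 1 / (1 + math.exp(scrutiny - concealment))

# Baseline world vs. a world where both sides got equally smarter:
baseline = flaw_slips_through(concealment=2.0, scrutiny=2.0)
smarter = flaw_slips_through(concealment=7.0, scrutiny=7.0)

# Equal intelligence gains on both sides leave the outcome unchanged
# (both probabilities are exactly 0.5), while everyone works harder.
assert abs(baseline - smarter) < 1e-12
```

Under this toy model, only an *asymmetric* change (or a change to the rules of the game itself) moves the equilibrium, which is the commenter’s point about Moloch.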