Thanks. I’m sad there’s not more discussion here (maybe the people who are roughly on the same page as you are like “yep”, and the people who disagree basically aren’t interested in engaging?). But, I dunno, I do think the principles section is actually somewhat non-obvious and worth more hashing out. Maybe doing a good job hashing it out feels like work. Maybe it’s sort of overdetermined that anyone who can raise enough capital to run a frontier lab will be Selection Effected into being the sort of person who wouldn’t really aspire to this sort of thing.
I will say I’m not actually sure about Principle 1:
Principle 1: Seek as broad and legitimate authority for your decisions as is possible under the circumstances
I do feel like I want this to be true, and maybe the phrase “as is possible under the circumstances” is doing a lot of work. But broad authority-legitimization-bases are often more confused, egregore-y, and lowest-common-denominator-y.
In the context of incentive design, I find thinking about integrity valuable because it feels to me like the natural complement to accountability. The purpose of accountability is to ensure that you do what you say you are going to do, and integrity is the corresponding virtue of holding up well under high levels of accountability.
Highlighting accountability as a variable also highlights one of the biggest error modes of accountability and integrity – choosing too broad of an audience to hold yourself accountable to.
There is a tradeoff between the size of the group that you are being held accountable by, and the complexity of the ethical principles you can act under. Too large of an audience, and you will be held accountable by the lowest common denominator of your values, which will rarely align well with what you actually think is moral (if you’ve done any kind of real reflection on moral principles).
Too small or too memetically close of an audience, and you risk not having enough people paying attention to what you do to actually help you notice inconsistencies in your stated beliefs and actions. And the smaller the group holding you accountable, the smaller your inner circle of trust, which reduces the total resources that can be coordinated under your shared principles.
I think a major mistake that even many well-intentioned organizations make is to try to be held accountable by some vague conception of “the public”. As they make public statements, someone in the public will misunderstand them, causing a spiral of less communication, resulting in more misunderstandings, resulting in even less communication, culminating in an organization that is completely opaque about any of its actions and intentions, with the only communication being filtered by a PR department that has little interest in the observers acquiring any beliefs that resemble reality.
I think a generally better setup is to choose a much smaller group of people that you trust to evaluate your actions very closely, and ideally do so in a way that is itself transparent to a broader audience. Common versions of this are auditors, as well as nonprofit boards that try to ensure the integrity of an organization.
This is all part of a broader reflection on trying to create good incentives for myself and the LessWrong team. I will try to follow this up with a post that summarizes more concretely how all of this applies to LessWrong.
Oliver’s Integrity and accountability are core parts of rationality comes to mind, which operationalizes the question of “who should you be accountable to?”; that’s the section I quoted in full above.
What are the claims/arguments here?
I feel like I said a bunch of arguments, so I don’t know what you mean.
Can you list out any random three out of this ‘bunch’?
Top post claims that while principle one (seek broad accountability) might be useful in a more perfect world, here in reality it doesn’t work great.
Reasons include that the pressure to be held to high standards by the Public tends to cause orgs to Do PR, rather than speak truth.