If you accept that your estimate of someone’s “rationality” should depend on the domain, the environment, the time, the context, etc… and what you want to do is make reliable estimates of the reliability of their opinion, their chances of success, etc… it seems to follow that you should be looking for comparisons within a relevant domain, environment, etc.
That is, if you want to get opinions about hypothesis X about organizational development that serve as significant evidence, it seems the thing to do is to find someone who knows a lot about organizational development—ideally, someone who has been successful at developing organizations—and consult their opinions. How generally rational they are might be very relevant causally, or it might not, but is in either case screened off by their domain competence… and their domain competence is easier to measure than their general rationality.
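To make the screening-off claim explicit (my gloss, in the usual Bayes-net sense, not anything the above commits to): write A for the accuracy of their opinion on X, C for their domain competence, and R for their general rationality. The claim is that once you condition on C, learning R doesn’t move your estimate of A:

    P(A | C, R) = P(A | C)

So whatever causal work R does, it flows through C, and C is the thing worth measuring.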
So is their general rationality worth devoting resources to determining?
It seems this only makes sense if you have already (e.g.) decided to ask Eliezer and Louie for their advice, whether it’s good evidence or not, and now you need to know how much evidence it is, and you expect the correct answer is different from the answer you’d get by applying the metrics you know about (e.g., domain familiarity and previously demonstrated relevant expertise).
I do spend a fair amount of time talking to domain experts outside of SI. The trouble is that the question of what we should do about thing X doesn’t just depend on domain competence but also on thousands of details about the inner workings of SI and our mission that I cannot communicate to domain experts outside SI, but which Eliezer and Louie already possess.
So it seems you have a problem spanning two domains (organizational development + SI internals), with a different set of experts for each (outside domain experts and Eliezer/Louie, respectively), and you need some way of cross-linking the two groups’ expertise to get a coherent recommendation, and the brute-force solutions (e.g. get them all in a room together, or bring one group up to speed on the other’s domain) are too expensive to be worth it. (Assuming, that is, that the obstacle isn’t that the details need to be kept secret, but simply that expecting an outsider to come up to speed on all of SI’s local, potentially relevant trivia isn’t practical.)
Yes?
Yeah, that can be a problem.
In that position, for serious questions I would probably ask E/L for their recommendations and a list of the most relevant details that informed those recommendations, then go to outside experts with a summary of the competing recommendations and an expanded version of that list and ask for their input. If there’s convergence, great. If there’s divergence, iterate.
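A minimal sketch of that loop, just to pin down its shape. The ask_* helpers below are hypothetical stand-ins for actual conversations, with canned return values; this is not anyone’s real process:

    # Sketch of the iterate-until-convergence consultation loop.
    # The ask_* functions are hypothetical stand-ins for real
    # conversations; their return values are canned examples.

    def ask_insiders(question, feedback=None):
        # Stand-in: E/L return a recommendation plus the internal
        # details that informed it.
        return "recommendation", ["relevant internal detail"]

    def ask_outsiders(question, recommendation, details):
        # Stand-in: outside experts react to the competing
        # recommendations and the expanded detail list.
        return "recommendation"

    def reconcile(question, max_rounds=3):
        """Iterate between insiders and outsiders until they converge."""
        recommendation, details = ask_insiders(question)
        for _ in range(max_rounds):
            outside_view = ask_outsiders(question, recommendation, details)
            if outside_view == recommendation:
                return recommendation  # convergence: done
            # Divergence: feed the outside view back to the insiders,
            # expand the detail list, and try again.
            recommendation, details = ask_insiders(question, feedback=outside_view)
        return recommendation  # best available after max_rounds

The max_rounds cap is there because nothing guarantees the two groups ever converge.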
This is still an expensive approach, though, so I can see where a cheaper approximation for less important questions is worth having.
Yes to all this.