The sentences I put before the direct answer to Eliezer’s question were meant to correct some of Eliezer’s misapprehensions that were more fundamental than the object of his question. Eliezer is infamous for uncharitably misinterpreting people, and it was clear he’d misinterpreted some key aspects of my original comment, e.g. my purpose in writing it. If I’d immediately answered his question directly, that would have been dishonest; it would have further contributed to his false view of what I was actually talking about. Less altruistically, it would have been as though I were conceding, to his future selves or to external observers, that his model of my purposes was accurate and could legitimately be used to assert that I was unjustified in any of many possible ways. Thus I briefly (in a mere two sentences) attempted to address what seemed likely to be Eliezer’s underlying confusions before addressing his object-level question. (Interestingly, Eliezer does this quite often, but unfortunately he often assumes people are confused in ways that they are not.)
Given these constraints, what should I have done? In retrospect I should have gone meta, of course, like always. What else?
Given those constraints, I would probably write something like “Money would pay for marginal output in the form of increased collaboration on the technical analysis necessary to determine as well as possible the feasibility and difficulty of FAI” as my first sentence and elaborate only as strictly necessary. That seems rather more cumbersome than I’d like, but it’s also a lot of information to try to convey in one sentence!
Alternatively, I would consider something along the lines of “Money would pay for the technical analysis necessary to determine as well as possible the feasibility and difficulty of FAI, but not directly—since the best Friendliness-cognizant x-rationalists would likely already be working on similar things, the money would go towards setting up better communication, coordination, and collaboration for that group.”
That said, I am unaware of any reputation Eliezer has in the field of interpreting people, and personally haven’t received the impression that he’s consistently unusually bad or uncharitable at it. Then again, I have something of a reputation—at least in person—for being too charitable, so perhaps I’m being too light on Eliezer (or you?) here.
Thanks much for the critique.