Money would pay for marginal output, e.g. in the form of increased collaboration, I think, since the best Friendliness-cognizant x-rationalists would likely already be working on similar things.
I was trying to quickly gauge vague interest in a vague notion. I think that my original comment was at roughly the most accurate and honest level of vagueness (i.e. “aimed largely [i.e. primarily] at doing the technical analysis necessary to determine as well as possible the feasibility and difficulty [e.g. how many Von Neumanns, Turings, and/or Aristotles would it take?] of Friendly AI for various (logical) probabilities of Friendliness [e.g. is the algorithm meta-reflective enough to fall into (one of) some imagined Friendliness attractor basin(s)?]”). Value of information regarding difficulty of Friendly-ish AI is high, but research into that question is naturally tied to Friendly AI theory itself. I’m thinking… Goedel machine stability more than ambient decision theory, history of computation more than any kind of validity semantics. To some extent it depends on who plans to actually work on what stuff from the open problems lists. There are many interesting technical threads that people might start pulling on soon, and it’s unclear to me to what extent they actually will pull on them or to what extent pulling on them will give us a better sense of the problem.
[Stuff it would take too many paragraphs to explain why it’s worth pointing out specifically:] Theory of justification seems to be roughly as developed as theory of computation was before the advent of Leibniz; Leibniz saw a plethora of connections between philosophy, symbolic logic, and engineering and thus developed some correctly thematically centered proto-theory. I’m trying to make a Leibniz, and hopefully SingInst can make a Turing. (Two other roughly analogous historical conceptual advances are natural selection and temperature.)
Well, my probability that you could or would do anything useful, given money, just dropped straight off a cliff. But perhaps you’re just having trouble communicating. That is to say: What the hell are you talking about.
If you’re going to ask for money on LW, plain English response, please: What’s the output here that the money is paying for; (1) a Friendly AI, (2) a theory that can be used to construct a Friendly AI, or (3) an analysis that purports to say whether or not Friendly AI is “feasible”? Please pick one of the pre-written options; I now doubt your ability to write your response ab initio.
That was amusingly written, but probably too harsh. You want people to like you, even if it’s only so they say nice things about you.
Dude, it’s right there: “feasibility and difficulty”, in this sentence which I am now repeating for the second time:
aimed largely [i.e. primarily] at doing the technical analysis necessary to determine as well as possible the feasibility and difficulty [e.g. how many Von Neumanns, Turings, and/or Aristotles would it take?] of Friendly AI for various (logical) probabilities of Friendliness [e.g. is the algorithm meta-reflective enough to fall into (one of) some imagined Friendliness attractor basin(s)?]
(Bold added for emphasis, annotations in [brackets] were in the original.)
The next sentence:
Value of information regarding difficulty of Friendly-ish AI is high, but research into that question is naturally tied to Friendly AI theory itself.
Or if you really need it spelled out for you again and again, the output would primarily be (3) but secondarily (2) as you need some of (2) to do (3).
Because you clearly need things pointed out multiple times, I’ll remind you that I put my response in the original comment that you originally responded to, without the later clarifications that I’d put in for apparently no one’s benefit:
If I had a viable preliminary Friendly AI research program, aimed largely at doing the technical analysis necessary to determine as well as possible the feasibility and difficulty of Friendly AI for various values of “Friendly” [...]
(Those italics were in the original comment!)
If you’re going to ask for money on LW
I wasn’t asking for money on Less Wrong! As I said, “I was trying to quickly gauge vague interest in a vague notion.” What the hell are you talking about.
I now doubt your ability to write your response ab initio.
I’ve doubted your ability to read for a long time, but this is pretty bad. The sad thing is you’re probably not doing this intentionally.
I think the problem here is that your posting style, to be frank, often obscures your point.
In most cases, posts that consist of a to-the-point answer followed by longer explanations use the initial statement to make a concise case. For instance, in this post, my first sentence sums up what I think about the situation and the rest explains that thought in more detail so as to convey a more nuanced impression.
By contrast, when Eliezer asked “What’s the output here that the money is paying for,” your first sentence was “Money would pay for marginal output, e.g. in the form of increased collaboration, I think, since the best Friendliness-cognizant x-rationalists would likely already be working on similar things.” This does not really answer his question, and while you clarify this with your later points, the overall message is garbled.
The fact that your true answer is buried in the middle of a paragraph does not really help things much. Though I can see what you are trying to say, I can’t in good and honest conscience describe it as clear. Had you answered, on the other hand, “Money would pay for the technical analysis necessary to determine as well as possible the feasibility and difficulty of FAI...” as your first sentence, I think your post would have been more clear and more likely to be understood.
The sentences I put before the direct answer to Eliezer’s question were meant to correct some of Eliezer’s misapprehensions that were more fundamental than the object of his question. Eliezer’s infamous for uncharitably misinterpreting people and it was clear he’d misinterpreted some key aspects of my original comment, e.g. my purpose in writing it. If I’d immediately directly answered his question that would have been dishonest; it would have contributed further to his having a false view of what I was actually talking about. Less altruistically it would be like I was admitting to his future selves or to external observers that I agreed that his model of my purposes was accurate and that this model could legitimately be used to assert that I was unjustified in any of many possible ways. Thus I briefly (a mere two sentences) attempted to address what seemed likely to be Eliezer’s underlying confusions before addressing his object level question. (Interestingly Eliezer does this quite often, but unfortunately he often assumes people are confused in ways that they are not.)
Given these constraints, what should I have done? In retrospect I should have gone meta, of course, like always. What else?
Thanks much for the critique.
Given those constraints, I would probably write something like “Money would pay for marginal output in the form of increased collaboration on the technical analysis necessary to determine as well as possible the feasibility and difficulty of FAI” for my first sentence and elaborate as strictly necessary. That seems rather more cumbersome than I’d like, but it’s also a lot of information to try and convey in one sentence!
Alternatively, I would consider something along the lines of “Money would pay for the technical analysis necessary to determine as well as possible the feasibility and difficulty of FAI, but not directly—since the best Friendliness-cognizant x-rationalists would likely already be working on similar things, the money would go towards setting up better communication, coordination, and collaboration for that group.”
That said, I am unaware of any reputation Eliezer has in the field of interpreting people, and personally haven’t received the impression that he’s consistently unusually bad or uncharitable at it. Then again, I have something of a reputation—at least in person—for being too charitable, so perhaps I’m being too light on Eliezer (or you?) here.
I think the problem here is that your posting style, to be frank, often obscures your point.
I acknowledge this. But it seems to me that the larger problem is that Eliezer simply doesn’t know how to read what people actually say. Less Wrong mostly doesn’t either, and humans in general certainly don’t. This is a very serious problem with LW-style rationality (and with humanity). There are extremely talented rationalists who do not have this problem; it is an artifact of Eliezer’s psychology and not of the art of rationality.
It’s hardly fair to blame the reader when you’ve got “sentences” like this:
I think that my original comment was at roughly the most accurate and honest level of vagueness (i.e. “aimed largely [i.e. primarily] at doing the technical analysis necessary to determine as well as possible the feasibility and difficulty [e.g. how many Von Neumanns, Turings, and/or Aristotles would it take?] of Friendly AI for various (logical) probabilities of Friendliness [e.g. is the algorithm meta-reflective enough to fall into (one of) some imagined Friendliness attractor basin(s)?]”).
That was the second version of the sentence; the first one had much clearer syntax and even italicized the answer to Eliezer’s subsequent question. It looks the way it does because Eliezer apparently couldn’t extract meaning out of my original sentence despite it clearly answering his question, so I tried to expand on the relevant points with bracketed concrete examples. Here’s the original:
If I had a viable preliminary Friendly AI research program, aimed largely at doing the technical analysis necessary to determine as well as possible the feasibility and difficulty of Friendly AI for various values of “Friendly” [...]
(emphasis in original)
Which starts with the word ‘if’ and fails to have a ‘then’.
If you took out ‘If I had’ and replaced it with ‘I would create’, then maybe it would be more in line with what you’re trying to say?
What you say might be true, but this one example is negligible compared to the mountain of other evidence concerning inability to read much more important things (which are unrelated to me). I won’t give that evidence here.
Certainly true, but that only means that we need to spend more effort on being as clear as possible.
If that’s indeed the case (I haven’t noticed this flaw myself), I suggest that you write articles (or perhaps commission/petition others to have them written) describing this flaw and how to correct it. Eliminating such a flaw or providing means of averting it would greatly aid LW and the community in general.
Unfortunately that is not currently possible for many reasons, including some large ones I can’t talk about and that I can’t talk about why I can’t talk about. I can’t see any way that it would become possible in the next few years either. I find this stressful; it’s why I make token attempts to communicate in extremely abstract or indirect ways with Less Wrong, despite the apparent fruitlessness. But there’s really nothing for it.
Unrelated public announcement: People who go back and downvote every comment someone’s made, please, stop doing that. It’s a clever way to pull information cascades in your direction but it is clearly an abuse of the content filtering system and highly dishonorable. If you truly must use such tactics, downvoting a few of your enemy’s top level posts is much less evil; your enemy loses the karma and takes the hint without your severely biasing the public perception of your enemy’s standard discourse. Please.
(I just lost 150 karma points in a few minutes and that’ll probably continue for a while. This happens a lot.)
Unfortunately that is not currently possible for many reasons, including some large ones I can’t talk about and that I can’t talk about why I can’t talk about.
Why can’t you talk about why you can’t talk about them?
I’m not a big fan of the appeal to secret reasons, so I think I’m going to have to pull out of this discussion. I will note, however, that you personally seem to be involved in more misunderstandings than the average LW poster, so while it’s certainly possible that your secret reasons are true and valid and Eliezer just sucks at reading or whatever, you may want to clarify certain elements of your own communication as well.
I unfortunately predict that “going more meta” will not be strongly received here.
I’m sorry to hear that you’re up against something so difficult, and I hope you find a way out.
Thank you… I think I just need to be more meta. Meta never fails.
Unfortunately that is not currently possible for many reasons, including some large ones I can’t talk about and that I can’t talk about why I can’t talk about.
Are we still talking about improving general reading comprehension? What could possibly be dangerous about that?
To save some time and clarify, this was option 3: an analysis that purports to say whether or not Friendly AI is “feasible”.