Things I instinctively observed, slash that my model believes I got while reading, that seem relevant; I’m not attempting to justify them at this time:
1. There is a core thing that Eliezer is trying to communicate. It’s not actually about timeline estimates; that’s an output of the thing. Its core message length is short, but all attempts to find short ways of expressing it, so far, have failed.
2. Mostly, so have very long attempts to communicate it and its prerequisites, which to some extent at least include the Sequences. Partial success in some cases, full success in almost none.
3. This post, and this whole series of posts, feels like its primary function is training data to use to produce an Inner Eliezer that has access to the core thing, or, even better, knows the core thing in a fully integrated way. And maybe a lot of Eliezer’s other communications are kind of also trying to be similar training data, no matter the superficial domain they are in or how deliberate that is.
4. The condescension is important information to help a reader figure out what is producing the outputs, and hiding it would make the task of ‘extract the key insights’ harder.
5. Similarly, the repetition of the same points is also potentially important information that points towards the core message.
6. That doesn’t mean all that isn’t super annoying to read and deal with, especially when he’s telling you in particular that you’re wrong. ’Cause it’s totally that.
7. There are those for whom this makes it easier to read, especially given it is very long, and I notice both effects.
8. My Inner Eliezer says that writing this post without the condescension, or making it shorter, would be much much more effort for Eliezer to write. To the extent such a thing can be written, someone else has to write that version. Also, it’s kind of text in several places.
9. The core message is what matters and the rest mostly doesn’t?
10. I am arrogant enough to think I have a non-zero chance that I know enough of the core thing and have enough skill that, with enough work, I could perhaps find an improved way to communicate it given the new training data, and I have the urge to try this impossible-level problem if I could find the time and focus (and help) to make a serious attempt.
[quoting point 8]
What did you mean by this?
I also stumbled on this point. I think it parses as
[attempt paraphrasing Zvi]
My Inner Eliezer says, “Writing this post without the condescension, or making it shorter, would be much much more effort for Eliezer to write. To the extent such a thing can be written, someone else has to write that version.” Also, besides my Inner Eliezer saying that, the preceding statement is almost explicit in the text in several places.
Evidence for the last bit is things like
[Eliezer’s OP]
Your grandpa is feeling kind of tired now and can’t debate this again with as much energy as when he was younger.
etc.
I endorse most of this comment; this “core thing” idea is exactly what I tried to understand when writing my recent post on deep knowledge according to Yudkowsky.
This post, and this whole series of posts, feels like its primary function is training data to use to produce an Inner Eliezer that has access to the core thing, or, even better, knows the core thing in a fully integrated way. And maybe a lot of Eliezer’s other communications are kind of also trying to be similar training data, no matter the superficial domain they are in or how deliberate that is.
Yeah, that sounds right. I feel like Yudkowsky always writes mostly training data, and feels that explaining the thing he’s talking about as precisely as he can never works. I agree with him that it can’t work without the reader doing a bunch of work (what he calls homework), but I expect (from my personal experience) that doing the work while you have an outline of the thing is significantly easier. It’s easier to trust that there’s something valuable at the end of the tunnel when you have a half-decent description.
The condescension is important information to help a reader figure out what is producing the outputs, and hiding it would make the task of ‘extract the key insights’ harder.
Here, though, I feel like you’re overinterpreting. In his older writing, Yudkowsky is actually quite careful not to directly insult people or be condescending. I’m not saying he never does it, but he tones it down a lot compared to what’s happening in this recent dialogue. I think a better explanation is simply that he’s desperate, and has very little hope of being able to convey what he means, because he’s been doing this for 13 years and no one has caught on.
Maybe point 8 is also part of the explanation: doing this non-condescendingly sounds like far more work for him, and yet he doesn’t expect it to work, so he doesn’t take on that extra burden for little expected reward.
I would very much like to read your attempt at conveying the core thing—if nothing else, it’ll give another angle from which to try to grasp it.