Not sure what he means by “loose qualitative conclusions”.
Some context:
In this case, the best we can do is use the Weak Inside View—visualizing the causal process—to produce loose qualitative conclusions about only those issues where there seems to be lopsided support.
He means that, because the inside view is weak, it cannot predict exactly how powerfully an AI would foom, exactly how long it would take for an AI to foom, exactly what it might first do after the foom, exactly how long it would take to develop the knowledge necessary to make a foom, and so on. Note that three of the things I listed are quantitative. So instead of strong, quantitative predictions like those, he sticks to weak, general, qualitative ones: “AI go foom.”
One thing which makes me worry that something is “surface”, is when it involves generalizing a level N feature across a shift in level N-1 causes.
Argh... I am getting the impression that it was a really bad idea to start reading this at this point. I have no clue what he is talking about.
He means, in this example anyway, that the reasoning “historical trends usually continue” applied to Moore’s Law doesn’t work when Moore’s Law itself creates something that affects Moore’s Law. In order to figure out what happens, you have to go deeper than “historical trends usually continue”.
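To make that concrete, here is a toy sketch (the doubling time, threshold, and feedback rule are invented numbers, nothing from the original posts). It compares a naive “the trend just continues” extrapolation with the same trend once its own output starts feeding back into it:

```python
import math

def extrapolate(years, doubling_time=1.5):
    """Naive outside-view projection: capability keeps doubling every 1.5 years."""
    return 2 ** (years / doubling_time)

def with_feedback(years, doubling_time=1.5, threshold=1000.0, step=0.01):
    """Same trend, except that once capability passes a (made-up) threshold,
    the thing the trend produces starts speeding the trend up: the effective
    doubling time shrinks in proportion to capability."""
    capability, t = 1.0, 0.0
    while t < years:
        if capability > 1e12:  # the model has blown up; the old trend line says nothing here
            return math.inf
        effective = doubling_time
        if capability > threshold:
            effective = doubling_time * threshold / capability
        capability *= 2 ** (step / effective)
        t += step
    return capability

for y in (5, 10, 15, 20):
    print(f"year {y:>2}: naive = {extrapolate(y):10.3g}   feedback = {with_feedback(y):10.3g}")
```

Up to the point where the feedback kicks in, the two projections agree, which is exactly why the surface extrapolation looks trustworthy right up until it stops telling you anything.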
I don’t know what the law of ‘Accelerating Change’ is and what exogenous means and what ontologically fundamental means and why not even such laws can break down beyond a certain point.
I didn’t know what exogenous meant when I read this either, but I didn’t need to in order to understand. (I deigned to look it up. It means coming from outside the system you’re modeling, rather than being generated within it. Not a difficult concept.) Ontologically fundamental is a term we use on LW all the time; it means at the base level of reality, like quarks and electrons. The Law of Accelerating Change is one of Kurzweil’s inventions; it’s his claim that technological change accelerates itself.
Oh well
Indeed, if you’re not even going to try to understand, this is the correct response, I suppose.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations. And giving up on understanding rather than asking for explanations.
This is neither a threat nor a promise, just a question: do you estimate that your life would be improved if you could somehow be prevented from ever viewing this site again? Similarly, do you estimate that your life would be improved if you could somehow be prevented from ever posting to this site again?
I didn’t know what exogenous meant when I read this either, but I didn’t need to in order to understand. (I deigned to look it up.
My intuitive judgement of the expected utility of reading what Eliezer Yudkowsky writes is low enough that I can’t get myself to invest a lot of time in it. How could I change my mind about that? It feels like reading a book on string theory: there are no flaws in the math, but you also won’t learn anything new about reality.
ETA: That isn’t the case for all authors. I have read most of Yvain’s posts, for example, because I felt it was worth reading them right away. ETA2: Before someone nitpicks: I haven’t read posts like ‘Rational Home Buying’ because I didn’t think it would be worth it. ETA3: Wow, I just realized that I really hate Less Wrong: you can’t say something like 99.99% and mean “most” by it.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations.
I thought it might help people to see exactly how I think about everything as I read it and where I get stuck.
Indeed, if you’re not even going to try to understand, this is the correct response, I suppose.
I do try, but I got the impression that it would be a mistake to invest a lot of time in it at this point, when I haven’t even learnt basic math yet.
Now you might argue that I have invested a lot of time in commenting here, but that was due more to weakness of will and psychological distress than to anything else. Deliberately reading the Sequences is different, because the effort required is high enough to make me weigh its usefulness, and I decide against it.
When I comment here it is often because I feel forced to: people say I am wrong, etc., and I feel compelled to reply.
I don’t know if it’s something you want to take public, but it might make sense to do a conscious analysis of what you’re expecting the Sequences to be.
If you do post the analysis, maybe you can find out something about whether the Sequences are like your mental image of them, and even if you don’t post, you might find out something about whether your snap judgement makes sense.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations. And giving up on understanding rather than asking for explanations.
He’s not really giving up; he’s using a Roko algorithm again.
In retrospect I wish I had never come across Less Wrong :-(
This is neither a threat nor a promise, just a question: do you estimate that your life would be improved if you could somehow be prevented from ever viewing this site again? Similarly, do you estimate that your life would be improved if you could somehow be prevented from ever posting to this site again?
I have been trying this for years now, but just giving up sucks as well. So I’ll log out again now and (try to) not come back for a long time (years).