I decided to finally start reading The Hanson-Yudkowsky AI-Foom Debate. I am not sure how much time I will have, but I will post my thoughts along the way as replies to this comment. This is also an opportunity for massive downvotes :-)
In The Weak Inside View, Eliezer Yudkowsky writes that it never occurred to him that his views about optimization ought to produce quantitative predictions.
Eliezer further argues that we can’t use historical evidence to evaluate completely new ideas.
Not sure what he means by “loose qualitative conclusions”.
He says that he can’t predict how long it will take an AI to solve various problems.
One thing which makes me worry that something is “surface”, is when it involves generalizing a level N feature across a shift in level N-1 causes.
Argh...I am getting the impression that it was a really bad idea to start reading this at this point. I have no clue what he is talking about.
Now, if the Law of Accelerating Change were an exogenous, ontologically fundamental, precise physical law, then you wouldn’t expect it to change with the advent of superintelligence.
I don’t know what the law of ‘Accelerating Change’ is, what exogenous means, what ontologically fundamental means, or why even such laws could not break down beyond a certain point.
Oh well... I’ll give up and come back to this when I have time to look up every term and concept and decipher what he means.
Not sure what he means by “loose qualitative conclusions”.
Some context:
In this case, the best we can do is use the Weak Inside View—visualizing the causal process—to produce loose qualitative conclusions about only those issues where there seems to be lopsided support.
He means that, because the inside view is weak, it cannot predict exactly how powerful an AI would foom, exactly how long it would take for an AI to foom, exactly what it might first do after the foom, exactly how long it will take to develop the knowledge necessary to make a foom, and suchlike. Note how three of the things I listed are quantitative. So instead of strong, quantitative predictions like those, he sticks to weak, general, qualitative ones: “AI go foom.”
One thing which makes me worry that something is “surface”, is when it involves generalizing a level N feature across a shift in level N-1 causes.
Argh...I am getting the impression that it was a really bad idea to start reading this at this point. I have no clue what he is talking about.
He means, in this example anyway, that the reasoning “historical trends usually continue” applied to Moore’s Law doesn’t work when Moore’s Law itself creates something that affects Moore’s Law. In order to figure out what happens, you have to go deeper than “historical trends usually continue”.
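To make that concrete, here is a minimal toy sketch (all numbers and the feedback rule are invented for illustration, not a model of real chip manufacturing): a naive extrapolation that assumes the historical doubling time simply continues, versus the same trend once the technology starts shortening its own doubling time.

```python
# Toy comparison: naive trend extrapolation vs. a trend that feeds back on itself.
# All numbers are made up; only the qualitative divergence matters.

def naive_extrapolation(power, months, doubling_time=18.0):
    """Assume the historical doubling time simply continues."""
    return power * 2 ** (months / doubling_time)

def feedback_model(power, months, doubling_time=18.0, threshold=1e3):
    """Same trend, except that once capability passes a (hypothetical) threshold,
    e.g. the machines start doing the chip design, each further month of progress
    also shortens the doubling time itself."""
    for _ in range(int(months)):
        power *= 2 ** (1.0 / doubling_time)
        if power > threshold:
            doubling_time = max(1.0, doubling_time * 0.95)  # the trend accelerates its own cause
    return power

for horizon in (60, 120, 240):  # months
    print(horizon,
          f"naive: {naive_extrapolation(1.0, horizon):.3g}",
          f"feedback: {feedback_model(1.0, horizon):.3g}")
```

The two curves agree as long as the causes of the trend stay put; once the thing being measured starts feeding back into its own growth rate, “the historical trend continues” stops being a usable assumption.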
I don’t know what the law of ‘Accelerating Change’ is, what exogenous means, what ontologically fundamental means, or why even such laws could not break down beyond a certain point.
I didn’t know what exogenous means when I read this either, but I didn’t need to in order to understand. (I deigned to look it up. It means originating outside the system, not generated from within it. Not a difficult concept.) Ontologically fundamental is a term we use on LW all the time; it means at the base level of reality, like quarks and electrons. The Law of Accelerating Change is one of Kurzweil’s inventions; it’s his claim that technological change accelerates itself.
Oh well
Indeed, if you’re not even going to try to understand, this is the correct response, I suppose.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations. And giving up on understanding rather than asking for explanations.
He’s not really giving up; he’s using a Roko algorithm again.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations. And giving up on understanding rather than asking for explanations.
In retrospect, I wish I had never come across Less Wrong :-(
This is neither a threat nor a promise, just a question: do you estimate that your life would be improved if you could somehow be prevented from ever viewing this site again? Similarly, do you estimate that your life would be improved if you could somehow be prevented from ever posting to this site again?
I have been trying to do this for years now, but just giving up sucks as well. So I’ll log out again now and try not to come back for a long time (years).
I didn’t know what exogenous means when I read this either, but I didn’t need to in order to understand. (I deigned to look it up.
My intuitive judgement of the expected utility of reading what Eliezer Yudkowsky writes is low enough that I can’t get myself to invest a lot of time in it. How could I change my mind about that? It feels like reading a book on string theory: there are no flaws in the math, but you also won’t learn anything new about reality.
ETA: That isn’t the case for everyone here. I have read most of Yvain’s posts, for example, because I felt it was worth reading them right away. ETA2: Before someone nitpicks, I haven’t read posts like ‘Rational Home Buying’ because I didn’t think they would be worth it. ETA3: Wow, I just realized that I really hate Less Wrong; you can’t say something like 99.99% and mean “most” by it.
Incidentally, I disapprove of your using the open thread as your venue for this rather than commenting on the original posts asking for explanations.
I thought it might help people to see exactly how I think about everything as I read it and where I get stuck.
Indeed, if you’re not even going to try to understand, this is the correct response, I suppose.
I do try, but I got the impression that it is wrong to invest a lot of time on it at this point when I haven’t even learnt basic math yet.
Now you might argue that I have invested a lot of time in commenting here, but that was due more to weakness of will and psychological distress than anything else. Deliberately reading the Sequences is very different, because it takes enough effort to make me think about the usefulness of doing so, and I decide against it.
When I comment here, it is often because I feel forced to do it, usually because people say I am wrong and so on, and I feel compelled to reply.
I don’t know if it’s something you want to take public, but it might make sense to do a conscious analysis of what you’re expecting the sequences to be.
If you do post the analysis, maybe you can find out something about whether the sequences are like your mental image of them, and even if you don’t post, you might find out something about whether your snap judgement makes sense.
In Engelbart As UberTool?, Robin Hanson talks about a guy who actually tried to apply recursive self-improvement to his company. He is still trying (wow!).
It seems humans, even groups of humans, are not capable of fast recursive self-improvement. That they didn’t take over the world might be partly due to strong competition from other companies that are constantly trying to do the same.
What is it that is missing that doesn’t allow one of them to prevail?
Robin Hanson further asks what would have been a reasonable probability estimate to assign to the possibility of a company taking over the world at that time.
I have no idea how I could possibly assign a number to that. I would just have said that it is unlikely enough to be ignored, or that there is not enough data to make a reasonable guess either way. I don’t have the resources to take every idea seriously and assign a probability estimate to it. Some things just get discounted by my intuitive judgment.
It seems humans, even groups of humans, are not capable of fast recursive self-improvement. What is it that is missing that doesn’t allow one of them to prevail?
I would guess that the reason is that people don’t work with exact numbers, only with approximations. If you build a very long chain of reasoning, the noise kills the signal. In mathematics, if you know “A = B” and “B = C” and “C = D”, you can conclude that “A = D”. In real life your knowledge is more like “so far it seems to me that, under usual conditions, A is very similar to B”. A hypothetical perfect Bayesian could perhaps assign some probability to each step and work with it, but even our estimates of probabilities are noisy. Also, the world is complex; things do not add to each other linearly.
I suspect that when one tries to generalize, one gets a lot of general rules with maybe 90% probabilities. Try to chain a dozen of them together, and the result is pathetic. It is like saying “give me a fixed point and a lever and I will move the world”, only to realize that your lever is too floppy and you can’t move anything that is too far away or too heavy.
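A quick back-of-the-envelope check of that intuition (the 90% figure is just the placeholder from the paragraph above): if each link in a chain holds independently with probability 0.9, a dozen links together hold with probability 0.9^12 ≈ 0.28, and that is before accounting for correlated errors.

```python
# Chaining approximate rules: each step is individually reliable,
# but the conjunction decays geometrically (assuming independence).
p_per_step = 0.9
for n_steps in (1, 3, 6, 12, 24):
    print(n_steps, round(p_per_step ** n_steps, 3))
# prints: 1 0.9, 3 0.729, 6 0.531, 12 0.282, 24 0.08
```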
In Fund UberTool?, Robin Hanson talks about a hypothetical company that applies most of its resources to its own improvement until it bursts out and takes over the world. He further asks what evidence it would take to convince you to invest in it.
This post goes straight to the heart of Pascal’s mugging: vast utilities that outweigh tiny probabilities. I could earn a lot by investing in such a company if it all works as promised. But should I do that? I have no idea.
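To show why this feels like Pascal’s mugging rather than an ordinary investment decision, here is a minimal sketch with invented figures: the naive expected value comes out positive, but it is driven entirely by the tiny-probability branch.

```python
# Naive expected value of an "UberTool"-style investment, with made-up numbers.
stake = 10_000                    # amount invested and lost in the ordinary case
p_takeover = 1e-9                 # assumed (tiny) probability it works as promised
payoff_if_it_works = 1e15         # assumed (enormous) payoff in that branch

expected_value = p_takeover * payoff_if_it_works + (1 - p_takeover) * (-stake)
print(expected_value)             # ~ +990,000: positive, dominated entirely by the tail
```

Whether one should act on a calculation like that is exactly the question the comment raises.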
What evidence would make me invest money in such a company? I am very risk averse. Given my inability to review mathematical proofs and advanced technical proofs of concept, I’d probably be hesitant and fear that they were bullshitting me.
In the end I would probably not invest in them.
By “a hypothetical company that applies most of its resources to its own improvement” do you mean a tech company? Because that’s exactly what tech companies do, and they seem to be pretty powerful, if not “take over the world” powerful. And I do invest in those companies.
In Friendly Teams, Robin Hanson talks about the guy who tried to get his company to undergo recursive self-improvement, and how he was a really smart fellow who saw a lot of things coming.
Robin Hanson further argues that key insights are not enough but that it takes many small insights that are the result of a whole society of agents.
Robin further asks what it is that makes the singleton AI scenario more reasonable if it does not work out for groups of humans, not even remotely. Well, I can see that people would now say that an AI can directly improve its own improvement algorithm. I suppose the actual question Robin asks is how the AI will reach that point in the first place: how is it going to acquire the capabilities that are necessary to improve its capabilities indefinitely?