Example locations where this has been defined include Mass Driver’s post here where he defined it slightly differently as “to quickly, recursively self-improve so as to influence our world with arbitrarily large strength and subtlety”. I think he meant indefinitely large there, but the essential idea is the same. I note that you posted comments in that thread, so presumably you’ve seen that before, and you explicitly discussed fooming. Did you only recently decide that it wasn’t sufficiently well-defined? If so, what caused that decision?
Possibly a few hours or weeks?!?
Well, I’ve seen different timelines used by people in different contexts. Note that this isn’t just a function of definitions, but also of when exactly an AI starts doing this. An AI that shows up later, when we have faster machines and more nanotech, can possibly go foom faster than an AI that shows up earlier, when we have fewer technologies to work with. But for what it is worth, I doubt anyone would call it going foom if the process took more than a few months. If you absolutely insist on an outside estimate for purposes of discussion, 6 weeks should probably be a decent estimate.
Vague definitions are not worth critics bothering to attack.
It isn’t clear to me what you are finding too vague about the definition. Is it just the timeline or is it another aspect?
This might be a movie threat notion—if so, I’m sure I’ll be told.
I assume the operational definition of FOOM is that the AI is moving faster than human ability to stop it.
As theoretically human-controlled systems become more automated, it becomes easier for an AI to affect them. This would mean that any humans who could threaten an AI would find themselves distracted, or worse, by legal, financial, social-network reputational, and possibly medical problems. Nanotech isn’t required.
Yes, that seems like a movie threat notion to me. If an AI has the power to do those things to arbitrary people, it can likely scale up from there to full control so quickly that it shouldn’t need to bother with such steps, although it is minimally plausible that a slow-growing AI might need to do that.
No, I’ve been aware of the issue for a long time.

Ok. So what caused you to use the term as if it had a specific definition when you didn’t think it did? Your behavior is very confusing. You’ve discussed foom-related issues in multiple threads. You’ve been here much longer than I have; I don’t understand why we are only getting to this issue now.
The above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for time required for take-off
I did raise this closely-related issue over two years ago. To quote the most relevant bit:

We’ve been using artificial intelligence for over 50 years now. If you haven’t started the clock already, why not? What exactly are you waiting for? There is never going to be a point in the future where machine intelligence “suddenly” arises. Machine intelligence is better than human intelligence in many domains today. [...]
There may well be other instances in between—but scraping together references on the topic seems as though it would be rather tedious.
So what caused you to use the term as if it had a specific definition when you didn’t think it did?
The quote you give focuses just on the issue of time-span, and it has already been addressed in this thread: machine intelligence in the sense the term is often used is not at all the same thing as artificial general intelligence, as others have pointed out in this subthread. (Although it does touch on a point you’ve made elsewhere, that we’ve been using machines to engage in what amounts to successive improvement, which is likely relevant.)
So what caused you to use the term as if it had a specific definition when you didn’t think it did?
I did what, exactly?
I would have thought that your comments in the previously linked thread started by Mass Driver would be sufficient, like when you said:
One “anti-foom” factor is the observation that in the early stages we can make progress partly by cribbing from nature—and simply copying it. After roughly “human level” is reached, that short-cut is no longer available—so progress may require more work after that.
And again in that thread where you said:
1 seems unlikely and 2 and 3 seem silly to me. An associated problem of unknown scale is the wirehead problem. Some think that this won’t be a problem—but we don’t really know that yet. It probably would not slow down machine intelligence very much, until way past human level—but we don’t yet know for sure what its effects will be.
Although rereading your post, I am now wondering if you were careful to put “anti-foom” in quotation marks because it didn’t have a clear definition. But in that case, I’m slightly confused as to how you knew enough to decide that that was an anti-foom argument.
Right—so, by “anti-foom factor”, I meant: factor resulting in relatively slower growth in machine intelligence. No implication that the “FOOM” term had been satisfactorily quantitatively nailed down was intended.
I do get that the term is talking about rapid growth in machine intelligence. The issue under discussion is: how fast is considered to be “rapid”.
If you absolutely insist on an outside estimate for purposes of discussion, 6 weeks should probably be a decent estimate.
Six weeks—from when? Machine intelligence has been on the rise since the 1950s. Already it exceeds human capabilities in many domains. When is the clock supposed to start ticking? When is it supposed to stop ticking? What is supposed to have happened in the middle?
Machine intelligence has been on the rise since the 1950s. Already it exceeds human capabilities in many domains.
There is a common and well-known distinction between what you mean by ‘machine intelligence’ and what is meant by ‘AGI’. Deep Blue is a chess AI. It plays chess. It can’t plan a stock portfolio because it is narrow. Humans can play chess and plan stock portfolios, because they have general intelligence. Artificial general intelligence, not ‘machine intelligence’, is under discussion here.
“to quickly, recursively self-improve so as to influence our world with arbitrarily large strength and subtlety”
Nothing is “arbitrarily large” in the real world. So, I figure that definition confines FOOM to the realms of fantasy. Since people are still discussing it, I figure they are probably talking about something else.
Tim, I have to wonder if you are reading what I wrote, given that the sentence right after the quote is “I think he meant indefinitely large there, but the essential idea is the same.” And again, if you thought earlier that foom wasn’t well-defined, what made you post using the term explicitly in the linked thread? If you have just now decided that it isn’t well-defined, then a) what more careful definition do you have in mind, and b) what made you conclude that it wasn’t narrowly defined enough?
What distinction are you trying to draw between “arbitrarily large” and “indefinitely large” that turns the concept into one which is applicable to the real world?
Maybe you can make up a definition—but what you said was “fooming has been pretty clearly described”. That may be true, but it surely needs to be referenced.
What exactly am I supposed to have said in the other thread under discussion?
Lots of factors indicate that “FOOM” is poorly defined—including the disagreement surrounding it, and the vagueness of the commonly referenced sources about it.
Usually, step 1 in those kinds of discussions is to make sure that people are using the terms in the same way—and have a real disagreement—and not just a semantic one.
Recently, I participated in this exchange—where a poster here gave p(FOOM) = 0.001, and when pressed they agreed that they did not have a clear idea of what class of events they were referring to.
What distinction are you trying to draw between “arbitrarily large” and “indefinitely large” that turns the concept into one which is applicable to the real world?
Arbitrarily large means just that, in the mathematical sense. Indefinitely large is a term that would be used in other contexts. In the contexts where I’ve seen “indefinitely” used, and in the way I would mean it, it means so large that the exact value doesn’t matter for the purpose under discussion (as in “our troops can hold the fort indefinitely”).
Lots of factors indicate that “FOOM” is poorly defined—including the disagreement surrounding it,
Disagreement about something is not always a definitional issue. Indeed, when dealing with people on LW, who try to be as rational as possible and have whole sequences about tabooing words and the like, one shouldn’t assign a very high probability to disagreements being due to definitions. Moreover, as one of the people who assigns a low probability to foom and has talked to people here about these issues, I’m pretty sure that we aren’t disagreeing on definitions. Our estimates for what the world will probably look like in 50 years disagree. That’s not simply a definitional issue.
Usually, step 1 in those kinds of discussions is to make sure that people are using the terms in the same way—and have a real disagreement—and not just a semantic one.
Ok. So why are you now doing step 1 years later? And moreover, how long should this step take as you’ve phrased it, given that we know that there’s substantial disagreement in terms of predicted observations about reality in the next few years? That can’t come from definitions. This is not a tree in a forest.
Recently I participated in this exchange—where a poster here gave p(FOOM) = 0.001, and when pressed they agreed that they did not have a clear idea of what class of events they were referring to.
Yes! Empirical evidence. Unfortunately, it isn’t very strong evidence. I don’t know if he meant in that context that he didn’t have a precise definition or just that he didn’t feel that he understood things well enough to assign a probability estimate. Note that those aren’t the same thing.
I don’t see how the proposed word substitution is supposed to help. If FOOM means: “to quickly, recursively self-improve so as to influence our world with indefinitely large strength and subtlety”, we still face the same issues—of how fast is “quickly” and how big is “indefinitely large”. Those terms are uncalibrated. For the idea to be meaningful or useful, some kind of quantification is needed. Otherwise, we are into “how long is a piece of string?” territory.
So why are you now doing step 1 years later?
I did also raise the issue two years ago. No response, IIRC. I am not too worried if FOOM is a vague term. It isn’t a term I use very much. However, for the folks here—who like to throw their FOOMs around—the issue may merit some attention.
If indefinitely large is still too vague, you can replace it with “to quickly, recursively self-improve so as to influence our world with sufficient strength and subtlety such that a) it can easily wipe out humans, b) humans are not a major threat to it achieving almost any goal set, and c) humans are sufficiently weak that it doesn’t gain resources by bothering to bargain with us.” Is that narrow enough?
The original issues were:

When to start the clock?

When to stop the clock?

What is supposed to have happened in the meantime?
You partly address the third question—and suggest that the clock is stopped “quickly” after it is started.
I don’t think that is any good. If we have “quickly” being the proposed-elsewhere “inside six weeks”, it is better—but there is still a problem, which is that there are no constraints being placed on the capabilities of the humans back when the clock was started. Maybe they were just as weak back then.
Since I am the one pointing out this mess, maybe I should also be proposing solutions:
I think the problem is that people want to turn the “FOOM” term into a binary categorisation—to FOOM or not to FOOM.
Yudkowsky’s original way of framing the issue doesn’t really allow for that. The idea is explicitly and deliberately not quantified in his post on the topic. I think the concept is challenging to quantify—and so there is some wisdom in not doing so. All that means is that you can’t really talk about: “to FOOM or not to FOOM”. Rather, there are degrees of FOOM. If you want to quantify or classify them, it’s your responsibility to say how you are measuring things.
It does look as though Yudkowsky has tried this elsewhere—and made an effort to say something a little bit more quantitative.
I’m a bit puzzled by your repeated questions about when to “start the clock”, and this seems possibly connected to the fact that people discussing fooming are discussing a general intelligence going foom. They aren’t talking about little machine intelligences, whether neural networks or support vector machines or matchbox learning systems. They are talking about artificial general intelligence. The “clock” starts when a general intelligence roughly as intelligent as a bright human goes online.
I don’t think that is any good. If we have “quickly” being the proposed-elsewhere “inside six weeks”, it is better—but there is still a problem, which is that there are no constraints being placed on the capabilities of the humans back when the clock was started. Maybe they were just as weak back then.

Huh? I don’t follow.