If “indefinitely large” is still too vague, you can replace it with “to quickly, recursively self-improve so as to influence our world with sufficient strength and subtlety that (a) it can easily wipe out humans, (b) humans are not a major threat to it achieving almost any goal set, and (c) humans are sufficiently weak that it doesn’t gain resources by bothering to bargain with us.” Is that narrow enough?
The original issues were:
When to start the clock?
When to stop the clock?
What is supposed to have happened in the meantime?
You partly address the third question—and suggest that the clock is stopped “quickly” after it is started.
I don’t think that is any good. If “quickly” means the proposed-elsewhere “inside six weeks”, it is better—but there is still a problem: no constraints are placed on the capabilities of the humans back when the clock was started. Maybe they were just as weak back then.
Since I am the one pointing out this mess, maybe I should also be proposing solutions:
I think the problem is that people want to turn the “FOOM” term into a binary categorisation—to FOOM or not to FOOM.
Yudkowsky’s original way of framing the issue doesn’t really allow for that. The idea is explicitly and deliberately not quantified in his post on the topic. I think the concept is challenging to quantify—and so there is some wisdom in not doing so. All that means is that you can’t really talk about “to FOOM or not to FOOM”. Rather, there are degrees of FOOM. If you want to quantify or classify them, it’s your responsibility to say how you are measuring things.
It does look as though Yudkowsky has tried this elsewhere—and made an effort to say something a little bit more quantitative.
I’m puzzled a bit by your repeated questions about when to “start the clock”, and this seems possibly connected to the fact that people discussing fooming are discussing a general intelligence going foom. They aren’t talking about little machine intelligences, whether neural networks or support vector machines or matchbox learning systems. They are talking about artificial general intelligence. The “clock” starts when a general intelligence roughly as intelligent as a bright human goes online.
Huh? I don’t follow.