This involves so many unknowns that it isn’t clear where to start. First, fooming isn’t well-defined. Second, the number of bits for something would change drastically depending on the substrate (the default programming language and hardware). Third, we can’t even give much in the way of non-trivial bounds on minimum program size for well-defined algorithms (among other issues, it starts to lead to halting problem/Gödel issues if one has a way of answering this sort of question in general). To even get an upper bound we’d probably need some form of strong AI so we could point to it and say “that’s an upper bound.”
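To make the substrate point concrete, here is a small illustration of my own (nothing from the original discussion): the measured “number of bits” for the same object depends heavily on which description language you fix. For universal languages the invariance theorem only bounds the gap by an additive constant (roughly, the size of an interpreter), and the true minimum is uncomputable in general, which is where the halting problem/Gödel issues come in.

```python
import zlib

# Toy illustration: the same highly regular 10,000-byte string, measured
# under two different description "substrates". Neither count is the true
# minimum program size (that is uncomputable in general); the point is only
# that any concrete bit count is substrate-relative.
data = ("ACGT" * 2500).encode()

raw_bits = len(data) * 8                      # substrate 1: literal byte dump
zlib_bits = len(zlib.compress(data, 9)) * 8   # substrate 2: zlib bitstream

print(f"literal encoding: {raw_bits} bits")
print(f"zlib encoding:    {zlib_bits} bits")
```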
Yudkowsky apparently defines the term “FOOM” here:
“FOOM” means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology [...]
It’s weird and doesn’t seem to make much sense to me. How can the term “FOOM” be used to refer to a level of capability?
We should probably scratch that definition—even though it is about the only one provided.
If the term “FOOM” has to be used, it should probably refer to actual rapid progress, not merely to a capability of producing technologies rapidly.
I agree, though I suppose it makes sense if we assume he was actually describing a product of FOOM rather than the process itself.
Creating molecular nanotechnology may be given as homework in the 29th century—but that’s quite a different idea to there being rapid technological progress between now and then. You can attain large capabilities by slow and gradual progress—as well as via a sudden rapid burst.
Yeah it’s a terrible definition. I think the AI-FOOM debate provides a reasonable grounding for the term “FOOM”, though I agree that it’s important to have a concise definition at hand.
In the post, I used FOOM to mean an optimization process optimizing itself in an open-ended way.[1] I assumed that this corresponded to other people’s understanding of FOOM, but I’m happy to be corrected.
I would use the term “singularity” to refer more generally to periods of rapid progress, so e.g. I’d be comfortable saying that FOOM is one kind of process that could lead to a singularity, though not exclusively so. Does this match with the common understanding of these terms?
[1] Perhaps that last “open-ended” clause just re-captures all the mystery, but it seems necessary to exclude examples like a compiler making itself faster but then making no further improvements.
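As a purely illustrative sketch of the distinction that footnote is drawing (my own toy numbers, not anything from the debate): an optimizer whose gains per round shrink towards zero behaves like the compiler that speeds itself up and then stops, while one whose gains compound with capability keeps improving in an open-ended way.

```python
def self_improve(gain, rounds=30, capability=1.0):
    """Iterate capability -> capability + gain(capability). Toy numbers only."""
    trajectory = [capability]
    for _ in range(rounds):
        capability += gain(capability)
        trajectory.append(capability)
    return trajectory

# Diminishing returns: each round closes half the remaining gap to a hard
# ceiling of 10 (quick early wins, then effectively no further improvement;
# the "compiler makes itself faster, then stops" case).
plateau = self_improve(lambda c: 0.5 * (10 - c))

# Compounding returns: each round adds 10% of current capability (growth
# feeds on itself and never settles; the open-ended, FOOM-style case).
runaway = self_improve(lambda c: 0.1 * c)

print("plateau:", [round(x, 2) for x in plateau[::10]])
print("runaway:", [round(x, 2) for x in runaway[::10]])
```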
My understanding of the FOOM process:
An AI is developed to optimise some utility function or solve a particular problem.
It decides that the best way to go about this is to build another, better AI to solve the problem for it.
The nature of the problem is such that the best course of action for an agent of any conceivable level of intelligence is to first build a more intelligent AI.
The process continues until we reach an AI of an inconceivable level of intelligence.
To even get an upper bound we’d probably need some form of strong AI so we could point to it and say “that’s an upper bound.”
We got humans with general intelligence, built up to a stage where they can start learning from an extremely noisy and chaotic physical environment, by a genome that fits on a CD-ROM and can probably be compressed further.
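As a rough sanity check on the CD-ROM figure (my own back-of-envelope arithmetic, assuming roughly 3.2 billion base pairs at 2 bits per base and a 700 MB disc):

```python
# Back-of-envelope for "a genome that fits on a CD-ROM".
base_pairs = 3.2e9        # approximate size of the human genome
bits_per_base = 2         # A, C, G or T
cd_capacity_mb = 700      # typical CD-ROM

genome_mb = base_pairs * bits_per_base / 8 / 1e6
print(f"raw genome: ~{genome_mb:.0f} MB")    # about 800 MB uncompressed
print(f"CD-ROM:      {cd_capacity_mb} MB")
# Uncompressed it slightly overflows a single disc, but the genome is highly
# repetitive, so ordinary compression brings it under 700 MB, which is the
# sense in which it fits "and can probably be compressed further".
```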
A human is specified by a lot more than its genome. You have ribosomes and mitochondria and other starting stuff. And you grow in a very specific womb environment. And if you don’t have certain classes of interaction as a child you won’t end up as a very good general intelligence (isolation or lack of nutrients at early stages can both lead to serious problems). This is directly analogous to my remark about substrates. So yes, you could use a human as some form of possible upper bound for general intelligence, but it isn’t clear if that meets the criteria for fooming, and defining how many bits that is is a lot tougher than just pointing to the genome.
My intuition is that the cellular machinery and prenatal environment are required much more for meeting the biochemical needs of a human embryo than as providers of extra information. The hard part where you need to have a huge digital data string mostly exactly right is in the DNA, while the growth environment is more of a warm soup that has an intricate mixture of stuff but is far too noisy to actually carry anything close to the amount of actionable information the genome does.
Standard notions also sell short the massive amount of very clever work the newborn baby’s brain is already doing when it starts to learn things, work that lets it bootstrap itself to full intelligence. It manages to do this from other people who mostly just give it food every now and then and make random attempts to engage it in conversation, instead of doing the sort of massively intricate and laborious cognitive engineering they’d have to pull off if the newborn brain actually needed the same sort of hard complexity that a programmable general-purpose computer or an ovarian cell without DNA does before it can have a go at turning into an intelligent entity.
I think you’re underselling the developmental power of a culture. Bits of your brain literally don’t grow properly if you’re not raised in a human culture. Ignore a baby at the wrong points in its development and it’ll fail to ever be able to learn any language, feel certain emotions or comprehend some social constraints. Etc.
That is, the hardware grows to meet the software and data, because (as usual) the data/software/hardware divides in the brain are very fuzzy indeed.
(This suggests Kurzweil was plausibly approximately correct that the genome has the information needed to make the brain of a fresh-out-the-womb newborn, but the attention-catching claim he was implicitly making, that an interesting, adult-quality brain could be emulated from the amount of information in a genome, is rather more questionable.)
(And, of course, it brings to mind all manner of horribly unethical experiments to work out the minimum quantity of culture needed to stimulate the brain to grow right, or what the achievable dimensions of “right” are. You just can’t get the funding for the really mad science these days.)
Of course, the baby’s brain goes actively looking for cultural data. I will always treasure the memory of my daughter meowing back at the cat and trying to have a conversation with it and learn its language. Made more fun by the fact that cats only meow like that in the first place as a way of getting humans to do things.
I’ve heard anecdotes about things like children spontaneously developing their own languages even when completely deprived of language in their environment, which would weakly indicate the contrary position. Unfortunately, I don’t know whether to trust said anecdotes—can anyone corroborate?
There are reports of twins bootstrapping off each other, from the principle of noise -> action -> repeat noise, in what is called idioglossia. It seems it’s not actually that great as a language. This NYT blog post suggests the words are more babble than language, which matches how my daughter spoke to the cat: English intonation and facial expressions, meowy babble as words. The Wikipedia article on cryptophasia says “While sources claim that twins and children from multiple births develop this ability perhaps because of more interpersonal communication between themselves than with the parents, there is inadequate scientific proof to verify these claims.”
I’ve heard anecdotes about things like children spontaneously developing their own languages even when completely deprived of language in their environment, which would weakly indicate the contrary position. Unfortunately, I don’t know whether to trust said anecdotes—can anyone corroborate?
There are examples of groups of deaf people developing languages together, but generally over a generation or two, and in large groups. The most prominent such case is Nicaraguan sign language.
That’s not an example of “completely deprived of language in their environment”—the article says “by combining gestures and elements of their home-sign systems …”
Yes, you are correct. There were pre-existing primitive sign systems that started it off. It isn’t an example of language developing completely spontaneously.
I think you’re underselling the developmental power of a culture. Bits of your brain literally don’t grow properly if you’re not raised in a human culture. Ignore a baby at the wrong points in its development and it’ll fail to ever be able to learn any language, feel certain emotions or comprehend some social constraints.
Not denying this at all. Just pointing out that the brain makes astonishingly good use of very noisy and arbitrary input when it does get exposed to other language-using humans, compared to what you’d expect any sort of machine learning AI to be capable of. I’m a lot more impressed by a thing made of atoms getting to be complex enough to start the learning process than by the further input it needs to actually learn the surrounding culture.
Think about it this way: which is more impressive, designing and building a robot that can perceive the world, move around it, and learn things as well as a human growing from infant to adulthood, or pointing out things to the physically finished but still-learning robot, repeating their names, and doing the rest of the regular teaching-about-stuff thing people already do with children?
(For anyone offended at the implied valuation, since Parenting Human Children Is The Most Important Thing, imagine that the robot looks like a big metal spider and therefore doesn’t count as a Parented Child.)
My basic idea here is that the newborn baby crawling about is already a lot more analogous to an AI well on the way to going FOOM than to a bunch of scattered clever pattern recognition algorithms and symbol representation models that just need the overall software architecture design to tie them together, since the things that stop humans from going FOOM might be a lot more related to physiological shortcomings than to the lack of extremely clever further design. The baby has moved on from being formed by the initial hard design information that went into it to discovering in its surroundings the new information it needs to grow. I’d be rather worried about an AI that reaches a similar stage.
My basic idea here is that the newborn baby crawling about is already a lot more analogous to an AI well on the way to going FOOM than to a bunch of scattered clever pattern recognition algorithms and symbol representation models that just need the overall software architecture design to tie them together
I’ll credit that. A baby is a machine for going FOOM.
(Specifically, I’d guess, because so much has to be left out to produce a size of offspring that can be born without killing the mother too often. Hence the appalling, but really quite typical of evolution, hack of having the human memepool be essential to the organism expressed by the genes growing right.)
How much larger do you estimate babies would be if they came pre-installed with the information they appallingly lack?
Presumably at least with a more fully-developed brain. It does quite a bit of growing in the first couple of years.
We got humans with general intelligence, built up to a stage where they can start learning from an extremely noisy and chaotic physical environment, by a genome that fits on a CD-ROM [...]
Humans are not very good self-improving systems, except on geological timescales. They:
Hit a ceiling;
Die quickly.