Yeah it’s a terrible definition. I think the AI-FOOM debate provides a reasonable grounding for the term “FOOM”, though I agree that it’s important to have a concise definition at hand.
In the post, I used FOOM to mean an optimization process optimizing itself in an open-ended way.[1] I assumed that this corresponded to other people’s understanding of FOOM, but I’m happy to be corrected.
I would use the term “singularity” to refer more generally to periods of rapid progress, so, e.g., I’d be comfortable saying that FOOM is one kind of process that could lead to a singularity, though not the only one. Does this match the common understanding of these terms?
[1] Perhaps that last “open-ended” clause just re-captures all the mystery, but it seems necessary to exclude examples like a compiler making itself faster but then making no further improvements.
My understanding of the FOOM process:
1. An AI is developed to optimise some utility function or solve a particular problem.
2. It decides that the best way to go about this is to build another, better AI to solve the problem for it.
3. The nature of the problem is such that the best course of action for an agent of any conceivable level of intelligence is to first build a more intelligent AI.
4. The process continues until we reach an AI of an inconceivable level of intelligence.
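A minimal toy sketch of that loop, assuming a lot: the Agent class, its scalar “capability”, build_successor, and the expected-value comparisons are all hypothetical placeholders I’m inventing to show the structure of “delegate to a smarter successor while that looks like the best move”, not a claim about how such a system would actually work.

```python
# Toy illustration of the loop sketched above. Everything here is hypothetical:
# "capability" is a made-up scalar, and the expected-value heuristics are arbitrary.

class Agent:
    def __init__(self, capability: float):
        self.capability = capability

    def value_of_solving_directly(self) -> float:
        # Stand-in for how well this agent expects to do by attacking the problem now.
        return self.capability

    def value_of_delegating(self) -> float:
        # Stand-in for how well it expects to do by first building a smarter successor,
        # minus the cost of doing so.
        return self.capability * 1.5 - 1.0

    def build_successor(self) -> "Agent":
        # Step 2: the crucial, entirely hand-waved part.
        return Agent(self.capability * 1.5)


def foom(agent: Agent, max_rounds: int = 100) -> Agent:
    """Steps 3-4: keep delegating while a successor looks like the better move."""
    for _ in range(max_rounds):
        if agent.value_of_delegating() <= agent.value_of_solving_directly():
            break  # solving the problem directly now beats building yet another AI
        agent = agent.build_successor()
    return agent


if __name__ == "__main__":
    final = foom(Agent(capability=3.0))
    print(f"Final capability after the loop: {final.capability:.3g}")
```

With these particular (arbitrary) payoffs the condition in step 3 keeps holding, so the loop only stops at the round cap; tweaking them so that delegating quickly stops beating direct work gives the self-terminating case instead, i.e. something like the compiler in the footnote that improves itself once and then stops, rather than an open-ended FOOM.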