I asked because that’s a reasonable one-line approximation of my own tentative theory of agency. I’m happy to hear that other people have similar intuitions! Alas that there isn’t a fleshed-out paper I can go read. Do you have any… nonreputable sources to link me to that I might benefit from reading?
I didn’t pick it up from any reputable sources. The white paper on military theory that coined the term was written many years ago, and since then I’ve only seen that explanation tossed around informally in various places, never investigated with serious rigor. OODA loops seem to be seldom discussed on this site, which I find kinda weird, but a good full explanation of them can be found here: Training Regime Day 20: OODA Loop
I tried to figure out on my own whether executing an OODA loop is a necessary & sufficient condition for something to be an intelligent agent (part of an effort to determine the smallest & simplest thing that could still be considered true AGI), and I found that while executing OODA loops seems necessary for something to have meaningful agency, doing so is not sufficient for it to be an intelligent agent.
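To make the “not sufficient” half concrete, here’s a minimal sketch in Python (the Thermostat class and all its names are my own illustration, not from the linked post or any paper) of a device that executes a complete Observe–Orient–Decide–Act cycle on every step, yet clearly isn’t an intelligent agent:

```python
import random

# Hypothetical illustration (names are mine, not from any source):
# a thermostat that executes every stage of an OODA loop yet is
# obviously not an intelligent agent.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heater_on = False

    def observe(self, read_sensor) -> float:
        # Observe: sample the environment.
        return read_sensor()

    def orient(self, temperature: float) -> float:
        # Orient: interpret the observation against an internal model.
        # Here the "model" is just an error signal, which is the point:
        # nothing forces this stage to involve any real understanding.
        return self.setpoint - temperature

    def decide(self, error: float) -> bool:
        # Decide: pick an action based on the orientation.
        return error > 0.0  # run the heater iff it is too cold

    def act(self, run_heater: bool) -> None:
        # Act: change the environment.
        self.heater_on = run_heater

    def step(self, read_sensor) -> None:
        # One complete Observe -> Orient -> Decide -> Act cycle.
        self.act(self.decide(self.orient(self.observe(read_sensor))))


thermostat = Thermostat(setpoint=20.0)
thermostat.step(lambda: 18.0 + 4.0 * random.random())  # noisy sensor
print("heater on:", thermostat.heater_on)
```

Every stage of the loop is present, so the “executes an OODA loop” condition is satisfied, but no one would call this thing an AGI; that gap is exactly what I mean by necessary-but-not-sufficient.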
Thank you for your interest, though! I wish I could just reply with a link, but I don’t think the paper I would link to has been written yet.