I’ve been wondering lately, while reading The Laws of Thought, whether binary decision diagrams (BDDs) might help human reasoning too, the kind that gets formalized as boolean logic, of course.
This article reminded me of your post elsewhere about lazy partial evaluation / explanation-based learning and how both humans and machines use it.
You do manipulate BDDs as a programmer when you deal with if- and cond-heavy code. For example, you reorder tests to make the whole thing cleaner. The code you look at while refactoring is effectively a BDD, and a sequence of snapshots of the code over the course of the refactoring amounts to an equivalence proof.
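As a toy illustration (the function and both versions are invented for this example), here is one boolean function written two ways, differing only in the order the tests are made; the exhaustive check at the end is exactly the kind of equivalence each refactoring snapshot has to preserve:

```python
from itertools import product

# One boolean function (it happens to be majority-of-three) written as
# nested if/else: the decision tree is a BDD with variable order a, b, c.
def f_original(a, b, c):
    if a:
        if b:
            return True
        return c
    if c:
        return b
    return False

# The "refactored" version tests c first: the same function as the BDD
# with variable order c, a, b. Reordering tests is a BDD transformation.
def f_refactored(a, b, c):
    if c:
        if a:
            return True
        return b
    return a and b

# The equivalence check: both versions agree on all 8 inputs. A chain of
# snapshots, each passing this check against the last, is an equivalence proof.
assert all(f_original(a, b, c) == f_refactored(a, b, c)
           for a, b, c in product([False, True], repeat=3))
```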
This is the lazy partial evaluation post, cut and pasted from my livejournal:
Campbell’s Heroic Cycle (very roughly) is when the hero experiences a call to adventure, and endures trials and tribulations, and then returns home, wiser for the experience, or otherwise changed for the better.
Trace-based just-in-time compilation is a technique for simultaneously interpreting and compiling a program. An interpreter interprets the program, and traces (records) its actions as it does so. When it returns to a previous state (e.g. when the program counter intersects the trace), the interpreter has just interpreted a loop. On the presumption that loops usually occur more than once, the interpreter spends some time compiling the traced loop, links the compiled chunk into the interpreted code (this is self-modifying code), and then continues interpreting the (modified, accelerated) program.
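A minimal sketch of that cycle, for a made-up register machine (the bytecode format, opcode names, and both functions are invented for illustration; real tracing JITs are far more involved): interpret instructions, notice a backward jump, "compile" the traced loop body via `exec` into a Python function, and patch it in so later visits to the loop header take the fast path.

```python
def compile_loop(program, start, end):
    # Translate the traced loop body (ops start..end-1, guarded by the
    # jlt at `end`) into Python source and exec it into a function.
    _, r, k, _ = program[end]
    lines = ["def loop(regs):", f"    while regs[{r!r}] < {k}:"]
    for op in program[start:end]:
        if op[0] == "add":
            lines.append(f"        regs[{op[1]!r}] += regs[{op[2]!r}]")
        elif op[0] == "inc":
            lines.append(f"        regs[{op[1]!r}] += 1")
    lines.append(f"    return {end + 1}")   # pc to resume at after the loop
    ns = {}
    exec("\n".join(lines), ns)
    return ns["loop"]

def run(program):
    regs, pc, compiled = {}, 0, {}
    while pc < len(program):
        if pc in compiled:                  # accelerated, self-modified path
            pc = compiled[pc](regs)
            continue
        op = program[pc]
        if op[0] == "set":
            regs[op[1]] = op[2]; pc += 1
        elif op[0] == "add":
            regs[op[1]] += regs[op[2]]; pc += 1
        elif op[0] == "inc":
            regs[op[1]] += 1; pc += 1
        elif op[0] == "jlt":                # jump to target if regs[r] < k
            _, r, k, target = op
            if regs[r] < k:
                if target < pc:             # backward jump: we just traced a loop
                    compiled[target] = compile_loop(program, target, pc)
                pc = target
            else:
                pc += 1
    return regs

prog = [
    ("set", "i", 0),
    ("set", "s", 0),
    ("add", "s", "i"),        # loop header (pc 2)
    ("inc", "i"),
    ("jlt", "i", 10, 2),      # while i < 10, jump back to pc 2
]
```

Running `run(prog)` sums 0 through 9 into register `s`: the first iteration is interpreted, and the backward jump triggers compilation, so the remaining nine iterations run through the compiled chunk.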
Explanation-based learning is an AI technique where an agent learns by executing a general strategy and then, when that strategy is done, whether it succeeded or failed, compressing or summarizing its execution into a new fact or item in the agent’s database.
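Here is a degenerate sketch of that pattern (all names are invented; a real EBL system would generalize the explanation, not merely cache the specific solution): the general strategy is blind breadth-first search over operators, and the "compression" step stores the found plan as a one-step macro in the agent’s database.

```python
from collections import deque

def bfs_plan(start, goal, ops):
    # The general strategy: breadth-first search over operator applications.
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, fn in ops.items():
            nxt = fn(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None                               # strategy failed

class EBLAgent:
    def __init__(self, ops):
        self.ops = ops
        self.macros = {}                       # learned (start, goal) -> plan

    def solve(self, start, goal):
        if (start, goal) in self.macros:       # previously summarized execution
            return self.macros[(start, goal)]
        plan = bfs_plan(start, goal, self.ops)
        if plan is not None:                   # compress the trace into a new
            self.macros[(start, goal)] = plan  # item in the agent's database
        return plan

ops = {"inc": lambda n: n + 1, "dbl": lambda n: n * 2}
agent = EBLAgent(ops)
plan = agent.solve(3, 8)   # found by search: inc to 4, then dbl to 8
```

The second call to `agent.solve(3, 8)` skips the search entirely and replays the stored macro.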
In general, if you want to make progress, it seems (once you phrase it that way) just good sense that, any time you find yourself “back in the same spot”, you should invest some effort into poring over your logs, trying to learn something—lest you be trapped in a do loop. However, nobody taught me that heuristic (or if they tried, I didn’t notice) in college.
What does “back in the same spot” mean? Well, returning from a recursive call, or backjumping to the top of an iterative loop, are both examples. It doesn’t mean you haven’t made any progress; it’s more that you can relate where you are now to where you were, in your memory.
Thanks for the analogy between those two algorithms! I think more could be done in the way of specifying when and how it is useful to go back and reflect, but deciding how to apply these algorithms to everyday thinking is really something that requires empiricism. These are habits to be perfected (or discarded) over longer periods of time.