Additional ideas for cognitive tools:

Let the user describe an intuitive concept, maybe with some additional examples, and let the system return one or several type signatures that match the intuitive concept and examples.
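To make this slightly more concrete, here is a minimal toy sketch of the “examples in, candidate signature out” part: it infers a structural type for each example pair and anti-unifies them, so shared structure is kept and mismatches become type variables. The anti-unification approach, the helper names and the “deduplicate” example are all made up for illustration; a real tool would also use the natural-language description to search over and rank candidate signatures.

```python
# Toy sketch: infer a type per example, then anti-unify into a candidate signature.
from dataclasses import dataclass

@dataclass(frozen=True)
class TCon:          # a type constructor, e.g. list[int]
    name: str
    args: tuple = ()

@dataclass(frozen=True)
class TVar:          # a type variable introduced by anti-unification
    name: str

def infer(value):
    """Infer a simple structural type for a Python value."""
    if isinstance(value, bool):
        return TCon("bool")
    if isinstance(value, int):
        return TCon("int")
    if isinstance(value, str):
        return TCon("str")
    if isinstance(value, list):
        elem = anti_unify([infer(v) for v in value]) if value else TVar("a")
        return TCon("list", (elem,))
    raise TypeError(f"unsupported example value: {value!r}")

_vars = {}  # maps a tuple of clashing types to a shared type variable

def anti_unify(types):
    """Least general generalization: keep common structure, abstract mismatches."""
    first = types[0]
    if all(t == first for t in types):
        return first
    if all(isinstance(t, TCon) and t.name == first.name
           and len(t.args) == len(first.args) for t in types):
        args = tuple(anti_unify([t.args[i] for t in types])
                     for i in range(len(first.args)))
        return TCon(first.name, args)
    key = tuple(types)
    if key not in _vars:
        _vars[key] = TVar(chr(ord("a") + len(_vars)))
    return _vars[key]

def show(t):
    if isinstance(t, TVar):
        return t.name
    return t.name + (f"[{', '.join(show(a) for a in t.args)}]" if t.args else "")

def signature(concept, examples):
    """Return a candidate type signature for (input, output) example pairs."""
    arg = anti_unify([infer(i) for i, _ in examples])
    res = anti_unify([infer(o) for _, o in examples])
    return f"{concept} : {show(arg)} -> {show(res)}"

print(signature("deduplicate",
                [([3, 1, 3, 2], [3, 1, 2]),
                 (["b", "a", "b"], ["b", "a"])]))
# -> deduplicate : list[a] -> list[a]
```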
For hairy conceptual/strategic problems, map out which decisions about plans/operationalisation are compatible or incompatible with each other, perhaps in disjunctive normal form (e.g. “we can fulfill axioms A1 and A2, but not A3, or A1, A3 and A4”). (I stole this idea from the Tsvi Benson-Tilsen interview on the Bayesian Conspiracy podcast.)
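As a tiny sketch of what such a map could look like mechanically: given a list of axioms and known conflicts (sets of axioms that cannot all hold at once), enumerate the maximal consistent subsets and print them as a disjunction of conjunctions. The axioms and conflicts below are placeholders, not a real worked-out case.

```python
# Toy sketch: maximal consistent subsets of axioms, printed DNF-style.
from itertools import combinations

axioms = ["A1", "A2", "A3", "A4"]
conflicts = [            # each entry: a set of axioms that cannot hold together
    {"A1", "A2", "A3"},
    {"A2", "A4"},
]

def consistent(subset):
    return not any(c <= subset for c in conflicts)

# Collect consistent subsets that cannot be extended without hitting a conflict.
maximal = []
for r in range(len(axioms), 0, -1):
    for combo in combinations(axioms, r):
        s = set(combo)
        if consistent(s) and not any(s < m for m in maximal):
            maximal.append(s)

print(" OR ".join("(" + " AND ".join(sorted(m)) + ")" for m in maximal))
# -> (A1 AND A3 AND A4) OR (A1 AND A2) OR (A2 AND A3)
```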
Try to find the algorithm for proving mathematical theorems (maybe some variant of MCTS?), and visualize the tree to find possible alternative assumptions/related theorems:
“Perhaps they are checking many instances? Perhaps they are white-box testing and looking for boundaries? Could there be some sort of “logical probability” where going down possible proof-paths yields probabilistic information about the final target theorem, maybe in some sort of Monte Carlo tree search of proof-trees? Does sleep serve to consolidate & prune & replay memories of incomplete lines of thought, finetuning heuristics or intuitions for future attacks and getting deeper into a problem (perhaps analogous to expert iteration)? Reading great mathematicians like Terence Tao discuss the heuristics they use on unsolved problems²⁶, they bear some resemblances to computer science techniques.”
—Gwern Branwen, “The Existential Risk of Math Errors”, 2019
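To gesture at what such a search could look like, here is a toy Monte Carlo tree search over “proof attempts”: each node is a partial derivation, each rollout is a cheap randomized attempt to finish it, and the visit/reward statistics are the “probabilistic information” about which branch looks promising. The “theorem” here is only a stand-in (reach a target number with a fixed set of rewrite steps within a step budget); this is an illustration of the shape of the idea, not a proposal for an actual theorem prover.

```python
# Toy MCTS over "derivations": reach TARGET from START using rewrite steps.
import math
import random

START, TARGET, MAX_DEPTH = 1, 26, 8
STEPS = {"+3": lambda n: n + 3, "*2": lambda n: n * 2, "-1": lambda n: n - 1}

class Node:
    def __init__(self, value, depth, parent=None, step=None):
        self.value, self.depth = value, depth
        self.parent, self.step = parent, step
        self.children, self.visits, self.reward = [], 0, 0.0

    def expand(self):
        for name, f in STEPS.items():
            self.children.append(Node(f(self.value), self.depth + 1, self, name))

    def ucb(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return (self.reward / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(value, depth):
    """Finish the attempt with random steps; reward closeness to the target."""
    while depth < MAX_DEPTH and value != TARGET:
        value = random.choice(list(STEPS.values()))(value)
        depth += 1
    return 1.0 if value == TARGET else 1.0 / (1.0 + abs(value - TARGET))

def search(iterations=2000):
    root = Node(START, 0)
    for _ in range(iterations):
        node = root
        # Selection: walk down by UCB until a leaf.
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: grow the tree one level (unless out of step budget).
        if node.depth < MAX_DEPTH and node.visits > 0:
            node.expand()
            node = random.choice(node.children)
        # Simulation + backpropagation.
        r = rollout(node.value, node.depth)
        while node is not None:
            node.visits += 1
            node.reward += r
            node = node.parent
    return root

root = search()
for child in root.children:
    print(f"first step {child.step}: visits={child.visits}, "
          f"mean reward={child.reward / child.visits:.2f}")
```

Visualizing which subtrees soak up visits (and which get abandoned) is the part that might surface alternative assumptions or nearby theorems.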
On an unrelated note, the more conceptual/theoretical/abstract alignment research is, the less dangerous the related tools look to me: I usually don’t want my AI system to be thinking about ML architectures and scaling laws and how to plan better, but abstract mathematical theorems seem fine.
Though “try to solve mathematics algorithmically” is maybe a bit far-fetched.