Is epistemic logic useful for agent foundations?
The title isn’t a rhetorical question; I’m actually looking for answers. This summer, I’ll have the opportunity to attend a summer school on logic, language and information. Whether or not I go depends to a significant extent on whether what they’ll be teaching (particularly epistemic logic, but also some other topics in logic and language) will be useful for AI safety research. Here is a summary of epistemic logics, and here are the courses I’ll be able to take. I’ve already taken classes in first-order logic, but right now I’m uncertain about the value of doing these extra courses.
Reasons to think learning epistemic logic will be useful for agent foundations:
MIRI’s work relies heavily on high-level concepts in logic
Epistemic logic is particularly concerned with statements about knowledge and belief, which seem highly relevant to reasoning about agents
Learning about epistemic logic is probably useful for thinking about other forms of logic
Reasons to think it won’t be useful:
As far as I can tell, it doesn’t appear in MIRI’s research guide, in any of their papers, or in the Sequences
It seems like epistemic logic is mostly non-probabilistic and deals with a fundamentally different sort of knowledge from probabilistic Bayesian knowledge (see the sketch below), which increases my credence that it’s the sort of philosophy that isn’t going to be of much practical use
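To make that contrast concrete, here is a toy sketch of standard Kripke-style semantics (my own illustration in Python, not taken from any course material): an agent knows p exactly when p holds in every world it considers possible, so knowledge is all-or-nothing rather than a graded credence.

```python
# Toy Kripke model: knowledge is all-or-nothing. An agent "knows" a fact
# iff the fact holds in every world the agent considers possible.

# Each world is labelled by which atomic facts are true in it.
worlds = {
    "w1": {"p": True,  "q": True},
    "w2": {"p": True,  "q": False},
    "w3": {"p": False, "q": True},
}

# accessible[w] = the worlds the agent cannot distinguish from w.
accessible = {
    "w1": {"w1", "w2"},
    "w2": {"w1", "w2"},
    "w3": {"w3"},
}

def knows(world, fact):
    """K(fact) holds at `world` iff fact is true in every accessible world."""
    return all(worlds[v][fact] for v in accessible[world])

print(knows("w1", "p"))  # True: p holds in both w1 and w2
print(knows("w1", "q"))  # False: q fails in w2
# A Bayesian agent in the same situation would instead report a credence,
# e.g. P(q) = 0.5 under a uniform prior over {w1, w2}.
```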
The entire point of research is to do something new. That becomes easier if you are attacking a problem using a different set of tools. A valuable but neglected tool is like an intellectual arbitrage opportunity. See also. The potential downside of this approach is that others may not see the merit in your work if you’re operating outside of their paradigm—science advances one funeral at a time, etc.
I think you’ll find it useful regardless of how much it relates to MIRI’s program: epistemology is foundational, and having a better understanding of it is wildly useful if you have an interest in anything that comes remotely close to touching philosophical questions. In fact, my own take on most existing AI safety research is that it doesn’t sufficiently address foundational questions of epistemology, choosing instead to make certain implicit, strong assumptions about the discoverability of truth. As a result, you can add a lot of value by more carefully questioning how we know what we think we know as it relates to solving AI safety problems.
Yes. I had a course on Logic and Knowledge Representation last semester (October to January). In parallel, I attended an Autumn School about AI in late October, which included two two-hour courses on Epistemic Logic. The speaker went super fast, so those 4 hours were ultra-productive (here are my notes in French). However, I did not fully understand everything, so I was happy to make my knowledge more solid with practical exercises, exams, homework, etc. during my Logic and Knowledge Representation course. The two approaches (autumn school and semester course) were complementary and gave me a good grasp of logic in general, and of epistemic logic in particular.
This semester (February to May), I had a course on Multi-Agent Systems, and knowing about epistemic logic, and modal logic more generally, came in handy. When a robot’s environment changes, it needs to take that change into account and integrate it into its representation of the world. When two agents communicate, each having a representation of the other agent’s knowledge is essential to avoid sending redundant or, worse, contradictory information.
A large part of multi-agent systems is about communication, or how to harmonize global knowledge, so knowing about epistemic logic is advantageous. In this article I talk about the gossip problem in the context of a project we did for the Multi-Agent Systems course. The teacher who introduced me to the gossip problem was the same one who taught the Modal Logic/Epistemic Logic course. Epistemic logic is useful for getting the full picture of communication protocols and the lower bounds on the information needed to communicate secrets.
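To give a concrete feel for the gossip problem, here is a minimal Python sketch (my own, assuming the standard formulation: each of n agents starts with one unique secret, and a phone call shares every secret the two participants know). The classic schedule below reaches full knowledge in 2n - 4 calls for n >= 4, which is the known optimum for that setting.

```python
# Gossip problem sketch: n agents each start with one secret; a call merges
# the two participants' secret sets. The schedule below uses 2n - 4 calls
# for n >= 4.

def gossip(n):
    knowledge = [{i} for i in range(n)]  # knowledge[i] = secrets agent i knows
    calls = []

    def call(a, b):
        shared = knowledge[a] | knowledge[b]
        knowledge[a] = knowledge[b] = shared
        calls.append((a, b))

    if n == 2:
        call(0, 1)
    elif n == 3:
        call(0, 1); call(1, 2); call(0, 2)
    elif n >= 4:
        # Phase 1: agents outside the four "hubs" 0-3 report their secret to hub 0.
        for i in range(4, n):
            call(0, i)
        # Phase 2: four calls among the hubs; afterwards every hub knows all secrets.
        call(0, 1); call(2, 3); call(0, 2); call(1, 3)
        # Phase 3: the outside agents each call a hub again to learn the rest.
        for i in range(4, n):
            call(1, i)

    assert all(k == set(range(n)) for k in knowledge)  # everyone knows everything
    return calls

print(len(gossip(6)))  # 8 calls = 2*6 - 4
```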
A little anecdote to end this comment: last week, a mathematician/logician came to a meetup I organized on AI safety. At the beginning it was only the two of us, and he explained how he went from logic to AI: “You know, all my life I have been studying very theoretical problems. To be very specific, group theory was the most applied math I have ever done. But, now that I study AI, having studied intractable/undecidable problems for decades, I know almost instantaneously which theory will work for AGI and which won’t.” We ended up having a discussion about logic and knowledge representation, and we could not have had this chat without my having taken some courses on epistemic logic.