I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.
My current research is related to applications of algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and a personal favorite of my posts here) is the Sherlockian abduction master list, which is a crowdsourced project seeking to make “Sherlock Holmes” style inference feasible by compiling observational cues. Give it a read and see if you can contribute!
See my personal website colewyeth.com for an overview of my interests and work.
I’ve finally gotten around to reading the paper by Seidenfeld et al. that you cited. I am also surprised that this isn’t more frequently discussed as an approach to logical/computational uncertainty.
I agree that the consistency/coherence requirement must be relaxed in a descriptive theory of choice behavior; humans cannot be expected to take perfectly coherent actions. It may be difficult or impossible to define even an “unnatural” sense in which we are coherent, and it’s not clear that doing so would be desirable. I see that this is connected to the strategy of reducing incoherence by Bayesian updating, which is neat. It also reminds me of something Gilboa wrote in section 13.3.2 (page 108) of these lecture notes. Gilboa’s approach is to define stricter coherent probabilities/preferences which are not “complete”, as distinguished from those elicited through choice behavior; the consequence is usually some form of imprecise probability theory, which, as we’ve discussed, usually seems arbitrary. I am initially less skeptical of Seidenfeld et al.’s approach.
However, I think something may be missing here: the model of Bayesian learning used to reduce incoherence requires a coherent likelihood function, which permits certain kinds of learning about mathematical constants (through Monte-Carlo methods) but perhaps not the most important and powerful kinds. We should be able to leverage other known (or suspected) mathematical facts to constrain our expectations even when our beliefs are slightly or even seriously incoherent. I think the missing piece is an algorithm that does this. That is, by allowing incoherence but hoping to correct it, we’re leaving out the “core engine” of cognition, with Garrabrant induction as the only fleshed-out possibility I am aware of. Perhaps the implication is that it is just Bayes all the way down: YOU (as Seidenfeld addresses the bounded reasoner) decide what to think about and how to update in a sort of meta-cognitive Bayesian fashion, by reasoning about what types of beliefs and updates make sense given theory or experience. It’s not clear to me what sort of prior beliefs and other algorithmic details allow this process to converge, which of course ties in very closely with this post! So I think the central question is still left open by Seidenfeld et al.
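To make the Monte-Carlo point concrete, here is a minimal sketch of the kind of learning a coherent likelihood does permit. This is my own toy illustration, not Seidenfeld et al.’s construction: treat p = π/4 as an unknown “hit probability,” sample uniform points from the unit square, and perform a conjugate Beta-Binomial update; the prior, sample size, and the choice of π are all just assumptions for the example.

```python
import random

# Toy sketch: learning about the mathematical constant pi via hit-or-miss
# Monte Carlo. The "data" are uniform draws from the unit square; a hit means
# the point falls inside the quarter circle, which happens with probability pi/4.

def sample_hits(n: int) -> int:
    """Draw n points uniformly from [0,1]^2 and count those with x^2 + y^2 <= 1."""
    return sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)

# Conjugate Beta(a, b) prior over p = pi/4; the Binomial likelihood is coherent,
# so the posterior is simply Beta(a + hits, b + misses).
a, b = 1.0, 1.0          # uniform prior over p
n = 100_000
hits = sample_hits(n)
a_post, b_post = a + hits, b + (n - hits)

posterior_mean_p = a_post / (a_post + b_post)
print("posterior-mean estimate of pi:", 4 * posterior_mean_p)
```

The likelihood here is genuinely coherent (each sample really is Bernoulli(π/4)), which is exactly what licenses the update; the constraints I have in mind above, known or suspected mathematical facts, don’t come packaged as a sampleable process in this way.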