I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.
My current research concerns applications of algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and a personal favorite among my posts here) is the Sherlockian abduction master list, a crowdsourced project seeking to make “Sherlock Holmes” style inference feasible by compiling observational cues. Give it a read and see if you can contribute!
See my personal website colewyeth.com for an overview of my interests and work.
Some errata:
The bat thing might have just been Thomas Nagel; I can’t find the source I thought I remembered.
At one point I said that LLMs forget everything they thought previously between predicting (say) token six and token seven, and have to work from scratch. Because of the way the attention mechanism works, it is actually a little more complicated (see the top comment from hmys). What I said is (I believe) still right overall, but I would now state that detail less strongly.
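For the curious, here is a minimal sketch (plain numpy, made-up dimensions, single head, no positional encoding) of one way to see why generation is not fully from scratch: the keys and values computed for earlier tokens are typically cached and reused, so each new prediction recomputes only the newest token's query and attends back over the stored prefix.

```python
import numpy as np

d = 8                      # head dimension (illustrative)
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

k_cache, v_cache = [], []  # persists across token positions

def attend(x_t):
    """One attention step for the newest token embedding x_t."""
    q = x_t @ W_q
    k_cache.append(x_t @ W_k)   # cached: not recomputed at later steps
    v_cache.append(x_t @ W_v)
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = K @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()    # softmax over the whole prefix
    return weights @ V          # mixes information from earlier tokens

for t in range(7):              # e.g. predicting tokens six then seven
    out = attend(rng.normal(size=d))
```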
Apparently it was Hofstadter who said that a human-level chess AI would rather talk about poetry.