Want to echo Nate’s points!
One particular thing I wanted to emphasize, which I think you can see as a thread running through this forum (the modal UDT work in particular is relevant), is that it’s useful to build formal toy models where the math is fully specified, so that you can prove theorems about what exactly an agent would do (or, sometimes, write a program that figures it out for you). When you write things out that explicitly, it becomes clearer, for example, that you need to assume a decision problem is “fair” (extensional) to get certain results, as Nate points out (or, if you don’t assume it, someone else can look at your result and point out that it isn’t true as stated). In your post, you’re using “logical expectations” that condition on something being true, without defining exactly what all of this means, and as a result you can argue about what these agents will do but not actually prove it. That’s certainly a reasonable part of the research process, but I’d like to encourage you to turn your work into fully specified models, so that you can actually prove theorems about them.
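To make the “fully specified” point concrete, here is a minimal sketch of what that can look like; the names, actions, and payoffs below are hypothetical illustrations, not anything from the modal UDT papers. The idea is just that an extensional (“fair”) decision problem can be written as a map from the agent’s action to a payoff, and once that map is spelled out explicitly, a few lines of code can compute what a payoff-maximizing agent does instead of leaving it to informal argument.

```python
# A minimal sketch, assuming a trivial stand-in problem (the names and payoffs
# are made up for illustration, not taken from the modal UDT work): an
# extensional ("fair") decision problem is just a map from the agent's action
# to a payoff, and once that map is fully specified, the agent's behaviour can
# be computed rather than argued about.

ACTIONS = ["a1", "a2", "a3"]

def fair_problem(action: str) -> float:
    """Extensional problem: the payoff depends only on the action actually taken."""
    payoffs = {"a1": 3.0, "a2": 10.0, "a3": 5.0}
    return payoffs[action]

def best_action(problem) -> str:
    """'Figure it out for you': evaluate every action and return the highest-payoff one."""
    return max(ACTIONS, key=problem)

if __name__ == "__main__":
    # Because the model is fully specified, this line settles what the agent does.
    print(best_action(fair_problem))  # -> "a2"
```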
Hi Benja, thanks for commenting!
I agree that it’s best to work on fully specified models. Hopefully I will soon write about my own approach to logical uncertainty via complexity theory.