If you’re interested in anything in particular, I’ll be happy to answer.
I very much appreciate the offer! I can’t think of anything specific, though; the comments of yours that I find most valuable tend to point at “unknown unknowns” — they suggest hypotheses I wouldn’t previously have been able to articulate.
Would trying to become less confused about commitment races before building a superintelligent AI count as a metaphilosophical approach or a decision-theoretic one (or neither)? I’m not sure I understand the dividing line between the two.