Maybe you’ve heard this before, but the usual story is that the goal is to clarify conceptual questions that arise in both abstract and more practical settings. We are moving toward considering such things, though: the point of the post I linked was to reexamine old philosophical questions using logical inductors, which are computable.
Further, my intuition from studying logical induction is that practical systems will be “close enough” to satisfying the logical induction criterion that many of the conclusions will carry over (much of this is intuition one could also get from online learning theory). For example, in the logical induction decision theory post, I expect most or all of the individual points made using logical inductors to apply to practical systems, and the fact that logical inductors are well-defined can be used to test further ideas that build on them.
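For readers who haven’t seen it, here is a rough paraphrase of the criterion from the logical induction paper, with the technical details elided:

$$\overline{\mathbb{P}} \text{ satisfies the logical induction criterion relative to a deductive process } \overline{D} \iff \text{no efficiently computable trader exploits } \overline{\mathbb{P}} \text{ relative to } \overline{D},$$

where a trader “exploits” the market if the set of plausible values of its holdings over time is bounded below but unbounded above.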
Yes, I replied to it :)
Unfortunately, I don’t expect to have more Eliezer-level explanations of these specific lines of work any time soon. Eliezer has a fairly large amount of content on Arbital that hasn’t seen LW levels of engagement either, though I know some people who are reading it and benefiting from it. I’m not sure how LW 2.0 is coming along, but it might be good to have a subreddit for content similar to your recent post on betting. There is an audience for it, as that post demonstrated.