Perhaps you could put something in the title or first line of the post to signal that it will be (a) only of interest to people who enjoy mathematical philosophy and (b) not indicative of the kind of writing normally seen here. The fact that similar posts of yours have attracted readers to LW is, I acknowledge, evidence against my previous comment.
Also please do omit the term obvious from future articles. It’s infuriating (for me at least) to read technical articles and not understand something the author has labeled “obvious”.
If possible you might want to motivate why whatever you have written provides insight into rationality. EY, I think, at least eventually always does this with his writings.
Perhaps you could put something in the title or first line of the post to signal that it will be (a) only of interest to people who enjoy mathematical philosophy and (b) not indicative of the kind of writing normally seen here.
Done, sort of.
Also please do omit the term obvious from future articles.
Okay,
It’s infuriating (for me at least) to read technical articles and not understand something the author has labeled “obvious”.
whuh? Your past comments indicate that you’re a college professor and have written a book on game theory! I’m surprised… Okay, point taken.
If possible you might want to motivate why whatever you have written provides insight into rationality.
Most of my technical posts are about the mathematics of decision theory and AI, not human rationality. That is also a traditional LW topic that predates me. In particular, I’m very interested in AIs that try to prove theorems, and this post is the sort of theoretical result that could be relevant to those. Also it’s relevant to my next post which will be about decision theory, if I don’t refute that result first :-)