That’s exactly the impression that I got. That it was awkward phrasing, because you just didn’t know how to phrase it—but that it wasn’t a coincidence that you defaulted to that particular awkward phrasing. It seems that, on some level, you were surprised to see people outside lesswrong discussing “lesswrong ideas.” Even though, intellectually, you know that most of the good ideas on lesswrong didn’t originate here. Don’t be too hard on yourself. I probably have the opposite problem, where, as a meta-contrarian, I can’t do anything but criticize lesswrong.
If you want to avoid sounding like a cheerleader, I think the best rule of thumb is to just not name-drop. It’s fine to get a lot of ideas from Eliezer and lesswrong, but communicate those ideas in a way that doesn’t trace them back to lesswrong. This should come naturally, because you shouldn’t believe everything you hear on lesswrong anyway. Confirm what you hear with an independent source, and then you can cite that source instead of lesswrong, just like you would with information you learned on wikipedia.
I get that. What I’m really wondering is how this extends to probabilistic reasoning. I can think of an obvious analog. If the algorithm assigns zero probability that it will choose $5, then when it explores the counterfactual hypothesis “I choose $5”, it gets nonsense when it tries to condition on the hypothesis. That is, for all U,
P(utility=U | action=$5) = P(utility=U and action=$5) / P(action=$5) = 0/0
is undefined. But is there an analog for this problem under uncertainty, or was my sketch correct about how that would work out?
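To make the 0/0 problem concrete, here is a minimal sketch in Python. The probabilities are hypothetical placeholders (a policy that deterministically takes the $10), not anything from a real decision algorithm:

```python
import math

# Hypothetical policy: the agent assigns zero probability to taking the $5.
p_action = {"$5": 0.0, "$10": 1.0}            # P(action)
p_joint = {("$5", 5): 0.0, ("$10", 10): 1.0}  # P(action, utility)

def conditional_utility(u, action):
    """P(utility=u | action) = P(utility=u and action) / P(action)."""
    numerator = p_joint.get((action, u), 0.0)
    denominator = p_action[action]
    if denominator == 0.0:
        return float("nan")  # 0/0: the conditional is undefined
    return numerator / denominator

# Conditioning on the zero-probability action yields no answer at all,
# while conditioning on the action actually taken works fine.
print(conditional_utility(5, "$5"))    # nan
print(conditional_utility(10, "$10"))  # 1.0
```

The point is just that once P(action=$5) hits exactly zero, no amount of arithmetic recovers a value for the counterfactual; the question is whether smearing the policy with uncertainty (so every action gets nonzero probability) fully dissolves this, or merely hides it.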