> I have enjoyed your writings both on LessWrong and on your personal blog. I share your lack of engagement with EA and with Hanson (although I find Yudkowsky’s writing very elegant and so felt drawn to LW as a result). If not the above, which intellectuals do you find compelling, and what makes them so by comparison to Hanson/Yudkowsky?
Thanks.
My main issues with the early writing on LessWrong were:
- uncertainty is often more Knightian than Bayesian, which makes different things appropriate
- some criticisms that David Chapman later made seemed obvious
- unseen correlations are difficult to account for, and some suggestions I saw make that problem worse (see the sketch after this list)
- sometimes “bias” exists for a reason
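To make the unseen-correlations point concrete, here is a minimal Monte Carlo sketch (a toy model with invented numbers, not anything from LessWrong): two estimates whose errors share a hidden common cause are pooled as if they were independent, and the pooled estimate ends up far less precise than the independence assumption predicts.

```python
# Toy Monte Carlo: averaging two estimates whose errors share a hidden
# common cause, versus the spread you'd predict assuming independence.
# All numbers here are illustrative assumptions.
import random
import statistics

random.seed(0)
rho, sigma = 0.9, 1.0          # assumed error correlation and per-estimate sd
averaged_errors = []
for _ in range(100_000):
    shared = random.gauss(0, sigma)              # hidden common cause
    e1 = rho**0.5 * shared + (1 - rho)**0.5 * random.gauss(0, sigma)
    e2 = rho**0.5 * shared + (1 - rho)**0.5 * random.gauss(0, sigma)
    averaged_errors.append((e1 + e2) / 2)        # error of the naive pooled estimate

actual_sd = statistics.stdev(averaged_errors)
assumed_sd = sigma / 2**0.5                      # what independence would predict
print(f"assumed sd: {assumed_sd:.3f}, actual sd: {actual_sd:.3f}")
# actual sd is about sigma * sqrt((1 + rho) / 2) ~= 0.97, not 0.71:
# the pooled estimate is far less precise than the independence assumption says.
```

This is the sense in which advice to stack up many “independent” lines of evidence can make things worse: the more correlated the lines actually are, the more overconfident the aggregate becomes.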
My main issue with the community was that it seemed to have negative effects on some people and fewer benefits than claimed.
My main issue with Yudkowsky was that he seemed overconfident about some things he didn’t seem to understand that well.
> If not the above, which intellectuals do you find compelling

When I was in elementary school, people asked me who my role model was and I’d reply “Feynman”, but I don’t think that was true in the sense they meant.
It’s a common human tendency to want to become exactly like some role model, like a parent or celebrity, but I think it’s healthier to imitate specific aspects of people, and only to a limited degree. Yes, maybe there are reasons for everything that you don’t understand, but maybe what you’d be imitating is a fictional character. I started reading Feynman in 3rd grade, but it wasn’t until later that I realized how different the person was from the character in the books. Kids can try to copy Elon Musk or PewDiePie, but that’s unlikely to work out for them.
So, in my case, your question is similar to asking what books I liked. The answer would be something like: “The Man Without Qualities, Molecular Biology of the Cell, March’s Advanced Organic Chemistry, Wikipedia, Wikipedia, Wikipedia...” But to quote that famous philosopher Victor Wooten:

> every action under certain circumstances and for certain people may actually be a stepping stone to spiritual growth
> uncertainty is often more Knightian than Bayesian, which makes different things appropriate

Put another way: if you could apply Bayesianism in a general enough setting, you could trivially reduce epistemic uncertainty to zero in at least some environments, so Bayesianism would be superfluous.

Specifically, the problem arises because Bayesianism assumes logical omniscience, and logical omniscience is tantamount to having at least a halting oracle, here meaning a machine that can do infinite computation in finite time. With such a machine, it is trivial to know every recursively enumerable set of logical truths with certainty: simply enumerate the set of theorems. And since many problems (though not all) are either strict subsets of recursively enumerable sets or equivalent to the set of recursively enumerable Turing machines, uncertainty would not come up at all, and Bayesianism would again be superfluous.
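To make the enumeration step concrete, here is a minimal sketch using a hypothetical toy proof system (one axiom and one rewrite rule, both invented for illustration). For a finite reasoner, theoremhood is only semi-decidable: the search can return a certain “yes” but never a certain “no”. The “infinite computation in finite time” assumption is exactly what would eliminate that residual uncertainty.

```python
# Minimal sketch: enumerating the theorems of a toy formal system.
# The axiom "A" and the single rewrite rule are invented for illustration.
# For us, membership is only semi-decidable (certain "yes", never a certain
# "no"); a machine doing infinite computation in finite time would run the
# whole enumeration and answer every query with certainty.

def enumerate_theorems():
    """Breadth-first enumeration of everything derivable from the axiom."""
    axioms = ["A"]
    rules = [lambda s: s + "B"]          # toy rule: from s, derive s + "B"
    frontier = list(axioms)
    seen = set()
    while frontier:
        t = frontier.pop(0)
        if t in seen:
            continue
        seen.add(t)
        yield t
        frontier.extend(rule(t) for rule in rules)

def is_theorem(statement, budget=10_000):
    """Certain True if found within budget; None means 'still uncertain'."""
    for _, t in zip(range(budget), enumerate_theorems()):
        if t == statement:
            return True
    return None   # unresolved for a finite reasoner; resolved for the oracle

print(is_theorem("ABB"))   # True: derived after two rule applications
print(is_theorem("C"))     # None: no finite search makes us certain it's absent
```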