I’d mostly rather not respond to this, but others have made the case that defending the conversational norms we’d like to see in the community is worth the fight, so I will.
We’ve previously gotten to the heart of our disagreement in other forums, PDV, as being about what I would call the primacy of epistemology: namely, that I believe we cannot develop a fundamentally “correct” epistemology while you believe we can. If I am mischaracterizing this, though, please correct me, as that discussion happened before the advent of double crux, so we lacked a formal process to ensure our good faith. I’m happy to engage in a double crux now to see what shakes out of it, especially since I always seem to figure out new things from dialogue with you. If that’s something you’d like to do, feel free to kick it off with a separate top-level comment on this post or as its own post.
But good faith is really why I think it necessary to write a response. I will leave it for others to judge my success, but I always strive to give my interlocutors the benefit of the doubt, to disagree in good faith, and to do whatever I can to be maximally charitable to positions I think are confused. I’m willing to bet some people would even accuse me of performing too much hermeneutics on the arguments of others so that I can interpret them in ways I agree with. It’s thus hard for me even to conclude this, but your responses to me often appear to be given in bad faith.
For example, above you accuse me and my “whole crowd” of “feeling superior and ignoring counterargument”. You say we are “a cancer in the rationalist community”. As best I can tell this is a position based on sentiment, and I have been happy to admit, since the very first time I publicly posted about these ideas, that they risk creating negative sentiment and that I am still working out how to talk about them without doing so. Certainly sentiment is important for many things, but one thing it is not really important for is developing what we might call “rational” episteme, i.e. the sort of classical, logical arguments of rationalism. Yet your comments are not framed as a criticism of the sentiment I am creating; they instead seem directed at assessing the truth of my arguments. Thus I’m left with little choice but to consider these statements in the class of ad hominem.
Now I’m not opposed to ad hominem and snark per se: they can be fun, and I’m happy to be knocked down for my mistakes. But you present them as if they were part of your argument for why I am wrong, and this is, as far as I can tell, contra the values you yourself seem to favor (a cluster we might call, generically, “LW rationality”). And unfortunately doing this is not simply noise to be filtered out from the rest of the argument: it sets an example that it is okay to respond to arguments via the social side channel rather than addressing the ideas head-on. That behavior creates the sort of environment that makes people hesitant to speak for fear of social reprisal.
I am sufficiently esteemed, by myself and others, to weather such remarks, but directed at those with less power and esteem this behavior would read as bullying, and I at least don’t want LW to be the sort of place where that’s okay, especially since I suspect that sort of behavior is much of what killed LW 1.0; it is certainly the kind of thing that made me lose interest in it.
So I’m happy to engage in discussion around whether there is a difference between episteme and gnosis (the Anglo-Austrian-Analytic stance says “practically, no”; the Continental stance says “yes”), around how the presentation of my ideas may produce negative sentiment, and even around how these ideas impact the rationalist mission (though I don’t think that mission is well defined and agreed upon). Really, I’m happy to discuss anything in good faith! What I’m not happy to do is let slide arguments presented in what I must reluctantly conclude is bad faith, because they are weeds in the garden we are trying to tend here.
There are places for bad faith, for trolling, for bullying. I would like LW to not be one of those places.
namely that I believe we cannot develop a fundamentally “correct” epistemology
U wot m8
If that’s an “everything is true”, then I think I disagree with it. I agree in a very vacuous sense, but I think all useful, powerful reasoning processes are reachable from each other in the world we actually live in. There don’t seem to be local minima in reasoning space once you simplify the world hard enough.
You can’t have a “correct” epistemology in some great oracle sense, but you only get to have one overarching one anyway. Meta-rationality is still implemented on your same old brain.
What I really should say is that I believe we cannot develop a consistent epistemology that is also complete. That is, there will be facts that cannot be reckoned within a consistent epistemology, and an epistemology that admits such facts will be inconsistent. I believe this follows directly from the incompleteness theorems, insofar as we consider epistemologies formal systems (I do, because I deal with the stuff that could get in the way as part of the phenomenological layer). So you are right that I mean an epistemology cannot be correct in the way we let oracles be correct: complete and consistent.
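To be concrete about the theorem I am leaning on, here is its standard form; note that mapping “epistemology” onto the formal system F is my own assumption, not part of the theorem itself:

```latex
% Gödel's first incompleteness theorem, standard statement.
% Treating an epistemology as the formal system F is my assumption.
% If F is consistent, effectively axiomatized, and strong enough to
% interpret elementary arithmetic, then there is a sentence G_F with:
\exists G_F \;\; \big( F \nvdash G_F \;\wedge\; F \nvdash \lnot G_F \big)
% So F is incomplete. Contrapositive: if F decides every sentence
% (completeness), then F is inconsistent, not effectively
% axiomatized, or too weak to interpret arithmetic.
```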
I think this is worth stressing because thinking as if oracles are possible, and as if they are indeed the thing we should aspire to be like, seems natural within human thought, even though being one is a computationally impossible achievement within our universe as best we can tell. I read an implicit assumption to this effect in much writing, rationalist or not.
With sufficient computational resources we can act as if we are oracles, but only if we restrict ourselves to problems simple enough that the resources needed, generally of order exponential or more in the problem size, are physically available to us. I expect, though, that no matter how many resources we have, we will always be interested in questions for which we lack the resources to play oracle, so addressing this limit is important both now, while we are very much limited by our brains, and in the future, when we will at least be limited by the amount of reachable energy in the universe.
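As a toy sketch of the blowup I mean (the code and names here are purely my own illustration): playing oracle for Boolean satisfiability by exhaustive enumeration costs 2^n evaluations for n variables, so each added variable doubles the bill, and the pretense collapses long before the questions stop being interesting.

```python
from itertools import product

def oracle_sat(formula, n_vars):
    """Brute-force 'oracle' for satisfiability: checks every one of the
    2**n_vars truth assignments. Feasible only while that much work is
    physically available to us."""
    for assignment in product([False, True], repeat=n_vars):
        if formula(assignment):
            return True, assignment  # found a satisfying assignment
    return False, None  # exhausted the whole space: unsatisfiable

# Example: (x0 or x1) and (not x0 or x2) over three variables.
clause = lambda a: (a[0] or a[1]) and (not a[0] or a[2])
print(oracle_sat(clause, 3))  # 2**3 = 8 checks: tractable
# At n = 300, 2**300 checks exceeds any physically available resources.
```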
Thus we are stuck trading off between various epistemologies, the same way that in mathematics we may have to use different formal systems to address different questions, as when we choose whether or not to adopt the axiom of choice and, in so doing, must introduce heuristics to keep ourselves away from the places where everything is true, because the system no longer keeps those things out on its own. Of course this is all part of a single computation with an epistemological telos implemented in our same old brains, but that is distinct from an epistemology even if it approximates one.
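A standard instance of that mathematical tradeoff (my choice of example, not one from the original exchange): the axiom of choice and the axiom of determinacy each settle questions the other cannot, and they are mutually inconsistent over ZF, so one picks per problem rather than once and for all.

```latex
% Two mutually incompatible strengthenings of ZF (illustrative example):
\begin{align*}
\mathrm{ZF} + \mathrm{AC} &\vdash \text{every set can be well-ordered (and Banach--Tarski)}\\
\mathrm{ZF} + \mathrm{AD} &\vdash \text{every set of reals is Lebesgue measurable}\\
\mathrm{ZF} + \mathrm{AD} &\vdash \lnot\mathrm{AC}
\end{align*}
% Neither extension is "the correct one"; which we adopt depends on
% the question at hand, and heuristics police the edges.
```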
The falsity of this argument follows directly from the computability of physics.

This seems contra our current best understanding of physics: specifically, that fundamental physics operates in a nondeterministic fashion from our perspective because there is uncomputable stuff happening. Just what that looks like appears to be literally unknowable, but we have made some decent inferences as to what might be going on; hence MWI and other metaphysical theories.
Your belief system is flawed, built on embracing not-even-wrong statements as truth. This makes every conclusion you draw suspect, and when you have confidently stated enough conclusions that bear out that suspicion, it is no longer reasonable to presume they are correct until proven otherwise. That does not constitute ad hominem, merely updating priors in response to evidence.