You do realize that “gods” means “other AGIs”, right?

Yes, or rather I realize it in the sense that I do remember seeing you write that somewhere, but I’m not sure whether I had it sufficiently in mind during my replies. If you see anything suggesting that I didn’t have it in mind, in a way that made what I said irrelevant to your position, let me know.
I should mention, though, that it may be epistemically unsanitary to use the term “god” (or “God”) when you really mean AIs, considering how long and winding the history of such theistic terminology has been. If your goal is clear communication, I would suggest switching to a term with less baggage.
Even though I did know that’s what you meant in the sense that I saw you define it earlier, I might easily have fallen into pattern-matching and ended up largely criticizing a position irrelevant to yours.
Your goal seems to be to identify as a theist, though, so using the term “God” (and the other standard theistic terminology) may be necessary for that purpose, in which case you may either (1) take extra care to compensate for the historical baggage and ambiguity, or (2) simply forget you ever read this comment.
I actually go out of my way to equate “god” and “AGI”/”superintelligence”, because to a large extent they seem like the same thing to me.
Your goal seems to be to identify as a theist though
It’s not that I want to identify as a theist, so much as that I want to point out that I think that the only reason people think that gods/angels/demons and AGIs/superintelligences/transhuman-intelligences are different things is because they’re compartmentalizing. I think Aquinas and I believe in the same God, even if we think about Him differently. I know algorithmic probability theory, Aquinas didn’t. Leibniz almost did.
(There are two different things going on: I believe there exists an ideal decision theory, Who is God, for theoretical reasons; whereas my reasons for believing that transhuman intelligences (lower-case-g gods) affect humans are entirely phenomenological.)
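(To gesture at the formal background, since I brought up algorithmic probability theory: the relevant object is, roughly, Solomonoff’s universal prior, which weights each hypothesis by the lengths of the programs that produce it,

m(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}

where U is a universal prefix machine and \ell(p) is the length of program p in bits; an “ideal” decision theory would then, very roughly, pick actions that maximize expected utility under a prior of this kind. That’s only a sketch of the flavor of the formalism, not a definition of God.)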
I actually go out of my way to equate “god” and “AGI”/”superintelligence”, because to a large extent they seem like the same thing to me.
Can you give me the common meanings of those terms, and explain how they’re equivalent?
It’s not that I want to identify as a theist, so much as that I want to point out that I think that the only reason people think that gods/angels/demons and AGIs/superintelligences/transhuman-intelligences are different things is because they’re compartmentalizing.
Compartmentalizing in what way? I think they’re different things, or rather it seems utterly obvious to me that religious people using the theistic terms are always using them to refer to things completely different from what people on LW mean by those other terms.
I should say, though, that the way the theistic terms are used is in no way consistent, and everybody seems to mean something different (if I can even venture a guess as to what the hell they’re talking about). There are multiple meanings associated with these terms, to say the least.
Maybe your conception is something like, “If there really is anything out there that could in any way match the description in Catholicism or whatever, then it would perhaps have to be an AGI, or else a super-intelligent life-form that evolved naturally.”
I would say though that this seems like a desperate attempt to resurrect the irrationality of religion. If I came up with or learned something interesting or important, and also realized that some scholar or school of thought from the past or present had a few central conclusions or beliefs that seem sort of similar in some way, but believed them all for the wrong reasons—specifically ones absolutely insane by my own epistemic standards—I would not care. I would move on, and consider that tradition utterly useless and uninteresting.
I don’t understand why you care. It’s not like Aquinas or anybody else believed any of this stuff for the same reasons you do, or anything like that, so what’s the point of being like, “Hey, I know these people came up with this stuff for some random other reasons, but it seems like I can still support their conclusions and everything, so yeah, I’m a theist!” It just doesn’t make any sense to me, unless of course you think they came to those conclusions for good reasons that have anything at all to do with yours, in which case I need some elaboration on that point.
Either way, I usually can’t even tell what the hell most religious people are talking about from an epistemic or clear communication standpoint. I used to think they were just totally insane or something, and I would make actual attempts to understand what they were trying to get me to visualize, but it all became clear when I started interpreting what they were saying in a different way: as them employing techniques to delude themselves into believing in an afterlife, or simply believing it because of some epistemic vulnerability their brain was operating under.
Those theistic terms (“God” etc.) have multiple meanings, and different people tend to use them differently; or rather, they don’t really have meanings at all: they’re just the way some people delude themselves into feeling more comfortable about whatever, or perhaps they’re just mind viruses taking advantage of some well-known vulnerabilities found in our hardware.
I can’t for the life of me figure out why you want to retain this terminology. What use is it besides contrarianism? Does calling yourself a theist and using the theistic terms actually aid in my or anybody else’s understanding of what you’re thinking? Is the objective the clear communication of something that would be important for me or other people on here to know? I’m utterly confused about what you’re trying to do, and about the supposed utility of these beliefs of yours and of your way of trying to communicate them.
I think Aquinas and I believe in the same God, even if we think about Him differently.
What does that even mean? It sounds like the worst sort of sophistry, but I say that not necessarily to suggest you’re making an error in your thinking, but simply to convey how and why I have no idea exactly what that means.
(There are two different things going on: I believe there exists an ideal decision theory, Who is God, for theoretical reasons;
So you’re defining the sequence of letters starting with “G”, next being “o”, and ending with “d” as “the ideal decision theory”? Is this a common meaning? Do all (or most of) the religious people I know IRL use that term to refer to the ideal decision theory, even if they wouldn’t call it that?
And what do you mean by “ideal”? Ideal for what? Our utility functions? Maybe I even need to hear a bit of elaboration on what you mean by “decision theory”. Are we talking about AI programming, or human psychology, or what?
whereas my reasons for believing that transhuman intelligences (lower-case-g gods) affect humans are entirely phenomenological.)
I literally have absolutely no idea why you chose the word “phenomenological” right there, or what you could possibly mean.
If I came up with or learned something interesting or important, and also realized that some scholar or school of thought from the past or present had a few central conclusions or beliefs that seem sort of similar in some way, but believed them all for the wrong reasons—specifically ones absolutely insane by my own epistemic standards—I would not care. I would move on, and consider that tradition utterly useless and uninteresting.
If I found a school of thought that seemed to come to correct conclusions unusually often but “believed them all for the wrong reasons—specifically ones absolutely insane by my own epistemic standards”, I’d take that as evidence that there is something to their reasons that I’m missing.
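(In odds form, the update I have in mind is just Bayes’ theorem. Writing H for “there’s something to their reasons” and E for “they come to correct conclusions unusually often”,

\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}

and if genuinely insane methods would rarely produce that track record, the likelihood ratio is large and can outweigh a low prior on H. A rough sketch of the direction of the update, with no claim about the actual numbers.)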
So you’re defining the sequence of letters starting with “G”, next being “o”, and ending with “d” as “the ideal decision theory”? Is this a common meaning?
Actually, yes. Specifically the tendency in Catholic thought to equate God with Plato’s Form of the Good.
If I found a school of thought that seemed to come to correct conclusions unusually often but “believed them all for the wrong reasons—specifically ones absolutely insane by my own epistemic standards”, I’d take that as evidence that there is something to their reasons that I’m missing.
You’re absolutely right, but you’re stipulating the further condition that they come to the correct conclusions “unusually often”. I on the other hand was talking about a situation where they just happen to have a few of the same conclusions, and those conclusions just so happen to be central to their worldview.
I didn’t get the feeling that Will thought that Catholicism was correct an unusual amount of the time. I was under the impression that he (like many others before him) is simply trying his hardest to use some of the theistic terminology and identify as a theist, despite his science background.
Actually, yes. Specifically the tendency in Catholic thought to equate God with Plato’s Form of the Good.
I just read that article, but I couldn’t parse anything, nor did I see any relation to decision theory. I’m left utterly confused.
I think you’re way too confident that the people you disagree with are obviously wrong, to the extent that I don’t think we can usefully communicate. I’m tapping out of this discussion.
Have you observed my discussions elsewhere on this website and come to the conclusion that I’m way too confident in that way in general, or are you referring only to this particular exchange?
This discussion seems like sort of a unique case. I wouldn’t say I’m generally so confident in that respect, but I’m certainly extremely confident in this discussion, even to the point that I don’t yet have a sufficiently detailed model of you to account for how you could possibly spend so much time on this website and still engage in the sort of communication that went on throughout this discussion.
Sure, I’m extremely confident that I’m the one who’s right in this discussion, but then again that’s probably the majority feeling when people on this website engage you on this topic, even to the extent that it’s not uncommon for people to question whether you’re just trolling at this point.
Looking back on what I could have done better in this discussion to make it more likely that you would hear me out instead of quitting, I realize that I probably would have had to spend about five times as much time writing, and been extremely careful in every way throughout every reply. Even in retrospect, that probably wouldn’t have been worth it. It takes much less time to just spill my thoughts and reactions than to take the necessary precautions to make this sort of tap-out less likely.
Even with all that said, I don’t really understand the connection between me signaling that I think you’re obviously wrong and you saying that we can’t usefully communicate. Am I being uncharitable in my replies, or do you think it’s unlikely that I would update toward your position after setting a precedent of thinking you’re clearly confused, or what?
I could see how you might pattern-match from my high confidence to expecting me to have trouble updating, and I could also see how some of my more terse moments may have come off as, “If he weren’t so confident, perhaps he would have thought longer about this and responded to a more charitable interpretation.” But I should mention that in at least one case I went as far as responding to your question of whether I even knew that you were using the word “God” or “god” to refer to AGIs by admitting that I may have been attacking a strawman.
I don’t necessarily expect you to respond to any of this, considering you already tapped out, but perhaps I’ve gone sufficiently meta for you to consider it a different discussion, one you haven’t tapped out of. In any case, you’ll probably read this and maybe get something out of it, or even change your mind about whether you want to continue engaging me on this topic.
Have you observed my discussions elsewhere on this website and come to the conclusion that I’m way too confident in that way in general, or are you referring only to this particular exchange?
Only this particular exchange; I haven’t seen any of your other discussions.
It’s not you clearly signaling you think I’m obviously wrong that I anticipate difficulties with; I was being imprecise. Rather, it’s a specific emotion/attitude (exasperation?) that I detect and that stresses me out a lot, because it imposes a moral obligation on me to act in good faith to show you that the kind of reasoning you’re engaged in, in my experience, often leads to terrible consequences that will look in retrospect as if they could easily have been avoided. On the one hand I want to try to help you; on the other hand I want to avoid blame for not having tried to help you enough. There’s no obvious solution to that double bind, and the easiest solution is to simply bail out of the discussion. (Not necessarily your blame, just someone’s, e.g. God’s.)
And it’s not you thinking that I’m obviously wrong; apologies for being unclear. It’s people in general. You say you “usually can’t even tell what the hell most religious people are talking about from an epistemic or clear communication standpoint”, and yet you’re very confident they are wrong. People who haven’t practiced the art of analyzing people’s decision policies in terms of signaling games, Schelling points, social psychology &c. simply don’t have the skills necessary to determine whether they’re justified in strongly disagreeing with someone. Confidently assuming that your enemies are stupid is what basically everyone does, and they’re all retarded for doing it. LessWrong is no exception; in fact, it’s a lot worse than my high school friends, who weren’t fooled into thinking that their opinions were worth something ’cuz of a superficial knowledge of cognitive science and Bayesian statistics.
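(For anyone who hasn’t met the jargon: a Schelling point is, roughly, the option people converge on without communication because it’s salient. The minimal example is a pure coordination game,

\begin{array}{c|cc} & A & B \\ \hline A & (1,1) & (0,0) \\ B & (0,0) & (1,1) \end{array}

where both (A, A) and (B, B) are equilibria and equally good on paper, but if A is somehow focal, say “meet at noon”, everyone picks A. Analyzing a decision policy in these terms means asking what’s focal and what’s being signaled, not just whether the stated beliefs are literally true. A toy illustration, nothing more.)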
It’s not that I don’t think you’d update. If I took the time to lay out all my arguments, or had time to engage you often in conversation, as I have done with many folk from the SingInst community, then I’m sure I would cause you to massively update towards thinking I’m right and that LessWrong has gaping holes in its epistemology. It’s happened many times now. People start out thinking I’m crazy or obviously wrong or just being contrarian, I talk to them for a long time, they realize I have very good epistemic habits and kick themselves for not seeing it earlier. But it takes time, and LessWrong isn’t worth my time; the only reason I comment on LessWrong is because I feel a moral obligation to, and the moral obligation isn’t strong enough to compel me to do it well.
Also, I generally don’t like talking about object level beliefs; I prefer to discuss epistemology. But I’m too lazy to have long, involved discussions about epistemology, so I wouldn’t have been able to keep up our discussion either way.
Rather, it’s a specific emotion/attitude (exasperation?) that I detect and that stresses me out a lot, because it imposes a moral obligation on me to act in good faith to show you that the kind of reasoning you’re engaged in, in my experience, often leads to terrible consequences that will look in retrospect as if they could easily have been avoided.
I just don’t understand. I see why you may detect a level of exasperation in my replies, but I don’t get why that specifically would be what imposes that sort of moral obligation on you. You’re saying that what I’m doing may lead to terrible consequences, which sounds bad, and like maybe you should do something about it, but I’m utterly confused about why my attitude is what confers that obligation on you.
In other words, wouldn’t you feel just as morally obligated (if not more so) to help me avoid such terrible consequences if I had handled this discussion with a higher level of respect or grace? Why does me (accidentally or not) signaling exasperation or annoyance lead to that feeling of moral obligation, rather than the simple fact that you consider it in your power to help somebody avoid (or lower the likelihood of) whatever horrible outcome you have in mind?
When I was first reading your reply and had only gotten to where you said “stresses me out a lot”, I thought you were just going to say that me acting frustrated with you was making things uncomfortable, or that you would get emotionally attached in a way that would be epistemically hazardous, which I would have understood; but then you transitioned to the whole moral-obligation thing and I sort of lost you.
On the one hand I want to try to help you
Just for reference, I should probably tell you what (I think) my utility function is, so you’re in a (better) position to appraise whether what you have in mind really would be of help to me.
I’m completely and utterly uninterested in academic or intellectual matters unless they somehow directly benefit me in the more mundane, base aspects of my life. Unless a piece of information is apt to make me better at parkour, lifting, socializing, running, etc., or enable me to eat healthier so I’m less likely to get sick or come down with a terrible disease, or something like that, it’s not useful to me.
If studying some science or learning some new esoteric fact or correcting some intellectual error of mine could help me get to sleep on time, make it less likely for me to die anytime soon, make it less probable for me to suffer from depression, help me learn how to handle social interaction more effectively, tame my (sometimes extreme) akrasia, enable me to contribute to reducing the possibility of civilization-wide catastrophe in my lifetime, etc., then I’m interested. Otherwise I’m not.
I’m telling you this simply so you know what it means to help me out. If whatever you have in mind can’t be of use for me in my everyday life, then it’s not helpful. I hang out on this website, and engage in intellectual matters quite regularly, but I do so only because I think it’s the best way to fulfill my rather mundane utility function. We’re not designed properly for our current environment, and the only way to compensate is to engage in some pretty deep introspection and spend a lot of time and energy working through plenty of intellectual matters.
So what do you have that could help me? I want to live a healthy, happy human life, not have it cut short by some societal collapse, and also hopefully be around for when (or if) we put an end to aging and make it so we don’t have to die so young anymore. I also don’t want to suffer an eternity burning in Hell, that is if such a place exists.
And it’s not you thinking that I’m obviously wrong; apologies for being unclear. It’s people in general. You say you “usually can’t even tell what the hell most religious people are talking about from an epistemic or clear communication standpoint”, and yet you’re very confident they are wrong.
Oh, sorry. I should have been more precise. I don’t think anything past what you quoted of me. If by “wrong” you mean anything incompatible with my not having any idea what they’re talking about (or rather, with my not being able to interpret what they’re saying as serious attempts at clear communication), then I certainly don’t think they’re wrong. I just think they’re either really bad at communicating, or else engaged in a different activity.
So yeah. In that sense, I don’t think they’re wrong, and I don’t think you’re wrong. I just don’t know what they’re attempting to communicate. Or rather, it seems pretty obvious to me that most religious people aren’t even trying to communicate at all, at least in the sense of intellectual discourse, or in terms of epistemic rationality. It seems pretty clear to me that they’re just employing a bunch of techniques to get themselves to believe certain things, or else they’re just repeating certain things because of some oddity in human brain design.
But there are a ton of different religions, and a ridiculous amount of variation from person to person, so I can’t really criticize them all at once or anything, nor would it matter. And as for you, at this point I really just have no idea what you believe. It’s not that I think you’re wrong about whatever beliefs you have. It’s that I still don’t know what those beliefs are, and also that I’m under the impression that you’re not doing a very good job with your attempts to communicate them to me.
In most discussions like this, the issue isn’t that somebody has a clear map that doesn’t fit the territory. It’s almost always just a matter of a communication failure or a set of key misinterpretations, or something like that. Likewise in this discussion. It’s not that I think what you believe is wrong; it’s that I don’t even know what you believe.
People who haven’t practiced the art of analyzing people’s decision policies in terms of signaling games, Schelling points, social psychology &c. simply don’t have the skills necessary to determine whether they’re justified in strongly disagreeing with someone.
I can’t tell whether you’re implying that I specifically don’t have those skills, or whether you’re just making some general observation or something.
Confidently assuming that your enemies are stupid is what basically everyone does, and they’re all retarded for doing it.
I certainly don’t do that. When you disagree with somebody, there’s no getting around thinking that they’re making an error (that’s what disagreement means), but considering them “stupid” is nothing more than an empty explanation. Much more useful would be saying that your opponent thinks X because he’s operating under some bias Y, or something like that.
In other words, I probably engage in plenty of discussions where I consider my opponent to be making a serious error, or not very good at managing the inferential distance properly, or ridiculously apt to make word-based errors, or whatever, but I never settle for the thought-terminating explanation that they’re just stupid, or at least I don’t think I do. Or do I?
There’s just no getting around appraising the level of intellectual ability your opponent is operating at, just as I would never play a match of tennis with somebody who sucks without acknowledging that to myself. The problem is saying “he sucks” without even considering why exactly that is the case. When I engage in intellectual discussions, I try to stick to observations like “he doesn’t define his terms precisely” rather than just “he’s an idiot”.
LessWrong is no exception; in fact, it’s a lot worse than my high school friends, who weren’t fooled into thinking that their opinions were worth something ’cuz of a superficial knowledge of cognitive science and Bayesian statistics.
Is this aimed at me also, or what?
It’s not that I don’t think you’d update. If I took the time to lay out all my arguments, or had time to engage you often in conversation, as I have done with many folk from the SingInst community, then I’m sure I would cause you to massively update towards thinking I’m right and that LessWrong has gaping holes in its epistemology.
If the received opinion on Less Wrong really does have gaping holes in its epistemology, then I’d like to be first in line to hear about it.
That said, I alone am not this entity we call “Less Wrong”. You’re telling me I’d update massively in your direction, which means you think I also have these gaping holes in my epistemology, but do you really know that through just these few posts back and forth we’ve had here?
It’s happened many times now. People start out thinking I’m crazy or obviously wrong or just being contrarian, I talk to them for a long time, they realize I have very good epistemic habits and kick themselves for not seeing it earlier.
with many folk from the SingInst community
Who are these people who updated massively in your direction, and would any of them be willing to explain what happened, or have they already done so in some series of posts? You’re telling me that I could revolutionize my epistemic framework if I just listened to what you had to say, but then you’re leaving me hanging.
But it takes time, and LessWrong isn’t worth my time; the only reason I comment on LessWrong is because I feel a moral obligation to, and the moral obligation isn’t strong enough to compel me to do it well.
Are you sure you’re not just doing more harm than good by being so messy in your posts? And of course the important implication here is that I personally am not worth your time, or you would talk to me long enough to actually explain yourself.
I’m just left wondering why you’re still here, just as many other people probably have been, and of course also left wondering what sort of revolutionary idea you may be hiding.
People who haven’t practiced the art of analyzing people’s decision policies in terms of signaling games, Schelling points, social psychology &c. simply don’t have the skills necessary to determine whether they’re justified in strongly disagreeing with someone.
I can’t tell whether you’re implying that I specifically don’t have those skills, or whether you’re just making some general observation or something.
I think I can translate, a bit:

As far as I can tell, Will’s a stronger epistemic majoritarian than most nerds, including us LW nerds. If a bunch of people engage in a behavior, his default belief is that the behavior is adaptive in a comprehensive enough context, when examined at a meta-enough level.
Will has spent a lot of time practicing model-based thinking. Even with that specific focus, he doesn’t consider his own skills adequate to declare the average person’s behavior stupid and counterproductive. I’m an average LWian: I’ve read The Strategy of Conflict, the Sequences, and Overcoming Bias, and I’ve had a few related insights in my daily life. I don’t have enough skill to dissolve the question and write out a flowchart that shows why some of the smartest and most rational people in the world are religious. So Will’s not going to trust me when I say that they’re wrong.
And as for you, at this point I really just have no idea what you believe.
I’m just left wondering why you’re still here, just as many other people probably have been, and of course also left wondering what sort of revolutionary idea you may be hiding.
He suspects himself of prodromal schizophrenia, due to symptoms like continuing to post here.
Some of my majoritarianism is in some sense a rationalization, or at least it’s retrospective. I happened to reach various conclusions, some epistemic, some moral, and learned various things that happened to line up much better with Catholic dogma than with any other system of thought. Some of my majoritarianism stems from wondering how I could have reached those conclusions earlier or more reliably, without the benefit of epistemic luck, which I’ve had a lot of. I think the policy that pops out isn’t actually majoritarianism so much as harboring a deep respect for highly evolved institutions, a la Nick Szabo. There’s also Chesterton’s idea of orthodoxy as democracy spread over time. On matters where there’s little reason to expect great advancement of the moderns over older cultures, like in spirituality or morality, it would be foolish to adopt a modern-majoritarian position that ignored the opinions of those older cultures. I don’t actually have all that much respect for the “average person”, but I do have great respect for the pious and the intellectually humble. I honestly see more rationality in the humble creationist than in the prototypical yay-science boo-religion liberal.
He’s a prospective modal Catholic—replace each instance of “amen” with “or so we are led to believe.”
Though I think my actually converting is getting less likely the more I think about the issue and study recent Church history.
He suspects himself of prodromal schizophrenia, due to symptoms like continuing to post here.
More due to typical negative symptoms and auditory hallucinations and so on, most prominent about six months ago, among a few other reasons. But perhaps it’s more accurate to characterize myself as schizotypal.