My primary interest is determining what the “best” thing to do is, especially via creating a self-improving institution (e.g., an AGI) that can do just that. My philosophical interests stem from that pragmatic desire. I think there are god-like things that interact with humans and I hope that’s a good thing but I really don’t know. I think LessWrong has been in Eternal September mode for a while now so I mostly avoid it. Ask me anything, I might answer.
Why do you believe that there are god-like beings that interact with humans? How confident are you that this is the case?
I believe so for reasons you wouldn’t find compelling, because the gods apparently do not want there to be common knowledge of their existence, and thus do not interact with humans in a manner that provides communicable evidence. (Yes, this is exactly what a world without gods would look like to an impartial observer without firsthand incommunicable evidence. This is obviously important but it is also completely obvious so I wish people didn’t harp on it so much.) People without firsthand experience live in a world that is ambiguous as to the existence or lack thereof of god-like beings, and any social evidence given to them will neither confirm nor deny their picture of the world, unless they’re falling prey to confirmation bias, which of course they often do, especially theists and atheists. I think people without firsthand incommunicable evidence should be duly skeptical but should keep the existence of the supernatural (in the everyday sense of that word, not the metaphysical sense) as a live hypothesis. Assigning less than 5% probability to it is, in my view, a common but serious failure of social epistemic rationality, most likely caused by arrogance. (I think LessWrong is especially prone to this kind of arrogance; see IlyaShpitser’s comments on LessWrong’s rah-rah-Bayes stance to see part of what I mean.)
As for me, and as to my personal decision policy, I am ninety-something percent confident. The scenarios where I’m wrong are mostly worlds where outright complex hallucination is a normal feature of human experience that humans are for some reason blind to. I’m not talking about normal human memory biases and biases of interpretation, I’m saying some huge fraction of humans would have to have a systemic disorder on the level of anosognosia. Given that I don’t know how we should even act in such a world, I’m more inclined to go with the gods hypothesis, which, while baffling, at least has some semblance of graspability.
Can you please describe one example of the firsthand evidence you’re talking about?
Also, I honestly don’t know what the everyday sense of supernatural is. I don’t think most people who believe in “the supernatural” could give a clear definition of what they mean by the word. Can you give us yours?
Thanks.
I realize it’s annoying, but I don’t think I should do that.
I give a definition of “supernatural” here. Of course, it doesn’t capture all of what people use the word to mean.
Why not?
Where does the 5% threshold come from?
Psychologically “5%” seems to correspond to the difference between a hypothesis you’re willing to consider seriously, albeit briefly, and a hypothesis that is perhaps worth keeping track of by name but not worth the effort required to seriously consider.
(nods) Fair enough.
Do you have any thoughts about why, given that the gods apparently do not want their existence to be common knowledge, they allow selected individuals such as yourself to obtain compelling evidence of their presence?
I don’t have good thoughts about that. There may be something about sheep and goats, as a general rule but certainly not a universal law. It is possible that some are more cosmically interesting than others for some reason (perhaps a matter of their circumstances and not their character), but it seems unwise to ever think that about oneself; breaking the fourth wall is always a bold move, and the gods would seem to know their tropes. I wouldn’t go that route too far without expectation of a Wrong Genre Savvy incident. Or, y’know, delusionally narcissistic schizophrenia. Ah, the power of the identity of indiscernibles. Anyhow, it is possible such evidence is not so rare, especially among sheep whose beliefs are easily explained away by other plausible causes.
Do you think the available evidence, overall, is so finely balanced that somewhere between 5% and 95% confidence (say) is appropriate? That would be fairly surprising given how much evidence there is out there that’s somewhat relevant to the question of gods. Or do you think that, even in the absence of dramatic epiphanies of one’s own, we should all be way more than 95% confident of (something kinda like) theism?
I think I understand your statement about social epistemic rationality, but it seems to me that a better response to the situation (where you think there are many, many bits of evidence for one position but lots of people hold a contrary one) is to estimate your probabilities in the usual way while being aware that this is an area in which either you or many others have gone badly wrong, and therefore to be especially watchful for errors in your thinking, surprising new evidence, etc.
No, without epiphanies you probably shouldn’t be more than 95% confident, I think; with the institutions we currently have for epistemic communication, and with the polarizing nature of the subject, I don’t think most people can be very confident either way. So I would say yes, I think between 5% and 95% would be appropriate, and I don’t think I share your intuition that that would be fairly surprising, perhaps because I don’t understand it. Take cold fusion, say, and ask a typical college student studying psychology how plausible they think it is that it has been developed or will soon be developed, et cetera. I think they should give an answer between 5% and 95% for most variations on that question. I think the supernatural is in that reference class. You have in mind a better reference class?
I agree the response you propose in your second paragraph is good. I don’t remember what I was proposing instead but if it was at odds with what you’re proposing then it might not be good, especially if what I recommended requires somewhat complex engineering/politics, which IIRC it did.
What sort of hallucinations are we talking about? I sometimes have hallucinations (auditory and visual) with sleep paralysis attacks. One close friend has vivid hallucinatory experiences (sometimes involving the Hindu gods) even outside of bed. It is low status to talk about your hallucinations so I imagine lots of people might have hallucinations without me knowing about it.
I sometimes find it difficult to tell hallucinations from normal experiences, even though my reasoning faculty is intact during sleep paralysis and even though I know perfectly well that these things happen to me. Here are two stories to illustrate.
Recently, my son was ill and sleeping fitfully, frequently waking my wife and me. After one restless episode late in the night he had finally fallen asleep, snuggling up to my wife. I was trying to fall asleep again, when I heard footsteps outside the room. “My daughter (4 years old) must have gotten out of bed”, I thought, “she’ll be coming over”. But this didn’t happen. The footsteps continued and there was a light out in the hall. “Odd, my daughter must have turned on the light for some reason.” Then through the door came an infant, floating in the air. V orpnzr greevsvrq ohg sbhaq gung V jnf cnenylmrq naq pbhyq abg zbir be fcrnx. V gevrq gb gbhpu zl jvsr naq pel bhg naq svanyyl znantrq gb rzvg n fhoqhrq fuevrx. Gura gur rkcrevrapr raqrq naq V fnj gung gur yvtugf va gur unyy jrer abg ghearq ba naq urneq ab sbbgfgrcf. “Fghcvq fyrrc cnenylfvf”, V gubhtug, naq ebyyrq bire ba zl fvqr.
Here’s another somewhat older incident: I was lying in bed beside my wife when I heard movement in our daughter’s room. I lay still wondering whether to go fetch her—but then it appeared as if the sounds were coming closer. This was surprising since at that time my daughter didn’t have the habit of coming over on her own. But something was unmistakeably coming into the room and as it entered I saw that it was a large humanoid figure with my daughter’s face. V erpbvyrq va ubeebe naq yrg bhg n fuevrx. Nf zl yrsg unaq frnepurq sbe zl jvsr V sbhaq gung fur jnfa’g npghnyyl ylvat orfvqr zr—fur jnf fgnaqvat va sebag bs zr ubyqvat bhe qnhtugre. Fur’q whfg tbggra bhg bs orq gb srgpu bhe qnhtugre jvgubhg zr abgvpvat.
The two episodes play out very similarly but only one of them involved hallucinations.
I’ve sort of forgotten where I was going with this, but if Will would like to tell us a bit more about his experiences I would be interested.
You are arguing, if I understand you aright, (1) that the gods don’t want their existence to be widely known but (2) that encounters with the gods, dramatic enough to demand extraordinary explanations if they aren’t real, are commonplace.
This seems like a curious combination of claims. Could you say a little about why you don’t find their conjunction wildly implausible? (Or, if the real problem is that I’ve badly misunderstood you, correct my misunderstanding?)
Could a future neuroscience in principle change this, or do you have a stronger notion of incommunicability?
It is possible the beings in question could have predicted such advances and accounted for them. But it seems some sufficiently advanced technology, whether institutional or neurological, could make the evidence “communicable”. But perhaps by the time such technologies are available, there will be many more plausible excuses for spooky agents to hide behind. Such as AGIs.
Incommunicable in the anthropic sense of formally losing its evidence-value when transferred between people, in the broader sense of being encoded in memories that can’t be regenerated in a trustworthy way, or in the mundane sense of feeling like evidence but lacking a plausible reduction to Bayes? And—do you think you have incommunicable evidence? (I just noticed that your last few comments dance around that without actually saying it.)
(I am capable of handling information with Special Properties but only privately and only after a multi-step narrowing down.)
There might be anthropic issues; I’ve been thinking about that more over the last week. The specific question I’ve been asking is ‘What does it mean for me and someone else to live in the same world?’ Is it possible for gods to exist in my world but not in others, in some sense, if their experience is truly ambiguous w.r.t. supernatural phenomena? From an almost postmodern heuristic perspective this seems fine, but ‘the map is not the territory’. But do we truly share the same territory, or is more of their decision-theoretic significance in worlds that to them look exactly like mine, but aren’t mine? Are they partial counterfactual zombies in my world? They can affect me, but am I cut off from really affecting them? I like common sense but I can sort of see how common sense could lead to off-kilter conclusions. Provisionally I just approach day-to-day decisions as if I am as real to others as they are to me. Not doing so is a form of “insanity”, abstract social uncleanliness.
The memories can be regenerated in a mostly trustworthy way, as far as human memory goes. (But only because I tried to be careful; I think most people who experience supernatural phenomena are not nearly so careful. But I realize that I am postulating that I have some special hard-to-test epistemic skill, which is always a warning sign. Also I have a few experiences where my memory is not very trustworthy due to having just woken up and things like that.)
The experiences I’ve had can be analyzed Bayesianly, but when analyzing interactions with the supposed agents involved, a Bayesian game model is more appropriate. But I suspect that it’s one of many areas where a Bayesian analysis does not provide more insight than human intuitions for frequencies (which I think are really surprisingly good when not in a context of motivated cognition; I can defend this claim later with heuristics-and-biases citations, but maybe it’s not too controversial). But it could be done by a sufficiently experienced Bayesian modeler. (Which I’m not.)
Incommunicable to some but not others. And I sort of try not to communicate the evidence to people who I think would have the interpretational framework and skills necessary to analyze it fairly, because I’m superstitious… it vaguely feels like there are things I might be expected to keep private. A gut feeling that I’d somehow be betraying something’s or someone’s confidence. It might be worth noting that I was somewhat superstitious long before I explicitly considered supernaturalism reasonable; of course, I think even most atheists who were raised atheist (I was raised atheist) are also superstitious in similar ways but don’t recognize it as such.
Sorry for the poor writing.
As best I can tell, a full reduction of “existence” necessarily bottoms out in a mix of mathematical/logical statements about which structures are embedded in each other, and a semi-arbitrary weighting over computations. That weighting can go in two places: in a definition for the word “exist”, or in a utility function. If it goes in the definition, then references to the word in the utility function become similarly arbitrary. So the notion of existence is, by necessity, a structural component of utility functions, and different agents’ utility functions don’t have to share that component.
The most common notion of existence around here is the Born rule (and less-formal notions that are ultimately equivalent). Everything works out in the standard way, including a shared symmetric notion of existence, if (a) you accept that there is a quantum mechanics-like construct with the Born rule, that has you embedded in it, (b) you decide that you don’t care about anything which is not that construct, and (c) you decide that when branches of the quantum wavefunction stop interacting with each other, your utility is a linear function of a real-valued function run over each of the parts separately.
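One way of making (c) concrete, as a sketch rather than a claim about what was intended: write the decohered state as a sum of non-interacting branches and require the utility to be a weighted sum over them,

    U\Big(\sum_i \alpha_i \,|\psi_i\rangle\Big) \;=\; \sum_i w_i \, f\big(|\psi_i\rangle\big),

where f is the real-valued function run over each part separately and the w_i are the linear coefficients; choosing the Born weights w_i = |\alpha_i|^2 is the choice that recovers the standard Born-rule picture in (a).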
Reject any one of these premises, and many things which are commonly taken as fundamental notions break down. (Bayes does not break down, but you need to be very careful about keeping track of what your measure is over, because several different measures that share the common name “probability” stop lining up with each other.)
But it’s possible to regenerate some of this from outside the utility function. (This is good, because I partially reject (b) and totally reject (c)). If you hold a memory which is only ever held by agents that live in a particular kind of universe, then your decisions only affect that kind of universe. If you make an observation that would distinguish between two kinds of universes, then successors in each see different answers, and can go on to optimize those universes separately. So if you observe whether or not your memories seem to follow the Born rule, and that you’re evolved with respect to an environment that seems to follow the Born rule, then one version of you will go on to optimize the content of universes that follow it, and another version will go on to optimize the content of universes that don’t, and this will be more effective than trying to keep them tied together. Similarly for deism; if you make the observation, then you can accept that some other version of you had the observation come out the other way, and get on with optimizing your own side of the divide.
That is, if you never forget anything. If you model yourself with short and long term memory as separate, and think in TDT-like terms, then all similar agents with matching short-term memories act the same way, and it’s the retrieval of an observation from long-term memory—rather than the observation itself—that splits an agent between universes. (But the act of performing an observation changes the distribution of results when agents do this long-term-memory lookup. I think this adds up to normality, eventually and in most cases. But the cases in which it doesn’t seem interesting.)
Can you explain why you believe this? To me it doesn’t seem like complex hallucination is that common. I know about 1% of the population is schizophrenic and hallucinates regularly, and I’m sure non-schizophrenics hallucinate occasionally, but it certainly seems to be fairly rare.
Can you describe your own experience with these gods?
ETA: To clarify, I’m saying that I don’t think hallucination is common, and I also don’t believe that gods are real. I don’t see why there should be any tension between those beliefs.
I agree complex recurrent hallucination in otherwise seemingly psychologically healthy people is rare, which is why the “gods”/psi hypothesis is more compelling to me. For the hallucination hypothesis to hold, it would require some kind of species-wide anosognosia or something like it.
I think you misunderstood me… My position is: Most people don’t claim to have seen gods, and gods aren’t real. A small percentage of people do have these experiences, but these people are either frauds, hallucinating, or otherwise mistaken.
I don’t see why you think the situation is either [everyone is hallucinating] or [gods are real]. It seems clear to me that [most people aren’t hallucinating] and [gods aren’t real]. Are you under the impression that most people are having direct experiences of gods or other supernatural apparitions?
So how do you explain things like this?
Same as with Bigfoot/Loch Ness Monster. People (especially children) are highly suggestible, hallucinations and optical illusions occur, hoaxes occur. People lie to fit in. These are things that are already known to be true.
Well, the Miracle of the Sun was witnessed by 30,000 to 100,000 people.
How many people witnessed this?
It looks to me as if the two of you are talking past each other. I think knb means “it doesn’t seem to me like things that would have to be complex hallucination if there were no gods are that common”, and is kinda assuming there are in fact no gods; whereas Will means “actual complex hallucinations aren’t common” and is kinda assuming that apparent manifestations of gods (or something of the sort) are common.
I second knb’s request that Will give some description of his own encounters with god(s), but I expect him to be unwilling to do so with much detail. [EDITED to add: And in fact I see he’s explicitly declined to do so elsewhere in the thread.]
I think hallucination is more common than many people think it is (Oliver Sacks recently wrote a book that I think makes this claim, but I haven’t read it), and I am not aware of good evidence that apparent manifestations of gods dramatic enough to be called “outright complex hallucination” are common enough to require a huge fraction of people to be anosognosic if gods aren’t real—Will, if you’re reading this, would you care to say more?
Upon further reflection it is very difficult for me to guess what percentage of people experience what evidence and of what nature and intensity. I do not feel comfortable generalizing from the experiences of people in my life, for obvious reasons and some less obvious ones. I believe this doesn’t ultimately matter so much for me, personally, because what I’ve seen implies it is common enough and clear enough to require a perhaps-heavy explanation. But for others trying to guess at more general base rates, I think I don’t have much insight to offer.
A while back, you mentioned that people regularly confuse universal priors with coding theory. But minimum message length is considered a restatement of Occam’s razor, just like Solomonoff induction; and MML is pretty coding theory-ish. Which parts of coding theory are dangerous to confuse with the universal prior, and what’s the danger?
The difference I was getting at is that when constructing a code you’re taking experiences you’ve already had and then assigning them weight, whereas the universal prior, being a prior, assigns weight to strings without any reference to your experiences. So when people say “the universal prior says that Maxwell’s equations are simple and Zeus is complex”, what they actually mean is that in their experience mathematical descriptions of natural phenomena have proved more fruitful than descriptions that involve agents; the universal prior has nothing to do with this, and invoking it is dangerous as it encourages double-counting of evidence: “this explanation is more probable because it is simpler, and I know it’s simpler because it’s more probable”, when in fact the relationship between simplicity and probability is tautologous, not mutually reinforcing.
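A toy sketch of the distinction, in code (mine, with made-up names and numbers, not anything from the AIT literature): a prior-style weight that is fixed before any observations, next to a code length that is itself computed from observed frequencies, so that reading the code length back as prior improbability counts the same experience twice.

    import math
    from collections import Counter

    def prior_weight(description):
        # Crude stand-in for a prior over descriptions: the weight depends only on
        # the raw length of the description, never on anything I have observed.
        return 2.0 ** (-len(description))

    def empirical_code_length(event, observations):
        # Shannon-style code built *from experience*: length = -log2(observed frequency).
        counts = Counter(observations)
        p = counts[event] / len(observations)
        return -math.log2(p)

    history = ["equations", "equations", "equations", "zeus"]
    # The learned code gives "zeus" a long codeword *because* it was rare in my
    # experience; treating that code length as independent prior evidence against
    # "zeus" double-counts the very observations the code was built from.
    print(empirical_code_length("zeus", history))        # 2.0 bits
    print(empirical_code_length("equations", history))   # ~0.415 bits
    print(prior_weight("zeus"), prior_weight("equations"))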
This error really bothers me because, aside from its incorrectness, it uses technical mathematics in a superficial way, as a blunt rhetorical weapon that makes people unfamiliar with the math feel like they’re failing to grasp something, when in fact there is nothing there for them to grasp and nothing they need to understand.
(I’ve swept the problem of “which prefix do I use?” under the rug because there are no AIT tools to deal with that and so if you want to talk about the problem of prefixes, you should do so separately from invoking AIT for some everyday hermeneutic problem. Generally if you’re invoking AIT for some object-level hermeneutic problem you’re Doing It Wrong, as has been explained most clearly by cousin_it.)
I thought it meant that if you taboo “Zeus”, the string length increases more dramatically than when you taboo “Maxwell’s equations”.
Except that’s not the case. I can make any statement arbitrarily long by continuously forcing you to taboo the words you use.
Sure, but still somehow “my grandma” is more complex than “two plus two”, even if the former string has only 10 characters and the latter has 12. So now the question is whether “Zeus” is more like “my grandma” or more like “two plus two”.
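A small illustration of that point, offered as my own made-up example rather than anything rigorous: surface character count is not the relevant notion of length, because it ignores how much context a short phrase silently relies on.

    expression = "two plus two"   # 12 characters
    expansion = "print(2+2)"      # a complete, context-free expansion: 10 characters
    referent = "my grandma"       # 10 characters
    # "two plus two" has a short self-contained expansion; "my grandma" is only short
    # relative to a reader who already carries a huge specification of which particular
    # human it picks out, and any context-free expansion would have to include that.
    print(len(expression), len(expansion), len(referent))  # 12 10 10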
Attempting to work the dependence of my epistemology on my experience into my epistemology itself creates a cycle in the definitions of types, and wrecks the whole thing. I suspect that reformalizing as a fixpoint thing would fix the problem, but I suspect even more strongly that the point I’m already at would be a unique fixpoint and that I’d be wrecking its elegance for the sake of generalizing to hypothetical agents that I’m not and may never encounter. (Or that all such fixpoints can be encoded as prefixes, which I too feel like sweeping under the rug.)
...So, where in this schema does Minimum Message Length fit? Under AIT, or coding theory? Seems like it’d be coding theory, since it relies on your current coding to describe the encoding for the data you’re compressing. But everyone seems to refer to MML as the computable version of Kolmogorov Complexity; and it really does seem fairly equivalent.
It seems to me that KC/SI/AIT explicitly presents the choice of UTM as an unsolved problem, while coding theory and MML implicitly assume that you use your current coding; and that that is the part that gets people into trouble when comparing Zeus and Maxwell. Is that it?
I think more or less yes, if I understand it. And more seriously, AIT is in some ways meant not to be practical; the interesting results require setting things up so that, technically, the work is pushed to the “within a constant” part, which is divorced from praxis. Practical MML intuitions don’t carry over into such extreme domains. That said, the same core intuitions inspire them; there are just other intuitions that emerge depending on what context you’re working in or mathematizing. But this is still conjecture, ’cuz I personally haven’t actually used MML on any project, even if I’ve read some results.
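(For reference, the “within a constant” part is the usual invariance theorem: for any two universal prefix machines U and V there is a constant c_{U,V}, depending on the machines but not on the string, such that

    K_U(x) \;\le\; K_V(x) + c_{U,V} \quad \text{for all } x,

so the interesting AIT results hold only up to that additive constant, which is part of why practical MML intuitions don’t carry straight over.)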
Where are you posting these days?
I mostly don’t, but when I do, Twitter. @willdoingthings mostly; it’s an uninhibited drunken tweeting account. I also participate on IRC in private channels. But in general I’ve become a lot more secretive and jaded so I post a lot less.
Any particular reason? I’d certainly be interested in some of the things you have to say. Incidentally, I’ve also had some experiences myself that could reasonably be interpreted as supernatural and wouldn’t mind comparing notes (although mine are more along the lines of having latent psychic powers and not direct encounters with other entities).
What do you mean by the term “god”?
This is hard to answer. I mean something vague. A god is a seemingly transhumanly intelligent agent. (By this I don’t mean something cheap like “the economy” or “evolution”, I mean the obvious thing.) As to their origins I have little idea; aliens, simulators, programs simpler than our physical universe according to a universal prior, hypercompetent human conspiracies with seemingly inhuman motivations, whatever, I’m agnostic. For what it’s worth (some of) the entity or entities I’ve interacted with seem to want to be seen as related to or identical with one or more of the gods of popular religions, but I’m not sure. In general it’s all quite ambiguous and people are extremely hasty and heavy with their interpretations. Further complicating the issue is that it seems like the gods are willing to go along with and support humans’ heavy-handed interpretations and so the interpretations become self-confirming. I say “gods”, but for all I know it’s just one entity with very diverse effects, like an author of a book.
Note that many folklore traditions posit paranormal entities that are basically capricious and mischievous (though not unfriendly or malevolent in any real sense) and may try to deceive people who interact with them, for their own enjoyment. Some parapsychologists argue that _if_ psi-related phenomena exist, then this is pretty much the best model we have for them.
In your view, how likely is it that you may also be interacting with entities of this kind?
It seems likely that something like that is going on, but I wouldn’t think of capriciousness and mischievousness as character traits, just descriptions of the observed phenomena that are agnostic regarding the nature of any agency behind them. Those caveats are too vague for me to give an answer more precise than “likely”.
I’m curious about your experience with memantine; I vaguely remember you tweeting about it. What was it helping you with?
If you disagree in spirit with much of the Sequences, what would you recommend for new rationalists to start with instead?
Re memantine, it helped with overactive inhibition some, but not all that much, and it made my short term memory worse and spaced me out. Not at all like the alcohol-in-a-pill I was going for, but of course benzos are better for that anyway.
New rationalists… reminds me of New Atheism these days, for a rationalist to be new. They’ve missed out on x-rationalism’s golden days, and the current currents are more hoi polloi and less interesting for, how should I put it, those who are “intelligent” in the 19th-century French sense. I don’t really identify as a rationalist, but maybe I can be identified as one. I think perhaps it would mean reading a lot in general, e.g., in history and philosophy, and reading some core LW texts like GEB, while holding back on forming any opinions, and instead just keeping a careful account of who says what and why you or others think they said what they said. I haven’t been to university but I would guess they encourage a similar attitude, at least in philosophy undergrad? I hope. Anyway I think just reading a bunch of stuff is undervalued; the most impressive rationalists according to the LW community are generally those who have read a bunch of stuff; they just have a lot of information at hand to draw from. Old books too: Wealth of Nations, Origin of Species; the origins of the modern worldview. Intelligence matters a lot, but reading a lot is equally essential.
Studying Eliezer’s Technical Explanation of Technical Explanation in depth is good for Yudkowskology, which is important hermeneutical knowledge if you plan on reading through all the Sequences without being overwhelmed (whether attractively or repulsively) by their particular Yudkowskyan perspective. I do think Eliezer’s worth reading, by the way; it’s just not the core of rationality, it’s not a reliable source of epistemic norms, and it has some questionable narratives driving it that some people miss and thereby accept semi-unquestioningly. The subtext shapes the text more than is easily seen. (Of course, this also applies to those who dismiss it by assuming less credible subtext than is actually there.)
Crazy people and trolls exist. Some of them are eloquent.
So why do you talk about it at all when it just makes you seem crazy to most of us?
Are you looking for confirmation or agreement in others’ hallucinations? Or perhaps you suspect your kind of experiences are more common than openly expressed?
I assume I’d take seriously your crazy experiences if they were mine. Is there anything at all you can say that’s of value to someone like me who just hears crazy?
When it comes to epistemic praxis I am not a friend of the mob. I want to minimize my credibility with most of LessWrong and semi-maximize my credibility with the people I consider elite. I’m very satisfied with how successful my strategy has been.
Indeed.
I am somewhat proud of the care I’ve taken in interpreting my experiences. I think that even if people don’t think there’s anything substantial in my experiences, they might still appreciate and perhaps learn from my prudence. Interpreting the supernatural is extremely difficult and basically everyone quickly goes off the rails. Insofar as there is a rational way to really engage with the contents of the subject I think my approach is, if not rational, at least rational enough to avoid many of the failure modes. But perhaps I am overly proud.
Thanks for answering that as if it were a sincere question (it was).
“Maybe this universe has invisible/anthropic/supernatural properties” is a fascinating line of daydreaming that seems a bit time-wasting to me, because I’m not at all confident I’d do anything healthy/useful if I started attempting to experiment. Looking at all the people who are stuck in one conventional religion or another, who (otherwise?) seem every bit as intelligent and emotionally stable as I am, I think, to the extent that you’re predisposed to having any mystical experiences, that way is dangerous.