Incommunicable in the anthropic sense of formally losing its evidence-value when transferred between people, in the broader sense of being encoded in memories that can’t be regenerated in a trustworthy way, or in the mundane sense of feeling like evidence but lacking a plausible reduction to Bayes? And—do you think you have incommunicable evidence? (I just noticed that your last few comments dance around that without actually saying it.)
(I am capable of handling information with Special Properties but only privately and only after a multi-step narrowing down.)
There might be anthropic issues; I’ve been thinking about that more over the last week. The specific question I’ve been asking is ‘What does it mean for me and someone else to live in the same world?’. Is it possible for gods to exist in my world but not in others, in some sense, if their experience is truly ambiguous w.r.t. supernatural phenomena? From an almost postmodern heuristic perspective this seems fine, but ‘the map is not the territory’. But do we truly share the same territory, or does more of their decision-theoretic significance lie in worlds that to them look exactly like mine, but aren’t mine? Are they partial counterfactual zombies in my world? They can affect me, but am I cut off from really affecting them? I like common sense, but I can sort of see how common sense could lead to off-kilter conclusions. Provisionally I just approach day-to-day decisions as if I am as real to others as they are to me. Not doing so is a form of “insanity”, abstract social uncleanliness.
The memories can be regenerated in a mostly trustworthy way, as far as human memory goes. (But only because I tried to be careful; I think most people who experience supernatural phenomena are not nearly so careful. But I realize that I am postulating that I have some special hard-to-test epistemic skill, which is always a warning sign. Also I have a few experiences where my memory is not very trustworthy due to having just woken up and things like that.)
The experiences I’ve had can be analyzed Bayesianly, though for interactions where supposed agents are involved a Bayesian game model is more appropriate. But I suspect that it’s one of many areas where a Bayesian analysis does not provide more insight than human intuitions for frequencies (which I think are really surprisingly good outside contexts of motivated cognition; I can defend this claim later with heuristics-and-biases citations, but maybe it’s not too controversial). But it could be done by a sufficiently experienced Bayesian modeler. (Which I’m not.)
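To make the shape of the simplest (non-game-theoretic) version concrete, here is a toy Bayesian update. Every number here is an invented placeholder, not a likelihood I endorse; the point is only the structure of the calculation, not the conclusion.

```python
# Toy Bayesian update on an anomalous experience.
# All numbers are illustrative placeholders.
prior = 0.01              # P(H): some supernatural agent exists
p_e_given_h = 0.30        # P(experience | H)
p_e_given_not_h = 0.001   # P(experience | not-H), e.g. misperception

# Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)
joint = prior * p_e_given_h
normalizer = joint + (1 - prior) * p_e_given_not_h
posterior = joint / normalizer
print(round(posterior, 3))  # -> 0.752
```

The interesting modeling work is all hidden in the likelihoods, which is exactly where intuitive frequency estimates have to carry the load.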
do you think you have incommunicable evidence?
Incommunicable to some but not others. And I sort of try not to communicate the evidence to people who I think would have the interpretational framework and skills necessary to analyze it fairly, because I’m superstitious… it vaguely feels like there are things I might be expected to keep private. A gut feeling that I’d somehow be betraying something’s or someone’s confidence. It might be worth noting that I was somewhat superstitious long before I explicitly considered supernaturalism reasonable; of course, I think even most atheists who were raised atheist (I was raised atheist) are also superstitious in similar ways but don’t recognize it as such.
The specific question I’ve been asking is ‘What does it mean for me and someone else to live in the same world?’
As best I can tell, a full reduction of “existence” necessarily bottoms out in a mix of mathematical/logical statements about which structures are embedded in each other, and a semi-arbitrary weighting over computations. That weighting can go in two places: in a definition for the word “exist”, or in a utility function. If it goes in the definition, then references to the word in the utility function become similarly arbitrary. So the notion of existence is, by necessity, a structural component of utility functions, and different agents’ utility functions don’t have to share that component.
The most common notion of existence around here is the Born rule (and less-formal notions that are ultimately equivalent). Everything works out in the standard way, including a shared symmetric notion of existence, if (a) you accept that there is a quantum-mechanics-like construct with the Born rule that has you embedded in it, (b) you decide that you don’t care about anything which is not that construct, and (c) you decide that when branches of the quantum wavefunction stop interacting with each other, your utility is a linear function of a real-valued function run over each of the parts separately.
Reject any one of these premises, and many things which are commonly taken as fundamental notions break down. (Bayes does not break down, but you need to be very careful about keeping track of what your measure is over, because several different measures that share the common name “probability” stop lining up with each other.)
But it’s possible to regenerate some of this from outside the utility function. (This is good, because I partially reject (b) and totally reject (c)). If you hold a memory which is only ever held by agents that live in a particular kind of universe, then your decisions only affect that kind of universe. If you make an observation that would distinguish between two kinds of universes, then successors in each see different answers, and can go on to optimize those universes separately. So if you observe whether or not your memories seem to follow the Born rule, and that you’re evolved with respect to an environment that seems to follow the Born rule, then one version of you will go on to optimize the content of universes that follow it, and another version will go on to optimize the content of universes that don’t, and this will be more effective than trying to keep them tied together. Similarly for deism; if you make the observation, then you can accept that some other version of you had the observation come out the other way, and get on with optimizing your own side of the divide.
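The “split and optimize separately” point above is the standard observation that conditioning a policy on an observation weakly dominates committing to one action across both universe types. A toy version, with invented payoffs:

```python
# Toy model: two universe types, A and B; an observation tells each
# successor which one it is in. Payoffs are invented for illustration.
payoff = {("A", "act_A"): 1.0, ("A", "act_B"): 0.0,
          ("B", "act_A"): 0.0, ("B", "act_B"): 1.0}
p_A = 0.5  # measure on universe type A
actions = ("act_A", "act_B")

# Policy 1: ignore the observation; commit to one action everywhere.
ignore = max(p_A * payoff[("A", a)] + (1 - p_A) * payoff[("B", a)]
             for a in actions)

# Policy 2: each successor conditions on what it observed.
split = (p_A * max(payoff[("A", a)] for a in actions)
         + (1 - p_A) * max(payoff[("B", a)] for a in actions))

print(ignore, split)  # -> 0.5 1.0
```

Conditioning can never do worse, and does strictly better whenever the two universe types reward different actions.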
That is, if you never forget anything. If you model yourself with short- and long-term memory as separate, and think in TDT-like terms, then all similar agents with matching short-term memories act the same way, and it’s the retrieval of an observation from long-term memory—rather than the observation itself—that splits an agent between universes. (But the act of performing an observation changes the distribution of results when agents do this long-term-memory lookup. I think this adds up to normality, eventually and in most cases. But the cases where it doesn’t add up to normality seem interesting.)