Author of meaningness.com, vividness.live, and other things.
MIT AI PhD, successful biotech entrepreneur, and other things.
[Image: quote from Richard Feynman explaining why there are no objects here.]
I’ve begun a STEM-compatible attempt to explain a “no objectively-given objects” ontology in “Boundaries, objects, and connections.” That’s supposed to be the introduction to a book chapter that is extensively drafted but not yet polished enough to publish.
Really glad you are working on this also!
I would not say that selves don’t exist (although it’s possible that I have done so somewhere, sloppily).
Rather, that selves are both nebulous and patterned (“empty forms,” in Tantric terminology).
Probably the clearest summary of that I’ve written so far is “Selfness,” which is supposed to be the introduction to a chapter of the Meaningness book that does not yet otherwise exist.
Renouncing the self is characteristic of Sutrayana.
[FWOMP Summoned spirit appears in Kaj’s chalk octagram. Gouts of eldritch flame, etc. Spirit squints around at unfamiliar environment bemusedly. Takes off glasses, holds them up to the candlelight, grimaces, wipes glasses on clothing, replaces on nose. Grunts. Speaks:]
Buddhism is a diverse family of religions, with conceptions of enlightenment that seem to be quite different and mutually contradictory.
According to one classification of disparate doctrines, Buddhism can be divided into Vajrayana (Tantra plus Dzogchen) and Sutrayana (everything else, except maybe Zen). In this classification, Sutrayana aims at “emptiness,” which is a generalization of the Three Marks, including anatman (non-self). The central method of Sutrayana is renunciation. Renunciation of the self is a major aspect. For Sutrayana, clear sustained perception of anatman (or emptiness more generally) is enlightenment, by definition.
For Buddhist Tantra, experience of emptiness is the “base” or starting point. That’s the sense in which “enlightenment is the prerequisite”—but it’s enlightenment as understood in Sutrayana. Whereas Sutrayana is the path from “form” (ordinary appearances) to emptiness, Tantra is the path from emptiness to the non-duality of emptiness and form. The aim is to perceive everything as both simultaneously. That non-dual vision is the definition of enlightenment within Tantra. The “duality” referred to here is the duality between emptiness and form, rather than the duality between self and other—which is what is overcome in Sutrayana. The non-dual vision that is the end-point of Tantra is then the base or starting point for Dzogchen.
(Probably the best thing I’ve written about this is “Beyond Emptiness: Zen, Tantra, and Dzogchen.” It may not be very clear but I hope at least it is entertaining. “Sutra, Tantra, and the Modern Worldview” is less fun but more concrete.)
seeing that the self is an arbitrary construct which you don’t need to take too seriously, can enable you to play with it in a tantric fashion
Yes, this is a Vajrayana viewpoint. For Sutrayana, the self is non-existent, or at least “empty”; for Vajrayana, it is empty form. That is, “self” is a label applied to various phenomena, which overall are found to be insubstantial, transient, boundaryless, discontinuous, and ambiguous—and yet which exhibit heft, durability, continence, extension, and specificity. This mild paradox is quite amusing—a starting point for tantric play.
I’ll say a bit more about “self” in response to Sarah Constantin’s comment on this post.
Glad you liked the post! Thanks for pointing out the link problem. I’ve fixed it, for now. It links to a PDF of a file that’s found in many places on the internet, but any one of them might be taken down at any time.
A puzzling question is why your brain doesn’t get this right automatically. In particular, deciding whether to gather some food before sleeping is an issue mammals have faced in the EEA (environment of evolutionary adaptedness) for millions of years.
Temporal difference learning seems so basic that brains ought to implement it reasonably accurately. Any idea why we might do the wrong thing in this case?
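For concreteness, here’s a minimal sketch of the tabular TD(0) update I have in mind. The state names, rewards, and parameters are invented for illustration; this is not a claim about how brains actually implement it.

```python
# Minimal tabular TD(0) value update. Purely illustrative; not a model of
# any neural implementation.

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """Nudge V[state] toward the bootstrapped target reward + gamma * V[next_state]."""
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * td_error
    return td_error

# Toy version of the gather-food-before-sleeping decision:
V = {"slept_hungry": 0.0, "slept_fed": 0.0, "morning": 0.0}
td0_update(V, "slept_fed", reward=1.0, next_state="morning")
print(V["slept_fed"])  # 0.1: the value estimate moves toward the reward
```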
Are there any psychoactive gases or aerosols that drive you mad?
I suppose a psychedelic might push someone over the edge if they were sufficiently psychologically fragile. I don’t know of any substances that specifically make people mad, though.
One aspect of what I consider the correct solution is that the only question that needs to be answered is “do I think putting a coin in the box has positive or negative utility”, and one can answer that without any guess about what it is actually going to do.
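One crude way to pose that sign question numerically (every probability and payoff below is an assumption I made up for illustration; these are not the thought experiment’s numbers):

```python
# Toy check of whether inserting a coin has positive expected utility.
# All numbers here are invented assumptions, not part of the original problem.

outcomes = {
    "pays_$2": {"prob": 0.50, "utility": +1.0},         # net +$1 after the coin
    "does_nothing": {"prob": 0.4999, "utility": -1.0},  # the coin is lost
    "drives_you_mad": {"prob": 0.0001, "utility": -1e6},
}

expected_utility = sum(o["prob"] * o["utility"] for o in outcomes.values())
print(expected_utility)  # about -100: negative, so don't insert the coin
```

The point is that only the sign of the total matters for the decision; even a tiny probability on a catastrophic outcome can dominate it.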
What is your base rate for boxes being able to drive you mad if you put a coin in them?
Can you imagine any mechanism whereby a box would drive you mad if you put a coin in it? (I can’t.)
Excellent! This is very much pointing in the direction of what I consider the correct general approach. I hadn’t thought of what you suggest specifically, but it’s an instance of the general category I had in mind.
Thanks for the encouragement! I have way too many half-completed writing projects, but this does seem an important point.
Oh, goodness, interesting, you do think I’m evil!
I’m not sure whether to be flattered or upset or what. It’s kinda cool, anyway!
Well, the problem I was thinking of is “the universe is not a bit string.” And any unbiased representation we can make of the universe as a bit string is going to be extremely large—much too large to do even sane sorts of computation with, never mind Solomonoff.
Maybe that’s saying the same thing you did? I’m not sure...
I can’t guarantee you won’t get blown up
Yes—this is part of what I’m driving at in this post! The kinds of problems that probability and decision theory work well for have a well-defined set of hypotheses, actions, and outcomes. Often the real world isn’t like that. One point of the black box is that the hypothesis and outcome spaces are effectively unbounded. Trying to enumerate everything it could do isn’t really feasible. That’s one reason the uncertainty here is “Knightian” or “radical.”
In fact, in the real world, “and then you get eaten by a black hole incoming near the speed of light” is always a possibility. Life comes with no guarantees at all.
Often in Knightian problems you are just screwed and there’s nothing rational you can do. But in this case, again, I think there’s a straightforward, simple, sensible approach (which so far no one has suggested...)
Hmm… given that the previous several boxes have either paid $2 or done nothing, it seems like that primes the hypothesis that the next in the series also pays $2 or does nothing. (I’m not actually disagreeing, but doesn’t that argument seem reasonable?)
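One way to formalize that priming is Laplace’s rule of succession. This is my gloss, with an invented count of prior boxes:

```python
# Laplace's rule of succession: after observing `successes` out of `trials`,
# the posterior probability that the next trial is also a success.

def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# If, say, 10 of 10 previous boxes either paid $2 or did nothing:
print(rule_of_succession(10, 10))  # ~0.917 that the next box behaves likewise
```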
To answer this we engage our big amount of human knowledge about boxes and people who hand them to you.
Of comments so far, this comes closest to the answer I have in mind… for whatever that’s worth!
Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases. Maybe I ought to have made that clearer! The approach I would take to the black box does not rely on metaprobability, so let’s set that aside.
So, your mind is already in motion, and you do have priors about black boxes. What do you think you ought to do in this case? I don’t want to waste your time with that… Maybe the thought experiment ought to have specified a time limit. Personally, I don’t think enumerating things the boxes could possibly do would be helpful at all. Isn’t there an easier approach?
The evidence that I didn’t select it at random was my saying “I find this one particularly interesting.”
I also claimed that “I’m probably not that evil.” Of course, I might be lying about that! Still, that’s a fact that ought to go into your Bayesian evaluation, no?
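As a sketch of how that evaluation might go (every number below is invented for illustration; it’s not a claim about my actual evilness):

```python
# Bayes update on "the box-giver is evil", given that he said
# "I'm probably not that evil". All numbers are invented for illustration.

prior_evil = 0.05
p_say_it_if_evil = 0.5       # an evil giver might well lie reassuringly
p_say_it_if_not_evil = 0.9   # an honest giver would likely say this too

posterior_evil = (p_say_it_if_evil * prior_evil) / (
    p_say_it_if_evil * prior_evil + p_say_it_if_not_evil * (1 - prior_evil)
)
print(posterior_evil)  # ~0.028: the statement is weak evidence of non-evil
```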
Yes, I’m not at all committed to the metaprobability approach. In fact, I concocted the black box example specifically to show its limitations!
Solomonoff induction is extraordinarily unhelpful, I think… that it is uncomputable is only one reason.
I think there’s a fairly simple and straightforward strategy to address the black box problem, which has not been mentioned so far...
That’s good, yes!
How would you assign a probability to that?
So… you think I am probably evil, then? :-)
I gave you the box (in the thought experiment). I may not have selected it from Thingspace at random!
In fact, there’s strong evidence in the text of the OP that I didn’t...
Have you read Minsky’s _Society of Mind_? It is an AI-flavored psychological model of subagents that draws heavily on psychotherapeutic ideas. It seems quite similar in flavor to what you propose here. It inspired generations of students at the MIT AI Lab (although attempts to code it never worked out).