And for people on the Vim side, there’s VimOutliner for doing workflowy-like outlines, also with a time-tracking component.
Risto_Saarelma
Cal Newport on “Write Every Day”. If it’s not your main job, you’re going to end up having no-write days, and if you’re committed to a hard schedule, a missed day is going to translate into “welp, couldn’t make the cut then, better quit for good”.
On Moldbug from 2012.
Yes, The Mind Illuminated is basically the same ten-step model as the one in that article, but expanded to book length and with lots of extra practice advice and theory of mental models.
The Mind Illuminated by John Yates is my new favorite meditation instruction book. It has lots of modern neuroscience grounding, is completely secular, and presents very detailed step-by-step instructions for going from not having a daily meditation habit to attaining very deep concentration states.
One problem is that the community has few people actually engaged enough with cutting edge AI / machine learning / whatever-the-respectable-people-call-it-this-decade research to have opinions that are grounded in where the actual research is right now. So a lot of the discussion is going to consist of people either staying quiet or giving uninformed opinions to keep the conversation going. And what incentive structures there are here mostly work for a social club, so there aren’t really that many checks and balances that keep things from drifting further away from being grounded in actual reality instead of the local social reality.
Ilya actually is working with cutting edge machine learning, so I pay attention to his expressions of frustration and appreciate that he persists in hanging out here.
Congratulations on getting a “ban any new user posting the sort of stuff Eugine would post” moderation norm on the way I guess.
This sounds like someone whose salient feature is math anxiety from high school asking how to become a research director at CERN. It’s not just that the salient feature seems at odds with the task; it’s that the task isn’t exactly something you just walk into, while you sound like you’re talking about helping someone overcome a social phobia by taking a part-time job at a supermarket checkout. Is your friend someone who wins International Math Olympiads?
Maybe someday someone clever will figure out how to disseminate that knowledge, but it simply isn’t there yet.
Based on Razib Khan’s blog posts, many cutting edge researchers seem to be pretty active on Twitter where they can talk about their own stuff and keep up on what their colleagues are up to. Grad students on social media will probably respond to someone asking about their subfield if it looks like they know their basics and may be up to something interesting.
The tiny bandwidth is of course a problem. “Professor Z has probably proven math lemma A” fits in a tweet, instruction on lab work rituals not so much.
Clever people who don’t want to pay for plane tickets and tuition might be pretty resourceful though, once they figure out they want to talk with each other to learn what they need to know.
I am quite certain this is very unlikely to become any type of trend (it is certainly possible for outsiders to be great, Ramanujan was an outsider after all).
Not in the present circumstances, no. The interesting question is whether it would strike a match with the current disaffection with academia (perceptions of must-have-a-bachelor’s-for-any-kind-of-job student loan rackets, and of stressed-out researchers who spend most of their energy gaming administrative systems and grinding out cookie-cutter research tailored to fit standardized bureaucratic metrics for acceptable tenure-track career progress), cause more young people who think they are talented and exceptional to drop out, and what they would do once they had, and whether that trend might continue far enough to change the wider circumstances around academia.
Yeah, I am sure enough about this not happening that I am willing to place bets. There is an enormous amount of intangibles Coursera can’t give you (I agree it can be useful for a certain type of person for certain types of aims).
Agreed that being inside academia is probably a much bigger deal than people outside it really appreciate. We’re about to see the first generation that grew up with a truly ubiquitous internet reach grad school age, though. Currently, in addition to assuming that generally clever people will want to go to university, we’ve treated it as obvious that the Nobel-prize-winning clever people will have an academic background. Which has been pretty much mandatory, since that used to be the only way you got to talk with other academics and to access academic publications.
What I’m interested in now is whether in the next couple decades we’re going to see a Grigori Perelman or Shinichi Mochizuki style extreme outlier produce some result that ends up widely acknowledged to be an equally big deal as what Perelman did, without ever having seen the inside of a university. You can read pretty much any textbook or article you want over an internet connection now, and it’s probably not impossible to get professional mathematicians talking with you even when they have no idea who you are, if it’s evident from the start that you have some idea what their research is about. And an extreme outlier might be clever enough to figure things out on their own, obsessive enough to keep working on them on their own for years, and somewhat eccentric, so that they take a dim view of academia and decline to play along out of principle.
It’d basically be a fluke statistically, but it would put a brand new spin on the narrative about academia. Academia wouldn’t be the one obvious source of higher learning anymore; it’d be the place where you go when you’re pretty smart but not quite good and original enough to go it alone.
Yeah, for some reason I’m not inclined to give very much weight to an event that can’t be detected by outside observers at all and which my past, present or future selves can’t subjectively observe being about to happen, happening right now or having happened.
You seem to be hung up on either memories or observations being the key to decoding the subjective self. I think that is your error.
This sounds like a thing people who want to explain away subjective consciousness completely would say. I’m attacking the notion that the annoying mysterious part in subjective consciousness with the qualia and stuff includes a privileged relation from the present moment of consciousness to a specific future moment of consciousness, not the notion that there’s subjective consciousness stuff to begin with that isn’t easy to reduce to just objective memories and observations.
There is some Buddhist connection, yes. The moments-of-experience thing is a thing in some meditation styles, and advanced meditators do actually describe something like subjective experience starting to feel like an on/off sequence instead of a continuous flux. I haven’t gone really deep into what either the Buddhist metaphysics or the meditation phenomenology says. Neuroscience also has some discrete-consciousness-steps stuff, but I likewise haven’t gone very deep into that.
I’m with them so far. Here’s where I get off: all sentient beings are points of naked awareness; by definition they are identical (naked, passive); therefore they are the same; therefore even this self does not matter; therefore thou shalt not value the self more than others. At all. On any level. All of which can lead to bricking yourself up in a cave being the correct course of action.
This is still up for grabs. Given the whole thing about memories being what makes you you, consciousness itself is nice but it’s not all that. It can still be your tribe against the world, your family against your tribe, your siblings against your family and you and your army of upload copies against your siblings and their armies of upload copies. So I’m basically thinking about this from a kin altruism and a general having people more like you closer in your circle of concern than people less like you thing. Upload copies are basically way, way closer kin than any actual kin.
So am I a pattern theorist? Not quite sure. It seems to resolve lots of paradoxes with the upload thought experiments, and I have no idea about a way to prove it wrong. (I would like to find one though; it seems sort of simplistic, and we definitely still don’t understand consciousness to my satisfaction.) But like I said, if I sit down on an upload couch, I fully expect to get up from an upload couch, not suddenly be staring at a HUD saying “IN SIMULATION”, even though pattern theory seems to say that I should expect each outcome with 50 % probability. There will be someone who does wake up in the simulation with my memories in the thought experiment, no matter which interpretation, so I imagine those versions will start expecting to shift viewpoints as they do further upload scans, while the version of me who always wakes up on the upload couch (by the coin-toss tournament logic, there will be a me who never experiences waking up in a simulation, and one who always does) will continue to not expect much. I think uploads are a good idea more because of the kin-selection-like reasons above than because I’m convinced they’re a ticket to personal immortality.
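The coin-toss tournament logic can be made concrete with a toy enumeration. Assuming each non-destructive scan splits the stream of experience into two continuations, couch (C) or simulation (S), there are 2^n branch histories after n scans, and exactly one branch never experiences waking in a simulation while exactly one always does. A minimal sketch (the branch labels and the equal-weight assumption are mine, for illustration):

```python
from itertools import product

def branches(n_scans):
    # Each non-destructive upload scan splits the experience into two
    # continuations: waking on the couch ('C') or in the simulation ('S').
    # After n scans there are 2**n equally-weighted branch histories.
    return [''.join(p) for p in product('CS', repeat=n_scans)]

outcomes = branches(3)
never_sim = [b for b in outcomes if 'S' not in b]
always_sim = [b for b in outcomes if 'C' not in b]

print(len(outcomes))   # 8 branch histories after 3 scans
print(never_sim)       # ['CCC'] - the one who never wakes in a simulation
print(always_sim)      # ['SSS'] - the one who always does
```

Every branch remembers the same pre-scan past equally well, which is the point: the 50 % figure describes the distribution over branches, not a fact about which branch “you” will be.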
I wouldn’t give a damn about aliens taking my body and brain apart every time I sleep as long as they put it back together perfectly again though, so if that makes me a pattern theorist then yes.
The strange part that might give your intuition a bit of a shake is that it’s not entirely clear how you tell the difference as an inside observer either. The thought experiment wasn’t “we’re going to start doing this tomorrow night unless you acquiesce”, it’s “we’ve been doing this the whole time”, and everybody had been living their life exactly as before until told about it. What should you now think of your memories of every previous day and going to sleep each night?
My expounding of the pattern identity theory elsewhere in the comments is probably a textbook example of what Scott Aaronson calls bullet-swallowing, so just to balance things out I’m going to link to Aaronson’s paper Ghost in the Quantum Turing Machine that sketches a very different attack on standard naive patternism. (Previous LW discussion here)
If pressed, right now I’m leaning towards the matter-based argument, that if consciousness is not magical then it is tied to specific sets of matter. And that a set of matter can not exist in multiple locations. Therefore a single consciousness can not exist in multiple locations. The consciousness A that I am now is in matter A.
So, there are two things we need to track here, and you’re not really making a distinction between them. There are individual moments of consciousness, which, yes, probably need to be on a physical substrate that exists in a single location. This is me saying that I’m this moment of conscious experience right now, which manifests in my physical brain. Everybody can be in agreement about this one.
Then there is the continuity of consciousness from moment to moment, which is where the problems show up. This is me saying that I’m the moment of conscious experience in my brain right now, and I’m also going to be the next moment of conscious experience in my brain.
The problems start when you want to say that the moment of consciousness in your brain now and the moment of consciousness in your brain a second in the future are both “your consciousness”, while the moment of consciousness in your brain now and the moment of consciousness in your perfect upload a second in the future are not. For the patternist, there is no actual “consciousness” that refers to anything other than the single moments. There is momentary consciousness now, with your memories; then there is momentary consciousness later, with your slightly evolved memories. And on and on. Once you’ve gone past the single eyeblink of consciousness, you’re already gone, and a new you might show up once, never, or many times in the future. There’s nothing but the memories that stay in your brain during the gap laying claim to the you-ness of the next moment of consciousness about to show up in a hundred or so milliseconds.
You’re still mostly just arguing for your personal intuition for the continuity theory though. People have been doing that pretty much as long as we’ve had fiction about uploads or destructive teleportation, with not much progress to the arguments. How would you convince someone sympathetic to the pattern theory that the pattern theory isn’t viable?
FWIW, after some earlier discussions about this, I’ve been meaning to look into Husserl’s phenomenology to see if there are some more interesting arguments to be found there. That stuff gets pretty weird and tricky fast though, and might be a dead end anyway.
I’m guessing a part of the point is that nobody had noticed anything (and indeed still can’t, at least in any way they could report back) until the arrangement was pointed out, which highlights that there are bits in the standard notion of personal identity that get a bit tricky once you try to get more robust than just going by intuition on them. How do you tell you die when a matrix lord disintegrates you and then puts together an identical copy? How do you tell you don’t die when you go under general anesthesia for brain surgery and then wake up?
I see the pattern identity theory, where uploads make sense, as one that takes it as a starting point that you have an unambiguous past but no unambiguous future. You have moments of consciousness where you remember your past, which gives you identity, and lets you associate your past moments of consciousness to your current one. But there’s no way, objective or subjective, to associate your present moment of consciousness to a specific future moment of consciousness, if there are multiple such moments, such as a high-fidelity upload and the original person, who remember the same past identity equally well. A continuity identity theorist thinks that a person who gets uploaded and then dies is dead. A pattern identity theorist thinks that people die in that sense several times a second and have just gotten used to it. There are physical processes that correspond to moments of consciousness, but there’s no physical process for linking two consecutive moments of consciousness as the same consciousness, other than regular old long and short term memories.
There’s no question that the upload and the original will diverge. If I have a non-destructive upload done on me, I expect to get up from the upload couch, not wake up in the matrix, old habits and all that. And there is going to be a future me who will experience exactly that. But if the upload was successful, there’s also going to be a future me who will be very surprised to wake up staring at some fluorescent polygons, having expected to wake up on the upload couch. This is where the “no unambiguous future selves” stops being sophistry and starts paying rent for the pattern identity theorist. “Which one is the real me” is a meaningless question. All we have to go on are memories, and both of me will have my memories.
If you want to argue a pattern identity theorist out of it, you’ll want to argue why there has to necessarily be more than just memory going on with producing the sense of moment-to-moment personal continuity, and why the physically unconnected moments of consciousness model can’t be sufficient.
This isn’t working for me as pumping the intuition you seem to want it to. I think life is worth living, and I’d just cut to the chase and pick 1, because option 2 doesn’t make sense as a way to get more life. Under the pattern theory of identity, life is a process, not a weighted lump of time-space-matter stuff where you can just say “let’s double the helping” like this. If you run the exact same process twice, that doesn’t get you any new patterns and new life compared to just running it once.
Or if the idea is that I’d be aware of having gotten a second run, the part about the exact same decisions and experiences seems to make this amount to spending a few decades watching a boring home video with nothing you-on-second-trip can do about it and constantly aware that you’ll be annihilated at the end. I guess the “maybe the horse will learn to sing” thinking would make sense here, but that’s just fighting the hypothetical that the thought experiment will unfold exactly as described.