Aspiring rationalist in Ottawa, Canada.
StefanDeYoung
In any field, you will be influenced to follow mainstream approaches. I don’t see any way to avoid that; you’ll need to keep abreast of arXiv papers, updates to programming libraries, and whatever wisdom the community can accumulate. I’d say that you should embrace the community, as I’ve always found it much more difficult to go it alone, for reasons of inspiration, motivation, and the desire for social approval.
If you’re concerned that you will miss critical insights while following someone else’s approach, set appointments for yourself to check in on how you’re working. Take an hour every month or two, or once a quarter, to think through how you’ve been approaching your work and how you should change.
Thank you!
I hadn’t read these sequences as part of LW 1.0, so thank you very much for bringing them back into the spotlight. Do they contain a listing of habits that have been useful to those aspiring to implement instrumental rationality? Is there a compendium of the obvious advice on offer in various domains?
Thanks. That answers my question; seeing VOI capitalised and immediately acronymed made me think that it might be a Named Concept.
When you’re thinking about whether to keep pulling on the thread of inquiry, do you actually write down any pseudomath, or do you decide by feeling? Sometimes I think through some pseudomath, but I wonder whether it might be worth recording that information, or whether thinking on paper would produce better results than thinking “out loud.”
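(To illustrate what I mean by pseudomath: the kind of thing I sometimes scribble is just a rough inequality, along the lines of the sketch below. This is my own illustration, not anything from the post, and the quantities are whatever rough estimates I can put on them at the time.)

\[ P(\text{further insight}) \cdot V(\text{insight}) \;>\; C(\text{time and attention}) \]

If the left side stops looking bigger than the right, I drop the thread.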
Agreed. In the article, Conor says:
I claim that my brain frequently produces narrative satisfaction long before the story’s really over, causing me to think I’ve understood when I really haven’t.
When you use the phrase Value of Information, are you drawing from any particular definition or framework? Are you using the straightforward concept of placing value on having the information?
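(For context on where my question is coming from: the only formal version I know is the textbook expected value of perfect information, which compares deciding after the uncertainty is resolved against deciding under it. My sketch of that definition is below; it may well not be what you meant.)

\[ \mathrm{EVPI} \;=\; \mathbb{E}_{\theta}\!\left[\max_{a} U(a,\theta)\right] \;-\; \max_{a}\, \mathbb{E}_{\theta}\!\left[U(a,\theta)\right] \]

Here \(a\) ranges over the available actions and \(\theta\) over the unknown state of the world.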
Thanks for the tip. I am sensitive to the limits of my own willpower.
A strategy that was working for me was keeping my daily tasks/to-do lists and my journal in the same book. That way, I needed to check into my book in order to do my work, and would be able to intersperse journaling in between lists as the urge arose.
At what point do we judge that our map of this particular part of the territory is sufficiently accurate, and accept the level of explanation that we’ve reached?
If we’re going to keep pulling on the thread of “why are the dominoes on the floor” past “Zacchary did it” then we need to know why we’re asking. Are we trying to prevent future messes? Are we concerned about exactly how our antique set of dominoes was treated? Are we trying to figure out who should clean this mess?
If we’re only trying to figure out who should clean the mess, then “Zacchary did it” is sufficient, and we can stop looking deeper.
I remember, at the start of each year of high school, having the experience of realising just how stupid and ignorant I had been the previous year. And each year, I was surprised to have the same experience. This revealed to me, I think, that I’m more episodic than diachronic, in that I dissociate from my past selves.
I appreciate the advice here to have a more diachronic meta-personality. To implement this, I intend to double down on keeping a journal. I’ve struggled with this habit before, but upon rereading journal entries from a year ago, I have gained insights into how to improve my life in the present.
I will attend. Looking forward to meeting you all!
I’m having trouble with formatting. Here is what I was trying to write, less my attempts to include links:
Greetings, LessWrong.
I’m a 21 y/o Physics undergrad at the University of Waterloo. I’m currently finishing a co-op work term at the Grand River Regional Cancer Centre. I’m also trying to build a satellite: www.WatSat.ca.
My girlfriend recommended that I read HPMoR—which I find delightful—but I thought LessWrong a strange pen name. I followed the links back here and spent a month or so skimming the site. I’m happy to find a place on the internet where people are happy to provide constructive criticism in support of self-optimization. I’m also particularly intrigued by this Bayesian Conspiracy you guys have going.
I tend to lurk on sites like this, rather than actually joining the community. However, I discovered a call for a meetup in Waterloo (http://lesswrong.com/r/discussion/lw/790/are_there_any_lesswrongers_in_the_waterloo/), and I couldn’t help myself.
I am a student at UW. If you build it, I will come.
I’ve been skimming LW for about a month now, and just registered to respond to this post. Seems you lowered my cost-of-entry. I thank you.
Your plan currently addresses only x-risk from AGI. However, there are several other problems that should be considered if your goal is to prevent global catastrophe. I have recently been reading 80,000 Hours, and they have the following list of causes that may need to be included in your plan: https://80000hours.org/articles/cause-selection/
In general, I think it’s difficult to survey a wide topic like AI Alignment or Existential Risk and, with any granularity, write out a to-do list for solving it. I believe that people who work more intimately with each x-risk would be better suited to develop the on-the-ground action plan.
It is likely that a variety of x-risks would be helped by pursuing similar goals, in which case high-level, coordinated action plans developed by groups focused on each x-risk would be useful to the community. If possible, try to attend events such as EA conferences, where groups focusing on each of the possible global catastrophes will be present, and you can try to capture their shared action plans.