I’m open to hiring people remotely. DM me.
Then, since I’ve done the upfront work of thinking through my own metacognitive practices, the assistant only has to track in the moment what situation I’m in, and basically follow a flowchart I might be too tunnel-visioned to handle myself.
In the past I have literally used flowcharts for this, including very simple “choose your own adventure” templates in Roam.
The root node is just “something feels off, or something”, and then the template would guide me through a series of diagnostic questions, leading me to leaf nodes with checklists of very specific next actions depending on my state.
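As a sketch of the shape of one of those templates (the questions and checklist items below are invented for illustration, not my actual template), the tree might look something like this in code:

```python
# A hypothetical "choose your own adventure" diagnostic tree, sketched as nested dicts.
# Each node is either a question with labeled branches, or a leaf with a checklist.
triage_tree = {
    "question": "Something feels off. Is my body okay (fed, hydrated, rested)?",
    "no": {
        "checklist": ["Eat something", "Drink water", "Take a 20-minute nap"],
    },
    "yes": {
        "question": "Am I avoiding a specific task?",
        "yes": {
            "checklist": ["Name the task out loud", "Write down the first 5-minute step"],
        },
        "no": {
            "checklist": ["Set a 10-minute timer and journal about what feels off"],
        },
    },
}

def walk(node):
    """Walk the tree interactively until reaching a checklist leaf."""
    while "checklist" not in node:
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        node = node.get(answer, node)  # stay put and re-ask on unrecognized input
    for item in node["checklist"]:
        print("- " + item)

walk(triage_tree)
```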
FYI: I’m hiring for basically a thinking assistant right now, for (I expect) 5 to 10 hours a week. Pay depends on skill level. Open to in-person or remote.
If you’re really good, I’ll recommend you to other people who I want boosted, and I speculate that this could easily turn into a full time role.
If you’re interested or maybe interested, DM me. I’ll send you my current writeup of what I’m looking for (I would prefer not to post that publicly quite yet), and if you’re still interested, we can do a work trial.
However, fair warning: I’ve tried various versions of hiring people to support my metacognition over the past 5 years, and so far none of them have worked well enough that it was worth continuing. I’ve learned a bit about what I need each time, but a lot of this will probably come down to personal fit.
A different way to ask the question: what, specifically, is the last part of the text that is spoiled by this review?
Can someone tell me if this post contains spoilers?
Planecrash might be the single work of fiction for which I most want to avoid spoilers, of either the plot or the finer points of technical philosophy.
I’ve sometimes said that dignity is the first skill I learned (often to the surprise of others, since I am so willing to look silly or dumb or socially undignified). Part of my original motivation for bothering to intervene on x-risk is that it would be beneath my dignity to live on a planet with an impending intelligence explosion on track to wipe out the future, and not do anything about it.
I think Ben’s is a pretty good description of what it means for me, modulo that the “respect” in question is not at all social. It’s entirely about my relationship with myself. My dignity or not is often not visible to others at all.
I use daily checklists, in spreadsheet form, for this.
Was this possibly a language thing? Are there Chinese or Indian machine learning researchers who would use a different term than AGI in their native language?
If your takeaway is only that you should have fatter tails on the outcomes of an aspiring rationality community, then I don’t object.
If “I got some friends together and we all decided to be really dedicatedly rational” is intended as a description of Ziz and co, I think it is at least missing many crucial elements, and generally not a very good characterization.
I think this post cleanly and accurately elucidates a dynamic in conversations about consciousness. I hadn’t put my finger on this before reading this post, and I now think about it every time I hear or participate in a discussion about consciousness.
Short, as near as I can tell, true, and important. This expresses much of my feeling about the world.
Perhaps one of the more moving posts I’ve read recently, of direct relevance to many of us.
I appreciate the simplicity and brevity in expressing a regret that resonates strongly with me.
The general exercise of reviewing prior debate, now that (some of) the evidence has come in, seems very valuable, especially if one side of the debate is making high-level claims that their view has been vindicated.
That said, there were several points in this post where I thought the author’s read of the current evidence was off or mistaken. I think this overall doesn’t detract too much from the value of the post, especially because it prompted discussion in the comments.
I don’t remember the context in detail, so I might be mistaken about Scott’s specific claims. But I currently think this is a misleading characterization.
It’s conflating two distinct phenomena, namely non-mystical cult-leader-like charisma / reality distortion fields, on the one hand, and metaphysical psychic powers, on the other, under the label “spooky mind powers”, to imply that someone is reasoning in bad faith or at least inconsistently.
It’s totally consistent to claim that the first thing is happening, while also criticizing someone for believing that the second thing is happening. Indeed, this seems like a correct read of the situation to me, and therefore a natural way to interpret Scott’s claims.
I think about this post several times a year when evaluating plans.
(Or actually, I think about a nearby concept that Nate voiced in person to me, about doing things that you actually believe in, in your heart. But this is the public handle for that.)
I don’t understand how the second sentence follows from the first?
Disagreed, insofar as by “automatically converted” you mean “the shortform author has no recourse against this”.
No. That’s why I said the feature should be optional. You can make a general default setting for your shortform, plus there should be a toggle (hidden in the three-dots menu?) to turn this on and off on a post-by-post basis.
I agree. I’m reminded of Scott’s old post The Cowpox of Doubt, about how a skeptics’ movement focused on the most obvious pseudoscience is actually harmful to people’s rationality, because it reassures them that rationality failures are mostly obvious mistakes that dumb people make, instead of hard-to-notice mistakes that I make.
And then we get people believing all sorts of shoddy research – because after all, the world is divided between things like homeopathy that Have Never Been Supported By Any Evidence Ever, and things like conventional medicine that Have Studies In Real Journals And Are Pushed By Real Scientists.
Calling groups cults feels similar, in that it allows one to write them off as “obviously bad” without need for further analysis, and reassures one that their own groups (which aren’t cults, of course) are obviously unobjectionable.
Read ~all the sequences. Read all of SSC (don’t keep up with ACX).
Pessimistic about survival, but attempting to be aggressively open-minded about what will happen, instead of confirmation-biasing my views from 2015.
A friend of mine once told me “if you’re making a decision that depends on a number, and you haven’t multiplied two numbers together, you’re messing up.” I think this is basically right, and I’ve taken it to heart.
Some triggers for me:
Verbiage
When I use any of the following words, in writing or in speech, I either look up an actual number or quickly do a Fermi estimate in a spreadsheet, to check whether my intuitive idea is actually right (a toy example follows the list below).
“Order of magnitude”
“A lot”
“Enormous” / “enormously”
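For instance (a toy check with made-up numbers, not one of my actual spreadsheets), testing whether “enormous” survives contact with arithmetic might look like:

```python
# Toy check of a vague claim: "I spend an enormous amount of time in meetings."
# All numbers below are invented for illustration.
meetings_per_week = 8        # rough guess
hours_per_meeting = 0.75     # rough guess
work_hours_per_week = 45     # rough guess

meeting_hours = meetings_per_week * hours_per_meeting
fraction = meeting_hours / work_hours_per_week
print(f"{meeting_hours:.1f} hours/week in meetings ({fraction:.0%} of work time)")
# ~6 hours and ~13%: noticeable, but maybe not "enormous".
```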
Question Templates
When I’m asking a question that effectively reduces to one of the following forms:
Is it worth it to [take some action]? (Including an internal conflict about whether something is worth doing; a toy worked example follows this list.)
Is [a specific idea] feasible? Does it pencil?
Is [an event] probable?
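Here’s the kind of toy worked example I mean for the “is it worth it” case; the scenario and numbers are invented, and in practice this lives in a throwaway spreadsheet rather than code:

```python
# Toy "is it worth it?" check: pay for a cleaner, or clean the apartment myself?
# Every number here is a made-up placeholder.
my_hourly_value = 60        # what an hour of my time is roughly worth to me, in $
hours_saved = 2.5           # hours of cleaning avoided per visit
cleaner_cost = 90           # cost per visit, in $

value_of_time_saved = my_hourly_value * hours_saved
net = value_of_time_saved - cleaner_cost
print(f"Value of time saved: ${value_of_time_saved:.0f}, net: ${net:+.0f}")
# +$60 per visit under these guesses, so "yes, worth it" -- unless my inputs are way off.
```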
One thing that’s been critical for me is having a hotkey that opens a new spreadsheet. I want “open a spreadsheet” to be in muscle memory and take literally less than a second.
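One way to get something like that (a sketch, assuming Google Sheets; sheets.new is Google’s shortcut URL for a blank spreadsheet) is a tiny script bound to a system-wide hotkey with whatever hotkey or launcher tool your OS provides:

```python
# new_sheet.py -- open a blank Google Sheet in the default browser.
# Bind this script to a global hotkey (via your OS's keyboard-shortcut settings
# or a launcher app) so a fresh spreadsheet is one keystroke away.
import webbrowser

webbrowser.open("https://sheets.new")
```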