Co-founded Nonlinear.org (x-risk incubator) and Superlinear (x-risk prizes/bounties).
Also into complex systems, history, and crypto.
Another example of the Overton window moving. Imagine seeing these results a few years ago:
“I’m an accelerationist for solar power, nuclear power to the extent it hasn’t been obsoleted by solar power and we might as well give up but I’m still bitter about it, geothermal, genetic engineering, neuroengineering, FDA delenda est, basically everything except GoF bio and AI”
https://twitter.com/ESYudkowsky/status/1629725763175092225?t=A-po2tuqZ17YVYAyrBRCDw&s=19
I also think this approach deserves more consideration.
Also: since BCIs can generate easy-to-understand profits and are legibly useful to many, we could harness market forces to shorten BCI timelines.
Ambitious BCI projects will likely be more shovel-ready than many other alignment approaches: BCIs are plausibly amenable to Manhattan Project-level initiatives where we unleash significant human and financial capital. Maybe use Advanced Market Commitments to kickstart the innovators.
For anybody interested, Tim Urban has a really well-written post about Neuralink/BCIs: https://waitbutwhy.com/2017/04/neuralink.html
Anecdata: many in my non-EA/rat social circles of entrepreneurs and investors are engaging with this for the first time.
And, to my surprise (given the optimistic nature of entrepreneurs/VCs), they aren’t just being reflexive techno-optimists. They’re taking the ideas seriously, and since Bankless, “Eliezer” is becoming a first-name-only character.
Eliezer said he’s an accelerationist in basically everything except AI and gain-of-function bio, and that seems to resonate. AI is Not Like The Other Problems.
Came here to say this. Highly recommend this book for anyone working on deception.
Love this! I used to manage teams of writers/editors and here are some ideas we found useful for increasing readability (plus a rough code sketch after the list):
To remove fluff, imagine someone is paying you $1,000 for every word you remove. Our writers typically could cut 20-50% with minimal loss of information.
Long sentences are hard to read, so try to change your commas into periods.
Long paragraphs are hard to read, so try to keep each paragraph to 2-3 sentences.
Most people just skim, and some of your ideas are much more important than others, so bold/italicize your important points.
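If you want to automate the sentence-length and paragraph-length checks, here’s a minimal sketch of a readability linter. This is my own illustration, not from the original comment; the thresholds and the naive sentence-splitting regex are assumptions, so tune them to taste:

```python
# readability_lint.py — a rough sketch of the heuristics above.
# Assumptions: paragraphs are separated by blank lines; sentences end
# with ., !, or ? followed by whitespace; thresholds are arbitrary.
import re

MAX_SENTENCE_WORDS = 25   # "try to change your commas into periods"
MAX_PARA_SENTENCES = 3    # "keep each paragraph to 2-3 sentences"

def lint(text: str) -> list[str]:
    warnings = []
    # Split text into paragraphs on blank lines.
    for i, para in enumerate(re.split(r"\n\s*\n", text.strip()), start=1):
        # Naive sentence split: punctuation followed by whitespace.
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", para.strip()) if s]
        if len(sentences) > MAX_PARA_SENTENCES:
            warnings.append(f"Paragraph {i}: {len(sentences)} sentences; consider splitting it.")
        for s in sentences:
            n = len(s.split())
            if n > MAX_SENTENCE_WORDS:
                warnings.append(f"Paragraph {i}: {n}-word sentence; try a period where a comma is.")
    return warnings

if __name__ == "__main__":
    import sys
    print("\n".join(lint(sys.stdin.read())) or "Looks readable.")
```

It won’t catch fluff (no script can tell you which words are worth $1,000), but it’s a cheap first pass before a human edit.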
Thanks for the feedback! We think a bot could make sense as well—we’re exploring this internally.
Good idea! Will add this to the roadmap.
So glad you’re enjoying it! It’s mine too—I consume way more LW content because of it.
Great idea, we’ll add this to the roadmap!
I’d expect the most common failure mode for rationalists here is not understanding how patronage networks work.
Even if you do everything else right, it is very hard to get elected to a position of power if the other guy is distributing the office’s resources for votes.
You should be able to map out the voting blocs and what their criteria are, e.g. "Union X and its 500 members will mostly vote for Incumbent Y because they get $X in contracts per year," etc.
People are so irrationally intimidated by lawyers that some legal firms make all their money by sending out thousands of scary form letters demanding payment for bullshit transgressions. My company was threatened with thousands of frivolous lawsuits but only actually sued once.
Threats are cheap.
Great idea! We’ll add it to the list.
Going to share a seemingly-unpopular opinion, in a tone that usually gets downvoted on LW, but I think it needs to be said anyway:
This stat is why I still have hope: 100,000 capabilities researchers vs 300 alignment researchers.
Humanity has not tried to solve alignment yet.
There’s no cavalry coming—we are the cavalry.
I am sympathetic to fears of new alignment researchers being net negative, and I think plausibly the entire field has, so far, been net negative. But guys, there are 100,000 capabilities researchers now! One more is a drop in the bucket.
If you’re still on the sidelines, go post that idea that’s been gathering dust in your Google Docs for the last six months. Go fill out that fundraising application.
We’ve had enough fire alarms. It’s time to act.