Embedded Interactive Predictions on LessWrong
Ought and LessWrong are excited to launch an embedded interactive prediction feature. You can now embed binary questions into LessWrong posts and comments. Hover over the widget to see other people’s predictions, and click to add your own.
Try it out
How to use this
Create a question
Go to elicit.org/binary and create your question by typing it into the field at the top
Click on the question title, then click the copy button next to the title to copy the question’s URL
Paste the URL into your LW post or comment on its own line – it will turn into the prediction widget when you publish (see the example after these steps)
Troubleshooting: if the prediction box fails to appear and the link just shows up as text, go to your LW Settings, uncheck “Activate Markdown Editor”, and try again.
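For reference, the pasted line in the editor is just the question’s URL on its own line. The URL below is a made-up placeholder – use whatever the copy button actually gives you:

```
https://elicit.org/binary/questions/<question-id>
```

Once the post is published, LessWrong renders that line as the interactive prediction widget.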
Make a prediction
Click on the widget to add your own prediction
Click on your prediction line again to delete it
Link your accounts
Linking your LessWrong and Elicit accounts allows you to:
Filter for and browse all your LessWrong predictions on Elicit
Add notes to your LessWrong predictions on Elicit
See your calibration for your LessWrong predictions on Elicit
Predict on LessWrong questions in the Elicit app
To link your accounts:
Make an Elicit account
Send me (amanda@ought.org) an email with your LessWrong username and your Elicit account email
Motivation
We hope embedded predictions can prompt readers and authors to:
Actively engage with posts. By making predictions as they read, people have to stop and think periodically about how much they agree with the author.
Distill claims. Integrating predictions challenges writers to think more concretely about their claims and about how readers might disagree.
Communicate uncertainty. Rather than just stating claims, writers can also communicate a confidence level.
Collect predictions. As a reader, you can build up a personal database of predictions as you browse LessWrong.
Get granular feedback. Writers can get feedback on their content at a more granular level than comments or upvotes.
By working with LessWrong on this, Ought hopes to make forecasting easier and more prevalent. As we learn more about how people think about the future, we can use Elicit to automate larger parts of the workflow and thought process until we end up with end-to-end automated reasoning that people endorse. Check out our blog post to see demos and more context.
Some examples of how to use this
To make specific predictions, like in Zvi’s post on COVID predictions
To express credences on claims like those in Daniel Kokotajlo’s soft takeoff post
Beyond LessWrong – if you want to integrate this into your blog or have other ideas for places you’d want to use this, let us know!
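As a rough illustration of what a blog integration might look like, the sketch below embeds a question in an iframe. It is purely hypothetical: the embed URL pattern is an assumption, not a documented Elicit endpoint, and the function name is made up for this example.

```typescript
// Hypothetical sketch: embed an Elicit binary question in a blog page via an iframe.
// The URL pattern below is an assumption, not a documented Elicit embed endpoint.
function embedElicitQuestion(container: HTMLElement, questionId: string): void {
  const iframe = document.createElement("iframe");
  iframe.src = `https://elicit.org/binary/questions/${questionId}`; // assumed URL format
  iframe.width = "600";
  iframe.height = "300";
  iframe.style.border = "none";
  container.appendChild(iframe);
}

// Usage (with a hypothetical question ID and container element):
// embedElicitQuestion(document.getElementById("prediction")!, "<question-id>");
```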
I liked this post a lot. In general, I think that the rationalist project should focus a lot more on “doing things” than on writing things. Producing tools like this is a great example of “doing things”. Other examples include starting meetups and group houses.
So, I liked this post a) for being an example of “doing things”, but also b) for being what I consider to be a good example of “doing things”. Consider that quote from Paul Graham about “live in the future and build what’s missing”. To me, this has gotta be a tool that exists in the future, and I appreciate the effort to make it happen.
Unfortunately, as I write this on 12/15/21, https://elicit.org/binary is down. That makes me sad. It doesn’t mean the people who worked on it did a bad job, though. The analogy of a phase change in chemistry comes to mind.
If you are trying to melt an ice cube and you move the temperature from 10℉ to 31℉, you were really close, but you ultimately came up empty-handed. But you can’t just look at the fact that the ice cube is still solid and judge progress that way. I say that you need to look more closely at the change in temperature. I’m not sure how much movement in temperature happened here, but I don’t think it was trivial.
As for how it could have been better, I think it would have really helped to have lots and lots of examples. I’m a big fan of examples, sorta along the lines of what the specificity sequence talks about. I’m talking dozens and dozens of examples. I think that helps people grok how useful this can be and when they might want to use it. As I’ve mentioned elsewhere though, coming up with examples is weirdly difficult.
As for follow-up work, I don’t know what the Elicit team did and I don’t want to be presumptuous, but I don’t recall any follow-up posts or iteration on LessWrong. Perhaps something like that would have led to changes that caused more adoption. I still stand by my old comments about there needing to be 1) a way to embed the prediction directly from the LessWrong text editor, and 2) things like a feed of recent predictions.