Embedded Interactive Predictions on LessWrong
Ought and LessWrong are excited to launch an embedded interactive prediction feature. You can now embed binary questions into LessWrong posts and comments. Hover over the widget to see other people’s predictions, and click to add your own.
Try it out
How to use this
Create a question
Go to elicit.org/binary and create your question by typing it into the field at the top
Click on the question title, and click the copy button next to the title – it looks like this:
Paste the URL into your LW post or comment. It’ll look like this in the editor:
Troubleshooting: if the prediction box fails to appear and the link just shows up as text, go to your LW settings, uncheck “Activate Markdown Editor”, and try again.
Make a prediction
Click on the widget to add your own prediction
Click on your prediction line again to delete it
Link your accounts
Linking your LessWrong and Elicit accounts allows you to:
Filter for and browse all your LessWrong predictions on Elicit
Add notes to your LessWrong predictions on Elicit
See your calibration for your LessWrong predictions on Elicit
Predict on LessWrong questions in the Elicit app
To link your accounts:
Make an Elicit account
Send me (amanda@ought.org) an email with your LessWrong username and your Elicit account email
Motivation
We hope embedded predictions can prompt readers and authors to:
Actively engage with posts. By making predictions as they read, people have to stop and think periodically about how much they agree with the author.
Distill claims. For writers, integrating predictions challenges them to think more concretely about their claims and how readers might disagree.
Communicate uncertainty. Rather than just stating claims, writers can also communicate a confidence level.
Collect predictions. As a reader, you can build up a personal database of predictions as you browse LessWrong.
Get granular feedback. Writers can get feedback on their content at a more granular level than comments or upvotes.
By working with LessWrong on this, Ought hopes to make forecasting easier and more prevalent. As we learn more about how people think about the future, we can use Elicit to automate larger parts of the workflow and thought process until we end up with end-to-end automated reasoning that people endorse. Check out our blog post to see demos and more context.
Some examples of how to use this
To make specific predictions, like in Zvi’s post on COVID predictions
To express credences on claims like those in Daniel Kokotajlo’s soft takeoff post
Beyond LessWrong – if you want to integrate this into your blog or have other ideas for places you’d want to use this, let us know!
I liked this post a lot. In general, I think that the rationalist project should focus a lot more on “doing things” than on writing things. Producing tools like this is a great example of “doing things”. Other examples include starting meetups and group houses.
So, I liked this post a) for being an example of “doing things”, but also b) for being what I consider to be a good example of “doing things”. Consider the quote from Paul Graham about “live in the future and build what’s missing”. To me, this has gotta be a tool that exists in the future, and I appreciate the effort to make it happen.
Unfortunately, as I write this on 12/15/21, https://elicit.org/binary is down. That makes me sad. It doesn’t mean the people who worked on it did a bad job though. The analogy of a phase change in chemistry comes to mind.
If you are trying to melt an ice cube and you move the temperature from 10℉ to 31℉, you got really close to the 32℉ melting point, but you ultimately came up empty-handed. But you can’t just look at the fact that the ice cube is still solid and judge progress that way. I say that you need to look more closely at the change in temperature. I’m not sure how much movement in temperature happened here, but I don’t think it was trivial.
As for how it could have been better, I think it would have really helped to have lots and lots of examples. I’m a big fan of examples, sorta along the lines of what the specificity sequence talks about. I’m talking dozens and dozens of examples. I think that helps people grok how useful this can be and when they might want to use it. As I’ve mentioned elsewhere though, coming up with examples is weirdly difficult.
As for followup work, I don’t know what the Elicit team did and I don’t want to be presumptuous, but I don’t recall any followup posts on LessWrong or iteration. Perhaps something like that would have led to changes that caused more adoption. I still stand by my old comments about there needing to be 1) a way to embed the prediction directly from the LessWrong text editor, and 2) things like a feed of recent predictions.