That is indeed what I read. A quote:

In our vision, Elicit learns by imitating the thoughts and reasoning steps users share in the tool. It also gets direct feedback from users on its suggestions.

It just seems like LW piggybacks on Elicit without revealing to Elicit any of the more complex stuff that goes into predictions. Elicit wants data about (as I understand it) probabilistic argument-mapping; instead, it’s just getting point probabilities for questions. That doesn’t seem very useful to me.
Lots of uncertainty, but a few ways this can connect to the long-term vision laid out in the blog post:
We want to be useful for making forecasts broadly. If people want to make predictions on LW, we want to support that. We specifically want some people to make lots of predictions so that other people can reuse the predictions we house to answer new questions. The LW integration generates lots of predictions and funnels them into Elicit. It can also teach us how to make predicting easier in ways that might generalize beyond LW.
It’s unclear how exactly the LW community will use this integration, but if they use it to decompose arguments or operationalize complex concepts, we can start to associate reasoning or argumentative context with predictions. It would be very cool if, given some paragraph of a LW post, we could predict what forecast should be embedded next, or how a certain claim should be operationalized into a prediction. “Continuing the takeoffs debate” and “Non-Obstruction: A Simple Concept Motivating Corrigibility” start to point at this (see the sketch at the end of this comment).
There are versions of this integration that could involve richer commenting in the LW editor.
Mostly it was a quick experiment that both teams were pretty excited about :)
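To make the second point a bit more concrete, here’s a minimal sketch of the kind of (context, forecast) pair the LW integration could eventually yield. Everything here is invented for illustration; it’s not an actual Elicit schema:

```python
from dataclasses import dataclass

@dataclass
class EmbeddedForecast:
    """One (context, forecast) pair harvested from a LW post.

    All field names are made up for this sketch.
    """
    context: str                  # the passage of the post preceding the forecast
    question: str                 # how the claim was operationalized into a question
    resolution_date: str          # when the question resolves
    community_probs: list[float]  # probabilities LW readers submitted

# A made-up example of the kind of pair we'd want to learn from.
example = EmbeddedForecast(
    context="If takeoff is gradual, leading labs should see warning shots first...",
    question="Will a widely-reported AI 'warning shot' occur before 2030?",
    resolution_date="2030-01-01",
    community_probs=[0.35, 0.50, 0.20],
)
```

Given enough pairs like this, predicting the next embedded forecast from the surrounding text starts to look like a fairly standard supervised problem.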
Ah, a lot of this makes sense! So you’re from Ought?

We specifically want some people to make lots of predictions so that other people can reuse the predictions we house to answer new questions.
Yep, OK, this makes sense to me.
It’s unclear how exactly the LW community will use this integration, but if they use it to decompose arguments or operationalize complex concepts, we can start to associate reasoning or argumentative context with predictions. It would be very cool if, given some paragraph of a LW post, we could predict what forecast should be embedded next, or how a certain claim should be operationalized into a prediction.
Right, OK, this makes sense to me as well, although it’s certainly more speculative.
When Elicit has nice argument mapping (it doesn’t yet, right?), it might be pretty cool and useful (to both LW and Ought) if that could be used on LW as well. For example, someone could make an argument in a post, and then have an Elicit map (involving several questions linked together) where LW users could reveal what they think of the premises, the conclusion, and the connection between them.
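Concretely, I imagine something like the following; the structure and names here are made up by me, not anything Elicit actually exposes:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A single linked Elicit-style question; each reader submits a probability."""
    text: str
    probabilities: list[float] = field(default_factory=list)  # one entry per reader

@dataclass
class ArgumentMap:
    """Premises, conclusion, and the inference step, each rated separately."""
    premises: list[Claim]
    conclusion: Claim
    inference: Claim  # "does the conclusion follow from the premises?"

# A hypothetical two-premise argument a post author might embed.
argument = ArgumentMap(
    premises=[
        Claim("Premise 1: prosaic ML systems arrive before other routes to AGI"),
        Claim("Premise 2: current techniques can't align prosaic ML systems"),
    ],
    conclusion=Claim("Conclusion: prosaic alignment work should be prioritized"),
    inference=Claim("The conclusion follows if both premises hold"),
)
```

The interesting part is that readers could accept the premises but reject the inference, or vice versa, and the map would show exactly where the disagreement lives.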
When Elicit has nice argument mapping (it doesn’t yet, right?), it might be pretty cool and useful (to both LW and Ought) if that could be used on LW as well. For example, someone could make an argument in a post, and then have an Elicit map (involving several questions linked together) where LW users could reveal what they think of the premises, the conclusion, and the connection between them.
Yes, that is very aligned with the types of things we’re interested in!!