coo @ ought.org. by default please assume i am uncertain about everything i say unless i say otherwise :)
jungofthewon
Do you have any examples?
Beta test GPT-3 based research assistant
When Elicit has nice argument mapping (it doesn’t yet, right?) it might be pretty cool and useful (to both LW and Ought) if that could be used on LW as well. For example, someone could make an argument in a post, and then have an Elicit map (involving several questions linked together) where LW users could reveal what they think of the premises, the conclusion, and the connection between them.
Yes that is very aligned with the type of things we’re interested in!!
Lots of uncertainty, but a few ways this can connect to the long-term vision laid out in the blog post:
We want to be useful for making forecasts broadly. If people want to make predictions on LW, we want to support that. We specifically want some people to make lots of predictions so that other people can reuse the predictions we house to answer new questions. The LW integration generates lots of predictions and funnels them into Elicit. It can also teach us how to make predicting easier in ways that might generalize beyond LW.
It’s unclear how exactly the LW community will use this integration, but if they use it to decompose arguments or operationalize complex concepts, we can start to associate reasoning or argumentative context with predictions. It would be very cool if, given some paragraph of a LW post, we could predict what forecast should be embedded next, or how a certain claim should be operationalized into a prediction (a rough sketch of this follows below). “Continuing the takeoffs debate” and “Non-Obstruction: A Simple Concept Motivating Corrigibility” start to point at this.
There are versions of this integration that could involve richer commenting in the LW editor.
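To make the “operationalize a claim into a prediction” idea concrete, here is a minimal sketch written against the pre-1.0 openai Python client used during the GPT-3 beta. The prompt wording, engine choice, and parameters are illustrative assumptions, not anything Elicit actually runs:

```python
# Sketch: ask GPT-3 to turn a paragraph of a LessWrong post into a candidate
# forecast question. Prompt wording, engine, and parameters are assumptions;
# this is not Ought's actual pipeline.
import openai  # pre-1.0 client, as used during the GPT-3 beta

openai.api_key = "YOUR_API_KEY"

paragraph = (
    "I expect that within a few years most new ML papers will include "
    "some form of automated evaluation by language models."
)

prompt = (
    "Rewrite the claim below as a single, resolvable forecasting question "
    "with a clear date and resolution criterion.\n\n"
    f"Claim: {paragraph}\n\nForecasting question:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```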
Mostly it was a quick experiment that both teams were pretty excited about :)
I see what you’re saying. This feature is designed to support tracking changes in predictions primarily over longer periods of time, e.g. for forecasts with years between creation and resolution. (You can even download a CSV of the forecast data to run analyses on it.)
It can get a bit noisy, like in this case, so we can think about how to address that.
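For anyone who does pull the CSV, here is a minimal sketch of the kind of analysis it enables, assuming hypothetical column names like `created_at` and `prediction` (the actual export schema may differ):

```python
# Sketch: tracking how forecasts drift over time from an Elicit CSV export.
# The file name and column names ("created_at", "prediction") are assumptions,
# not the actual export schema.
import pandas as pd

df = pd.read_csv("forecast_export.csv", parse_dates=["created_at"])

# Daily median prediction, to smooth out noisy individual updates.
daily_median = (
    df.sort_values("created_at")
      .set_index("created_at")["prediction"]
      .resample("D")
      .median()
      .dropna()
)
print(daily_median)
```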
you mean because my predictions are noisy and you don’t want to see them in that list?
try it and let’s see what happens!
TurnTrout will use the Elicit embedding on LessWrong for a non-prediction question by 28-11-2020
this is too much fun to click on
Automating reasoning about the future at Ought
Haha I didn’t find it patronizing personally but it did take me an embarrassingly long time to figure out what Filipe did there :) Resource allocation seems to be a common theme in this thread.
Yes! For example, I am often amazed by people who are able to explain complex technical concepts in accessible and interesting ways.
Yes-anding you: our limited ability to run “experiments” and easily get empirical results for policy initiatives seems to really hinder progress. Maybe AI can help us organize our values, simulate a bunch of policy outcomes, and then find the best win-win solution when our values diverge.
I love the idea of exploring different minds and seeing how they fit. Getting chills thinking about what it means for humanity’s capacity for pleasure to explode. And loving the image of swimming through a vast, clear, blue mind design ocean.
Doesn’t directly answer the question but: AI tools / assistants are often portrayed as having their own identities. They have their own names e.g. Samantha, Clara, Siri, Alexa. But it doesn’t seem obvious that they need to be represented as discrete entities. Can an AI system be so integrated with me that it just feels like me on a really really really good day? Suddenly I’m just so knowledgeable and good at math!
Instant translation across neuroatypical people, just like instant translation between English and Korean. An AI system that helps me understand what an autistic individual is currently experiencing and helps me communicate more easily with them.
An interactive, conversational system that makes currently expensive and highly manual therapy much more accessible. Something that talks you through a cortisol spike, anxiety attack, panic attack.
I tweeted an idea earlier: a tool that explains, in words you understand, what the other person really meant. Maybe it has settings for “gently nudge me if I’m unfairly assuming negative intent.”
[Question] Brainstorming positive visions of AI
I generally agree with this but think the alternative goal of “make forecasting easier” is just as good, might actually make aggregate forecasts more accurate in the long run, and may require things that seemingly undermine the virtue of precision.
More concretely, if an underdefined question makes it easier for people to share whatever beliefs they already have, and then facilitates rich conversation among those people, that’s better than if a highly specific question prevents people from making a prediction at all. At least as much of the value of making public, visual predictions like this, if not more, comes from the ensuing conversation and feedback as from the precision of the forecasts themselves.
Additionally, a lot of assumptions get made at the time the question is defined more precisely, which could prematurely limit the space of conversation or ideas. There are good reasons why different people define AGI, or the moment of “AGI arrival,” the way they do; those reasons might not come up if the question asker had already taken a point of view.
“Remember that you are dying.”