French. Product manager at Metaculus. Partial to dark chocolate.
tenthkrige
Very interesting!
From eyeballing the graphs, it looks like the average Brier score is barely below 0.25. This indicates that GPT-4 is better than a dart-throwing monkey (i.e. predicting a uniformly random percentage, expected score 0.33), and only barely better than chance (always predicting 50%, score 0.25).
It would be interesting to see the decompositions for those two naive strategies on that set of questions, and compare them to the sub-scores GPT-4 got.
You could also check if GPT-4 is significantly better than chance.
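If it helps, here is a minimal sketch of that comparison (the forecast and resolution arrays are placeholders, not the data from the post, and the bootstrap is just one crude way to check significance):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: binary resolutions and GPT-4's forecasts on the same questions.
outcomes = rng.integers(0, 2, size=500).astype(float)
gpt4_probs = rng.uniform(0.0, 1.0, size=500)

def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 resolutions."""
    return float(np.mean((probs - outcomes) ** 2))

monkey_probs = rng.uniform(0.0, 1.0, size=len(outcomes))  # dart-throwing monkey
chance_probs = np.full(len(outcomes), 0.5)                 # always predict 50%

print("GPT-4:  ", brier(gpt4_probs, outcomes))
print("Monkey: ", brier(monkey_probs, outcomes))  # ~0.33 in expectation
print("Chance: ", brier(chance_probs, outcomes))  # exactly 0.25

# Crude significance check: bootstrap GPT-4's Brier score, see how often it beats 0.25.
boot = []
for _ in range(10_000):
    idx = rng.integers(0, len(outcomes), size=len(outcomes))
    boot.append(brier(gpt4_probs[idx], outcomes[idx]))
print("Share of bootstrap resamples beating always-50%:", np.mean(np.array(boot) < 0.25))
```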
dieontrasted
Typo?
people who are focused on providing—and incentivized to provide—estimates of the expected number of cases
Can you say more about this? Would users forecast a single number? Would they get scored on how close their number is to the actual number? Could they give confidence intervals?
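Not the authors' plan, just one standard possibility for the confidence-interval case: score point estimates with absolute (or log) error, and score a central prediction interval with the interval score, which charges you for width plus a penalty for missing the realized count.

```python
def interval_score(lower, upper, actual, alpha=0.1):
    """Interval score for a central (1 - alpha) prediction interval; lower is better.

    You pay the width of your interval, plus a penalty scaled by 2/alpha
    whenever the realized value falls outside it.
    """
    score = upper - lower
    if actual < lower:
        score += (2 / alpha) * (lower - actual)
    elif actual > upper:
        score += (2 / alpha) * (actual - upper)
    return score

# Hypothetical 90% interval for a weekly case count, scored against two possible outcomes.
print(interval_score(1_000, 5_000, actual=4_200))  # inside: pay only the width (4000)
print(interval_score(1_000, 5_000, actual=9_000))  # outside: width + 20 * 4000 miss penalty
```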
I think that’s how I’d use this as well.
I don’t think that solves the problem though. There are a lot of people, and many of them believe very unlikely models. Any model we (lesswrong-ish) people spend time discussing is going to be vastly more likely than a randomly selected human-thought-about model. I realise this is getting close to reference class tennis, sorry.
Cool idea. Any model we actually spend time talking about is going to be vastly above the base rate, though. Because most human-considered models are very nonsensical/unlikely.
At first I was dubious about the framing of a “shifting” n-dimensional landscape, because in a sense the landscape is fixed in 2n dimensions (I think?), but you’ve convinced me this is a useful tool to think about/discuss these issues. Thanks for writing this!
Epistemic status: gross over-simplification, and based on what I remember from reading this 6 months ago.
This paper resolved many questions I still had about MWI. Relevantly here, I think it argues that the number of worlds doesn’t grow, because there was already an infinity of them spread through space.
Observing an experiment is then equivalent to locating yourself in space. Worlds splitting is the process where identical regions of the universe become different.
The scoring system incentivizes predicting your true credence (gory details here).
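Metaculus’s actual rule has more moving parts, but the incentive can be seen with the plain log score (a sketch, not the site’s exact formula): if your true credence is $p$ and you report $q$, your expected score is

$$\mathbb{E}[S(q)] = p \ln q + (1 - p)\ln(1 - q), \qquad \frac{d}{dq}\,\mathbb{E}[S(q)] = \frac{p}{q} - \frac{1 - p}{1 - q} = 0 \iff q = p,$$

so the expectation is maximized exactly when you report your true credence; any strictly proper scoring rule shares this property.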
I think Metaculus rewarding participation is one of the reasons it has participation. Metaculus can discriminate good predictors from bad predictors because it has their track records (I agree this is not the same as discriminating good predictions from bad ones). This info is incorporated into the Metaculus prediction, which is hidden by default but can be unlocked with on-site fake currency.
You could also check their track record. It has a calibration curve and much more.
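For anyone unfamiliar, a calibration curve just bins forecasts and compares the average forecast in each bin to how often those questions resolved positively; a rough sketch with made-up (perfectly calibrated by construction) data, not Metaculus’s actual numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
probs = rng.uniform(0.0, 1.0, size=2000)     # made-up forecasts
outcomes = rng.random(2000) < probs          # made-up resolutions, calibrated by construction

edges = np.linspace(0.0, 1.0, 11)            # ten bins: 0-10%, 10-20%, ...
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (probs >= lo) & (probs < hi)
    if in_bin.any():
        print(f"{lo:.0%}-{hi:.0%}: mean forecast {probs[in_bin].mean():.2f}, "
              f"observed frequency {outcomes[in_bin].mean():.2f}")
```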
This feels related to Policy Debates Should Not Appear One-Sided: anything that’s obvious does not even enter into consideration, so you only have difficult choices to make.
Don’t you mean that it will damage the institutions built on intellectual dark matter? Did I miss something?
This was interesting. I think I missed an assumption somewhere, because for , it seems that the penalty is , which seems very low for a -degree polynomial fitted on points.
I almost gave up halfway through, for much the same reasons, but this somehow felt important, the way some sequences/codex posts felt important at the time, so I powered through. I definitely will need a second pass on some of the large inferential steps, but overall this felt long-term valuable.
I find this kind of post very valuable, thank you for writing it so well.
I see someone who seems to see part of the world the same way I do, and I go “can we talk? can we be buds? can we be twinsies? are we on the same team?” and then I realize “oh, no, outside of this tiny little area, they…really don’t agree with me at all. Dammit.”
That rang very close to home, choked me up a little bit. But the good sad, where you put clean socks on and go make the world less terrible.
I’d never thought about it clearly, so thanks for this model.
A behavior I’ve observed (and participated in) that you don’t mention: the group can temporarily splinter. Picture 6 people. Someone explores topic A. Two other people like the new topic. The other 3 listen politely for 1-2 minutes. One of the three bored people explores topic B, addressing a bored neighbor (identified by their silence). The third bored person latches on to them. Then both conversations evolve until one dies down or a leader forcibly merges the two.
(By forcibly merge, I mean: while participating in conversation A, listen to conversation B, wait for a simultaneous pause in both, then respond conspicuously to conversation B, dragging most conversation A participants with you. I have observed myself doing this.) (Single participants can also switch freely.) (I have observed this to work with close friends and relative strangers, but obviously strangers need leaders/confident people to start new conversations, because they have to be in explore mode.)
I think this lowers the exploration barrier, compared to your model.
Content feedback: the inferential distance between Löb’s theorem and spurious counterfactuals seems larger than that of the other points. Maybe that’s because I haven’t internalised the theorem, not being a logician and all.
Unnecessary nitpick: the gears in the robot’s brain would turn just fine as drawn: since the outer gears are both turning anticlockwise, the inner gear would simply turn clockwise. (I think my inner engineer is showing)
Very neat tool, thanks for the conciseness of the explanation. Though I hope I won’t have to measure 70° temperatures by hand any time soon. (I know, I know, it’s in Fahrenheit, but it still sounds… dissonant? to my European ears)
Well that’s a mindset I don’t encounter often irl. Do you estimate you’re a central example in your country/culture?
Good points well made. I’m not sure what you mean by “my expected log score is maximized” (and would like to know), but in any case it’s probably your average world rather than your median world that does it?
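My loose reading, assuming something like the log score: the expectation is a probability-weighted average over every world you assign credence to,

$$\mathbb{E}[\text{log score}] = \sum_w P(w)\,\ln q(o_w),$$

and that sum is maximized by reporting the full distribution $q = P$, which depends on all the worlds’ probabilities at once rather than on what happens in any single (median) world.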