[Deleted on request]
Alex_Altair
Rediscovering some math.
[I actually wrote this in my personal notes years ago. Seemed like a good fit for quick takes.]
I just rediscovered something in math, and the way it came out to me felt really funny.
I was thinking about startup incubators, and thinking about how it can be worth it to make a bet on a company that you think has only a one in ten chance of success, especially if you can incubate, y’know, ten such companies.
And of course, you’re not guaranteed success if you incubate ten companies, in the same way that you can flip a coin twice and have it come up tails both times. The expected number of successes is one, but the probability of at least one success is not one.
So what is it? More specifically, if you consider ten such 1-in-10 events, do you think you’re more or less likely to have at least one of them succeed? It’s not intuitively obvious which way that should go.
Well, if they’re independent events, then the probability of all of them failing is 0.9^10, or about 0.349.
And therefore the probability of at least one succeeding is 1 − 0.9^10 ≈ 0.651. More likely than not! That’s great. But not hugely more likely than not.
(As a side note, how many events do you need before you’re more likely than not to have at least one success? It turns out the answer is seven. At seven 1-in-10 events, the probability that at least one succeeds is about 0.52, and at six events, it’s about 0.47.)
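A quick numerical check (a minimal Python sketch; the only inputs are the 1-in-10 odds from above):

```python
# Probability of at least one success among n independent 1-in-10 events.
for n in range(1, 11):
    p_at_least_one = 1 - 0.9 ** n
    print(f"n={n}: {p_at_least_one:.3f}")

# n=6  -> 0.469  (still less likely than not)
# n=7  -> 0.522  (first n where a success is more likely than not)
# n=10 -> 0.651
```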
So then I thought, it’s kind of weird that that’s not intuitive. Let’s see if I can make it intuitive by stretching the quantities way up and down — that’s a strategy that often works. Let’s say I have a 1-in-a-million event instead, and I do it a million times. Then what is the probability that I’ll have had at least one success? Is it basically 0 or basically 1?
...surprisingly, my intuition still wasn’t sure! I would think, it can’t be too close to 0, because we’ve rolled these dice so many times that surely they came up as a success once! But that intuition doesn’t work, because we’ve exactly calibrated the dice so that the number of rolls is the same as the unlikelihood of success. So it feels like the probability also can’t be too close to 1.
So then I just actually typed this into a calculator. It’s the same equation as before, but with a million instead of ten. I added more and more zeros, and then what I saw was that the number just converges to somewhere in the middle.
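Here’s roughly what that calculator experiment looks like as a small Python sketch, pushing n from ten up to a billion:

```python
# Probability of at least one success among n independent 1-in-n events,
# for larger and larger n ("adding more zeros").
for n in [10, 1_000, 1_000_000, 1_000_000_000]:
    print(f"n={n}: {1 - (1 - 1/n) ** n:.6f}")

# n=10            -> 0.651322
# n=1_000         -> 0.632305
# n=1_000_000     -> 0.632121
# n=1_000_000_000 -> 0.632121
```

The values settle at about 0.632 rather than heading toward 0 or 1.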
If it were the 1300s, this would have felt like some kind of discovery. But by this point, I had realized what I was doing, and felt pretty silly. Let’s drop the “1 −”, and look at this limit:

lim_(n→∞) (1 − 1/n)^n

If this rings any bells, then it may be because you’ve seen this limit before:

e = lim_(n→∞) (1 + 1/n)^n

or perhaps as

e^x = lim_(n→∞) (1 + x/n)^n

The probability I was looking for was 1 − 1/e, or about 0.632.
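Spelled out, the connecting step is just setting x = −1 in that last limit:

```latex
\lim_{n\to\infty}\left(1-\frac{1}{n}\right)^{n}
  = \lim_{n\to\infty}\left(1+\frac{-1}{n}\right)^{n}
  = e^{-1} = \frac{1}{e} \approx 0.368,
\qquad \text{so} \qquad
1-\frac{1}{e} \approx 0.632.
```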
I think it’s really cool that my intuition somehow knew to be confused here! And to me this path of discovery was way more intuitive than just seeing the standard definition, or wondering about functions that are their own derivatives. I also think it’s cool that this path made e pop out on its own, since I almost always think of e in the context of an exponential function, rather than as a constant. It also makes me wonder if 1/e is more fundamental than e. (Similar to how τ is more fundamental than π.)
we only label states as ‘different’ if they actually result in different controller behaviour at some point down the line.
This reminds me a lot of the coarse-graining of “causal” states in comp mech.
I got a ton of value from ILIAD last year, and strongly recommend it to anyone interested!
IYKYK
Report & retrospective on the Dovetail fellowship
For anyone reading this comment thread in the future, Dalcy wrote an amazing explainer for this paper here.
Come join Dovetail’s agent foundations fellowship talks & discussion
Dovetail’s agent foundations fellowship talks & discussion
Towards building blocks of ontologies
See also the classic LW post, The Best Textbooks on Every Subject.
Indeed, we know about those posts! Lmk if you have a recommendation for a better textbook-level treatment of any of it (modern papers etc). So far the grey book feels pretty standard in terms of pedagogical quality.
Some small corrections/additions to my section (“Altair agent foundations”). I’m currently calling it “Dovetail research”. That’s not publicly written anywhere yet, but if it were listed as that here, it might help people who are searching for it later this year.
Which orthodox alignment problems could it help with?: 9. Humans cannot be first-class parties to a superintelligent value handshake
I wouldn’t put number 9. It’s not intended to “solve” most of these problems, but it is intended to help make progress on understanding the nature of the problems through formalization, so that they can be avoided or postponed, or more effectively solved by other research agendas.
Target case: worst-case
definitely not worst-case, more like pessimistic-case
Some names: Alex Altair, Alfred Harwood, Daniel C, Dalcy K
Add “José Pedro Faustino”
Estimated # FTEs: 1-10
I’d call it 2, averaged throughout 2024.
Some outputs in 2024: mostly exposition but it’s early days
“Gain writing skills BEFORE...”
FWIW I can’t really tell what this website is supposed to be/do by looking at the landing page and menu
The title reads as ambiguous to me; I can’t tell if you mean “learn to [write well] before” or “learn to write [well before]”.
DM me if you’re interested.
I, too, am quite interested in trialing more people for roles on this spectrum.
Thanks. Is “pass@1” some kind of lingo? (It seems like an ungoogleable term.)
Oh, sure, I’m happy to delete it since you requested. Although, I don’t really understand how my comment is any more politically object-level than your post? I read your post as saying “Hey guys I found a 7-leaf clover in Ireland, isn’t that crazy? I’ve never been somewhere where clovers had that many leaves before.” and I’m just trying to say “FYI I think you just got lucky, I think Ireland has normal clovers.”