Hey there~ I’m Austin, currently building https://manifund.org. Always happy to meet LessWrong people; reach out at akrolsmir@gmail.com!
Austin Chen
Announcing the $200k EA Community Choice
What are you getting paid in?
Podcast: Elizabeth & Austin on “What Manifold was allowed to do”
One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard fleshing out the positive mentions ahead of time).
I think you’re talking about me? I may have miscommunicated; I was ~zero anxious, and was instead trying to signal that I’d looked over the doc as requested, and poking some fun at the TODOs. FWIW I appreciated your process for running criticism ahead of time (and especially enjoyed the back-and-forth comments on the doc; I’m noticing that those kinds of conversations on a private GDoc seem somehow more vibrant/nicer than the ones on LW or on a blog’s comments.)
most catastrophes through both recent and long-ago history have been caused by governments
Interesting lens! Though I’m not sure if this is fair—the largest things that are done tend to get done through governments, whether those things are good or bad. If you blame catastrophes like Mao’s famine or Hitler’s genocide on governments, you should also credit things like slavery abolition and vaccination and general decline of violence in civilized society to governments too.
I’d be interested to hear how Austin has updated regarding Sam’s trustworthiness over the past few days.
Hm I feel like a bunch of people have updated majorly negatively, but I haven’t—only small amounts. I think he eg gets credit for the ScarJo thing. I am mostly withholding judgement, though; now that the NDAs have been dropped, curious to see what comes to light (if nothing does, that would be more positive credit towards Sam, and some validation to my point that NDAs were not really concealing much).
Ah interesting, thanks for the tips.
I use filler a lot so thought the um/ah removal was helpful (it actually cut down the recording by something like 10 minutes overall). It’s especially good for making the transcript readable, though perhaps I could just edit the transcript without changing the audio/video.
Thanks for the feedback! I wasn’t sure how much effort to put into producing this transcript (this entire podcast thing is pretty experimental); good to know you were trying to read along.
It was machine-transcribed via Descript, but then I did put in another ~90min cleaning it up a bit, removing filler words and correcting egregious mistranscriptions. I think that still left it in an uncanny valley of “almost readable, but quite a bad experience”. I could have spent another hour or so to really clean it up, and perhaps will do so next time (or find some scalable way to handle it, eg outsourcing or an LLM).
Yeah I meant her second post, the one that showed off the emails around the NDAs.
Episode: Austin vs Linch on OpenAI
Hm, I disagree and would love to operationalize a bet/market on this somehow; one approach is something like “Will we endorse Jacob’s comment as ‘correct’ 2 years from now?”, resolved by a majority of Jacob + Austin + <neutral 3rd party>, after deliberating for ~30m.
Starting new technical AI safety orgs/projects seems quite difficult in the current funding ecosystem. I know of many alumni who have founded or are trying to found projects who express substantial difficulties with securing sufficient funding.
Interesting—what’s the minimum funding ask to get a new org off the ground? I think something like $300k would be enough to cover ~9 mo of salary and compute for a team of ~3, and that seems quite reasonable to raise in the current ecosystem for pre-seeding an org.
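Rough back-of-envelope for that $300k figure (the per-person salary and compute numbers below are my own illustrative assumptions, not anything from the comment):

```python
# Back-of-envelope check: can $300k cover ~9 months of salary and
# compute for a ~3-person team? All line items are assumed figures.

people = 3
months = 9
salary_per_person_per_month = 9_000   # assumed modest early-org salary
compute_per_month = 5_000             # assumed team-wide cloud/GPU budget

salary_total = people * months * salary_per_person_per_month   # 243,000
compute_total = months * compute_per_month                     # 45,000
total = salary_total + compute_total                           # 288,000

print(f"Total: ${total:,}")  # Total: $288,000 -- under the $300k ask
```

With those (debatable) assumptions, ~$300k does pencil out, with a small buffer left over.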
Manifund Q1 Retro: Learnings from impact certs
I very much appreciate @habryka taking the time to lay out your thoughts; posting like this is also a great example of modeling out your principles. I’ve spent copious amounts of time shaping the Manifold community’s discourse and norms, and this comment contains a mix of patterns that ring true from my own experience (eg the bits about case law and avoiding echo chambers) and new learnings for me (eg that young/non-English speakers improve more easily).
So, I love Scott, consider CM’s original article poorly written, and also think doxxing is quite rude, but with all the disclaimers out of the way: on the specific issue of revealing Scott’s last name, Cade Metz seems more right than Scott here? Scott was worried about a bunch of knock-on effects of having his last name published, but none of that bad stuff happened.[1]
I feel like at this point in the era of the internet, doxxing (at least, in the form of involuntary identity association) is much more of an imagined threat than a real harm. Beff Jezos’s more recent doxxing also comes to mind as something that was more controversial for the controversy itself than for any actual harm done to Jezos as a result.
- ^
Scott did take a bunch of ameliorating steps, such as leaving his past job—but my best guess is that none of that would have actually been necessary. AFAICT he’s actually in a much better financial position thanks to his subsequent transition to Substack—though crediting Cade Metz for this is a bit like crediting Judas for starting Christianity.
My friend Eric once proposed something similar, except where two charitable individuals just create the security directly. Say Alice and Bob both want to donate $7500 to GiveWell; instead of doing so directly, they could create a security which is “flip a coin, winner gets $15000”. They do so; Alice wins, waits a year, then donates the $15000 as appreciated long-term gains and takes the full tax deduction, while Bob deducts his $7500 loss.
This seems to me like it ought to work, but I’ve never actually tried this myself...
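A sketch of the tax arithmetic behind the scheme (this assumes US-style rules where a long-term appreciated asset is deductible at fair market value and the loser’s $7500 loss is deductible—exactly the untested parts—so treat it as an illustration, not tax advice):

```python
# Eric's coin-flip scheme vs. both parties donating directly.
# Assumes FMV deduction on appreciated long-term assets and a
# deductible $7,500 loss for the loser -- both unverified assumptions.

stake = 7_500

# Baseline: Alice and Bob each donate their $7,500 directly.
baseline_donated = 2 * stake      # 15,000 reaches GiveWell
baseline_deductions = 2 * stake   # 15,000 in total deductions

# Coin-flip: winner (Alice) holds the $15,000 security for a year,
# then donates it, deducting its full appreciated value; loser (Bob)
# deducts his stake as a loss.
alice_deduction = 2 * stake                        # 15,000 FMV deduction
bob_deduction = stake                              # 7,500 loss
flip_donated = 2 * stake                           # still 15,000 donated
flip_deductions = alice_deduction + bob_deduction  # 22,500

print(flip_deductions - baseline_deductions)  # 7500 extra in deductions
```

Same $15,000 reaches the charity either way; the scheme’s entire edge is the extra $7,500 of deductions, which is exactly why it seems too good to be allowed.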
Manifund: 2023 in Review
Manifold Halloween Hackathon
Warning: Dialogues seem like such a cool idea that we might steal them for Manifold (I wrote a quick draft proposal).
On that note, I’d love to have a dialogue on “How do the Manifold and Lightcone teams think about their respective lanes?”
Haha, this actually seems normal and fine. We who work on prediction markets understand the nuances and implementation of these markets (eg what it means in mathematical terms when a market says 25%). And Kevin and Casey haven’t quite gotten it yet, based on a couple of days of talking to prediction market enthusiasts.
But that’s okay! Ideas are actually super hard to understand by explanation, and much easier to understand by experience (aka trial and error). My sense is that if Kevin follows up and bets on a few other markets, he’d start to wonder “hm, why did I get M100 for winning this market but only M50 on that one?” and then learn that the odds at which you place the bet actually matter. This principle underpins the idea of Manifold—you can argue all day about whether prediction markets are good for X or Y, or… you can try using them with play money and find out.
It’s reasonable for their reporting to be vibes-based for now—so long as they are reasonably accurate in characterizing the vibes, it sets the stage for other people to explore Manifold or other prediction markets.
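The “M100 vs M50” intuition above can be sketched with the simplest fixed-price model of a binary market (Manifold actually uses an automated market maker, so this is a deliberate simplification):

```python
# Why the odds at bet time matter: in a simple binary market, a YES
# share pays M1 if YES resolves, and costs roughly the current market
# probability. Lower probability at bet time -> more shares per mana
# -> bigger payout if you're right.

def payout_if_yes(stake: float, prob_at_bet: float) -> float:
    """Shares bought = stake / price; each share pays 1 on YES."""
    return stake / prob_at_bet

# Betting M25 on YES when the market says 25% vs. when it says 50%:
print(payout_if_yes(25, 0.25))  # 100.0 -- long odds, big payout
print(payout_if_yes(25, 0.50))  # 50.0  -- shorter odds, smaller payout
```

That asymmetry is exactly the “M100 on this market but only M50 on that one” surprise a new bettor runs into.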
Welcome to the US; excited for your time at LessOnline (and maybe Manifest too?)
And re: 19., we’re working on it![1]
(Sorry, that was a lie too.)