My name is Yaseen Mowzer. I’m working on using dominant assurance contracts to solve the public good problem. I have a web platform where you can use dominant assurance contracts: EnsureDone.
moyamo
Thanks for your comment, it’s very helpful.
Sorry if I missed it in the article, but who are the parties you would like to get on board to put up the refund bonuses?
It would be the producer of the public good (e.g. for my project I put up the collateral).
Can you bootstrap them by using DACs to fundraise for bigger and bigger refund bonuses?
Possibly? I’m not sure why you’d do that?
Are you aware of any issues with securities law, since people can make monetary profits off refund bonuses?
I disagree that a Refund Bonus is a security. It’s a refund. To me it’s like when you buy something but it comes broken, so the store gives you a voucher to make up for your trouble.
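For context, the settlement logic of a dominant assurance contract can be sketched as follows. This is a minimal illustration, not EnsureDone’s actual implementation; the threshold, pledge amounts, and bonus are made-up numbers:

```python
def settle(pledges, threshold, bonus):
    """Settle a dominant assurance contract.

    If total pledges reach the threshold, the producer keeps the funds
    and provides the good. Otherwise every contributor gets their pledge
    back plus a fixed refund bonus, paid from the producer's collateral.
    """
    if sum(pledges) >= threshold:
        return {"funded": True, "payouts": [0] * len(pledges)}
    # Failure case: full refund plus the bonus for each contributor.
    return {"funded": False, "payouts": [p + bonus for p in pledges]}

# Hypothetical example: three $10 pledges, a $50 threshold, a $2 bonus.
result = settle([10, 10, 10], threshold=50, bonus=2)
# The fundraiser fails, so each contributor gets $12 back.
```

The point of the bonus is to make pledging a dominant strategy: you either get the good or come out slightly ahead.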
I imagine (no idea really) that the law of the country of the person who starts a fundraiser will be what matters legally.
I’m in South Africa but from what I can tell, if you work with US dollars and do something illegal the FBI will come after you, so I wouldn’t be confident that only South African law applies.
Can you maybe just pick out a charity fundraising platform that has funding thresholds and refunds (and is used by consequentialists), and give refund bonuses to everyone who contributed to failed fundraisers?
This is actually a cool idea. I don’t know how I’d manage to get people’s details for giving refunds without cooperating with the fundraising platform, and my impression is that most platforms are hesitant to do things like this. If you know of a platform that would be keen on trying this, please tell me!
Put differently, I would love a non-speciesist world and a world where funding in AI safety is allocated efficiently. I’m currently working only on the second problem, so one could argue that I’m freeriding on others’ solutions to the first.
I don’t quite understand this point. You could work on AI Safety and donate to animal charities if you don’t want to free-ride.
Btw, I think it would be useful to mention what the article is about in the title. I would not have read it (even though I’m very interested in the topic) if Dony hadn’t told me that it’s about DACs!
You’re right. I didn’t want the title to just be “Dominant Assurance Contracts” because I assumed that most people had never heard of them, so I tried to come up with something more interesting. But maybe enough people on LessWrong have heard of them that I should be more straightforward.
The Economics of the Asteroid Deflection Problem (Dominant Assurance Contracts)
Cape Town, South Africa – ACX Meetups Everywhere Fall 2023
I created a manifold.market.
Yes, individuals have the incentive to mitigate existential risk, but only in their lifetime, and possibly the lifetime of their grandchildren.
Institutions can last many generations and also allow people to coordinate and work together. In theory it’s difficult to form a company with the sole purpose of mitigating existential risk, since investors will be pushing you to grow big or make huge profits (in practice it seems like Conjecture managed to do this?). With an eternal company the bondholders want your company to not take risks for big profits.
One concern on the alignment of executive compensation is that it’s especially hard to get executives to care about what happens after they die, unless their perpetual bonds go to their heirs, unlike a regular pension. Even then, they or their heirs can sell those bonds, no? At least in the US, we have laws setting time limits on constraints about how heirs can use or dispose of property left to them.
This is a good point, but I think empirically people don’t really divest/diversify their inheritance? This is something that could be tested. In theory the perpetual bonds of a large eternal company should be some of the safest assets to hold, similar to U.S. bonds. So I don’t think most people will want to sell these for other assets.
When I look at the world’s actually existing very old companies, [...] Do we, or should we expect to, see any signs that these kinds of companies are unusually motivated to reduce existential risks?
Possibly. I’m not sure we should look at what long-lasting companies do now, since we can’t be sure that they will continue to last long (maybe by now the management has gone bad). It would probably be better to look at how long-lasting companies behaved in the past.
it’s pretty questionable whether “corporation” is the unit of institution to focus on.
I agree. AI Safety is a public good and so suffers from the https://en.wikipedia.org/wiki/Free-rider_problem, so even if you had eternal companies, they would have to coordinate somehow. But I think it would be easier for eternal companies to coordinate on AI Safety compared to normal companies.
I’m also pretty skeptical that slack is compatible with financial metrics as the primary optimization lever, whether amortized or instantaneous.
I’m not sure what you mean by this. I think a lot of companies already give their employees a lot of slack? e.g. Apparently Google used to allow every employee to spend 20% of their time on pet projects (although I’ve heard this practice no longer exists).
Also, it’s unclear [...] that theoretical stock value deviates much from perpetual bond value. Both are quite sensitive to perceived stability of company.
Surely there must be a difference. A bond is not exposed to the “upside” of the company’s profits, only the “downside” of them defaulting. I think a good analogy is how parents behave with their children. Parents are much less exposed to the upside of their children’s accomplishments (if you start a business and become a multi-millionaire, your parents see very little of that money), but are much more exposed to the downsides of their children’s failures (if your business fails, they might have to live with you, increasing rent and food costs). Understandably, parents tend to push their children to go for safe jobs (accountant, doctor, lawyer, programmer) rather than risky jobs with high upside (actor, artist, musician).
I think in the same way an eternal company (if the incentives work) will behave in a less risky way.
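The asymmetry I mean can be made concrete with the standard payoff functions for debt and equity. A quick sketch with made-up numbers, where `v` is the company’s value at settlement and `face` is the bond’s claim:

```python
def bond_payoff(v, face):
    # Bondholders get at most the face value: they bear the downside
    # of default but see none of the upside.
    return min(v, face)

def equity_payoff(v, face):
    # Shareholders get whatever is left after the bondholders,
    # so they capture all of the upside.
    return max(v - face, 0)

# With a face value of 100, compare firm values of 50, 100, and 200:
for v in (50, 100, 200):
    print(v, bond_payoff(v, 100), equity_payoff(v, 100))
```

Doubling the firm’s value from 100 to 200 leaves the bondholder’s payoff flat at 100 while the shareholder’s goes from 0 to 100, which is why bondholders prefer the company to avoid risky bets.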
An Eternal Company
I like this idea, because I’m too lazy to review pull requests. It would be great if other people could just review and vote on them for me :P
Cape Town, South Africa—ACX Meetups Everywhere “Spring” 2023
I’m looking for feedback on my understanding of Corrigibility.
I skimmed CHAI, Assistance Games, And Fully Updated Deference. Is the key criticism that The Off-Switch Game ignores the embeddedness of the human? i.e. the agent will stochastically model the human and so will just try to predict what the human will say instead of asking? This limitation is mentioned on page 3. Is this criticism correct, or am I missing something else?
I tried diving into All of Statistics but I found it to be way too concise. I didn’t get past Chapter 3. In particular, Chapter 3 felt like a list of distributions and some arbitrary properties. I wasn’t really getting an intuition for what these distributions represent or why these properties are interesting. In the end I dropped the book because I felt like I wasn’t really learning anything.
My negative experience with this book is likely a result of me having no previous experience with statistics.
This is the Howey test:

1. An investment of money
2. In a common enterprise
3. With the expectation of profit
4. To be derived from the efforts of others
I think it fails 3 and 4 simultaneously. There is no “expectation of profit to be derived from the efforts of others”. If the contract succeeds then others make an effort but you do not make a profit. If the contract fails you make a profit, but not from the efforts of others.