SpaceX might outperform its competitors not because it's particularly functional (i.e., compared to typical firms in other fields) but because its competitors are particularly dysfunctional.
If it is particularly effective, but not due to the formal structure of the organization, then its effectiveness likely rests on informal factors that are hard to copy to another organization.
SpaceX can be very successful by having just marginally better institutional design than its competitors, because they're all trying to do the same thing. A successful AI alignment organization, however, would have to be much more effective than organizations that are trying to build unaligned AI (or nominally trying to build aligned AI but cutting lots of corners to win the race, or following some AI alignment approach that seems easy but is actually fatally flawed).
those that do take responsibility usually end up feeling like they are highly constrained into doing the “responsible” thing (e.g. using defensible bureaucratic systems rather than their intuition), which is often at odds with the straightforward just-solve-it mental motion (which is used in, for example, playing video games), or curiosity in general (e.g. mathematical or scientific)
I don’t think that playing video games, math, and science are good models here because those all involve relatively fast feedback cycles which make it easy to build up good intuitions. It seems reasonable to not trust one’s intuitions in AI alignment, and the desire to appear defensible also seems understandable and hard to eliminate, but perhaps we can come up with better “defensible bureaucratic systems” than what exists today, i.e., systems that can still appear defensible but make better decisions than they currently do. I wonder if this problem has been addressed by anyone.
Yes, both awards and tenure seem like improvements, and in any case well worth experimenting with.
Ok, I made the suggestion to BERI since it seems like they might be open to this kind of thing.
ETA: Another consideration against individuals directly funding other individuals is that it wouldn't be tax deductible. This could reduce the funding by up to 40% (at a donor's top marginal tax rate). If the funding is done through a tax-exempt non-profit, then the IRS probably has some requirements about having formal procedures for deciding who/what to fund.
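A quick sketch of where the "up to 40%" figure comes from, assuming a hypothetical 40% marginal tax rate (the rate and dollar amounts are illustrative, not from the comment above):

```python
# Illustrative deductibility arithmetic; 40% is a hypothetical top marginal rate.
MARGINAL_RATE = 0.40

def recipient_amount(out_of_pocket, deductible):
    """Funding the recipient receives for a given out-of-pocket cost to the donor.

    A deductible gift of D costs the donor only D * (1 - rate) after the
    tax saving, so a fixed out-of-pocket budget supports D = cost / (1 - rate).
    A non-deductible gift passes through dollar for dollar.
    """
    if deductible:
        return out_of_pocket / (1 - MARGINAL_RATE)
    return out_of_pocket

# Same $60k out of pocket for the donor in both cases:
via_nonprofit = recipient_amount(60_000, deductible=True)    # 100000.0
direct = recipient_amount(60_000, deductible=False)          # 60000
shortfall = 1 - direct / via_nonprofit                       # 0.4, i.e. 40% less
```

So at that marginal rate, direct individual-to-individual funding delivers up to 40% less per donor dollar than the same gift routed through a tax-exempt non-profit.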
(Just got around to reading this. As a point of reference, at least Open Phil seems to have decided that tax-deductibility is not more important than being able to give to things freely, which is why the Open Philanthropy Project is an LLC. I think this is at least slight evidence towards that tradeoff being worth it.)
There’s an enormous difference between having millions of dollars of operating expenditures in an LLC (so that an org is legally allowed to do things like investigate non-deductible activities like investment or politics), and giving up the ability to make billions of dollars of tax-deductible donations. Open Philanthropy being an LLC (so that its own expenses aren’t tax-deductible, but it has LLC freedom) doesn’t stop Good Ventures from making all relevant donations tax-deductible, and indeed the overwhelming majority of grants on its grants page are deductible.
Yep, sorry. I didn’t mean to imply that all of Open Phil’s funding is non-deductible, just that they decided that it was likely enough that they would find non-deductible opportunities that they went through the effort of restructuring their org to do so (and also gave up a bunch of other benefits like the ability to sponsor visas efficiently). My comment wasn’t very clear on that.
Here’s Open Phil’s blog post on why they decided to operate as an LLC. After reading it, I think their reasons are not very relevant to funding AI alignment research. (Mainly they want the freedom to recommend donations to non-501(c)(3) organizations like political groups.)