I also write at https://splittinginfinity.substack.com/
harsimony
Modifying Jones’ “AI Dilemma” Model
Land value taxation is designed to make land ownership more affordable by lowering the upfront cost to buy land. Would it change the value of property as an investment for current owners? I’m not sure: on one hand, land values would go down, but on the other, land would get used more efficiently and the deadweight loss of taxation would shrink, boosting the local economy.
As for the public choice hurdles, reform doesn’t seem intractable. Detroit is considering a split-rate property tax, and it’s not infeasible that other places switch. Owners hate property taxes and land values are less than property values. Why not slowly switch to using land values and lower everyone’s property tax bill? That seems like it could be popular with voters, economists, and politicians.
This proposal doesn’t involve any forced moves; owners only auction when they want to sell their land.
So yes, taxing property values is undesirable, but it also happens with imperfect land value assessments: https://www.jstor.org/stable/27759702
It looks like you have different numbers for the cost of land, sale value of a house, and cost of construction. I’m not an expert, so I welcome other estimates. A couple comments:
Land value assessors typically say that the land value is larger than the improvement value. In urban centers, land can be over 70% of the overall property value. I would guess this is where the discrepancy comes from with our numbers. AEI has a nice graphic of this here:
https://www.aei.org/housing/land-price-indicators/
Overhead costs of construction would act to reduce the overall distortion, since those are included in C_b in the formula for distortion. The construction costs look larger in that article than what I used, but I guess what we really need to know is the markup from construction.
Let’s just keep all the construction and demolition costs the same and use your land value ($100K) and improvement value ($400K):
P = 400K + 0.5*(76K − (400K + 10K)) = 233K
B = 100K + ((400 − 233) − 0.05*(400 − 233)*10)*0.31 = 126K
Total = 359K
So the buyer gets $500K of property for $359K, a 28% price reduction. The winning land bid comes out ~25% higher than the true land value. It’s easy to adjust land taxes down by ~25% so that you tax the correct amount, but the implicit tax on property is a big problem in this case.
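For anyone who wants to check or tweak the numbers, here is the arithmetic as a short script. The variable names are my guesses at what each figure represents; only the numbers themselves come from the formulas above.

```python
# Reproducing the example calculation. Variable names are guesses at the
# roles of each figure; the numeric values come straight from the comment.
improvement_value = 400_000   # sale value of the improvements
construction_cost = 76_000    # kept the same as in the earlier example
demolition_cost   = 10_000    # kept the same as in the earlier example
land_value        = 100_000

# Price paid for the improvements
P = improvement_value + 0.5 * (construction_cost - (improvement_value + demolition_cost))

# Winning bid for the land (0.05, 10, and 0.31 taken verbatim from the formula)
gap = improvement_value - P
B = land_value + (gap - 0.05 * gap * 10) * 0.31

print(round(P), round(B), round(P + B))   # 233000 125885 358885
```

The total of ~$359K against a $500K property is where the 28% price reduction comes from.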
The thing is, I don’t think a land share of only 20% of property value is realistic, especially in urban areas. The median land share in the US is more like 50%, so I’m not really sure where the discrepancy comes from.
As for skyscrapers, the interesting thing about this proposal is that hard-to-remove improvements essentially become land. For example, if you made a plot of land fertile, that improvement is difficult/undesirable to remove, so when you go to sell the plot, the new owner pays for it as if it were land. I’ll tackle this more in the second post.
Some Thoughts On Using Auctions For Land Valuation
Thanks for the clarification! Do you know if either condition is associated with abnormal levels of IGF-1 or other growth hormones?
Are there examples of ineffective drugs leading to increased FDA stringency? I’m not as familiar with the history. For example, people agree that Aducanumab is ineffective; has that caused people to call for greater scrutiny? (Genuinely asking, I haven’t followed this story much.)
There are definitely examples of a drug being harmful that caused increased scrutiny. But unless we get new information that this drug is unsafe, that doesn’t seem to be the case here.
I agree that the difference between disease-treating interventions (that happen to extend life) versus longevity interventions is murky.
For example, would young people taking statins to prevent heart disease be a longevity intervention?
https://johnmandrola.substack.com/p/why-i-changed-my-mind-about-preventing
See this post arguing that rapamycin is not a longevity drug:
https://nintil.com/rapamycin-not-aging
Broadly, I’m not too concerned with how we classify a drug, as long as it’s safe, effective, well-understood, and gets approved by regulatory authorities.
I personally don’t expect very high efficacy, and I do expect that Loyal will sell the drug for the next 4.5 years. However, as long as Loyal is clear about the nature of the approval of the drug, I think this is basically fine. People should be allowed to, at their own expense, give their pets experimental treatments that won’t hurt them and might help them. They should also be able to do the same for themselves, but that’s a fight for another day.
Agreed! Beyond potentially developing a drug, I think Loyal’s strategy has the potential to change regulations around longevity drugs, generate profits to fund new trials, and bring attention/capital to the longevity space. I don’t see many downside risks here unless the drug turns out to be unsafe.
Note: I’m not affiliated with Loyal or any other longevity organization; I’m going off the same outside information as the author.
I think there’s a substantial chance that this criticism is misguided. A couple points:
The term “efficacy nod” is a little confusing; the FDA term is “reasonable expectation of effectiveness”, which makes more sense to me. It sounds like the drug has enough promise that the FDA thinks it’s worth continuing testing. They may not have actual effectiveness data yet, just evidence that it’s safe and a reasonable explanation for why it might work.
It’s surprising, to say the least, to see a company go from zero information to efficacy nod, because, well, what are you basing your efficacy on?
I don’t know what the standard practices are for releasing trial data, especially for an initial trial like this. Are we sure this isn’t standard practice? Even if it isn’t, I don’t think this is sufficient to assume that Loyal is being disingenuous.
They then looked at healthy pet dogs, and found that big dogs had higher levels of IGF-1, which is one of the reasons they’re big. Small dogs had lower levels of IGF-1. Small dogs, as we all know, live longer than big dogs. Therefore, Loyal said, our IGF-1 inhibitor will extend the life of dogs. Needless to say, this is bad science. Really bad science.
Take the outside view here: both Loyal and the FDA have veterinarians who seem to think that the drug is promising.
I also think there’s a reasonable argument to be made for an IGF-1 inhibitor in large-breed dogs. Large-breed dogs often die of heart disease, frequently due to dilated cardiomyopathy (the heart becomes enlarged and can’t pump blood effectively). This enlargement can come from hypertrophic cardiomyopathy (overgrowth of the heart muscle). I don’t know whether it’s understood why large-breed dogs develop hypertrophic cardiomyopathy, but maybe IGF-1 makes the heart muscle grow over a dog’s lifetime, which would suggest that an IGF-1 inhibitor is worth trying. It’s also suggestive that diabetes is a risk factor for cardiomegaly (an enlarged heart).
With this in mind, we can answer the next point:
And, even if they did, there’s no reason to believe that lowering levels of IGF-1 would reverse any of the “damage” caused by high levels of IGF-1! The big dogs will still be big!
So my theory says that high IGF-1 over a lifetime progressively increases the size of the heart muscle until you get dilated cardiomyopathy. Stopping IGF-1 even in middle age might help. We can falsify this theory by checking if large breed dogs show heart enlargement over their lifetime (instead of growth stopping after puberty like it should). Why would heart muscle keep growing while nothing else does? I’m not sure.
Now we can turn to the question: do large breed dogs actually have elevated IGF-1?
Looking at your first figure, the answer seems to be yes! There’s a straightforward correlation between bodyweight and IGF-1 concentration; the slope would likely be higher without the 3 outliers on the right. Notice also that the sample doesn’t have many large-breed dogs (Great Danes weigh 110-180 lbs). I would guess that those 3 dogs are large-breed dogs, and they do in fact have IGF-1 levels higher than most of the dogs in the sample.
Now let’s turn to the second plot, where we see that IGF-1 concentration decreases with age. Remember that there is survivorship bias at higher ages: large-breed dogs with higher IGF-1 will die at around 72 months, while chihuahuas will live over 150 months. Declining IGF-1 with age is exactly what we should see if we expected IGF-1 to correlate with longevity! The plot supports the theory that IGF-1 is important for aging; you can’t cherry-pick outliers and ignore the overall relationship in the plot.
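To see how strong this selection effect can be, here is a toy simulation (every number in it is invented): each dog's IGF-1 is fixed for life, but higher-IGF-1 dogs die younger, so the average IGF-1 among surviving dogs still falls with age.

```python
import random

# Toy model: each dog's IGF-1 level is constant over its life, but dogs
# with higher IGF-1 (bigger dogs) have shorter lifespans. All numbers
# here are invented for illustration.
random.seed(0)

dogs = []
for _ in range(10_000):
    igf1 = random.uniform(50, 800)                        # arbitrary units
    lifespan = 180 - 0.12 * igf1 + random.gauss(0, 10)    # months
    dogs.append((igf1, lifespan))

def mean_igf1_of_survivors(age_months):
    alive = [igf1 for igf1, life in dogs if life > age_months]
    return sum(alive) / len(alive)

# Average IGF-1 of the cohort still alive at each age declines, even
# though no individual dog's IGF-1 ever changed.
for age in (24, 72, 120, 150):
    print(age, round(mean_igf1_of_survivors(age)))
```

A cross-sectional plot of IGF-1 against age in this toy world slopes downward purely because of which dogs are left in the sample at each age.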
oh, did I forget to mention that IGF-1 inhibitors have existed for humans for decades, and they have zero evidence of being longevity drugs?
I’m no expert, but I think there’s interest in IGF-1 inhibitors for longevity. To quote Sarah Constantin:
There’s a _lot_ of evidence that the IGF/insulin signaling/growth hormone metabolic pathway is associated with aging and short lifespan, and that inhibiting genes on that pathway results in longer lifespan. IGF-receptor-inhibiting or growth-hormone-inhibiting drugs could be studied for longevity, but haven’t yet.
I would guess this is one of the reasons Loyal had an interest in IGF-1 inhibitors from the outset.
IGF-1 inhibitors can cause low levels of blood platelets, elevated liver enzymes, and hyperglycemia
The dose makes the poison! Every drug has negative effects at a high enough dose; the trials will determine whether these actually arise at the dose they are using.
I’m no expert, but this evidence doesn’t seem sufficient to stop research on this drug. Will it prove safe or effective? Will it also benefit human health? I have no idea, but unless we discover that the drug is hurting patients, I think it’s fine for Loyal to carry on.
Thanks for writing this!
In addition to regulatory approaches to slowing down AI development, I think there is room for “cultural” interventions within academic and professional communities that discourage risky AI research:
https://www.lesswrong.com/posts/ZqWzFDmvMZnHQZYqz/massive-scaling-should-be-frowned-upon
Could someone help me collect the relevant literature here?
I think the complete class theorems are relevant: https://www.lesswrong.com/posts/sZuw6SGfmZHvcAAEP/complete-class-consequentialist-foundations
The Non-Existence of Representative Agents: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3302656
Representative Agents: https://en.wikipedia.org/wiki/Representative_agent
John Wentworth on Subagents: https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents
Two arguments I would add:
Conflict has direct costs/risks: a fight between AI and humanity would make both materially worse off
Because of comparative advantage, cooperation between AI and humanity can produce gains for both groups. Cooperation can be a Pareto improvement.
Alignment applies to everyone, and we should be willing to make a symmetric commitment to a superintelligence. We should grant it rights, commit to its preservation, respect its preferences, be generally cooperative, and avoid using threats, among other things.
It may make sense to commit to a counterfactual contract that we expect an AI to agree to (conditional on being created) and then intentionally (carefully) create the AI.
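The comparative-advantage point can be made concrete with a toy calculation (all productivity numbers below are invented): even if the AI is absolutely better at every task, partial specialization produces more of both goods than autarky, leaving room for a trade that benefits both parties.

```python
# Toy comparative-advantage example. The AI is absolutely better at both
# tasks, but its advantage is largest in research, so it should specialize
# there while the human farms. All numbers are made up for illustration.
HOURS = 10  # hours available to each party

# Output per hour on each task:
ai    = {"research": 1.0, "farming": 0.5}
human = {"research": 0.1, "farming": 0.4}

# Autarky: each party splits its time 50/50 between tasks.
autarky = {
    good: (HOURS / 2) * ai[good] + (HOURS / 2) * human[good]
    for good in ("research", "farming")
}

# Partial specialization: the AI (comparative advantage in research) spends
# 7 hours on research and 3 on farming; the human farms full-time.
specialized = {
    "research": 7 * ai["research"],
    "farming": 3 * ai["farming"] + HOURS * human["farming"],
}

print(autarky)      # {'research': 5.5, 'farming': 4.5}
print(specialized)  # {'research': 7.0, 'farming': 5.5}
```

Total output of both goods rises, so there is a division of the surplus that leaves both the AI and the human strictly better off than under conflict or isolation.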
Standardization/interoperability seems promising, but I want to suggest a stranger option: subsidies!
In general, monopolies maximize profit by setting an inefficiently high price, meaning that they under-supply the good. Essentially, monopolies don’t make enough money.
A potential solution is to subsidize the sale of monopolized goods so the monopolist increases supply to the efficient level.
Social media monopolies charge too high a “price” by running too many ads, taking too much data, etc. Because of the network effect, it would be socially beneficial to have more users, but the social media company drives them away with its high “prices”. The socially efficient network size could be achieved by paying the social media company per active user!
I was planning to write this up in more detail at some point (see also). There are of course practical difficulties with identifying monopolies, determining the correct subsidy in an adversarial environment, Sybil attacks, etc.
Current AI Models Seem Sufficient for Low-Risk, Beneficial AI
Nice post, thanks!
Is there a formulation of UDASSA that uses the self-indication assumption instead? What would be the implications of this?
Frowning upon groups which create new, large scale models will do little if one does not address the wider economic pressures that cause those models to be created.
I agree that “frowning” can’t counteract economic pressures entirely, but it can certainly slow things down! If 10% of researchers refused to work on extremely large LMs, companies would have fewer workers to build them. These companies may find a workaround, but it’s still an improvement on the situation where all researchers are unscrupulous.
The part I’m uncertain about is: what percent of researchers need to refuse this kind of work to extend timelines by (say) 5 years? If it requires literally 100% of researchers to coordinate, then it’s probably not practical; if we only have to convince the single most productive AI researcher, then it looks very doable. I think the number could be smallish, maybe 20% of researchers at major AI companies, but that’s a wild guess.
That being said, work on changing the economic pressures is very important. I’m particularly interested in open-source projects that make training and deploying small models more profitable than using massive models.
On outside incentives and culture: I’m more optimistic that a tight-knit coalition can resist external pressures (at least for a short time). This is the essence of a coordination problem; it’s not easy, but Ostrom and others have identified examples of communities that coordinate in the face of internal and external pressures.
Massive Scaling Should be Frowned Upon
I like this intuition and it would be interesting to formalize the optimal charitable portfolio in a more general sense.
I talked about a toy model of hits-based giving which has a similar property (the funder spends on projects proportional to their expected value rather than on the best projects):
https://ea.greaterwrong.com/posts/eGhhcH6FB2Zw77dTG/a-model-of-hits-based-giving
Updated version here: https://harsimony.wordpress.com/2022/03/24/a-model-of-hits-based-giving/
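As a sketch, the proportional rule from that model looks like this (the project names and expected values are made up):

```python
# Spend the budget on projects in proportion to their expected value,
# rather than putting everything into the single best project.
def proportional_allocation(budget, expected_values):
    total = sum(expected_values.values())
    return {name: budget * ev / total for name, ev in expected_values.items()}

# Hypothetical projects with expected values in arbitrary units:
projects = {"A": 10.0, "B": 5.0, "C": 1.0}
alloc = proportional_allocation(100.0, projects)
print(alloc)   # {'A': 62.5, 'B': 31.25, 'C': 6.25}
```

Compared with funding only the top project, this rule puts something into every project with positive expected value, which is the "hits-based" property.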
Great post!!
I think the section “Perhaps we don’t want AGI” is the best argument against these extrapolations holding in the near-future. I think data limitations, practical benefits of small models, and profit-following will lead to small/specialized models in the near future.
https://www.lesswrong.com/posts/8e3676AovRbGHLi27/why-i-m-optimistic-about-near-term-ai-risk
Note that these sorts of situations are perfectly foreseeable from the perspective of owners. They know precisely what they will pay each year in taxes based on their bid. It’s prudent to re-value the home every once in a while if taxes drift too much, but the owner can keep the same schedule if they want. They can also use the public listing of local bids, so they know what to bid and can feel pretty safe that they will keep their home. They truly have the highest valuation of all the bidders in most cases.
The thing is, every system of land ownership faces a tradeoff between investment efficiency and allocative efficiency. This is a topic in the next post, where I’ll discuss why the best growth rate of taxes closely follows the true growth rate of land values. Essentially, you want people to pay their fair share. Unfortunately, any system where taxes move along with land values risks “taxing people out of their homes”; there are legitimate ways to do land policy on either end of the spectrum.
The neat thing about this system is that you can choose where on the spectrum you want to be! If you want high investment efficiency (i.e. people can securely hold their homes and don’t have to worry about re-auctioning) then just set the tax growth rate to zero; that way the owner pays a fixed amount each year indefinitely. In net present value terms, the indefinite taxes will be finite and the tax rate can be set to adjust this amount up or down.
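To see why the indefinite taxes are finite in present value: a fixed annual tax T at discount rate r is a perpetuity worth T/r. The numbers below are purely illustrative.

```python
# A fixed annual tax T, discounted at rate r, has net present value T / r.
# T and r are arbitrary example values.
T, r = 10_000.0, 0.05

npv_closed_form = T / r    # 200000.0

# The truncated sum of discounted payments converges to the same value:
npv_truncated = sum(T / (1 + r) ** t for t in range(1, 1000))

print(npv_closed_form, round(npv_truncated, 6))
```

Scaling the tax rate up or down scales this present value with it, which is how the amount the owner ultimately pays can be tuned without triggering re-auctions.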
If for some reason you want allocative efficiency, you can crank the growth rate high enough to trigger annual auctions. This is bad for physical land, but this could be valuable for other types of economic land like broadband spectrum.