This is super cool stuff, thank you for posting!
I may have missed this, but do these scoring rules prevent agents from trying to make the environment more unpredictable? In other words, if you’re competing against other predictors, it may make sense to influence the world to be more random and harder to understand.
I think this prediction market type issue has been discussed elsewhere but I can’t find a name for it.
Thanks for this! I misinterpreted Lucius as saying “use the single highest and single lowest eigenvalues to estimate the rank of a matrix” which I didn’t think was possible.
Counting the number of non-zero eigenvalues makes a lot more sense!
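Counting eigenvalues above a small threshold really is just numerical rank estimation. A minimal numpy sketch (the matrix and the 1e-10 cutoff for "numerically zero" are my own illustrative choices):

```python
import numpy as np

# Build a symmetric 5x5 matrix with known rank 3.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
H = A @ A.T  # symmetric positive semi-definite, rank 3

# Estimate the rank by counting eigenvalues that aren't numerically zero.
eigvals = np.linalg.eigvalsh(H)
rank_estimate = int(np.sum(np.abs(eigvals) > 1e-10))
print(rank_estimate)
```

In practice the threshold should scale with the largest eigenvalue and the matrix size, which is what `np.linalg.matrix_rank` does internally.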
You can absolutely harvest potential energy from the solar system to spin up tethers. ToughSF has some good posts on this:
https://toughsf.blogspot.com/2018/06/inter-orbital-kinetic-energy-exchanges.html
https://toughsf.blogspot.com/2020/07/tethers-all-way.html
Ideally your tether is going to constantly adjust its orbit so it stays far away from the atmosphere, but for fun I did a calculation of what would happen if a 10K tonne tether (suitable for boosting 100 tonne payloads) fell to the Earth. Apparently it just breaks up in the atmosphere and produces very little damage. More discussion here:
The launch cadence is an interesting topic that I haven’t had a chance to tackle. The rotational frequency limits how often you can boost stuff.
Since time is money, you would want a shorter, faster tether, but a shorter rotation period means a smaller time window to dock with the tether, so there’s an optimization problem there as well.
It’s a little easier when you’ve got catapults on the moon’s surface. You can have two running side by side and transfer energy between them electrically. So load up catapult #1, spin it up, launch the payload, and then transfer the remaining energy to catapult #2. You can get much higher launch cadence that way.
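The energy bookkeeping behind that cadence gain can be sketched quickly (all numbers here are invented for illustration, and the rotor is treated as a point mass at the tip, which overstates its energy):

```python
def kinetic_energy(mass, speed):
    # Point-mass approximation: treats all mass as moving at tip speed.
    return 0.5 * mass * speed ** 2

# Illustrative numbers, not a real design.
rotor_mass = 50_000.0   # kg, catapult #1 rotor
payload_mass = 1_000.0  # kg
tip_speed = 2_000.0     # m/s, roughly lunar-orbit scale

total = kinetic_energy(rotor_mass + payload_mass, tip_speed)
launched = kinetic_energy(payload_mass, tip_speed)  # leaves with the payload

# After launch, transfer the rotor's leftover energy to catapult #2
# electrically, at an assumed 90% efficiency.
recovered = 0.9 * (total - launched)
print(f"energy recovered for catapult #2: {recovered / total:.0%}")
```

Since the payload is a small fraction of the rotor mass, most of the spin-up energy survives each launch, which is why alternating between two catapults beats re-spinning one from scratch.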
Oops yes, that should read “Getting oxygen from the moon to LEO requires less delta V than going from the Earth to LEO!”. I edited the original comment.
Lunar tethers actually look like they will be feasible sooner than Earth tethers! The lack of atmosphere, micrometeorites, and lower gravity (g) makes them scale better.
In fact, you can even put a small tether system on the lunar surface to catapult payloads to orbit: https://splittinginfinity.substack.com/p/should-we-get-material-from-the-moon
Whether tethers are useful on the moon depends on the mission you want to do. Like you point out, low delta-V missions probably don’t need a tether when rockets work just fine. But if you want to take lunar material to low earth orbit or send it to Mars, a lunar tether is a great option.
The near-term application I’m most excited about is liquid oxygen. Getting oxygen from the moon to LEO requires less delta V than going from the Earth to LEO! Regolith is ~45% oxygen by mass and a fully-fueled Starship is 80% LOX by mass. So refueling ships in LEO with lunar O2 could be viable.
Even better, the falling lunar oxygen can spin up a tether in LEO which can use that momentum to boost a Starship to other parts of the solar system.
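For a rough sense of the delta-V claim, here’s a back-of-the-envelope comparison (the budget numbers are illustrative approximations I’ve pulled together, not mission figures, and it assumes aerobraking handles most of the Earth-capture burn):

```python
# Rough delta-V budgets in km/s (illustrative approximations).
earth_surface_to_leo = 9.4  # including gravity and drag losses

# Moon route: surface -> low lunar orbit -> trans-Earth injection,
# then aerobrake most of the capture and circularize into LEO.
moon_surface_to_llo = 1.9
trans_earth_injection = 0.9
leo_circularization = 0.2   # assuming aerobraking does most of the work

moon_to_leo = moon_surface_to_llo + trans_earth_injection + leo_circularization
print(f"Earth -> LEO: {earth_surface_to_leo:.1f} km/s")
print(f"Moon  -> LEO: {moon_to_leo:.1f} km/s")
```

Even with generous error bars on the lunar leg, the comparison isn’t close: climbing out of Earth’s gravity well dominates.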
Thanks for the comments! Going point-by-point:
- I think both fiberglass and carbon fiber use organic epoxy that’s prone to UV (and atomic oxygen) degradation? One solution is to avoid epoxy entirely using parallel strands or something like a Hoytether. The other option is to remove old epoxy and reapply it over time, if it’s economical vs just letting the tether degrade.
- I worry that low-thrust options like ion engines and sails could be too expensive vs catching falling mass, but I could be convinced either way!
- Yeah, some form of vibration damping will be important; I glossed over this. Bending modes are a particular problem for glass. Though I would guess that vibrations wouldn’t make the force along the tether any higher?
- Catching the projectile is a key engineering challenge here! One that I probably can’t solve from my armchair. As for missing the catch, I guess I don’t see this as a huge issue? If the rocket can re-land, missing the catch means that the only loss is fuel. Though colliding with the tether would be a big problem.
- Yeah, I think low orbits are too challenging for tethers, so they’re definitely going to be at risk of micrometeorite impacts. I see this as a key role of the “safety factor”: the tether should be robust to ~10-50% of fibers being damaged, and there should be a way to replace/repair them as well.
- Right, though tethers can’t help satellites get to LEO, they can help them get to higher orbits, which seems useful. But the real value-add comes when you want to get to the Moon and beyond.
- Good to know! I would love to see more experiments on glass fibers pulled in space, small-scale catches, and data on what kinds of defects form on these materials in orbit.
- Yeah, my overall sense is that using falling mass to spin the tether back up is the most practical. Solar sails and ion drives might contribute too, but they’re much slower, which hurts launch cadence and costs.
The fact that you need a regular supply of falling mass from e.g. the moon is yet another reason why tethers need a mature space industry to become viable!
That makes sense, I guess it just comes down to an empirical question of which is easier.
Question about what you said earlier: How can you use the top/bottom eigenvalues to estimate the rank of the Hessian? I’m not as familiar with this so any pointers would be appreciated!
Isn’t calculating the Hessian for large statistical models kind of hard? And aren’t second derivatives prone to numerical errors?
Agree that this is only valuable if sampling on the loss landscape is easier or more robust than calculating the Hessian.
You may find this interesting “On the Covariance-Hessian Relation in Evolution Strategies”:
https://arxiv.org/pdf/1806.03674
It makes a lot of assumptions, but as I understand it, if you:
a. Sample points near the minimum [1]
b. Select only the lowest-loss point from that sample and save it
c. Repeat that process many times
d. Create a covariance matrix of the selected points
The covariance matrix will converge to the inverse of the Hessian, assuming the loss landscape is quadratic. Since the inverse of a matrix has the same rank, you could probably just use this covariance matrix to bound the local learning coefficient.
Though since a covariance matrix built from n sample points has rank at most n − 1, you would need to sample and evaluate roughly d/2 points. The process seems pretty parallelizable though.
[1] Specifically, using an isotropic, unit-variance normal distribution centered at the minimum.
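Here’s a toy simulation of steps a–d on a 2D quadratic with one flat direction (the loss function, sample counts, and selection size are my choices, not the paper’s): selection squeezes the variance along stiff directions while flat directions keep the full unit sampling variance, so the covariance spectrum separates the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic loss: Hessian = diag(200, 0), i.e. rank 1.
# x[0] is a stiff direction; x[1] is flat.
def loss(x):
    return 100.0 * x[0] ** 2

selected = []
for _ in range(2000):
    # a. Sample candidates from an isotropic unit normal at the minimum.
    candidates = rng.standard_normal((10, 2))
    # b. Save only the lowest-loss candidate.
    selected.append(min(candidates, key=loss))

# d. Covariance matrix of the selected points.
cov = np.cov(np.array(selected).T)
print(np.diag(cov))  # stiff direction compressed, flat direction ~1
```

The variance along the stiff axis collapses toward zero while the flat axis stays near the sampling variance of 1, so thresholding the covariance eigenvalues gives a rank (and hence LLC-style) estimate.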
Exciting to see this up and running!
If I’m understanding correctly, the system looks for modifications to certain viruses. So if someone modified a virus that NAO wasn’t explicitly monitoring for modifications, then that would go undetected?
I like the simple and clear model and I think discussions about AI risk are vastly improved by people proposing models like this.
I would like to see this model extended by including the productive capacity of the other agents in the AI’s utility function. In other words, the other agents have a comparative advantage over the AI in producing some stuff and the AI may be able to get a higher-utility bundle overall by not killing everyone (or even increasing the productivity of the other agents so they can produce more stuff for the AI to consume).
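A toy version of that extension (the production rates and Cobb-Douglas utility below are invented purely for illustration, not part of the original model): even when the AI is absolutely better at producing both goods, letting the other agents specialize in their comparative advantage gets the AI a strictly better bundle than going it alone.

```python
import numpy as np

HOURS = 10.0

def utility(a, b):
    # Cobb-Douglas utility over the two goods (illustrative choice).
    return (a * b) ** 0.5

def best_utility(extra_b):
    # The AI splits its time between good A (10/hr) and good B (5/hr),
    # on top of `extra_b` units of B produced by the other agents.
    grid = np.linspace(0.0, HOURS, 1001)
    return max(utility(10.0 * t, 5.0 * (HOURS - t) + extra_b) for t in grid)

# Alone, the AI must produce both goods itself.
alone = best_utility(extra_b=0.0)
# With the humans alive and specializing in B (4/hr for 10 hours).
with_humans = best_utility(extra_b=4.0 * HOURS)
print(alone, with_humans)
```

Here the AI makes 10 A/hr or 5 B/hr and humans only 1 A/hr or 4 B/hr, so the AI dominates at both, but its comparative advantage is in A, and the combined bundle beats autarky.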
Super useful post, thank you!
The condensed vaporized rock is particularly interesting to me. I think it could be an asset instead of a hindrance. Mining expends a ton of energy just crushing rock into small pieces for processing, so turning ores into a dust you can pump with air could be pretty valuable.
I was always skeptical of enhanced geothermal beating solar on cost, though I do think the supercritical water Quaise could generate has interesting chemical applications: https://splittinginfinity.substack.com/p/recycling-atoms-with-supercritical
This post has some useful info:
https://milkyeggs.com/biology/lifespan-extension-separating-fact-from-fiction/
It basically says that sunscreen, ceramide moisturizers, and retinols are the main evidence-based skincare products. I would guess that more expensive versions of these don’t add much value.
Some amount of experimentation is required to find products that don’t irritate your skin.
Good framing! Two forms of social credit that I think are worth paying attention to:
Play money prediction markets and forecasting. I think it’s fruitful to think about these communities as using prediction accuracy as a form of status/credit.
Cryptocurrencies, which are essentially financial credit but with their own rules and community. The currency doesn’t have to have a dollar value to induce coordination; it can still function as a reputation system and medium of exchange.
It’s somewhat tangential, but Sarah Constantin discussing attestation has some insight on this I think (I put some comments here).
Note that these sorts of situations are perfectly foreseeable from the perspective of owners. They know precisely what they will pay each year in taxes based on their bid. It’s prudent to re-value the home every once in a while if taxes drift too much, but the owner can keep the same schedule if they want. They can also use the public listing of local bids, so they know what to bid and can feel pretty safe that they will keep their home. They truly have the highest valuation of all the bidders in most cases.
The thing is, every system of land ownership faces a tradeoff between investment efficiency and allocative efficiency. This is a topic in the next post, where I’ll discuss why the best growth rate of taxes closely follows the true growth rate of land values. Essentially, you want people to pay their fair share. Unfortunately, any system that has taxes move along with land values will risk “taxing people out of their homes”; there are legitimate ways to do land policy on either end of the spectrum.
The neat thing about this system is that you can choose where on the spectrum you want to be! If you want high investment efficiency (i.e. people can securely hold their homes and don’t have to worry about re-auctioning) then just set the tax growth rate to zero; that way the owner pays a fixed amount each year indefinitely. In net present value terms, the indefinite taxes will be finite and the tax rate can be set to adjust this amount up or down.
If for some reason you want allocative efficiency, you can crank the growth rate high enough to trigger annual auctions. This is bad for physical land, but this could be valuable for other types of economic land like broadband spectrum.
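The net-present-value claim is easy to check with a growing-perpetuity sum (the tax, discount, and growth numbers below are made up): with an annual tax T growing at rate g and a discount rate r > g, the PV is T / (r − g), which stays finite whenever g < r, and with g = 0 it is simply T / r.

```python
def pv_of_tax_stream(tax, discount, growth, years=5000):
    # Present value of end-of-year tax payments that start at `tax`
    # and grow at rate `growth` per year.
    return sum(
        tax * (1 + growth) ** (t - 1) / (1 + discount) ** t
        for t in range(1, years + 1)
    )

# Made-up numbers: $5,000/yr tax, 5% discount rate.
pv_flat = pv_of_tax_stream(5_000, 0.05, 0.00)
pv_growing = pv_of_tax_stream(5_000, 0.05, 0.04)
print(round(pv_flat))     # ~ 5000 / 0.05 = 100,000
print(round(pv_growing))  # ~ 5000 / (0.05 - 0.04) = 500,000
```

Pushing the growth rate toward (or past) the discount rate makes the PV blow up, which is the regime where owners start preferring to re-auction.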
Land value taxation is designed to make land ownership more affordable by lowering the cost to buy land. Would it change the value of property as an investment for current owners? I’m not sure. On one hand, land values would go down, but on the other, land would get used more efficiently and the deadweight loss of taxation would go down, boosting the local economy.
As for the public choice hurdles, reform doesn’t seem intractable. Detroit is considering a split-rate property tax, and it’s not infeasible that other places switch. Owners hate property taxes and land values are less than property values. Why not slowly switch to using land values and lower everyone’s property tax bill? That seems like it could be popular with voters, economists, and politicians.
This proposal doesn’t involve any forced moves, owners only auction when they want to sell their land.
So yes, taxing property values is undesirable, but it also happens with imperfect land value assessments: https://www.jstor.org/stable/27759702
It looks like you have different numbers for the cost of land, sale value of a house, and cost of construction. I’m not an expert, so I welcome other estimates. A couple comments:
Land value assessors typically say that the land value is larger than the improvement value. In urban centers, land can be over 70% of the overall property value. I would guess this is where the discrepancy comes from with our numbers. AEI has a nice graphic of this here:
https://www.aei.org/housing/land-price-indicators/
Overhead costs of construction would act to reduce the overall distortion, since those are included in C_b in the formula for distortion. The construction costs look larger in that article than what I used, but I guess what we really need to know is the markup from construction.
Let’s just keep all the construction and demolition costs the same and use your land value ($100K) and improvement value ($400K):
P = 400K + 0.5*(76K − (400K + 10K)) = 233K
B = 100K + ((400 − 233) − 0.05*(400 − 233)*10)*0.31 = 126K
Total = 359K
So the buyer gets 500K of property for $359K, a 28% price reduction. The land tax is ~25% improvement value. It’s easy to adjust land taxes down by 25% so that you tax the correct amount, but the implicit tax on property is a big problem in this case.
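For anyone who wants to check the arithmetic, here it is as a script (the variable names are just my labels for the figures above; the 76, 10, 0.05, and 0.31 constants are copied unchanged from the formulas; all values in $K):

```python
# All values in $K; constants copied from the comment's formulas.
land = 100
improvement = 400

P = improvement + 0.5 * (76 - (improvement + 10))
B = land + ((improvement - P) - 0.05 * (improvement - P) * 10) * 0.31
total = P + B

print(round(P), round(B), round(total))           # 233 126 359
print(f"price reduction: {1 - total / 500:.0%}")  # 28%
```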
The thing is, I don’t think land value being only 20% of property values is realistic, especially in urban areas. Median land share in the US is more like 50% so I’m not really sure where the discrepancy comes from.
As for skyscrapers, the interesting thing about this proposal is that hard-to-remove amendments essentially become land. For example, if you made a plot of land fertile, that improvement is difficult/undesirable to remove, so when you go to sell it, the owner pays for it as if it were land. I’ll tackle this more in the second post.
Oh that makes sense!
If the predictors can influence the world in addition to making a prediction, they would also have an incentive to change the world in ways that make their predictions more accurate than their opponents right? For example, if everyone else thinks Bob is going to win the presidency, one of the predictors can bribe Bob to drop out and then bet on Alice winning the presidency.
Is there work on this? To be fair, it seems like every AI safety proposal has to deal with something like this.