The problem is that the relation of asset values to realized returns (that returns will equalize between assets with fixed payouts) means that any tax on asset returns is immediately reflected in valuations. But that is not the end of the world, since if you hold assets, Paine would argue, you can afford a haircut.
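The mechanism can be shown with a hedged toy example: a perpetuity paying a fixed coupon, with made-up numbers for the coupon, discount rate, and tax rate. Taxing the return lowers the after-tax coupon, and the valuation falls by exactly the tax rate.

```python
# Toy perpetuity: an asset paying a fixed coupon C forever, discounted at
# rate r, is worth V = C / r. A tax at rate t on the returns cuts the
# after-tax coupon to C * (1 - t), so the price immediately reprices to
# C * (1 - t) / r. All numbers below are hypothetical.
C = 5.0    # annual coupon
r = 0.05   # required rate of return
t = 0.20   # tax rate on asset returns

pre_tax_value = C / r              # 100.0
post_tax_value = C * (1 - t) / r   # 80.0
haircut = 1 - post_tax_value / pre_tax_value

print(pre_tax_value, post_tax_value, round(haircut, 4))
# the valuation haircut equals the tax rate t
```

So for a fixed-return asset, the "tax on returns" and the "haircut on the holder's valuation" are the same thing, which is the point of the comment.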
Mis-Understandings
I noticed another thing.
All these analyses put a lot of stock in the Democrats being anti-market, because, well, it is in the Democratic discourse. But I think that is a misreading of that discourse. A lot of it is that the Democrats are rightly very scared and suspicious (almost paranoid) of monopolies, monopsonies, and cartelization. And they don’t just endorse the obvious solution of aggressively breaking up companies (since it is bad for businesses even though it is good for competition).
But I just don’t think that is the only way to frame it. Biden’s SEC and FTC especially are very skeptical of M&A because they are very scared of monopolies, and most of the Democratic policies make sense in a frame of “we think there might be a monopoly in X, and we don’t just want to point antitrust at it, so what should we do?”
And generally the solution they come up with is that the government should effectively engage in price negotiations with the monopoly provider, using the law to get people to coordinate in bargaining for a better price, so you end up with two-agent, no-alternative bargaining as the pricing mechanism, hopefully agreeing to something closer to the free-market price (the price cap). That is a bad pricing mechanism (often ending up below market). It is really hard to figure out the coordination method used so as to break them up. This is a bad solution. If you think there is a cartel, you don’t put in a price cap; you break the cartel.
Another broad problem is not noticing (or caring about) the degree to which being a good administrator of the federal bureaucracy is a critical skill for a president. The things where it seems like Trump has no clue what is going on were baked in with P2025, when it talked about doing things that disrupted the normal function of agencies, because 90% of what the president knows he gets from his secretaries and advisors, who get it from their departments. The fact that Trump watches Fox but sometimes ignores briefings is in fact a big deal.
Harris winning probably would not have stopped the Democratic civil war (unless she got some deals done), because the Democrats have a civil war every election cycle and did not get a chance to have one in the primaries. We don’t know how she would have governed through that.
Note that knowing != doing, so in principle there is a gap between a world model which includes lots of information about what the user is feeling (what you call cognitive empathy), and acting on that information in prosocial/beneficial ways.
Similarly, one can consider another’s emotions either to mislead or to comfort them.
There is a bit of tricky framing/training work in making a model that “knows” what a user is feeling, having that at a low enough layer that the activation is useful, and actually acting on that in a beneficial way.
Steering might help taxonomize here.
It is within our power to prevent lab-originated pandemics but not natural pandemics
Might be false.
If you could clear vaccines with good transmission prevention for deployment before a zoonosis event, and if the hypothesis holds that wild viruses prone to zoonosis are observable (so you can prepare), then you could basically prevent zoonosis events from becoming pandemics (because you could immediately begin ring vaccination).
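A hedged sketch of why ring vaccination works here, with made-up numbers: treat the outbreak as a simple branching process with reproduction number R. If ring vaccination cuts transmission enough that effective R falls below 1, the expected outbreak size is finite (it converges to 1/(1-R)) rather than exploding into a pandemic.

```python
def expected_outbreak_size(R, generations=1000):
    # Expected total cases when each case infects R others on average,
    # starting from one index case. For R < 1 this converges to
    # 1 / (1 - R); for R >= 1 it grows without bound.
    total, current = 0.0, 1.0
    for _ in range(generations):
        total += current
        current *= R
    return total

# Hypothetical numbers for illustration only:
R_unmitigated = 2.5        # would-be pandemic: each case infects 2.5 others
vaccine_efficacy = 0.7     # ring vaccination cuts transmission by 70%
R_effective = R_unmitigated * (1 - vaccine_efficacy)  # 0.75 < 1

print(round(expected_outbreak_size(R_effective), 1))
# a handful of cases and the outbreak dies out
```

The point is the threshold behavior: the same virus is a pandemic or a footnote depending on whether the social/medical response pushes effective R across 1, which is the comment’s claim that “natural pandemic” is a property of conditions, not of the virus alone.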
So there are no “natural pandemics”; there are new diseases which interact with social conditions to become pandemic (just as existing diseases can mutate past current limitations). If those social conditions do not exist, a disease does not reach pandemic status.
Generally, “open source” model releases include the inference code and the weights, but not the exact training data, and often not information about the training setup. (For instance, DeepSeek has done a pile of hacking on how to get the most out of their H800s, which is private.)
Thank you.
[Question] Does this game have a name?
The effect will at first be most clear in fields where entry-level cognitive tasks are especially prone to near-term AI automation, like software engineering and law as seen above. Other industries where complex human interaction or physical dexterity are crucial, for example healthcare and construction, will hold out for longer before being automated too through a combination of AI and robotics.
Repeat Paragraph
Historically, the social contract describes the set of agreements concerning the legitimacy of government and its role in governing citizens. This concept, developed by philosophers like Hobbes, Locke, and Rousseau, posits that individuals surrender certain natural freedoms and contribute a portion of their wealth to governments in exchange for protection, order, and social stability.
Over time, this foundational concept evolved beyond the basic relationship between citizen and state. The industrial age expanded the social contract to encompass economic relationships between workers, employers, and broader society. Citizens came to expect not just security, but also economic opportunity. Governments increasingly took on responsibilities for education, infrastructure, and basic welfare as part of their obligation under this implicit agreement.
This evolution produced the modern social contract we recognize today: citizens contribute their labor and a portion of their earnings through taxation; in return, they receive not just protection but also economic security and the promise that hard work would be rewarded with prosperity.
I am not sure how this social change is discontinuous with previous developments which introduced new social conditions, new capabilities, and new externalities. In short, it is clear that if there is big economic change, there will be political changes too; but if this is rethinking the social contract, then we have been doing it continuously. We do not need to begin to rethink the social contract. We need to recognize that we have always been continually rethinking it.
There is another analogy where this works. It is like bank failures, where things fall apart slowly, then all at once. That is to say, being past the critical threshold does not guarantee failure timing. Specifically, you can’t tell if an organization can do something without actually trying to do it. So noticing disempowerment is not helpful if you notice it only after the critical threshold, where you try something and it does not work.
Mainly things that we would never think of; those are the things fruitful for AI and not for us.
Things that are useful for us but not for AI are things like investigating gaps in tokenization, hiding things from AI, and things that are hard to explain/judge, because we probably ought to trust the AI researchers less than we do human researchers with regard to good faith.
That is, given that you get useful work out of AI-driven research before things fall apart (reasonable, but not guaranteed).
That being said, this strategy relies on the approaches that are fruitful for us and the approaches fruitful for AI-assisted, AI-accelerated, or AI-done research being the same. (Again reasonable, but not certain.)
It also relies on work done now giving useful direction, especially if parallelism grows faster than serial speeds.
In short, this says that if time horizons to AI assistance are short, the most important things are: A. the framework to be able to verify an approach, so we can hand it off; B. information about whether it will ultimately be workable.
As always, it seems to bias towards long-term approaches where you can do the hard part first.
If this becomes widespread, there are two problems bad enough that they might create significant backlash.
First, if things like 4 happen, or leak (because data security is in general hard), people will generate precautions on their own (using controls like local models or hosting on dedicated cloud infrastructure). There is a tension: you want to save all context so that your therapist knows you better, but you will probably tell it things that you do not want others to know.
Second, there is a tension with not wanting the models to talk about particular things, in that letting the model talk about suicide can help prevent it. But if this fails (somebody talks to a model about suicide, it says what it thinks would help, and it does not work), that will be very unpopular even if the model acted in a high-EV manner.
You are wrong. The article does not refute that argument, because (2) is exactly about the many dimensions of the types of talent demanded (since universities want a variety of things).
You are assuming the consequent: that there is not a large variety of things a university wants.
Saying that a problem is easier if you relax it is not an argument that it can be relaxed. That is your fundamental misunderstanding. The university really does find value in the things it selects for with (2), so it has a lot of valuable candidates, and picking a mixture of valuable candidates from a large supply of hard-to-compare offerings is in fact difficult, and will leave any one metric too weak for its preferences.
This is about your top line claim, and your framing.
If you say that a system does not need to be competitive once you exclude the reason it is competitive, that is obvious.
The system you propose does not fulfill the top-line purposes of the admissions system.
there isn’t a huge oversupply of talent at all for these spots,
This misses the fact that the complexity of the admissions process does not come from competition over talent (universities would be willing to accept most people on their waitlists if they had more slots, and slots are limited by other factors), but from highly multidimensional preference frontiers which require complicated information about applicants to get good distributions of students.
Basically, the argument about talent is pointed in the wrong direction for talking about admissions systems.
University cohorts are basically set up to maximally benefit the people who do get admitted, not to admit the most qualified. For this purpose universities would rather have students who make the school more rewarding for other students than the smartest possible students. This is combined with a general tendency to do prestigious/donor-desired things. And donors want to have gone to a college that is hard to get into (even though they did not like applying). The difficulty of application (and with that, admit rates over yield rates) is a signal.
I think you might fundamentally misunderstand the purpose of admissions systems. To be frank, admissions is set up to benefit the university and the university alone. If getting good test scores were the bottleneck, you would see shifts in strategic behaviour until the test became mostly meaningless. For instance, you can freely retake the SAT, so if you just selected based on that, people would just retake it until they got a good result.
The university has strong preferences about the distribution of students in classes. They have decided that they want different things from their applicants than “just” being good at tests.
They get this through race-based affirmative action, athletic recruitment, “Dean’s Interest List”-type tracking systems for children of donors or notable persons, legacy preference for children of alumni, and a bunch of ill-articulated selection actions in admissions offices and various other places.
A stable-marriage system would require a national system, which would require universities, as distinct organizations competing (mostly for prestige), to coordinate for the benefit of students. They obviously should do things of that general description, but they tend not to.
Maximizing EV probably yields a skewed distribution. But maximizing skewness+variance+EV gives lower EV than maximizing EV alone, almost certainly.
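A numerical illustration of the claim, with made-up payoff distributions and arbitrary objective weights: whatever maximizes the combined score can only have weakly lower EV than the EV maximizer, since the EV maximizer is by definition unbeatable on EV.

```python
import statistics

# Three hypothetical payoff distributions (equal-probability outcomes).
options = {
    "steady":  [9, 10, 11],   # highest EV, low variance, no skew
    "swingy":  [0, 5, 22],    # lower EV, high variance, right-skewed
    "lottery": [0, 0, 24],    # lowest EV, extreme right skew
}

def ev(xs):
    return statistics.mean(xs)

def score(xs):
    # Toy combined objective: EV + variance + third central moment
    # (a crude skewness proxy). Equal weights are an arbitrary assumption.
    m = statistics.mean(xs)
    var = statistics.pvariance(xs)
    skew = sum((x - m) ** 3 for x in xs) / len(xs)
    return m + var + skew

best_by_ev = max(options, key=lambda k: ev(options[k]))
best_by_score = max(options, key=lambda k: score(options[k]))

print(best_by_ev, best_by_score)  # the combined objective picks the lottery
assert ev(options[best_by_score]) <= ev(options[best_by_ev])  # holds by construction
```

With these numbers the combined objective picks the lowest-EV option, which is the comment’s point: rewarding skew and variance pulls the chosen distribution away from the EV maximizer.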
You would need to make sure that this change in asset values does not wipe out highly leveraged players. But that is also a thing that has been handled in the past (see the 2023 banking failures for what happens).