Dmitry Vaintrob
I’m managing to get verve and probity, but having issues with wiles
I really liked the post—I was confused by the meaning and purpose of the no-coincidence principle when I was at ARC, and this post clarifies it well. I like that this is asking for something that is weaker than a proof (or a probabilistic weakening of a proof): as in the example of using the Riemann hypothesis, incompleteness in general leads you to expect true results giving “surprising” families of circuits which are not provable by logic. I can also see Paul’s point of how this statement is sort of like P vs. BPP but not quite.
More specifically, this feels like a sort of 2nd-order boolean/polynomial hierarchy statement whose first-order version is P vs. BPP. Are there analogues of this for other orders?
Looks like a conspiracy of pigeons posing as lw commenters has downvoted your post
Thanks!
I haven’t grokked your loss scales explanation (the “interpretability insights” section) without reading your other post though.
Not saying anything deep here. The point is just that you might have two cartoon pictures:
every correctly classified input is either the result of a memorizing circuit or of a single coherent generalizing circuit behavior. If you remove a single generalizing circuit, your accuracy will degrade additively.
a correctly classified input is the result of a “combined” circuit consisting of multiple parallel generalizing “subprocesses” giving independent predictions, and if you remove any of these subprocesses, your accuracy will degrade multiplicatively.
A lot of ML work only thinks about picture #1 (which is the natural picture to look at if you only have one generalizing circuit and every other circuit is a memorization). But the thing I’m saying is that picture #2 also occurs, and in some sense is “the info-theoretic default” (though both occur simultaneously—this is also related to the ideas in this post)
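To make the two cartoons concrete, here is a toy numerical illustration (the accuracy numbers, the “share” of inputs owned by the generalizing circuit, and the independence assumption are all made up purely for illustration):

```python
# Picture 1: inputs are split between a single generalizing circuit and a
# memorizing pool; ablating the generalizing circuit only hurts the inputs
# it "owns", so accuracy degrades additively by (roughly) that share.
share_gen, acc_gen, acc_mem, chance = 0.7, 0.99, 1.0, 0.5
acc_full = share_gen * acc_gen + (1 - share_gen) * acc_mem
acc_ablated = share_gen * chance + (1 - share_gen) * acc_mem
print(f"picture 1: {acc_full:.3f} -> {acc_ablated:.3f}  (additive drop)")

# Picture 2: a correct answer needs several independent parallel subprocesses
# to all "do their job"; ablating any one of them rescales accuracy
# multiplicatively (here: replace subprocess 0 by a coin flip).
rates = [0.98, 0.97, 0.99, 0.96]
acc_full = 1.0
for r in rates:
    acc_full *= r
acc_ablated = acc_full * chance / rates[0]
print(f"picture 2: {acc_full:.3f} -> {acc_ablated:.3f}  (multiplicative drop)")
```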
Thanks for the questions!
You first introduce the SLT argument that tells us which loss scale to choose (the “Watanabe scale”, derived from the Watanabe critical temperature).
Sorry, I think the context of the Watanabe scale is a bit confusing. I’m saying that in fact it’s the wrong scale to use as a “natural scale”. The Watanabe scale depends only on the number of training datapoints, and doesn’t notice any other properties of your NN or your phenomenon of interest.
Roughly, the Watanabe scale is the scale on which loss improves if you memorize a single datapoint (so memorizing improves accuracy by 1/n with n = #(training set) and, in a suitable operationalization, improves loss by something on the order of $\frac{\log n}{n}$; this is the Watanabe scale).
It’s used in SLT roughly because it’s the minimal temperature scale where “memorization doesn’t count as relevant”, and so relevant measurements become independent of the n-point sample. However, in most interp experiments the realistic loss reconstruction is much rougher (i.e., further from optimal loss) than the 1/n scale where memorization becomes an issue (even if you conceptualize #(training set) as some small synthetic training set that you were running the experiment on).
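For concreteness, the scaling I have in mind (in the standard SLT tempered-posterior setup; the notation here is mine):

$$
p_\beta(w) \;\propto\; \varphi(w)\, e^{-n\beta L_n(w)}, \qquad \beta^* = \frac{1}{\log n}, \qquad n\beta^*\,\Delta L \sim 1 \;\Longleftrightarrow\; \Delta L \sim \frac{\log n}{n}.
$$

That is, at Watanabe’s critical temperature a loss difference only starts to matter thermodynamically once it is of order $\frac{\log n}{n}$, and memorizing a single datapoint sits right around this scale (up to the log factor); realistic interp reconstruction errors are many orders of magnitude rougher.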
For your second question: again, what I wrote is confusing and I really want to rewrite it more clearly later. I tried to clarify what I think you’re asking about in this shortform. Roughly, the point here is that to avoid having your results messed up by spurious behaviors, you might want to degrade as much as possible while still observing the effect of your experiment. The idea is that if you found any degradation that wasn’t explicitly designed with your experiment in mind (i.e., is natural), but where you see your experimental results hold, then you have “found a phenomenon”. The hope is that if you look at the roughest such scale, you might kill enough confounders and interactions to make your result be “clean” (or at least cleaner): so for example optimistically you might hope to explain all the loss of the degraded model at the degradation scale you chose (whereas at other scales, there are a bunch of other effects improving the loss on the dataset you’re looking at that you’re not capturing in the explanation).
The question then is, when degrading, in what order you want to “kill confounders” to optimally purify the effect you’re considering. The “natural degradation” idea seems like a good place to look since it kills the “small but annoying” confounders: things like memorization, weird specific connotations of the test sentences you used for your experiment, etc. Another reasonable place to look is training checkpoints, as these correspond to killing “hard to learn” effects. Ideally you’d perform several kinds of degradation to “maximally purify” your effect. Here the “natural scales” (loss on the level of, e.g., Claude 1 or BERT) are much too fine for most modern experiments, and I’m envisioning something much rougher.
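Here is a minimal sketch of the recipe I have in mind; to be clear, the Gaussian weight-noise degradation and the generic “effect statistic” below are stand-ins for whatever natural degradation and experiment you actually care about, not a specific proposal:

```python
import copy
import torch

def degrade(model, scale, seed=0):
    """Crude, experiment-agnostic degradation: add isotropic Gaussian noise
    of size `scale` to every parameter of a copy of `model`."""
    torch.manual_seed(seed)
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(scale * torch.randn_like(p))
    return noisy

def roughest_scale_with_effect(model, effect_fn, scales, threshold):
    """Sweep degradation scales from fine to rough and return the roughest
    one at which the experimental effect (a user-supplied statistic) still
    holds -- the scale at which to try to explain the degraded model."""
    chosen = None
    for s in sorted(scales):
        if effect_fn(degrade(model, s)) > threshold:
            chosen = s
    return chosen
```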
The intuition here comes from physics. Like if you want to study properties of a hydrogen atom that you don’t see either in water or in hydrogen gas, a natural thing to do is to heat up hydrogen gas to extreme temperatures where the molecules degrade but the atoms are still present, now in “pure” form. Of course not all phenomena can be purified in this way (some are confounded by effects both at higher and at lower temperature, etc.).
Thanks! Yes the temperature picture is the direction I’m going in. I had heard the term “rate distortion”, but didn’t realize the connection with this picture. Might have to change the language for my next post
This seems overstated
In some sense this is the definition of the complexity of an ML algorithm; more precisely, the direct analog of complexity in information theory, which is the “entropy” or “Solomonoff complexity” measurement, is the free energy (I’m writing a distillation on this but it is a standard result). The relevant question then becomes whether the “SGLD” sampling techniques used in SLT for measuring the free energy (or technically its derivative) actually converge to reasonable values in polynomial time. This is checked pretty extensively in this paper for example.
A possibly more interesting question is whether notions of complexity in interpretations of programs agree with the inherent complexity as measured by free energy. The place I’m aware of where this is operationalized and checked is our project with Nina on modular addition: here we do have a clear understanding of the platonic complexity, and the local learning coefficient does a very good job of capturing it asymptotically, with very good precision (both for memorizing and generalizing algorithms, where the complexity difference is very significant).
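For what it’s worth, here is a minimal numpy sketch of the SGLD-based local learning coefficient estimator as I understand it, on a toy regular model where the answer should come out near d/2; the hyperparameters are illustrative, and the localization strength gamma should stay small relative to the posterior curvature so as not to bias the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regular model: linear regression in d = 2 dimensions, where the
# learning coefficient should be close to d/2 = 1.
n, d = 1000, 2
X = rng.normal(size=(n, d))
y = rng.normal(scale=0.1, size=n)                    # "true" weights are zero

def loss(w):                                         # empirical loss L_n(w)
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad_loss(w):
    return X.T @ (X @ w - y) / n

w_star = np.linalg.lstsq(X, y, rcond=None)[0]        # local minimum of L_n
beta = 1.0 / np.log(n)                               # Watanabe's critical inverse temperature
gamma, eps, steps, burn = 1.0, 1e-4, 20_000, 2_000   # localization, step size, chain length

# SGLD targeting the localized tempered posterior
#   p(w) ~ exp(-n*beta*L_n(w) - (gamma/2)*|w - w_star|^2)
w, samples = w_star.copy(), []
for t in range(steps):
    drift = n * beta * grad_loss(w) + gamma * (w - w_star)
    w = w - 0.5 * eps * drift + np.sqrt(eps) * rng.normal(size=d)
    if t >= burn:
        samples.append(loss(w))

lambda_hat = n * beta * (np.mean(samples) - loss(w_star))
print(f"estimated local learning coefficient: {lambda_hat:.2f} (expect ~{d / 2})")
```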
Citation? [for Apollo]
Look at this paper (note I haven’t read it yet). I think their LIB work is also promising (at least it separates circuits of small algorithms)
Thanks for the reference, and thanks for providing an informed point of view here. I would love to have more of a debate here, and would quite like being wrong as I like tropical geometry.
First, about your concrete question:
As I understand it, here the notion of “density of polygons” is used as a kind of proxy for the derivative of a PL function?
Density is a proxy for the second derivative: indeed, the closer a function is to linear, the easier it is to approximate it by a linear function. I think a similar idea occurs in 3D graphics, in mesh optimization, where you can improve performance by reducing the number of cells in flatter domains (I don’t understand this field, but this is done in this paper according to some curvature-related energy functional). The question of “derivative change when crossing walls” seems similar. In general, glancing at the paper you sent, it looks like polyhedral currents are a locally polynomial PL generalization of currents of ordinary functions (and it seems that there is some interesting connection made to intersection theory/analogues of Chow theory, though I don’t have nearly enough background to read this part carefully). Since the purpose of PL functions in ML is to approximate some (approximately smooth, but fractally messy and stochastic) “true classification”, I don’t see why one wouldn’t just use ordinary currents here (currents on a PL manifold can be made sense of after smoothing, or in a distribution-valued sense, etc.).
In general, I think the central crux between us is whether or not this is true:
tropical geometry might be relevant to ML, for the simple reason that the functions coming up in ML with ReLU activation are PL
I’m not sure I agree with this argument. The use of PL functions is by no means central to ML theory, and is an incidental aspect of early algorithms. The most efficient activation functions for most problems tend to not be ReLUs, though the question of activation functions is often somewhat moot due to the universal approximation theorem (and the fact that, in practice, at least for shallow NNs anything implementable by one reasonable activation tends to be easily implementable, with similar macroscopic properties, by any other). So the reason that PL functions come up is that they’re “good enough to approximate any function” (and also “asymptotic linearity” seems genuinely useful to avoid some explosion behaviors). But by the same token, you might expect people who think deeply about polynomial functions to be good at doing analysis because of the Stone-Weierstrass theorem.
More concretely, I think there are two core “type mismatches” between tropical geometry and the kinds of questions that appear in ML:
Algebraic geometry in general (including tropical geometry) isn’t good at dealing with deep compositions of functions, and especially approximate compositions.
(More specific to TG): the polytopes that appear in neural nets are, as I explained, inherently random (the typical interpretation we have of even combinatorial algorithms like modular addition is that the PL functions produce some random sharding of some polynomial function). This is a very strange thing to consider from the point of view of a tropical geometer: as an algebraic geometer, it’s hard for me to imagine a case where “this polynomial has degree approximately 5… it might be 4 or 6, but the difference between them is small”. I simply can’t think of any behavior that is at all meaningful from an AG-like perspective where the questions of fan combinatorics and degrees of polynomials are replaced by questions of approximate equality.
I can see myself changing my view if I see some nontrivial concrete prediction or idea that tropical geometry can provide in this context. I think a “relaxed” form of this question (where I genuinely haven’t looked at the literature) is whether tropical geometry has ever been useful (either in proving something or at least in reconceptualizing something in an interesting way) in linear programming. I think if I see a convincing affirmative answer to this relaxed question, I would be a little more sympathetic here. However, the type signature here really does seem off to me.
If I understand correctly, you want a way of thinking about a reference class of programs that has some specific, perhaps interpretability-relevant or compression-related properties in common with the deterministic program you’re studying?
I think in this case I’d actually say the tempered Bayesian posterior by itself isn’t enough, since even if you work locally in a basin, it might not preserve the specific features you want. In this case I’d probably still start with the tempered Bayesian posterior, but then also condition on the specific properties/explicit features/ etc. that you want to preserve. (I might be misunderstanding your comment though)
Statistical localization in disordered systems, and dreaming of more realistic interpretability endpoints
[epistemic status: half fever dream, half something I think is an important point to get across. Note that the physics I discuss is not my field though close to my interests. I have not carefully engaged with it or read the relevant papers—I am likely to be wrong about the statements made and the language used.]
A frequent discussion I get into in the context of AI is “what is an endpoint for interpretability”. I get into this argument from two sides:
arguing with interpretability purists, who say that the only way to get robust safety from interpretability is to mathematically prove that behaviors are safe and/or no deception is going on.
arguing with interpretability skeptics, who say that interpretability is hopeless for robust safety, since the only way it could deliver robust safety would be to prove that behaviors are safe and/or no deception is going on, which is unrealistic.
My typical response to this is that no, you’re being silly: imagine discussing any other phenomenon in this way: “the only way to show that the sun will rise tomorrow is to completely model the sun on the level of subatomic particles and prove that they will not spontaneously explode”. Or asking a bridge safety expert to model every single particle and provably lower-bound the probability of the bridge losing structural coherence in a way not captured by bulk models.
But there’s a more fundamental intuition here, that I started developing when I started trying to learn statistical physics. There are a few lossy ways of expressing it. One is to talk about renormalization: how the assumption of renormalizability of systems is a “theorem” in statistical mechanics, but is not (and probably never will be) proven mathematically (in some sense, it feels much more like a “truly new flavor of axiom” than even complexity-theoretic conjectures like P vs. NP). But that’s still not it. There is a more general intuition, that’s hard to get across (in particular for someone who, like me, is only a dabbler in the subject): that some genuinely incredibly complex and information-laden systems have some “strong locality” properties, which are (insofar as the physical meaning of the word holds meaning) both provable and very robust to changing and expanding the context.
For a while, I thought that this is just a vibe—a way to guide thinking, but not something that can be operationalized in a way that may significantly convince people without a similar intuition.
However, recently I’ve become more hopeful that an “explicitly formalizable” notion of robust interpretability may fall out of this language in a somewhat natural way.
This is closely related to recent discussions and writeups we’ve been doing with Lauren Greenspan on scale and renormalization in (statistical) QFT and connections to ML.
One direction to operationalize this is through the notion of “localization” in statistical physics, and in particular “Anderson localization”. The idea (if I understand it correctly) is that in certain disordered systems (think of a semiconductor, which is an “ordered” metal with a disordered system of “impurity atoms” sprinkled inside), you can prove a kind of screening property: that from the point of view of the localized dynamics near a particular spin, you can provably ignore spins far away from the point you’re studying (or rather, replace them by an “ordered” field that modifies the local dynamics in a fully controllable way). This idea of local interactions being “screened” from far-away details is ubiquitous. In a very large and very robust class of systems, interactions are purely local, except for mediation by a small number of hierarchical “smooth” couplings that see only high-level summary statistics of the “non-local” spins and treat them as a background—and moreover, these “locality” properties are provable (insofar as we assume the extra “axioms” of thermodynamics), assuming some (once again, hierarchical and robustly adjustable) assumptions of independence. There are a number of related principles here that (if I understand correctly) get used in similar contexts, sometimes interchangeably: one I liked is “local perturbations perturb locally” (“LPPL”) from this paper.
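For readers who want something concrete to poke at, here is a toy 1D Anderson-model demo (my own illustration, and far simpler than the disordered systems being gestured at above): diagonalize a tight-binding chain with random on-site disorder and watch the eigenstates localize, as measured by the inverse participation ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
N, t, W = 400, 1.0, 3.0          # sites, hopping strength, disorder strength

# 1D tight-binding Hamiltonian: hopping -t between neighbors,
# random on-site energies drawn uniformly from [-W/2, W/2].
H = np.diag(rng.uniform(-W / 2, W / 2, size=N))
H += np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)
ipr = (np.abs(np.linalg.eigh(H)[1]) ** 4).sum(axis=0)

# Inverse participation ratio: ~1/N for an extended state, ~1/xi for a state
# localized on ~xi sites. With disorder it stays far above 1/N.
print(f"median IPR with disorder:    {np.median(ipr):.3f}")

# Same lattice without disorder, for comparison: states are extended.
H0 = np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)
ipr0 = (np.abs(np.linalg.eigh(H0)[1]) ** 4).sum(axis=0)
print(f"median IPR without disorder: {np.median(ipr0):.4f}  (~1/N = {1 / N:.4f})")
```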
Note that in the above paragraph I did something I generally disapprove of: I am trying to extract and verbalize “vibes” from science that I don’t understand on a concrete level, and I am almost certainly getting a bunch of things wrong. But I don’t know of another way of gesturing in a “look, there’s something here and it’s worth looking into” way without doing this to some extent.
Now AI systems, just like semiconductors, are statistical systems with a lot of disorder. In particular, in a standard operationalization (as e.g. in PDLT), we can conceptualize neural nets as a field theory. There is a “vacuum theory” that depends only on the architecture, and then adding new datapoints corresponds to adding particles. PDLT only studies a certain perturbative picture here, but it seems plausible that an extension of these techniques may extend to non-perturbative scales (and hope for this is a big part of the reason that Lauren and I have been thinking and writing about renormalization). In a “dream” version of such an extension, the datapoints would form a kind of disordered system, with ordered components, hierarchical relationships, and some assumption of inherent randomness outside of the relationships. A great aspect of “numerical” QFT, such as gets applied in condensed matter models, is that you don’t need a really great model of the hierarchical relationships: sometimes you can just play around and turn on a handful of extra parameters until you find something that works. (Again, at the moment this is an imprecise interpretation of things I have not deeply engaged with.)
Of course doing this makes some assumptions—but the assumptions are on the level of the data (i.e., particles), not the weights/model internals (i.e., fields—the place where we are worried about misalignment, etc.). And if you grant these assumptions and write down a “localization theorem” result, then plausibly the kind of statement you will get is something along the lines of the following:
“the way this LLM is completing this sentence is a combination of a sophisticated collection of hierarchical relationships, but I know that the behavior here is equivalent to behaviors on other similar sentences up to small (provably) low-complexity perturbations”.
More generally, the kind of information this kind of picture would give is a kind of “local provably robust interpretability”—where the text completion behavior of a model is provably (under suitable “disordered system” assumptions) reducible to a collection of several local circuits that depend on understandable phenomena at a few different scales. A guiding “complexity intuition” for me here is provided by the nontrivial but tractable grammar-task diagrams in the paper by Marks et al. (See pages 25-27, and note that the shape of these diagrams is more or less the typical shape of a nonrenormalized interaction diagram you see before you start applying renormalization to simplify a statistical system.)
An important caveat here is that in physical models of this type (and in pictures that include renormalization more generally), one does not make—or assume—any “fundamentality” assumptions. In many cases a number of alternative (but equivalent, once the “screening” is factored in) pictures exist, with various levels of granularity, elegance, etc. This already can be seen in the 2D Ising model (a simple magnet model), where the same behaviors can be understood either in a combinatorial “spin-to-spin interaction” way, which mirrors the “fundamental interpretability” desires of mechinterp, or through the “recursive screening out” model that is more renormalization-flavored; the results are the same (to a very high level of precision), even when looking at very localized effects involving collections of a few spins. So the question of whether an interpretation is “fundamental” or uses the “right latents” is to a large extent obviated here; the world of thermodynamics is much more anarchical and democratic than the world of mathematical formalism and “elegant proof”, at least in this context.
Having handwavily described a putative model, I want to quickly say that I don’t actually believe in this model. There are a bunch of things I probably got wrong, there are a bunch of other, better tools to use, and so on. But the point is not the model: it’s that this kind of stuff exists. There exist languages that show that arbitrarily complex, arbitrarily expressive behaviors are provably reducible to local interactions, where behaviors can be understood as clusters of hierarchical interactions that treat all but a few parts of the system at every point as “screened out noise”.
I think that if models like this are possible, then a solution to “the interpretability component to safety” is possible in this framework. If you have provably localized behaviors then, for example, you have a good idea where to look for deception: e.g., deception cannot occur on the level of “very low-level” local interactions, as they are too simple to express the necessary reasoning, and perhaps it can be carefully operationalized and tracked in the higher-level interactions.
As you’ve no doubt noticed, this whole picture is splotchy and vague. It may be completely wrong. But there also may be something in this direction that works. I’m hoping to think more about this, and very interested in hearing people’s criticisms and thoughts.
What application do you have in mind? If you’re trying to reason about formal models without trying to completely rigorously prove things about them, then I think thinking of neural networks as stochastic systems is the way to go. Namely, you view the weights as a weight-valued random variable produced by solving a stochastic optimization problem, and then condition it on whatever knowledge about the weights/activations you assume is available. This can be done both in the Bayesian “thermostatic” sense as a model of idealized networks, and in the sense of modeling the NN as an SGD-like system. Both methods are explored explicitly (and give different results) in suitable high-width limits by the PDLT and tensor networks paradigms (the latter also looks at “true SGD” with nonnegligible step size).
Here you should be careful about what you condition on, as conditioning on exact knowledge of too much input-output behavior of course blows stuff up, and you should think of a way of coarse-graining, i.e. “choose a precision scale” :). Here my first go-to would be to assume the tempered Boltzmann distribution on the loss at an appropriate choice of temperature for what you’re studying.
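Concretely, the kind of distribution I mean is something like this (my notation, with the conditioning written as a hard constraint for simplicity):

$$
p_\beta(w \mid \mathcal{O}) \;\propto\; \varphi(w)\, e^{-n\beta L_n(w)}\;\mathbf{1}\!\left[\, w \text{ is consistent with the coarse-grained observations } \mathcal{O} \,\right],
$$

where $\varphi$ is the initialization prior, $L_n$ the empirical loss, $\beta$ the temperature knob that sets the precision scale, and $\mathcal{O}$ whatever coarse-grained input-output knowledge you choose to condition on (in practice you would presumably soften the indicator).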
If you’re trying to do experiments, then I would suspect that a lot of the time you can just blindly throw whatever ML-ish tools you’d use in an underdetermined, “true inference” context and they’ll just work (with suitable choices of hyperparameters)
This is where this question of “scale” comes in. I want to add that (at least morally/intuitively) we are also thinking about discrete systems like lattices, and then instead of a regulator you have a coarsegraining or a “blocking transformation”, which you have a lot of freedom to choose. For example in PDLT, the object that plays the role of coarsegraining is the operation that takes a probability distribution on neurons and applies a single-layer NN to it.
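As a cartoon of what such a blocking step could look like (a hedged sketch of the general idea only, not the actual PDLT construction): push a Gaussian distribution over neuron preactivations through one random layer by Monte Carlo and re-fit the first two moments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_samples = 64, 64, 20_000

def block_step(mean, cov):
    """One cartoon 'blocking' step: sample neuron preactivations from a
    Gaussian, push them through a random fully connected layer with a
    nonlinearity, and summarize the output again by its first two moments."""
    z = rng.multivariate_normal(mean, cov, size=n_samples)
    W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
    b = rng.normal(scale=0.1, size=n_out)
    h = np.tanh(z) @ W + b
    return h.mean(axis=0), np.cov(h, rowvar=False)

mean, cov = np.zeros(n_in), np.eye(n_in)
for layer in range(3):
    mean, cov = block_step(mean, cov)
    print(f"layer {layer + 1}: mean norm {np.linalg.norm(mean):.3f}, "
          f"avg variance {np.trace(cov) / n_out:.3f}")
```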
Thanks for the reference—I’ll check out the paper (though there are no pointer variables in this picture inherently).
I think there is a miscommunication in my messaging. Possibly through overcommitting to the “matrix” analogy, I may have given the impression that I’m doing something I’m not. In particular, the view here isn’t a controversial one—it has nothing to do with Everett or einselection or decoherence. Crucially, I am saying nothing at all about quantum branches.
I’m now realizing that when you say map or territory, you’re probably talking about a different picture where quantum interpretation (decoherence and branches) is foregrounded. I’m doing nothing of the sort, and as far as I can tell never making any “interpretive” claims.
All the statements in the post are essentially mathematically rigorous claims which say what happens when you
start with the usual QM picture, and posit that
your universe divides into at least two subsystems, one of which you’re studying
one of the subsystems your system is coupled to is a minimally informative infinite-dimensional environment (i.e., a bath).
Both of these are mathematically formalizable and aren’t saying anything about how to interpret quantum branches etc. And the Lindbladian is simply a useful formalism for tracking the evolution of a system that has these properties (subdivisions and baths). Note that (maybe this is the confusion?) subsystem does not mean quantum branch, or decoherence result. “Subsystem” means that we’re looking at these particles over here, but there are also those particles over there (i.e., in terms of math, your Hilbert space is a tensor product $\mathcal{H} = \mathcal{H}_{\text{here}} \otimes \mathcal{H}_{\text{there}}$).
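For reference, the Lindblad evolution I mean here is the completely standard one,

$$
\dot\rho \;=\; -\,i\,[H,\rho] \;+\; \sum_k \Big( L_k \rho L_k^\dagger \;-\; \tfrac{1}{2}\big\{ L_k^\dagger L_k,\, \rho \big\} \Big),
$$

where $H$ is the system Hamiltonian and the jump operators $L_k$ encode the coupling to the bath; $\rho$ is the density matrix of the subsystem obtained by tracing out the environment factor of that tensor product.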
Also, I want to be clear that we can and should run this whole story without ever using the term “probability distribution” in any of the quantum-thermodynamics concepts. The language to describe a quantum system as above (system coupled with a bath) is from the start a language that only involves density matrices, and never uses the term “X is a probability distribution of Y”. Instead you can get classical probability distributions to map into this picture as a certain limit of these dynamics.
As to measurement, I think you’re once again talking about interpretation. I agree that in general, this may be tricky. But what is once again true mathematically is that if you model your system as coupled to a bath then you can set up behaviors that behave exactly as you would expect from an experiment from the point of view of studying the system (without asking questions about decoherence).
Thanks for the questions!
Yes, “QFT” stands for “Statistical field theory” :). We thought that this would be more recognizable to people (and also, at least to some extent, statistical is a special case of quantum). We aren’t making any quantum proposals.
We’re following (part of) this community, and interested in understanding and connecting the different parts better. Most papers in the “reference class” we have looked at come from (a variant of) this approach. (The authors usually don’t assume Gaussian inputs or outputs, but just high width compared to depth and number of datapoints—this does make them “NTK-like”, or at least perturbatively Gaussian, in a suitable sense).
Neither of us thinks that you should think of AI as being in this regime. One of the key issues here is that Gaussian models can not model any regularities of the data beyond correlational ones (and it’s a big accident that MNIST is learnable by Gaussian methods). But we hope that what AIs learn can largely be well-described by a hierarchical collection of different regimes where the “difference”, suitably operationalized, between the simpler interpretation and the more complicated one is well-modeled by a QFT-like theory (in a reference class that includes perturbatively Gaussian models but is not limited to them). In particular one thing that we’d expect to occur in certain operationalizations of this picture is that once you have some coarse interpretation that correctly captures all generalizing behaviors (but may need to be perturbed/suitably denoised to get good loss), the last and finest emergent layer will be exactly something in the perturbatively Gaussian regime.
Note that I think I’m more bullish about this picture and Lauren is more nuanced (maybe she’ll comment about this). But we both think that it is likely that having a good understanding of perturbatively Gaussian renormalization would be useful for “patching in the holes”, as it were, of other interpretability schemes. A low-hanging fruit here is that whenever you have a discrete feature-level interpretation of a model, instead of just directly measuring the reconstruction loss you should at minimum model the difference between the model and the interpretation as a perturbative Gaussian (corresponding to assuming the difference has “no regularity beyond correlation information”).
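Here is a minimal sketch of that low-hanging fruit (the function names and the toy stand-in data are mine, purely illustrative): fit a Gaussian to the model-minus-interpretation residual in logit space and check how much of the loss gap adding that Gaussian back accounts for.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_residual_report(model_logits, interp_logits, targets, n_draws=64):
    """Fit a Gaussian (mean + covariance) to the residual between model and
    interpretation logits, and compare three losses: model, raw interpretation,
    and interpretation plus Gaussian-sampled residuals (averaged over draws)."""
    def xent(logits):
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(targets)), targets].mean()

    resid = model_logits - interp_logits
    mu, cov = resid.mean(axis=0), np.cov(resid, rowvar=False)
    draws = rng.multivariate_normal(mu, cov, size=(n_draws, len(targets)))
    noised = np.mean([xent(interp_logits + d) for d in draws])
    return xent(model_logits), xent(interp_logits), noised

# Toy stand-in: 3-class problem, "interpretation" = model logits + structured noise.
n, k = 500, 3
model_logits = rng.normal(size=(n, k))
targets = model_logits.argmax(axis=1)        # model is "correct" by construction
interp_logits = model_logits + rng.normal(scale=0.5, size=(n, k)) + 0.3
print(gaussian_residual_report(model_logits, interp_logits, targets))
```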
We don’t want to assume homogeneity, and this is mostly covered by 2b-c above. I think the main point we want to get across is that it’s important and promising to try to go beyond the “homogeneity” picture—and to try to test this in some experiments. I think physics has a good track record here. Not on the level of tigers, but for solid-state models like semiconductors. In this case you have:
The “standard model” only has several-particle interactions (corresponding to the “small-data limit”).
By applying RG techniques to a regular metallic lattice (with initial interactions from the standard model), you end up with a good new universality class of QFT’s (this now contains new particles like phonons and excitons which are dictated by the RG analysis at suitable scales). You can be very careful and figure out the renormalization coupling parameters in this class exactly, but much more realistically and easily you just get them from applying a couple of measurements. On an NN level, “many particles arranged into a metallic pattern” corresponds to some highly regular structure in the data (again, we think “particles” here should correspond to datapoints, at least in the current RLTC paradigm).
The regular metal gives you a “background” theory, and now we view impurities as a discrete random-feature theory on top of this background. Physicists can still run RG on this theory by zooming out and treating the impurities as noise, but in fact you can also understand the theory on a fine-grained level near an impurity by a more careful form of renormalization, where you view the nearest several impurities as discrete sources and only coarsegrain far-away impurities as statistical noise. At least for me, the big hope is that this last move is also possible for ML systems. In other words, when you are interpreting a particular behavior of a neural net, you can model it as a linear combination of a few messy discrete local circuits that apply in this context (like the complicated diagram from Marks et al below) plus a correctly renormalized background theory associated to all other circuits (plus corrections from other layers plus …)
To add: I think the other use of “pure state” comes from this context. Here if you have a system of commuting operators and take a joint eigenspace, the projector is mixed, but it is pure if the joint eigenvalue uniquely determines a 1D subspace; and then I think this terminology gets used for wave functions as well
One person’s “occam’s razor” may be description length, another’s may be elegance, and a third person’s may be “avoiding having too much info inside your system” (as some anti-MW people argue). I think discussions like “what’s real” need to be done thoughtfully, otherwise people tend to argue past each other, and come off overconfident/ underinformed.
To be fair, I did use language like this so I shouldn’t be talking—but I used it tongue-in-cheek, and the real motivation given in the above is not “the DM is a more fundamental notion” but “DM lets you make concrete the very suggestive analogy between quantum phase and probability”, which you would probably agree with. For what it’s worth, there are “different layers of theory” (often scale-dependent), like classical vs. quantum vs. relativity, etc., where I think it’s silly to talk about “ontological truth”. But these theories are local conceptual optima, sitting among a graveyard of “outdated” theories that are strictly conceptually inferior to newer ones: examples are geocentrism (with Ptolemy’s epicycles), the ether, etc.
Interestingly, I would agree with you (with somewhat low confidence) that in this question there is a consensus among physicists that one picture is simply “more correct” in the sense of giving theoretically and conceptually more elegant/ precise explanations. Except your sign is wrong: this is the density matrix picture (the wavefunction picture is genuinely understood as “not the right theory”, but still taught and still used in many contexts where it doesn’t cause issues).
I also think that there are two separate things that you can discuss.
Should you think of thermodynamics, probability, and things like thermal baths as fundamental to your theory or incidental epistemological crutches to model the world at limited information?
Assuming you are studying a “non-thermodynamic system with complete information”, where all dynamics is invertible over long timescales, should you use wave functions or density matrices?
Note that for #1, you should not think of a density matrix as a probability distribution on quantum states (see the discussion with Optimization Process in the comments), and this is a bad intuition pump. Instead, the thing that replaces probability distributions in quantum mechanics is a density matrix.
I think a charitable interpretation of your criticism would be a criticism of #1 (putting limited-info dynamics, i.e., quantum thermodynamics, as primary relative to “invertible dynamics”). Here there is a debate to be had.
I think there is not really a debate in #2: even in invertible QM (no probability), you need to use density matrices if you want to study different subsystems (e.g. when modeling systems existing in an infinite, but not thermodynamic universe you need this language, since restricting a wavefunction to a subsystem makes it mixed). There’s also a transposed discussion, that I don’t really understand, of all of this in field theory: when do you have fields vs. operators vs. other more complicated stuff, and there is some interesting relationship to how you conceptualize “boundaries”—but this is not what we’re discussing. So you really can’t get away from using density matrices even in a nice invertible universe, as soon as you want to relate systems to subsystems.
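The minimal standard example of that last point:

$$
|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big) \quad\Longrightarrow\quad \rho_A = \operatorname{Tr}_B\, |\psi\rangle\langle\psi| = \tfrac{1}{2}\big(|0\rangle\langle 0| + |1\rangle\langle 1|\big):
$$

the joint state is a perfectly good wavefunction, but the state of subsystem $A$ alone is maximally mixed, and no wavefunction on $A$ reproduces its statistics.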
For question #1, it is reasonable (though I don’t know how productive) to discuss what is “primary”. I think (but here I am really out of my depth) that people who study very “fundamental” quantum phenomena increasingly use a picture with a thermal bath (e.g. I vaguely remember this happening in some lectures here). At the same time, it’s reasonable to say that “invertible” QM phenomena are primary and statistical phenomena are ontological epiphenomena on top of this. While this may be a philosophical debate, I don’t think it’s a physical one, since the two pictures are theoretically interchangeable (as I mentioned, there is a canonical way to get thermodynamics from unitary QM as a certain “optimal lower bound on information dynamics”, appropriately understood).
Still, as soon as you introduce the notion of measurement, you cannot get away from thermodynamics. Measurement is an inherently information-destroying operation, and iiuc can only be put “into theory” (rather than being an arbitrary add-on that professors tell you about) using the thermodynamic picture with nonunitary operators on density matrices.
Thanks—you’re right. I have seen “pure state” referring to a basis vector (e.g. in quantum computation), but in QTD your definition is definitely correct. I don’t like the term “pointer variable”—is there a different notation you like?
Thanks for this post. I would argue that part of an explanation here could also be economic: modernity brings specialization and a move from the artisan economy of objects as uncommon, expensive, multipurpose, and with a narrow user base (illuminated manuscripts, decorative furniture) to a more utilitarian and targeted economy. Early artisans need to compete for a small number of rich clients by being the most impressive, artistic, etc., whereas more modern suppliers follow more traditional laws of supply and demand and track more costs (cost-effectiveness, readability and reader’s time vs. beauty and remarkableness). And consumers similarly can decouple their needs: art as separate from furniture and architecture, poetry and drama as separate from information and literature. I think another aspect of this shift, that I’m sad we’ve lost, is the old multipurpose scientific/philosophical treatises with illustrations or poems (my favorite being de Rerum Natura, though you could argue that Nietzsche and Wagner tried to revive this with their attempts at Gesamtkunstwerke).