I would distinguish two variants of this. There’s just plain inertia, like if you have a big pile of legacy code that accumulated from a lot of work, then it takes a commensurate amount of work to change it. And then there’s security, like a society needs rules to maintain itself against hostile forces. The former is sort of accidentally surreal, whereas the latter is somewhat intentionally so, in that a tendency to re-adapt would be a vulnerability.
I wonder if you could also do something like, have an LLM evaluate whether a message contains especially-private information (not sure what that would be… gossip/reputationally-charged stuff? sexually explicit stuff? planning rebellions? doxxable stuff?), and hide those messages while looking at other ones.
Though maybe that’s unhelpful because spambot authors would just create messages that trigger these filters?
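For concreteness, a minimal sketch of what that gating could look like (the prompt, the `call_llm` stand-in, and the threshold are all hypothetical, not a worked-out proposal):

```python
# Hypothetical sketch: gate which private messages get surfaced for spam review
# based on an LLM's rating of how sensitive they are. `call_llm` is a stand-in
# for whatever completion API is actually available.

SENSITIVITY_PROMPT = (
    "Rate from 0 to 10 how personally or reputationally sensitive the following "
    "private message is (gossip, sexual content, doxxable details, etc.). "
    "Reply with only the number.\n\nMessage:\n{message}"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM provider here")

def reviewable(message: str, threshold: int = 3) -> bool:
    """Only messages rated at or below the threshold are shown to reviewers."""
    rating = int(call_llm(SENSITIVITY_PROMPT.format(message=message)))
    return rating <= threshold
```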
I had at times experimented with making LLM commentators/agents, but I kind of feel like LLMs are always (nearly) “in equilibrium”, and so your comments end up too dependent on the context and too unable to contribute anything other than factual knowledge. It’s cute to see your response to this post, but ultimately I expect that LessWrong will be best off without LLMs, at least for the foreseeable future.
On the object level, this is a study about personality, and it majorly changed the way I view some personality traits:
I now see conservatism/progressivism as one of the main axes of personality,
It further cemented my perception that “well-being”, or “extraversion minus neuroticism”, is the strongest of the traditional personality dimensions, and that maybe also this raises questions about what personality even means (for instance, surely well-being is not simply a biological trait),
I’m now much more skeptical about how “real” many personality traits are, including traits like “compassion” that were previously quite central to my models of personality.
I think my study on the EQ-SQ model follows in the footsteps of this, rethinking much of what I thought I knew about differential psychology.
However, I actually view the fundamental contribution of the post quite differently from this. Really, I’m trying to articulate and test theories of personality, as well as perform exploratory analyses, and I hope that I will inspire others to do so, and that I will become better at doing so over time. If this interests you, I would suggest you join Rationalist Psychometrics, a small Discord server for this general topic.
In terms of methodology, this study is heavily focused on factor analysis. At the time of writing the post, I thought factor analysis was awesome and underrated. I still think it’s great for testing the sorts of theories discussed in the post, and since such theories take up a lot of space in certain groups’ discussion of differential psychology, I still think factor analysis is quite underrated.
But factor analysis is not everything. My current special interest is Linear Diffusion of Sparse Lognormals, which promises to do much better than factor analysis … if I can get it to work. As such, while the post (and psychometrics in general) focuses quite heavily on factor analysis, I cannot wholeheartedly endorse that aspect of the post.
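For anyone who hasn’t used the method, here is a minimal illustration of the kind of low-rank structure factor analysis is meant to recover (synthetic data and scikit-learn; nothing here comes from the actual study):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Two made-up latent traits generate six observed questionnaire items;
# factor analysis tries to recover that low-rank structure from the item correlations.
n_people = 2000
latent = rng.normal(size=(n_people, 2))
loadings = np.array([
    [0.8, 0.0], [0.7, 0.1], [0.6, 0.0],   # items mostly driven by trait 1
    [0.0, 0.9], [0.1, 0.7], [0.0, 0.6],   # items mostly driven by trait 2
])
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n_people, 6))

fa = FactorAnalysis(n_components=2).fit(items)
print(fa.components_.round(2))  # estimated loadings, close to the generating ones up to rotation/sign
```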
Prediction markets are good at eliciting information that correlates with what will be revealed in the future, but they treat each piece of information independently. Latent variables are a well-established method of handling low-rank connections between pieces of information, and I think this post does a good job of explaining why we might want to use them, as well as how we might implement them in prediction markets.
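As a toy illustration of the “low-rank connections” point (made-up numbers and a hypothetical latent variable, not the mechanism proposed in the post): when one latent variable drives several questions, resolving any one of them should move the prices of all the others.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

# One latent variable (say, some underlying growth rate) drives five yes/no
# questions with different sensitivities and thresholds, IRT-style.
theta = rng.normal(size=100_000)
discrimination = np.array([1.5, 1.0, 2.0, 0.8, 1.2])
difficulty = np.array([0.0, -0.5, 0.5, 1.0, -1.0])
p_yes = sigmoid(discrimination * (theta[:, None] - difficulty))
outcomes = rng.random(p_yes.shape) < p_yes

print(outcomes.mean(axis=0).round(3))                   # unconditional "prices"
print(outcomes[outcomes[:, 0]].mean(axis=0).round(3))   # prices after question 0 resolves yes
```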
Of course the post is probably not entirely perfect. Already shortly after I wrote it, I switched from leaning towards IRT to leaning towards LCA, as you can see in the comments. I think it’s best to think of the post as staking out a general shape for the idea, and then as one goes about implementing it, one can adjust the details based on what seems to work best.
Overall though, I’m now somewhat less excited about LVPMs than I was at the time of writing it, but this is mainly because I now disagree with Bayesianism and doubt the value of eliciting information per se. I suspect that the discourse mechanism we need is not something for predicting the future, but rather for attributing outcomes to root causes. See Linear Diffusion of Sparse Lognormals for a partial attempt at explaining this.
Insofar as rationalists are going to keep going with the Bayesian spiral, I think LVPMs are the major next step. Even if it’s not going to be the revolutionary method I assumed it would be, I would still be quite interested to see what happens if this ever gets implemented.
How about geology, ecology and history? It seems like you are focused on mechanisms rather than contents.
That said, I’m using “quantum mechanics” to mean “some generalization of the standard model” in many places.
I think this still has the ambiguity that I am complaining about.
As an analogy, consider the distinction between:
Some population of rabbits that is growing over time due to reproduction
The Fibonacci sequence as a model of the growth dynamics of this population
A computer program computing, or a mathematician deriving, the numbers in or properties of this sequence
The first item in this list is meant to be analogous to quantum mechanics qua the universe, as in it is some real-world entity that one might hypothesize acts according to certain rules, but exists regardless. The second is a Platonic mathematical object that one might hypothesize matches the rules of the real-world entity. And the third consists of actual instantiations of this Platonic mathematical object in reality. I would maybe call these “the territory”, “the hypothetical map” and “the actual map”, respectively.
In practice, the actual experimental predictions of the standard model are something like probability distributions over the starting and ending momentum states of particles before and after they interact at the same place at the same time, so I don’t think you can actually run a raw standard model simulation of the solar system which makes sense at all.

To make my argument more explicit, I think you could run a lattice simulation of the solar system far above the Planck scale and full of classical particles (with proper masses and proper charges under the standard model) which all interact via general relativity, so at each time slice you move each particle to a new lattice site based on its classical momentum and the gravitational field in the previous time slice. Then you run the standard model at each lattice site which has more than one particle on it to destroy all of the input particles and generate a new set of particles according to the probabilistic predictions of the standard model, and the identities and momenta of the output particles according to a sample of that probability distribution will be applied in the next time slice.

I might be making an obvious particle physics mistake, but modulo my own carelessness, almost all lattice sites would have nothing on them, many would have photons, some would have three quarks, fewer would have an electron on them, and some tiny, tiny fraction would have anything else. If you interpreted sets of sites containing the right number of up and down quarks as nucleons, interpreted those nucleons as atoms, used nearby electrons to recognize molecules, interpreted those molecules as objects or substances doing whatever they do in higher levels of abstraction, and sort of ignored anything else until it reached a stable state, then I think you would get a familiar world out of it if you had the utterly unobtainable computing power to do so.
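If I’m reading the proposed scheme right, its loop structure is roughly the following toy sketch (a 1D lattice, trivial transport, and a made-up placeholder where the standard-model sampling would go; none of the actual physics is represented):

```python
import random

LATTICE_SIZE = 1000
STEPS = 50

# Each toy "particle" is (site, velocity in sites per time slice).
particles = [(random.randrange(LATTICE_SIZE), random.choice([-1, 0, 1]))
             for _ in range(200)]

def placeholder_interaction(occupants):
    """Stand-in for sampling the standard model's output distribution at a
    multiply-occupied site; here it just re-randomizes the velocities."""
    return [(site, random.choice([-1, 0, 1])) for site, _ in occupants]

for _ in range(STEPS):
    # 1. Classical transport between time slices (the gravity/momentum step).
    moved = [((site + v) % LATTICE_SIZE, v) for site, v in particles]
    # 2. Resolve every site holding more than one particle with the interaction step.
    by_site = {}
    for p in moved:
        by_site.setdefault(p[0], []).append(p)
    particles = [q for site, occupants in by_site.items()
                 for q in (placeholder_interaction(occupants) if len(occupants) > 1 else occupants)]
```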
Wouldn’t this fail for metals, quantum computing, the double slit experiment, etc.? By switching back and forth between quantum and classical, it seems like you forbid any superpositions/entanglement/etc. on a scale larger than your classical lattice size. The standard LessWrongian approach is to just bite the bullet on the many worlds interpretation (which I have some philosophical quibbles with, but those quibbles aren’t so relevant to this discussion, I think, so I’m willing to grant the many worlds interpretation if you want).
Anyway, more to the point, this clearly cannot be done with the actual map, and the hypothetical map does not actually exist, so my position is that while this may help one understand the notion that there is a rule that perfectly constrains the world, the thought experiment does not actually work out.
Somewhat adjacently, your approach to this is reductionistic, viewing large entities as being composed of unfathomably many small entities. As part of LDSL I’m trying to wean myself off of reductionism, and instead take large entities to be more fundamental, and treat small entities as something that the large entities can be broken up into.
This is tangential to what I’m saying, but it points at something that inspired me to write this post. Eliezer Yudkowsky says things like the universe is just quarks, and people say “ah, but this one detail of the quark model is wrong/incomplete” as if it changes his argument when it doesn’t. His point, so far as I understand it, is that the universe runs on a single layer somewhere, and higher-level abstractions are useful to the extent that they reflect reality. Maybe you change your theories later so that you need to replace all of his “quark” and “quantum mechanics” words with something else, but the point still stands about the relationship between higher-level abstractions and reality.
My in-depth response to the rationalist-reductionist-empiricist worldview is Linear Diffusion of Sparse Lognormals. Though there are still some parts of it I need to write. The main objection I have here is that the “single layer” is not so much the true rules of reality as it is the subset of rules that are unobjectionable due to applying everywhere and at every time. It’s like the minimal conceivable set of rules.
The point of my quantum mechanics model is not to model the world, it is to model the rules of reality which the world runs on.
I’d argue the practical rules of the world are determined not just by the idealized rules, but also by the big entities within the world. The simplest example is outer space; it acts as a negentropy source and is the reason we can assume that e.g. electrons go into the lowest orbitals (whereas if e.g. outer space was full of hydrogen, it would undergo fusion, bombard us with light, and turn the earth into a plasma instead). More elaborate examples would be e.g. atmospheric oxygen, whose strong reactivity leads to a lot of chemical reactions, or even e.g. thinking of people as economic agents means that economic trade opportunities get exploited.
It’s sort of conceivable that quantum mechanics describes the dynamics as a function of the big entities, but we only really have strong reasons to believe so with respect to the big entities we know about, rather than all big entities in general. (Maybe there are some entities that are sufficiently constant that they are ~impossible to observe.)
Quantum mechanics isn’t computationally intractable, but making quantum mechanical systems at large scales is.
But in the context of your original post, everything you care about is large scale, and in particular the territory itself is large scale.
That is a statement about the amount of compute we have, not about quantum mechanics.
It’s not a statement about quantum mechanics if you view quantum mechanics as a Platonic mathematical ideal, or if you use “quantum mechanics” to refer to the universe as it really is, but it is a statement about quantum mechanics if you view it as a collection of models that are actually used. Maybe we should have three different terms to distinguish the three?
Couldn’t one say that a model is not truly a model unless it’s instantiated in some cognitive/computational representation, and therefore since quantum mechanics is computationally intractable, it is actually quite far from being a complete model of the world? This would change it from being a map vs territory thing to more being a big vs precise Pareto frontier.
(Not sure if this is too tangential to what you’re saying.)
This also kind of reveals why bad faith is so invalidating. If the regulatory commission can trust others enough to outsource its investigations to them, then it might be able to save resources. However, that mainly works if those others act in sufficiently good faith that they aren’t a greater resource sink than investigating it directly and/or just steamrolling the others with a somewhat-flawed regulatory authority.
Neither full-contact psychoanalysis nor focusing on the object-level debate seems like a good way to proceed in the face of a regulatory commission. Instead, the regulatory commission should just spend its own resources checking what’s true, and maybe ask the parties in the debate to account for their deviations from the regulatory commission’s findings. Or if the regulatory commission is a sort of zombie commission that doesn’t have the capacity to understand reality, each member in the conflict could do whatever rituals best manipulate the commission to their own benefit.
One thing to consider is that until you’ve got an end-to-end automation of basic human needs like farming, the existence of other humans remains a net benefit for you, both to maintain these needs and to incentivize others to share what they’ve done.
Automating this end-to-end is a major undertaking, and it’s unclear whether LLMs are up to the task. If they aren’t, it’s possible we will return to a form of AI where classical alignment problems apply.
There might be humans who set it up in exchange for power/similar, and then it continues after they are gone (perhaps simply because it is “spaghetti code”).
The presence of the regulations might also be forced by other factors, e.g. to suppress AI-powered frauds, gangsters, disinformation spreaders, etc.
Not if the regulation is sufficiently self-sustainably AI-run.
These aren’t the only heavy tails, just the ones with highest potential to happen quickly. You could also have e.g. people regulating themselves to extinction.
I think this is a temporary situation because no sufficiently powerful entity has invested sufficiently much in AI-based defence. If this situation persists without any major shift in power for long enough, then it will be because the US and/or China have made an AI system to automatically suppress AI-powered gangs, and maybe also to automatically defend against AI-powered militaries. But the traditional alignment problem would to a great degree apply to such defensive systems.
She also frequently compared herself to Glaistig Uaine and Kyubey.
Reminder not to sell your soul(s) to the devil.
What I don’t get is, why do you have this impulse to sanewash the sides in this discussion?
Is this someone who has a parasocial relationship with Vassar, or a more direct relationship? I was under the impression that the idea that Michael Vassar supports this sort of thing was a malicious lie spread by rationalist leaders in order to purge the Vassarites from the community. That seems more like something someone in a parasocial relationship would mimic than like something a core Vassarite would do.
I have been very critical of cover-ups on LessWrong. I’m not going to name names and maybe you don’t trust me. But I have observed this all directly. If you let people toy with your brain while you are under the influence of psychedelics, you should expect high odds of severe consequences. And your friends’ mental health might suffer as well.
I would highlight that the Vassarites’ official stance is that privacy is a collusion mechanism created to protect misdoers, and so they can’t consistently oppose you sharing what they know.
The reason I suggest making it filter-in is because it seems to me that it’s easier to make a meaningful filter that accurately detects a lot of sensitive stuff than a filter that accurately detects spam, because “spam” is kind of open-ended. Or I guess in practice spam tends to be porn bots and crypto scams? (Even on LessWrong?!) But e.g. truly sensitive talk seems disproportionately likely to involve cryptography and/or sexuality, so trying to filter for porn bots and crypto scams seems relatively likely to reveal sensitive stuff.
The filter-in vs filter-out distinction in my proposal is not so much about the degree of visibility. For instance, you could guard my filter-out proposal with the other filter-in proposals, e.g. only showing metadata and only inspecting suspected spammers, rather than making it available to everyone.