You see either something special, or nothing special.
Rana Dexsin
What do you mean by “makes the URL bar useless”? What’s the use you’re hoping would still be there?
The URL is externalized mental context on which article I’m currently viewing, and it’s also common for me to copy the link out to use elsewhere. Previously it would stay the URL of the front page—I think that’s been changed since I wrote that, though.
The point of the modals is they don’t lose your place in the feed in a way that’s hard technically to do with proper navigation, though it’s possible we should just figure out how to do that.
Yeah, though as a desktop browser user, I already have a well-practiced way of doing that, which is to open stuff in new tabs if I want to keep my place. I would imagine doing a pre-link-following replacement of the history state to include an anchor that restores my position on Back would allow true top-level navigation here? Or, stashing the “read up to X so far” state somewhere seems to be a common thing. (You’ve presumably thought of all that already.)
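To gesture at that first idea concretely, here is a minimal sketch of the "stash your place before navigating, restore it on Back" pattern using the standard History API. The `.feed-item` selector and `data-feed-anchor` attribute are invented for illustration; the real feed markup would obviously differ.

```typescript
// Minimal sketch: before a link click navigates away, remember which feed
// item is at the top of the viewport by rewriting the current history entry.
document.addEventListener("click", (event) => {
  const target = event.target as Element | null;
  const link = target?.closest("a[href]");
  if (!link) return;

  const topItem = Array.from(
    document.querySelectorAll<HTMLElement>(".feed-item"),
  ).find((el) => el.getBoundingClientRect().bottom > 0);

  if (topItem?.dataset.feedAnchor) {
    history.replaceState(
      { ...history.state, feedAnchor: topItem.dataset.feedAnchor },
      "",
    );
  }
});

// On return (including back/forward-cache restores), scroll the remembered
// item back into view.
window.addEventListener("pageshow", () => {
  const anchor = history.state?.feedAnchor;
  if (typeof anchor === "string") {
    document
      .querySelector(`[data-feed-anchor="${anchor}"]`)
      ?.scrollIntoView();
  }
});
```

The same state slot could just as easily hold a "read up to X so far" marker instead of a visual anchor.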
All comment threads are what I call a “linear-slice” (parent-child-child-child) with no branching.
Yeah, I figured it out eventually. It does seem tricky to get a good presentational design here; I don’t know of a great way to convey that difference in context, and I do feel like it’s awkward to remember the distinction or have to flip between the formats mentally when navigating around. Maybe if the visual frames were more distinct from the kind used in the nested-comments interface it’d be easier to remember that the chain isn’t siblings?
Thanks for continuing to try to improve the site!
I’ve just realized a potential connection upon remembering something from a different Paul Graham essay.
From this post:
Graham has a natural affinity for production-based strategies which allowed him to acquire various kinds of capital. He blinds himself to the existence of adversarial strategies, so he’s able to authentically claim to think that e.g. mean people fail….
From “Lies We Tell Kids”:
Innocence is also open-mindedness. We want kids to be innocent so they can continue to learn. Paradoxical as it sounds, there are some kinds of knowledge that get in the way of other kinds of knowledge. If you’re going to learn that the world is a brutal place full of people trying to take advantage of one another, you’re better off learning it last. Otherwise you won’t bother learning much more.
Very smart adults often seem unusually innocent, and I don’t think this is a coincidence. I think they’ve deliberately avoided learning about certain things. Certainly I do. I used to think I wanted to know everything. Now I know I don’t.
So he’s already written something similar out explicitly, at least. And juxtaposing those feels like it bends the framing away from the blind spot being a natural accident of affinity (which I don’t think is explicit in this post but which is how my mind tends to treat it by default). I’m not sure what to make of this, but it feels interesting.
Without having made much adaptation effort:
The “open stuff in a modal overlay page on top of the feed rather than linking normally, incidentally making the URL bar useless” is super confusing and annoying. Just now, when I tried to use my usual trick of Open Link in New Tab for getting around confusingly overridden navigation on the “Click to view all comments” link to this very thread, it wasn’t an actual link at all.
I don’t know how to interpret what’s going on when I’m only shown a subset of comments in a feed section and they don’t seem to be contiguous. Usually I feel like I’m missing context, fumble around with the controls that seem related to the relationships between comments to try to open the intervening ones, still can’t tell whether I’m missing things (I think the indentation behavior is different? How do I tell how many levels there are between two comments that look like they have a parent^N relationship?), and wind up giving up and trying to find a good thread parent to see in its entirety.
Not infrequently, I listen to music with lyrics in a language that I don’t understand well, alongside which I sometimes do things like listen carefully to the pronunciation, or look up the meaning and consciously nudge myself to associate the words I didn’t previously know. I’m not serious enough about this to measure, but from self-observation it seems like I’ve picked up or reinforced a noticeable amount of language patterns via exposure (indeed, implicit and untuned spaced repetition) that way. Lyrics being a form of poetry broadly, and especially the use of children’s music in early learning more specifically (where my guess would be that the use of melody might help anchor the representations in memory), also seem suggestive of pro-language-acquisition music activities. How does your model treat this?
I wonder how it would look through a lens of “local people living densely enough is a precondition for making the city ‘hot’ to begin with, so disengaged investment in empty housing removes a usually-unaccounted-for positive externality”? For that matter, is that the lens that advocates for punitive empty-unit taxation use?
Just because the assumption was that the problem would be discrimination in favour of white men
I’m missing a connection somewhere—who was assuming this? You mean people at the AI companies evaluating the results? Other researchers? The general public?
I think it’s pretty clear that in the Bay Area tech world, recently enough to have a strong impact on AI model tuning, there has been a lot of “we’re doing the right and necessary thing by implementing what we believe/say is a specific counterbias”, which, when combined with attitudes of “not doing it enough constitutes active complicity in great evil” and “the legal system itself is presumed to produce untrustworthy results corrupted by the original bias” and no widely accepted countervailing tuning mechanism, is a recipe for cultural overshoot, both preceding AI training and carrying into it, and for the illegality to be pushed out of view. In particular, if what you meant is that you think the AI training process was the original source of the overshoot and it happened despite careful and unbiased attention to the legality, I think both are unlikely because of this other overwhelming environmental force.
One of my main sources for how that social sphere works is Patrick McKenzie, but most of what he posts on such things has been on the site formerly known as Twitter and not terribly explicit in isolation nor easily searchable, unfortunately. This is the one I most easily found in my history, from mid-2023: https://x.com/patio11/status/1678235882481127427. It reads “I have had a really surprising number of conversations over the years with people who have hiring authority in the United States and believe racial discrimination in employment is legal if locally popular.” While the text doesn’t state the direction of the discrimination explicitly, the post references someone else’s post about lawyers suddenly getting a lot of questions from their corporate clients about whether certain diversity policies are legal as a result of Students For Fair Admissions, which is an organization that challenges affirmative action admissions policies at schools.
(Meta: sorry for the flurry of edits in the first few minutes there! I didn’t quite order my posting and editing processes properly.)
(Now much more tangentially:)
… hmm, come to think of it, maybe part of conformity-pressure in general can be seen as a special case of this where the pool resource is more purely “cognition and attention spent dealing with non-default things” and the nonconformity by default has more of a purely negative impact on that axis, whereas conformity-pressure over technology with specific capabilities causes the nature of the pool resource to be pulled in the direction of what the technology is providing and there’s an active positive thing going on that becomes the baseline… I wonder if anything useful can be derived from thinking about those two cases as denoting an axis of variation.
And when the conformity is to a new norm that may be more difficult to understand but produces relative positive externalities in some way, is that similar to treating the new norm as a required table stakes cognitive technology?
You’ve reminded me of a perspective I was meaning to include but then forgot to, actually. From the perspective of an equilibrium in which everyone’s implicitly expected to bring certain resources/capabilities as table stakes, making a personal decision that makes your life better but reduces your contribution to the pool can be seen as defection—and on a short time horizon or where you’re otherwise forced to take the equilibrium for granted, it seems hard to refute! (ObXkcd: “valuing unit standardization over being helpful possibly makes me a bad friend” if we take the protagonist as seeing “US customary units” as an awkward equilibrium.) Some offshoots of this which I’m not sure what to make of:
1. If the decision would lead to a better society if everyone did it, and leads to an improvement for you if only you do it, but requires the rest of a more localized group to spend more energy to compensate for you if you do it and they don’t, we have a sort of “incentive misalignment sandwich” going on. In practice I think there’s usually enough disagreement about the first point that this isn’t clear-cut, but it’s interesting to notice.
2. In the face of technological advances, what continues to count as table stakes tends to get set by Moloch and mimetic feedback loops rather than intentionally. In a way, people complaining vociferously about having to adopt new things are arguably acting in a counter-Moloch role here, but in the places I’ve seen that happen, it’s either been ineffective or led to a stressful and oppressive atmosphere of its own (or, most commonly and unfortunately, both).
3. I think intuitive recognition of (2) is a big motivator behind attacking adopters of new technology that might fall into this pattern, in a way that often gets poorly expressed in a “tech companies ruin everything” type of way. Personally taking up smartphones, or cars, or—nowadays the big one that I see in my other circles—generative AI, even if you don’t yourself look down on or otherwise directly negatively impact non-users, can be seen as playing into a new potential equilibrium where if you can, you ‘must’, or else you’re not putting in as much as everyone else, and so everyone else will gradually find that they get boxed in and any negative secondary effects on them are irrelevant compared to the phase transition energy. A comparison that comes to mind is actually labor unions; that’s another case where restraining individually expressed capabilities in order to retain a better collective bargaining position for others comes into play, isn’t it?
(I feel sort of confused about how people who don’t use it for coding are doing. With coding, I can feel the beginnings of a serious exoskeleton that can build structures around me with thought. Outside of that, I don’t know of it being more than a somewhat better google).
There’s common ways I currently use (the free version of) ChatGPT that are partially categorizable as “somewhat better search engine”, but where I feel like that’s not representative of the real differences. A lot of this is coding-related, but not all, and the reasons I use it for coding-related and non-coding-related tasks feel similar. When it is coding-related, it’s generally not of the form of asking it to write code for me that I’ll then actually put into a project, though occasionally I will ask for example snippets which I can use to integrate the information better mentally before writing what I actually want.
The biggest difference in feel is that a chat-style interface is predictable and compact and avoids pushing a full-sized mental stack frame and having to spill all the context of whatever I was doing before. (The name of the website Stack Exchange is actually pretty on point here, insofar as they were trying to provide something similar from crowdsourcing!) This is something I can see being a source of creeping mental laziness—but it depends on the size and nature of the rest of the stack: if you were already under high context-retention load relative to your capabilities, and you’re already task-saturated enough, and you use a chatbot for leaf calls that would otherwise cause you to have to do a lot of inefficient working-memory juggling, then it seems like you’re already getting a lot of the actually-useful mental exercise at the other levels and you won’t be eliminating much of it, just getting some probabilistic task speedups.
In roughly descending order of “qualitatively different from a search engine” (which is not the same as “most impactful to me in practice”):
Some queries are reverse concept search, which to me is probably the biggest and hardest-to-replicate advantage over a traditional search engine: I often have the shape of a concept that seems useful, but because I synthesized it myself rather than drawing from popular existing uses, I don’t know what it’s called. This can be checked for accuracy using a traditional search engine in the forward direction once I have the candidate term.
Some queries are for babble purposes: “list a bunch of X” and I’ll throw out 90% of them for actual use but use the distribution to help nudge my own imagination—generally I’ll do my own babble first and then augment it, to limit priming effects. There’s potential for memetic health issues here, but in my case most of these are isolated enough that I don’t expect them to combine to create larger problems. (In a qualitatively similar way but with a different impact, some of it is pure silliness. “Suppose the protagonists of Final Fantasy XIII had Geese powers. What kind of powers might they have?”)
Synthesis and shaping of information is way different from search engine capabilities. This includes asking for results tailored along specific axes I care about where it’s much less likely an existing webpage author has used that as a focus, small leaps of connective reasoning that would take processing and filtering through multiple large pages to do via search engine, and comparisons between popular instances of a class (in coding contexts, often software components) where sometimes someone’s actually written up the comparison and sometimes not. Being able to fluently ask followups that move from a topic to a subtopic or related topic without losing all the context is also very useful. “Tell me about the main differences between X1 and X2.” → “This new thing introduced in X2, is that because of Y?” (but beware of sycophancy biases if you use leading questions like that)
(Beyond this point we get closer to “basically a search engine”.)
Avoiding the rise in Web annoyances is a big one in practice—which ties into the weird tension of social contracts around Internet publishing being kind of broken right now, but from an information-consumer perspective, the reprocessed version is often superior. If a very common result is that a search engine will turn up six plausible results, and three of them are entirely blog slop (often of a pre-LLM type!) which is too vague to be useful for me, two of them ask me to sign up for a ‘free’ account to continue but only after I’ve started reading the useless intro text, and one of them contains the information I need in theory but I have to be prepared to click the “reject cookies” button, and click the close button on the “get these delivered to your inbox” on-scroll popup, and hope it doesn’t load another ten-megabyte hero image that I don’t care about and chew through my cellular quota in the process, and if I try to use browser extensions to combat this then the text doesn’t load, and so on and so on… then obviously I will switch to asking the chatbot first! “most of the content is buried in hour-long videos” is skew to this but results in the same for me.
In domains like “how would I get started learning skill X”, where there’s enough people who can get a commercial advantage through SEO’ing that into “well, take our course or buy our starter kit” (but usually subtler than that), those results seem (and I think for now probably are) less trustworthy than chatbot output that goes directly to concrete aspects that can be checked more cleanly, and tend to disguise themselves to be hard to filter out without reading a lot of the way through. Of course, there’s obvious ways for this not to last, either as SEO morphs into AIO or if the chatbot providers start selling the equivalent of product placement behind the scenes.
I imagine this differs a lot based on what social position you’re already in and where you’re likely to get your needs met. When assumptions like “everyone has a smartphone” become sufficiently widespread, you can be blocked off from things unpredictably when you don’t meet them. You often can’t tell which things these are in advance: simplification pressure causes a phase transition from “communicated request” to “implicit assumption”, and there’s too many widely-distributed ways for the assumption to become relevant, so doing your own modeling will produce a “reliably don’t need” result so infrequently as to be effectively useless. Then, if making the transition to conformity when you notice a potential opportunity is too slow or is blocked by e.g. resource constraints or value differences, a lot of instant-lose faces get added to the social dice you roll. If your anticipated social set is already stable and well-adapted to you, you may not be rolling many dice, but if you’re precarious, or searching for breakthrough opportunities, or just have a role with more wide-ranging and unpredictable requirements on which interactions you need to succeed at, it’s a huge penalty. Other technologies this often happens with in the USA, again depending on your social class and milieu, include cars, credit cards, and Facebook accounts.
(It feels like there has to already be an explainer for this somewhere in the LW-sphere, right? I didn’t see an obvious one, though…)
lsusr said an official goodbye months ago...
That one is an April Fools post. Judging by lsusr’s user page, they’ve continued participating since then.
To warp the quotations somewhat to focus on something:
… And if some people don’t like it, no big deal: they don’t have to participate.
… Best I can tell, it doesn’t work.
Specifically, it seems to me like not immediately making that first part overt, salient common knowledge would be load-bearing for acquiring the kind of social cohesion attributed to religious groups. The stickiness of mutual anticipation is a lot of the point.
The artist is using “does the audience overtly respond well to this” as a proxy measure for whether the art meets the artist’s more illegible standard of goodness, but the audience is using “does this come from an artist we already regard as good” as a proxy measure for their own illegible standard of goodness. The illegible standards of both parties had to intersect enough around the initial art for the cycle to get started, but that doesn’t mean they’re the same, nor that the optimization processes are completely symmetrical or the same process. It might be possible that the signals get so entangled that you could treat it as an instance of single-Goodhart on some compound measure from outside the system, but from inside the system there’s still multiple sub-cycles going on that feed each other. Does that answer this, or is there something else off?
The funny thing I feel when reading this post is that I’ve had thoughts about this sort of cycle before—I think not the exact mirror cycle you’re talking about, but similar fixed-points of ping-ponging taste-shaping—but they weren’t framed as “how do we avoid this as a trap” so much as “what if that class of system and its basins are the figure that ‘humanity’ (or some relevant subset?) effectively cares about, and the grounding in some other reality is mainly ‘useful’ for edge constraints and random perturbations”.
Mostly re the footnote: as a long-term legacy em dash user, I have fear and apprehension over potential changing connotations, myself.
FWIW I ran a draft post through this service once, asked some questions along the way, and got exactly the kind of feedback I was looking for. Some of the feedback resulted in a minor cascade of realizations about what edits I’d want to make and how I’d want to think about things, and then some other stuff happened that shoved the task priority down indefinitely, so for now the draft is still sitting there unpublished—but consider this an endorsement!
Is there a known etymology for these? Also, what do people think of as the existing native-sounding pair closest in meaning to this pair?
My guess would be that “din” is an abstraction of English “din” as in “noise”, and “kodo” might be via Japanese “鼓動” (kodō) = “beating” (especially including for heartbeats).
Sorry if I’m missing something stupid, but doesn’t that first sentence there explain the situation? “Please note that we are not accepting applications to use the cluster at this time.” I would presume the Submit button is just vestigial due to not being able to easily hide/remove it at that target URL.
Assuming I’m right, what would be nicer presentation-wise is if the page at https://safe.ai/work/compute-cluster were to change its button to a non-button reading “Applications Currently Closed” or such, and if the explanation included something more explicitly referring to the non-functional UI like “This form thus cannot currently be submitted.”
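Concretely, something like this hypothetical snippet is all I mean; the `#apply-submit` id and `APPLICATIONS_OPEN` flag are invented, since I have no idea how the page is actually built.

```typescript
// Hypothetical sketch only; the element id and flag are made up, not the real page code.
const APPLICATIONS_OPEN = false;

const submitButton = document.querySelector<HTMLButtonElement>("#apply-submit");
if (submitButton && !APPLICATIONS_OPEN) {
  // Swap the live button for inert text so the page stops implying
  // that submitting currently does anything.
  const notice = document.createElement("span");
  notice.className = submitButton.className; // keep the existing styling
  notice.textContent = "Applications Currently Closed";
  submitButton.replaceWith(notice);
}
```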
You mean something like using libraries of 3D models, maybe some kind of generative grammar of how to arrange them, and renderer feedback for what’s actually visible to produce (geometry description, realistic rendering) pairs to train visuospatial understanding on?
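To check that I'm picturing the same shape, here is a toy sketch of that pipeline; every type and stub function is invented for illustration, and nothing corresponds to a real asset library, grammar, or renderer.

```typescript
// Toy sketch of a (geometry description, realistic rendering) data pipeline.
type ModelId = string;
interface Placement { model: ModelId; position: [number, number, number]; }
interface RenderResult { image: Uint8Array; visible: Placement[]; }
interface TrainingPair { geometryDescription: string; renderedImage: Uint8Array; }

// Stand-in for sampling assets from a 3D model library.
const sampleModels = (): ModelId[] => ["chair", "table", "lamp"];

// Stand-in for a generative grammar deciding how to arrange the assets.
const arrangeScene = (models: ModelId[]): Placement[] =>
  models.map((model, i) => ({ model, position: [i, 0, 0] }));

// Stand-in for a renderer that also reports which placements ended up visible.
const renderScene = (scene: Placement[]): RenderResult => ({
  image: new Uint8Array(0),
  visible: scene.filter((p) => p.position[0] >= 0), // pretend occlusion test
});

const describeVisibleGeometry = (visible: Placement[]): string =>
  visible.map((p) => `${p.model} at (${p.position.join(", ")})`).join("; ");

// One training pair: the label only describes what the renderer says is in view.
function generatePair(): TrainingPair {
  const scene = arrangeScene(sampleModels());
  const { image, visible } = renderScene(scene);
  return { geometryDescription: describeVisibleGeometry(visible), renderedImage: image };
}
```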
(sorry for the snark, but I’m guessing leaving user emotions in during UX testing is valuable)
I just did the following:
1. Clicked on this answer to this post in the feed, which expanded the context to show the first part of the post itself.
2. Clicked on the title of the post in the expansion, which opened the whole thing in the post-over-feed overlay—or so I assumed.
3. Got terribly confused when the answer I’d originally gotten there from was absent. In fact, there was no indication of it being a question-type post—it was presented as though it were a basic article, with the comment section present but the answer section just missing!
4. Navigated to the original post in a new tab and facepalmed real hard.
This reeks of underlying fragility and has destroyed my confidence that the overlay is implemented in a way that will keep continued track of how posts are actually supposed to be presented. Until I see a “we refactored it so that it is now difficult to push changes that will desynchronize the presentation logic between these cases”, I’m going to have to assume it’s an attractive nuisance. 🙁
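To gesture at what I mean by that refactor, a hedged sketch (all names here are hypothetical, not the site's actual components): make both the overlay and the standalone page render through one shared function that knows about post kinds, so an answers section can't silently vanish from just one of them.

```typescript
// Hypothetical sketch; Post, renderInOverlay, and renderAsPage are invented names.
type PostKind = "article" | "question";

interface Post {
  kind: PostKind;
  title: string;
  body: string;
  answers?: string[]; // only question-type posts have these
  comments: string[];
}

// Single source of truth for which sections a post is supposed to show.
function postSections(post: Post): string[] {
  const sections = [post.title, post.body];
  if (post.kind === "question") {
    sections.push(...(post.answers ?? [])); // can't be dropped by one surface only
  }
  sections.push(...post.comments);
  return sections;
}

// Both presentation surfaces go through the same function, so the overlay
// and the full post page can't quietly disagree about question posts.
const renderInOverlay = (post: Post): string => postSections(post).join("\n\n");
const renderAsPage = (post: Post): string => postSections(post).join("\n\n");
```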
Edited to add: ah, I see dirk reported the same underlying issue—leaving this up in case the feeling/implication parts are still relevant.