You see either something special, or nothing special.
Rana Dexsin
The identifiable code chunks look more specifically like they’re meant for ComputerCraft, which is a Minecraft mod that provides Lua-programmable in-game computers. Your link corroborates this: it’s within the ComputerCraft repository itself, underneath an asset path that provides files for in-game floppy disks containing Lua programs that players can discover as dungeon loot; GravityScore is a contributor with one associated loot disk, which claims to be an improved Lua code editor. The quoted chunk is slightly different, as the “availableThemes” paragraph is not commented out—probably a different version. Lua bytecode would be uncommon here; ComputerCraft programs are not typically stored in bytecode form, and in mainline Lua 5.2 it’s a security risk to enable bytecode loading in a multitenant environment (though I’m not sure how LuaJ handles this).
The outermost structure starting from the first image looks like a Lua table encoding a tree of files containing an alternate OS for the in-game computers (“Linox” likely a corruption of “Linux”), so probably an installer package of some kind. The specific “!@#&” sequence appears exactly where I would expect newlines in the ‘files’ within the tree that correspond to Lua source, so I think it’s a crude substitution encoding of newline; perhaps someone chose it because they thought it would be uncommon (or due to frustration over syntax errors) while writing the “encode as string literal” logic.
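If that guess is right, the packing logic would be trivial. Here’s a minimal sketch (in Python, purely for illustration; the actual installer would be Lua, and the function names here are my own inventions) of the kind of substitution I’m imagining:

```python
# Hypothetical sketch of the crude newline substitution that the
# "!@#&" sequence suggests: file bodies get flattened into one-line
# string literals by swapping newlines for a marker assumed to be
# rare in the source text.
MARKER = "!@#&"

def encode_body(source: str) -> str:
    """Replace newlines so a file body fits in a one-line literal."""
    return source.replace("\n", MARKER)

def decode_body(encoded: str) -> str:
    """Restore the original newlines when unpacking the files."""
    return encoded.replace(MARKER, "\n")

lua_source = "print('hello')\nprint('world')\n"
packed = encode_body(lua_source)
assert "\n" not in packed          # safe to embed in a one-line literal
assert decode_body(packed) == lua_source  # round-trips losslessly
```

The obvious fragility is that any source file which happens to contain the marker itself would be corrupted on decode, which fits the “crude” characterization.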
The strings of hex digits in the “etc” files look more like they’re meant to represent character-cell graphics, which would be consistent with someone wanting to add logos in a character-cell-only context. One color palette index per character would make the frequency distribution match up with logos that are mostly one color with some accents. However, we can’t easily determine the intended shapes if whitespace has been squashed HTML-style for display.
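To make the “one palette index per character” reading concrete: ComputerCraft terminals use a 16-color palette, so each hex digit could plausibly address one color per character cell. A speculative decoder (the sample “logo” below is made up by me, not taken from the actual files):

```python
# Speculative decoder for the hex strings, assuming one hexadecimal
# digit per character cell, each digit indexing a 16-entry color
# palette such as ComputerCraft's terminal colors.
def decode_image(hex_rows):
    """Turn rows of hex digits into rows of integer palette indices."""
    return [[int(ch, 16) for ch in row] for row in hex_rows]

# A tiny made-up example: mostly one color (0) with a few accents (e),
# matching the skewed digit-frequency observation for logo-like art.
rows = ["000e00",
        "00eee0",
        "000e00"]
image = decode_image(rows)
assert image[1] == [0, 0, 14, 14, 14, 0]
```

This is also why squashed whitespace destroys the shapes: without the original row breaks, the same digit stream can’t be reassembled into a grid of known width.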
I was pretty sad about the ongoing distortion of “I checked” in what’s meant to be an epistemics-oriented community. I think the actual meanings are potentially really valuable, but without some way of avoiding them getting eaten, they become a hazard.
My first thought is to put a barrier in the way, but I don’t know if that plays well with the reactions system being for lower-overhead responses, and it might also give people unproductive bad feelings unless sold the right way.
Cars and planes and knives and various chemicals can be easily goaded to break the law by the user. No one has yet released a car that only ever follows all applicable laws no matter what the driver does.
Without taking a position on the copyright problem as a whole, there’s an important distinction here around how straightforward the user’s control is. A typical knife is operated in a way where deliberate, illegal knife-related actions can reasonably be seen as a direct extension of the user’s intent (and accidental ones an extension of the user’s negligence). A traditional car is more complex, but cars are also subject to licensing regimes which establish social proof that the user has been trained in how to produce intended results when operating the car, so that illegal car-related actions can be similarly seen as an extension of the user’s intent or negligence. Comparing this to the legal wrangling around cars with ‘smarter’ autonomous driving features may be informative, because that’s when it gets more ambiguous how much of the result is a direct translation of the user’s intent. There does seem to be a lot of legal and social pressure on manufacturers to ensure the safety of autonomous driving by technical means, but I’m not as sure about legality; in particular, I vaguely remember mixed claims around the way self-driving features handle the tension between posted speed limits and commonplace human driving behavior in the US.
In the case of a chatbot, the part where the bot makes use of a vast quantity of information that the user isn’t directly aware of as part of forming its responses is necessary for its purpose, so expecting a reasonable user to take responsibility for anticipating and preventing any resulting copyright violations is not practical. Here, comparing chatbot output to that of search engines—a step down in the tool’s level of autonomy, rather than a step up as in the previous car comparison—may be informative. The purpose of a search engine similarly relies on the user not being able to directly anticipate the results, but the results can point to material that contains copyright violations or other content that is illegal to distribute. And even though those results are primarily links instead of direct inclusions, there’s legal and social pressure on search engines to do filtering and enforce specific visibility takedowns on demand.
So there’s clearly some kind of spectrum here between user responsibility and vendor responsibility that depends on how ‘twisty’ the product is to operate.
Detached from a comment on Zvi’s AI #80 because it’s a hazy tangent: the idea of steering an AI early and deeply using synthetic data reminds me distinctly of the idea of steering a human early and deeply using culture-reinforcing mythology. Or, nowadays, children’s television, I suppose.
Followup:
How so much artistry had been infused into the creation of Hogwarts was something that still awed Draco every time he thought about it. There must have been some way to do it all at once, no one could have detailed so much piece by piece, the castle changed and every new piece was like that.
Years later, Midjourney happened.
My favorite active use of those is differentially. Wiggling my nose can inspire visceral surprise.
Temporarily taking the post’s theory as given, then speculating: managers a few levels above the bottom won’t feel much dominance increase from hires at the bottom if they’re too organizationally distant for it to register, I’d think; the feeling boost from Nth-level reports would drop sharply with increasing N due to less personal contact. They would then seek to manipulate their set of direct reports. Some would see internal underlings as a threat, want to get them out of the way, and not necessarily have another insider suitable to displace them with. Some would see outsiders with external status markers (intelligence, high-profile accomplishments) and seek to gain indirect status by hiring them directly. Some might be obstructed directly from engaging internal promotions or get outcompeted for the internal pool.
… at least in the default light theme. (This is arguably a secondary reason not to overuse images.)
Observation of context drift: I was rereading some of HPMOR just now, and Harry’s complaint of “The person who made this probably didn’t speak Japanese and I don’t speak any Hebrew, so it’s not using their knowledge, and it’s not using my knowledge”, regarding a magic item in chapter 6, hits… differently in the presence of the current generation of language models.
The Review Bot would be much less annoying if it weren’t creating a continual stream of effective false positives on the “new comments on post X” indicators, which are currently the main way I keep up with new comments. I briefly looked for a way of suppressing these via its profile page and via the Site Settings screen but didn’t see anything.
I haven’t worked in an organization that uses microservices extensively, but what I hear from people who use them goes far beyond visibility constraints. As an example, allowing groups to perform deployment cycles without synchronizing seems to be a motivation that’s harder to satisfy with independently updated parts of a build-level monolith—not impossible, because you could set things up to propagate full rebuilds somehow and so forth, but more awkward. Either way, as you probably know, “in theory, people could just … but” is a primary motivator behind all kinds of socially- or psychologically-centered design.
That said, getting into too much detail on microservices feels like it’d get off topic, because your central example of the Giga Press is in a domain where the object-level manufacturing issues of metal properties and such should have a lot more impact. But to circle around, now I’m wondering: does the ongoing “software eating the world” trend come along with a side of “software business culture eating into other business cultures”? In the specific case of Tesla, there’s a more specific vector for this, because Elon Musk began his career during the original dot-com era and could have carried associated memes to Tesla. Are management and media associated with more physical industries being primed this way elsewhere? Or is this just, as they say, Elon Musk being Elon Musk, and (as I think you suggested in the original post) the media results more caused by the distortion of celebrity and PR than by subtler/deeper dysfunctions?
And microservices are mostly a solution to institutional/management problems, not technical ones.
So this is interesting in context, because management and coordination problems are problems! But they’re problems where the distinction between “people think this is a good idea” and “this is actually a good idea” is more bidirectionally porous than the kinds of problems that have more clearly objective solutions. In fact the whole deal with “Worse is Better” is substantially based on observing that if people gravitate toward something, that tends to change the landscape to make it a better idea, even if it didn’t look like that to start with, because there’ll be a broader selection of support artifacts and it’ll be easier to work with other people.
One might expect an engineering discipline to be more malleable to this when social factors are more constraining than impersonal physical/computational ones. In software engineering, I think this is true across large swaths of business software, but not necessarily in specialized areas. In mechanical engineering or manufacturing, closer to the primary focus of the original post, I would expect impersonal physical reality to push back much harder.
A separate result of this model would be that things become more fashion-based on average as humanity’s aggregate power over impersonal constraints increases, much like positional goods becoming more relatively prominent as basic material needs become easier to meet.
Publishing “that ship has sailed” earlier than others actively drives the ship. I notice that this feels terrible, but I don’t know where sensible lines are to draw in situations where there’s no existing institution that can deliver a more coordinated stop/go signal for the ship. I relatedly notice that allowing speed to make things unstoppable means any beneficial decision-affecting processes that can’t be or haven’t been adapted to much lower latencies lose all their results to a never-ending stream of irrelevance timeouts. I have no idea what to do here, and that makes me sad.
Related but more specific: “Give Up Seventy Percent Of The Way Through The Hyperstitious Slur Cascade”
Ted Chiang’s Chrystal Nights
Minor corrections: “Crystal Nights” does not have an H in the first word and is by Greg Egan. (The linked copy is on his own website, in fact, which also includes a number of his other works.)
So in the original text, you meant “openness minus conscientiousness”? That was not clear to me at all; a hyphen-minus looks much more like a hyphen in that position. A true minus sign (−) would have been noticeable to me; using the entire word would have been even more obvious.
Could restaurants become better aligned if instead of food we paid them for time?
The “anti-café” concept is like this. I’ve never been to one myself, but I’ve seen descriptions on the Web of a few of them existing. From what I’ve heard, they don’t provide anything like restaurant-style service; instead, there are often cheap or free snacks along the lines of what an office break room might carry, along with other amenities, and you pay for the amount of time you spend there.
How are those staying alive in the first place? I had previously used Nitter for keeping up with some of Eliezer’s posts without being logged in, but my understanding was that the workaround they were using to obtain the necessary API keys was closed off several months ago, and indeed the instances I used stopped working for that purpose. Have the linked instances found some alternative method?
Have you met non-serious people who long to be serious?
I am one of those people—modulo some possible definitional skew, of course, especially around to what degree someone who wishes to be different from how they are can be considered to wish for it coherently.
I know that right now I am not acting seriously almost at all, and I feel a strong dislike of this condition. Most of my consciously held desires are oriented in the direction of seriousness. A great deal of me longs to be serious in wholeness, but that desire is also being opposed by a combination of deep (but ego-dystonic) conditioning, some other murkier limitations that seem ambiguously biological and in any case have been very difficult to get at or pin down, and some major internal conflicts around which path to be serious about—whose resolution in turn is being obstructed by the rest of it.
Edited to add: to be clear, this isn’t a statement about whether the article winds up actually being useful for helping people become more serious, and indeed I have a vague intuition that most reading-actions applied to articles of this general nature may decay into traps of a “not getting out of the car” variety. (If I had a better way that I thought would be useful to talk about, I’d be talking about it.)
That description is distinctly reminiscent of the rise of containerization in software.
This is awkwardly armchair, but… my impression of Eliezer includes him being just so tired, both specifically from having sacrificed his present energy in the past while pushing to rectify the path of AI development (by his own model thereof, of course!) and maybe for broader zeitgeist reasons that are hard for me to describe. As a result, I expect him to have entered into the natural pattern of having a very low threshold for handing out blocks on Twitter, both because he’s beset by a large amount of sneering and crankage in his particular position and because the platform easily becomes a sinkhole in cognitive/experiential ways that are hard for me to describe but are greatly intertwined with the aforementioned zeitgeist tiredness.
Something like: when people run heavily out of certain kinds of slack for dealing with The Other, they reach a kind of contextual-but-bleed-prone scarcity-based closed-mindedness of necessity, something that both looks and can become “cultish” but where reaching for that adjective first is misleading about the structure around it. I haven’t succeeded in extracting a more legible model of this, and I bet my perception is still skew to the reality, but I’m pretty sure it reflects something important that one of the major variables I keep in my head around how to interpret people is “how Twitterized they are”, and Eliezer’s current output there fits the pattern pretty well.
I disagree with the sibling thread about this kind of post being “low cost”, BTW; I think adding salience to “who blocked whom” types of considerations can be subtly very costly. The main reason I’m not redacting my own whole comment on those same grounds is that I’ve wound up branching to something that I guess to be more broadly important: there’s dangerously misaligned social software and patterns of interaction right nearby due to how much of The Discussion winds up being on Twitter, and keeping a set of cognitive shielding for effects emanating from that seems prudent.