See also Steven Kaas’ aphorisms on Twitter:
> First Commandment of the Church of Tautology: Live next to thy neighbor
And
> “Whatever will be will be” is only the first secret of the tautomancers.
The story I read about why neighbor polling is supposed to correct for bias, specifically in the last few presidential elections, is that some people plan to vote for Trump, but are ashamed of this, and don’t want to admit it to anyone who isn’t a verified Trump supporter. So if you ask them who they plan to vote for, they’ll dissemble; but if you ask them who their neighbors are voting for, that gives them permission to share their true opinion non-attributively.
In the late ’80s, I was homeschooled, and studied calligraphy (as well as cursive); but I considered that more of a hobby than preparation for entering the workforce of 1,000 years ago.
I also learned a bit about DOS and BASIC, after being impressed with the fractal-generating program that the carpenter working on our house wrote, and demonstrated on our computer.
Your definition seems like it fits the Emperor of China example—by reputation, they had few competitors for being the most willing and able to pessimize another agent’s utility function; e.g., the Nine Familial Exterminations.
And that seems to be a key to understanding this type of power, because if they were able to pessimize all other agents’ utility functions, that would just be an evil mirror of bargaining power. Being able to choose a sharply limited number of unfortunate agents, and punish them severely pour encourager les autres, seems like it might just stop working when the average agent is smart enough to implicitly coordinate around a shared understanding of payoff matrices.
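As a toy illustration of that threshold (all numbers invented; a sketch, not a model of any real hierarchy), here is the expected-value calculation each would-be defector faces when the punisher can only make an example of a few people at a time:

```python
# Toy model: the dominant agent can severely punish at most `capacity`
# defectors. If n agents defy simultaneously, each defector expects to be
# punished with probability capacity / n, so coordination dilutes the threat.

def defection_pays(n_defectors: int, capacity: int,
                   gain: float, punishment: float) -> bool:
    """Expected payoff of defecting, given how many others defect at once."""
    p_punished = min(1.0, capacity / n_defectors)
    return gain - p_punished * punishment > 0

# Invented numbers: punishment is 20x the gain, but the punisher can only
# make an example of 3 agents at a time.
for n in (1, 10, 100, 1000):
    print(n, defection_pays(n, capacity=3, gain=1.0, punishment=20.0))
# 1 -> False, 10 -> False, 100 -> True, 1000 -> True
```

On these made-up numbers, once roughly sixty agents defy at once, the expected punishment per defector falls below the gain, and the threat stops being self-enforcing.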
So I think I might have arrived back at the “all dominance hierarchies will be populated solely by scheming viziers” conclusion.
Clarifying question: If A>B on the dominance hierarchy, that doesn’t seem to mean that A can always just take all B’s stuff, per the Emperor of China example. It also doesn’t mean that A can trust B to act faithfully as A’s agent, per the cowpox example.
If all that dominance hierarchies control is who has to signal submission to whom, dominance seems only marginally useful for defense, law, taxes, and public expenditure; it serves mostly as a way of reducing friction toward the outcome that would have happened anyway.
It seems like, with intelligence too cheap to meter, any dominance hierarchy that doesn’t line up well with the bargaining power hierarchy or the getting-what-you-want vector space is going to be populated with nothing but scheming viziers.
But that seems like a silly conclusion, so I think I’m missing something about dominance hierarchies.
Note also that there are several free parameters in this example. E.g., I just moved to Germany, and now have wimpy German burners on my stove. If I put on a large pot with 6 L or more of water and do not cover it, the water never goes beyond bubble formation into a light simmer, let alone a rolling boil. If I cover the pot at this steady state, it reaches a rolling boil in about another 90 seconds.
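As a back-of-envelope check (assumed numbers throughout: the burner’s net power and the pot’s steady-state temperature are guesses, not measurements), the observation is at least physically plausible:

```python
# Rough heat balance for the covered-pot observation. BURNER_W is an
# assumed net power into the water, not a measured value.

BURNER_W = 2000.0   # assumed net burner power into the water (W)
MASS_KG = 6.0       # water in the pot
C_WATER = 4184.0    # specific heat of water, J/(kg*K)
L_VAP = 2.257e6     # latent heat of vaporization of water, J/kg

# If the lid lets nearly the full burner power go into heating,
# 90 seconds of heating raises the water temperature by:
dT = BURNER_W * 90 / (MASS_KG * C_WATER)
print(f"temperature gain in 90 s: {dT:.1f} K")  # ~7 K short of boiling

# For the uncovered pot to stall, evaporation (plus convection) must carry
# away roughly the whole burner input; as pure evaporation, that is:
evap_rate_kg_per_h = BURNER_W / L_VAP * 3600
print(f"required evaporation: {evap_rate_kg_per_h:.1f} kg/h")  # ~3 kg/h
```

On those assumptions, the uncovered pot stalls about 7 K short of boiling, with the lid mostly working by cutting off evaporative loss.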
Is Patrick McKenzie (@patio11) the Matt Levine of fintech? Or is he something else? I know several people outside the industry (including myself) who read pretty much everything he writes, which includes a lot of technical detail written very accessibly.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn’t have a leader they trust and respect.
Catholic EA: You have a leader you trust and respect, and defer to their judgement.
Sola Fide EA: You read 80,000 Hours and GiveWell, but you keep your own spreadsheet of EV calculations.
I’d be interested to know what the numbers on UV in ductwork look like over the past 5 years. When I had to get a new A/C system installed in 2020, they asked whether I wanted a UVC light installed in the air handler. Before then, I had been using a 70 W UVC corn light I bought on Amazon to sterilize the exterior of groceries (back when we thought fomites might be a major transmission vector), and in improvised ductwork made of fans and cardboard boxes taped together.
Getting a proper bulb—an optimal-wavelength source—seemed like a big upgrade. It’s hard to come up with quantitative efficacy numbers, but we did have a friend over for the day who turned out to have been in the early stages of covid, and we didn’t get infected. Our first infection was years later, at a music event.
This is great! Everybody loves human intelligence augmentation, but I’ve never seen a taxonomy of it before, offering handholds for getting started.
I’d say “software exobrain” is less “weaksauce” and more “80% of the peak benefits are already tapped out, for conscientious people who have heard of OneNote or Obsidian.” I’m also still holding out for bird neurons with Portia-spider architectural efficiency and human cranial volume; but I recognize that may not be as practical as it is cool.
It’s very standard advice to notice when a sense of urgency is being created by a counterparty in some transaction, and to reduce your trust in that counterparty as well as pausing.
It feels like a valuable observation, to me, that the counterparty could be internal—some unendorsed part of your own values, perhaps.
> (e.g. in the hypothetical ‘harbinger tax’ world, you actively want to sabotage the resale value of everything you own that you want to actually use).
“Harberger tax,” for anyone trying to look that up.
If you can pay the claimed experts enough to submit to some testing, you could use Google’s new doubly-efficient debate protocol to make them either spend some time colluding, or spend a lot more time in their efforts at deception: https://www.lesswrong.com/posts/79BPxvSsjzBkiSyTq/agi-safety-and-alignment-at-google-deepmind-a-summary-of
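To illustrate the asymmetry such protocols lean on (a toy sketch, not DeepMind’s actual protocol; the computation, step function, and bisection judge are all invented for illustration): an honest expert can defend a long computation while the judge verifies only a single step, because bisection pins any disagreement down to one checkable claim.

```python
# Toy debate-by-bisection (NOT the doubly-efficient debate protocol itself):
# two "experts" each commit to a full step-by-step trace of a computation;
# the judge bisects to their first disagreement and verifies only that step.
# An honest expert just runs the computation; a deceptive one must fabricate
# an entire internally consistent trace.

def step(x: int) -> int:
    """One cheap-to-verify step of a long computation."""
    return (x * x + 1) % 1_000_003

def honest_trace(x0: int, n_steps: int) -> list[int]:
    trace = [x0]
    for _ in range(n_steps):
        trace.append(step(trace[-1]))
    return trace

def judge(trace_a: list[int], trace_b: list[int]) -> str:
    """Bisect to the first divergence, then check that single step directly."""
    lo, hi = 0, len(trace_a) - 1  # invariant: traces agree at lo, differ at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    truth = step(trace_a[lo])  # traces agree at lo, so this input is uncontested
    if trace_a[hi] == truth:
        return "A wins"
    return "B wins" if trace_b[hi] == truth else "both lied"

a = honest_trace(7, 10_000)
b = list(a)
b[5_000] ^= 1                    # B lies in the middle...
for i in range(5_001, len(b)):   # ...and must rebuild everything downstream
    b[i] = step(b[i - 1])
print(judge(a, b))  # "A wins", after ~14 comparisons rather than 10,000 steps
```

The deceptive expert’s extra cost shows up in having to recompute thousands of post-lie entries to keep the trace internally consistent, while the judge’s work stays logarithmic.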
This could exclude competent evaluators without other income—this isn’t dath ilan, where a bank could evaluate evaluators and front them money at interest rates that depended on their probability of finding important risks—and their shortage of liquidity could provide a lever for distorting their incentives.
On Earth, if someone’s working for you, and you’re not giving them a salary commensurate with the task, there’s a good chance they are getting compensation in other ways (some of which might be contrary to your goals).
Thanks! Just what I was looking for.
Some cities have dedicated LW/ACX Discord servers, which is pretty neat. Many of the cities hosting meetups over the next month are too small to generate much traffic on such a server, were it set up. A combined, LW-meetup-oriented Discord server for all the smaller cities in the world, with channels for each city and a few channels for common small-meetup concerns, seems like a $20 bill on the sidewalk. So I’m checking whether such a thing already exists before I start it.
I think the cruxes here are whether Aldi forced out small retailers like Walmart did; and how significant the difference between Walmart and Aldi is, compared to the difference between Aldi and large, successful retail orgs in wentworthland or christiankiland.
(My experience of German shopping is that most grocery stores belong to one of a half-dozen chains, most hardware stores are Bauhaus or OBI, but there isn’t a dominant “everything” store like Walmart; Müller might be closest, but its market dominance and scale are more like Kmart in the ’90s than Walmart today.)
An existing subgenre of this with several examples is the two-timer date; as I recall, it was popular in ’90s sitcoms. Don’t expect INT 18-tier scheming, but it does usually show the perspective of the people frantically trying to keep the deception running.
Tangentially to Tanagrabeast’s “least you can do” suggestion, as a case report: I came out to my family as an AI x-risk worrier over a decade ago, when one could still do so in a fairly lighthearted way. They didn’t immediately start donating to MIRI and calling their senators to request an AI-safety Manhattan Project, but they did agree with the arguments I presented, and they check in with me, on occasion, about how the timelines and probabilities are looking.
I have had two new employers since then, and a few groups of friends; and with each, when the conversation turns to AI (as it often does over the last half-decade), I mention my belief that it’s likely going to kill us all, and expand on Instrumental Convergence, RAAP, and/or “x-risk, from Erewhon, to I. J. Good, to the Extropians,” depending on which aspect people seem interested in. I’ve been surprised by the utter lack of dismissal and mockery, so far!