I often feel that people don’t get how the sucking up thing works. Not only does it not matter that it is transparent; the transparency is part of the point. There is simultaneously common knowledge of the sucking up and common knowledge that those in the inner party don’t acknowledge the sucking up; that non-acknowledgment is part of what inner party membership consists of. People outside can accuse the insiders of nakedly sucking up, and the insiders can just politely smile at them while carrying on. Sucking up can be what deference networks look like from the outside, when we don’t particularly like any of the people involved or what they are doing. But their hierarchy visibly produces their own aims, so more fools we.
The corn thresher is not inherently evil. Because it is more efficient than other types of threshers, the humans will inevitably eat corn. If this persists for long enough, the humans will be unsurprised to find they have a gut well adapted to corn.
Per Douglas Adams, the puddle concludes that the indentation in which it rests fits it so perfectly that it must have been made for it.
The means by which the Ring always serves Sauron is that any who wear it and express a desire will have the possible worlds trimmed not only in the direction of their desire, but also in the direction of Sauron’s desire, in ways that they cannot see. If this persists long enough, they may find they no longer have the sense organs to see (the Mouth of Sauron is blind).
Some people seem to have more dimensions of moral care than others; it makes one wonder about the past.
These things are similar in shape.
Even a hundred million humanoid robots a year (we currently make 90 million cars a year) will be a demand shock for human labor.
https://benjamintodd.substack.com/p/how-quickly-could-robots-scale-up
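A back-of-envelope sketch (my numbers, and the one-robot-per-worker substitution rate is an assumption for illustration): at 100 million humanoid robots per year, a decade of production yields roughly 1 billion robots, against a global labor force of roughly 3.5 billion people. Even partial substitution at that scale is a large negative demand shock for human labor.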
No they don’t; billionaires consume very little of their net worth.
I am very confused why the tax is 99% in this example.
The post does not include the word “auction,” which is a key aspect of how LVT avoids some of these downsides.
Yes, and I don’t mean to overstate a case for helplessness. Demons love convincing people that the anti-demon button doesn’t work, so that they never press it even though it is sitting right out in the open.
Unfortunately, the disanalogy is that any driver who moves their foot towards the brakes is almost instantly replaced with one who won’t.
High variance, but there’s skew: the ceiling is very high and the downside is just a bit of wasted time that likely would have been wasted anyway. The most valuable ones alert me to entirely different ways of thinking about problems I’ve been working on.
no
Ideally both people learn from existing practitioners for a session or two, and also review the written material (or, in the case of Focusing, try the audiobook). Then they simply try facilitating each other. The facilitator takes brief notes to help keep track of where they are in the other person’s stack, but otherwise acts much as, e.g., Gendlin acts in the audiobook.
Probably the most powerful intervention I know of is to trade facilitation of emotional digestion and integration practices with a peer. The modality probably only matters a little, and so should be chosen for what’s easiest to learn to facilitate. Focusing is a good start, I also like Core Transformation for going deeper once Focusing skills are good. It’s a huge return on ~3 hours per week (90 minutes facilitating and being facilitated, in two sessions) IME.
“What causes your decisions, other than incidentals?”
“My values.”
People normally model values as upstream of decisions, causing decisions. In many cases, values are downstream of decisions. I’m wondering who else has talked about this concept. One of the rare cases where the LLM was not helpful.
moral values
Is there a broader term or cluster of concepts within which is situated the idea that human values are often downstream of decisions, not upstream, in that the person with the “correct” values will simply be selected based on what decisions they are expected to make (e.g., the election of a CEO by shareholders)? This seems like a crucial understanding in AI acceleration.
I like this! Improvement: a lookup chart of base rates for lots of common disasters, as an intuition pump?
People inexplicably seem to favor extremely bad leaders --> people seem to inexplicably favor bad AIs.
One of the triggers for getting agitated and repeating oneself more forcefully IME is an underlying fear that they will never get it.
Thank you for writing this. A couple of shorthands I keep in my head for aspects of this:
My confidence interval ranges across the sign flip (toy example below).
Due to the Waluigi effect, I don’t know if the outcomes I care about are sensitive to the dimension I’m varying my credence along.
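A toy illustration of the first shorthand, with hypothetical numbers: suppose my estimated effect of an intervention is 0.2, with a 95% confidence interval of [−0.4, 0.7]. The interval contains zero, so I am consistent with the intervention either hurting or helping; my credence is spread across both signs of the effect, not just its magnitude.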