No, they don’t; billionaires consume very little of their net worth.
I am very confused why the tax is 99% in this example.
The post does not include the word "auction," which is a key part of how LVT avoids some of these downsides.
Yes, and I don’t mean to overstate the case for helplessness. Demons love convincing people that the anti-demon button doesn’t work, so that they never press it even though it is sitting right out in the open.
Unfortunately, the disanalogy is that any driver who moves their foot toward the brakes is almost instantly replaced with one who won’t.
High variance, but there’s skew. The ceiling is very high and the downside is just a bit of wasted time that likely would have been wasted anyway. The most valuable ones alert me to entirely different ways of thinking about problems I’ve been working on.
no
Both people ideally learn from existing practitioners for a session or two; ideally they also review the written material or, in the case of Focusing, try the audiobook. Then they simply try facilitating each other. The facilitator takes brief notes to help keep track of where they are in the other person’s stack, but otherwise acts much as, e.g., Gendlin does in the audiobook.
Probably the most powerful intervention I know of is to trade facilitation of emotional digestion and integration practices with a peer. The modality probably only matters a little, and so should be chosen for whatever is easiest to learn to facilitate. Focusing is a good start; I also like Core Transformation for going deeper once Focusing skills are solid. It’s a huge return on ~3 hours per week (90 minutes facilitating and 90 minutes being facilitated, across two sessions) IME.
“What causes your decisions, other than incidentals?”
“My values.”
People normally model values as upstream of decisions, causing them. In many cases, values are downstream of decisions. I’m wondering who else has talked about this concept. One of the rare cases where the LLM was not helpful.
moral values
Is there a broader term, or cluster of concepts, that covers the idea that human values are often downstream of decisions rather than upstream: the person with the "correct" values is simply selected based on the decisions they are expected to make (e.g., election of a CEO by shareholders)? This seems like a crucial understanding in AI acceleration.
I like this! Improvement: a lookup chart of base rates for lots of common disasters, as an intuition pump?
People inexplicably seem to favor extremely bad leaders --> people inexplicably seem to favor bad AIs.
One of the triggers for getting agitated and repeating oneself more forcefully IME is an underlying fear that they will never get it.
I felt first optimism and then sadness as I read the post, because my model is that every donor group is invested in a world where the primary object of philanthropy is liability-laundering organizations that make juicy targets for social capture, instead of the actual patronage (funding a person) model. I understand it is about taxes, but my guess is that biting the bullet on taxes probably dominates given the various differences. Is anyone working on how to tax-efficiently fund individuals via, e.g., trusts, distributed gift-giving, etc.?
Upvotes for trying anything at all, of course, since that is way above the current bar.
It would be a Whole Thing, so perhaps unlikely, but here is something I would use: a bounty/microtipping system on LW where I can both pay people for posts I really like in some visible way (with a percentage cut going to LW) and contribute to aggregated bounties for posts people want to see (subject to a vote on whether a post met the bounty threshold, etc.). Rough sketch below.
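To make the idea concrete, here is a minimal Python sketch of the bounty-pooling part. Everything in it is hypothetical: the BountyPool class, the vote threshold, and the 5% site cut are illustrative choices of mine, not an existing LW feature or API.

```python
from dataclasses import dataclass, field

# Hypothetical data model; names, threshold logic, and site cut are illustrative only.
@dataclass
class BountyPool:
    description: str          # what kind of post people want to see
    threshold_votes: int      # votes needed to agree a post satisfied the bounty
    site_cut: float = 0.05    # fraction of the payout retained by LW
    pledges: dict = field(default_factory=dict)  # pledger -> amount

    def pledge(self, user: str, amount: float) -> None:
        # Aggregate contributions from many readers into one pool.
        self.pledges[user] = self.pledges.get(user, 0.0) + amount

    def total(self) -> float:
        return sum(self.pledges.values())

    def payout(self, votes_for_post: int) -> float:
        # Author receives the pool (minus the site cut) only if voters agree
        # the post met the bounty threshold.
        if votes_for_post < self.threshold_votes:
            return 0.0
        return self.total() * (1 - self.site_cut)

# Usage: several readers pool a bounty, then a qualifying post claims it.
pool = BountyPool("A rigorous post on LVT and auctions", threshold_votes=10)
pool.pledge("alice", 20.0)
pool.pledge("bob", 15.0)
print(pool.payout(votes_for_post=12))  # 33.25
```

The visible-tipping half would just be the same payout path without the pooling and voting steps.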
Just the general crypto cycle continuing onward since then (2018). The idea being that it was still possible to get in at 5% of current prices around the time the autopsy was written.
Even a hundred million humanoid robots a year (we currently make 90 million cars a year) will be a demand shock for human labor.
https://benjamintodd.substack.com/p/how-quickly-could-robots-scale-up
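For a sense of scale, here is a back-of-envelope sketch in Python. The robot uptime, human work-year, and global workforce figures are my own rough assumptions, not numbers from the comment or the linked post.

```python
# Back-of-envelope: how much labor 100M humanoid robots/year could represent.
# All figures below are illustrative assumptions.
robots_per_year = 100_000_000       # production rate from the comment above
robot_hours_per_year = 5_000        # assume a robot runs roughly two shifts per day
human_hours_per_year = 2_000        # rough full-time human work year
global_workforce = 3_500_000_000    # roughly the global labor force

worker_equivalents = robots_per_year * robot_hours_per_year / human_hours_per_year
share = worker_equivalents / global_workforce
print(f"{worker_equivalents:,.0f} worker-equivalents per production year "
      f"(~{share:.0%} of the global workforce)")
```

Under those assumptions, each year of production adds labor capacity on the order of a few percent of the global workforce, which is the sense in which even that production rate would be a shock to demand for human labor.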