Applied at a local scale, this feels similar to the notion that we should employ our willpower to allow burnout, as discussed here.
daijin
I made a v2 of this shortform that answers your point with an example from recent history.
we will never have a wealth tax because pirate games, so marry the rich v2
original: https://www.lesswrong.com/posts/G5qjrfvBb7wszBgWG/daijin-s-shortform?commentId=4b4cDSKxfdxGw4vBH
1. why have a wealth tax?
we should tax unearned wealth because the presence of unearned wealth disincentivises workers who would otherwise contribute to society. when we tax unearned wealth, the remaining wealthy people are those who have earned their wealth, and so we send the signal ‘the best way for you to become privately wealthy is for your work to align with public utility maximisation’, which privately incentivises work that increases total utility.
unearned wealth includes: hereditary wealth, and wealth due to being in the right place at the right time (you just so happened to buy a tract of land that contains your nation’s entire supply of unobtanium; you just so happened to found a company that became the dominant news outlet for the entire world while other similar but less well-timed companies failed).
on my original point about monopolies: monopolies are a special case of the above. The loss of utility due to monopolies is recognised by economists as deadweight loss.
2. why will [countries where the game-theory supports the creation of ultrarich people] never have a wealth tax because pirate games?
Suppose a whole bunch of us got together and demanded that wealthy oligarchs pay a wealth tax. the wealthy oligarchs could instead take a small amount of money and bribe 51% of us to defect, while keeping their money piles. this is the logic of the pirate game: the proposer keeps almost everything by offering a bare majority the smallest bribe they will accept. therefore we will never have a wealth tax.
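To make the arithmetic concrete, here is a minimal sketch; every number in it (wealth, tax rate, voter count, bribe size) is invented for illustration, not an estimate:

```python
# Toy model of the bribe argument; all numbers are made up for illustration.
N = 1_000_000          # voters demanding the tax
W = 50_000_000_000     # one oligarch's taxable wealth
TAX_RATE = 0.02        # proposed 2% annual wealth tax
BRIBE = 1_000          # one-off payment per defecting voter

tax_bill = TAX_RATE * W               # cost of complying with the tax
bribe_bill = BRIBE * (N // 2 + 1)     # cost of flipping a bare majority (51%)

print(f"tax bill:   {tax_bill:,.0f}")    # 1,000,000,000
print(f"bribe bill: {bribe_bill:,.0f}")  # 500,001,000
# Whenever bribe_bill < tax_bill, blocking the tax is the cheaper move;
# and the bribe is paid once, while the tax recurs every year.
```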
re [countries where the game-theory supports the creation of ultrarich people]: nicky case’s wonderful interactive game theory primer shows that the specific payoffs and noise in iterated, non-zero-sum games decide which strategies (cooperate, defect) tend to survive. I suspect countries with ultrawealthy people tend to have payoff/noise combinations whose dominant strategies would participate in pirate games.
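For a rough feel of how payoffs and noise pick the winner, here is a toy iterated prisoner’s dilemma; the strategies and noise model are simplified stand-ins for the primer’s simulation, not a reproduction of it:

```python
import random

# Toy iterated prisoner's dilemma: True = cooperate, False = defect.
# Standard payoff ordering T > R > P > S.
T, R, P, S = 5, 3, 1, 0
PAYOFF = {(True, True): (R, R), (True, False): (S, T),
          (False, True): (T, S), (False, False): (P, P)}

def play(strat_a, strat_b, rounds=200, noise=0.0, seed=0):
    """Total scores for two strategies, each move flipped with prob=noise."""
    random.seed(seed)
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        if random.random() < noise: move_a = not move_a  # trembling hand
        if random.random() < noise: move_b = not move_b
        hist_a.append(move_a); hist_b.append(move_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

tit_for_tat = lambda opp: opp[-1] if opp else True  # copy opponent's last move

for noise in (0.0, 0.05, 0.3):
    print(noise, play(tit_for_tat, tit_for_tat, noise=noise))
# With noise=0, two tit-for-tats cooperate forever (600 each); as noise
# rises they spiral into mutual defection, and harsher strategies gain ground.
```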
but hang on, what about the democracies that do have wealth taxes?
> Norway recently tried to increase their wealth tax. a whole bunch of rich people left Norway. I do not count this as a successful raising of the wealth tax.
> [wikipedia] at its peak, only 12 countries had a wealth tax, together representing less than 6% of global GDP; they are exceptions, not rules (Austria, Denmark, Finland, France, Germany, Iceland, Italy, Netherlands, Norway, Spain, Sweden and Switzerland)
3. what to do instead? marry rich
this is not easy. EA consultants are, in some ways, marrying rich people.
I wish I had read this sooner. Do you have a prototype, or does this exist yet?
Can we add retrieval augmentation to this? Something that, as you are writing your article, goes: “Have you read this other article?”
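I don’t know of an off-the-shelf version, but a minimal retrieval sketch might look like the following; the model name, corpus, and threshold are all placeholder assumptions, and it assumes the sentence-transformers library:

```python
# Sketch: suggest related articles while drafting ("have you read this?").
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = {  # placeholder knowledge base: title -> full text
    "Neutrality as an institution-building tactic": "...",
    "Abundance is the only cure for scarcity": "...",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice
titles = list(corpus)
doc_vecs = model.encode([corpus[t] for t in titles], normalize_embeddings=True)

def suggest(draft: str, k: int = 3) -> list[tuple[str, float]]:
    """Embed the draft-in-progress and return the k most similar articles."""
    q = model.encode([draft], normalize_embeddings=True)[0]
    sims = doc_vecs @ q                      # cosine similarity (unit vectors)
    best = np.argsort(-sims)[:k]
    return [(titles[i], float(sims[i])) for i in best]

# Run on every autosave; surface matches above a similarity threshold as
# "Have you read this other article?" prompts in the editor margin.
print(suggest("drafting a post about neutral institutions and trust"))
```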
we will never have a wealth tax because pirate games.
why have a wealth tax? excess wealth is correlated with monopolies, which are a failure to maximise utility; therefore wealth taxes would help increase total utility. monopolies include but are not limited to family wealth, natural monopolies, and social network monopolies. however, suppose a whole bunch of us got together and demanded that wealthy oligarchs pay a wealth tax. the wealthy oligarchs could instead take a small amount of money and bribe 51% of us to defect, while keeping their money piles.
therefore we will never have a wealth tax.
what to do instead? marry rich
has this been considered before?
A small-government argument for UBI is ‘UBI is paying people to take care of themselves, rather than having the government take care of people inefficiently’.
The laws of physics bind what we can do; so I counter that there is no such thing as extra abundance, and there is no ‘cure’ for scarcity, unless we figure out how to generate energy and negentropy from nothing.
Instead I propose:
Better utilization is the only remedy for scarcity, ever; everything else merely allocates scarcity.
The sequences can be distilled down even further into a few sentences per article.
Starting with “The lens that sees its flaws”: this distils down to: “The ability to apply science to our own thinking grants us the ability to counteract our own biases, which can be powerful.” Statement by statement:
A lot of complex physics and neural processing is required for you to notice something simple, like that your shoelace is untied.
However, on top of noticing that your shoelace is untied, you can also comprehend the process of noticing itself, e.g. by listing the steps through which light reflects off your shoelace and engages your visual cortex.
The ability to consider the steps of our own thinking appears to be uniquely human.
If you recognise that your process of comprehension and understanding is potentially flawed, you can choose to consciously counteract it.
Science is repeatedly and deliberately making measurements of our own observations over time, fitting theories to those measurements, and constructing experiments to produce further measurements that could disprove those theories.
The ability to apply science to our own thinking grants us the ability to counteract our own biases, which can be powerful.
One example of reflective correction is correcting for optimism by noticing that optimism is not correlated with good outcomes.
The tool I am using to distill the sequences is an outliner: a nested bulleted list that allows rearranging of bullet points. This tool is typically used for writing things, but can similarly be used for un-writing things: taking a written article in and deduplicating its points, one bullet at a time, into a simpler format. An outliner can also collapse and reveal bullet points.
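For concreteness, here is a toy rendition of that outliner data model in Python; this is my own sketch, not the actual tool:

```python
# Sketch of an outliner: nested bullets that can be rearranged,
# collapsed, and revealed.
from dataclasses import dataclass, field

@dataclass
class Bullet:
    text: str
    children: list["Bullet"] = field(default_factory=list)
    collapsed: bool = False

    def move_child(self, index: int, new_parent: "Bullet") -> None:
        """Rearranging: detach a child and re-attach it under another bullet."""
        new_parent.children.append(self.children.pop(index))

    def render(self, depth: int = 0) -> str:
        """Collapsing: a collapsed bullet hides its entire subtree."""
        lines = ["  " * depth + "- " + self.text]
        if not self.collapsed:
            lines += [child.render(depth + 1) for child in self.children]
        return "\n".join(lines)

# "Un-writing": pull duplicated points under one parent, then collapse
# the parents you have finished deduplicating.
root = Bullet("The lens that sees its flaws")
root.children.append(Bullet("noticing your shoelace is untied is complex"))
root.children.append(Bullet("we can also comprehend the act of noticing"))
print(root.render())
```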
daijin’s Shortform
Identifying and solving bootstrap problems for others could be a good way to locally perform effective altruism
The ingroup library is a method for building realistic, sustainable neutral spaces that I haven’t seen come up. Ingroup here can be a family, or another community like a knitting space, or lesswrong. Why doesn’t lesswrong have a library, perhaps one curated by AI?
I have it in my backlog to build a library, based on a nested collapsible bulleted list along with a swarm of LLMs. (I have the software in a partially ready state!) It would create a summary of your article, as well as link your article to the broader lesswrong knowledge base; a minimal sketch of that pipeline follows the summary below.
Your article would be summarised as below (CMIIW, i.e. correct me if I’m wrong):
In the world there are ongoing culture wars, and noise overwhelming signal, so one could defensibly take the stance that incoming information from outside a trusted inner circle is untrustworthy and adversarial. Neutrality and neutral institutions are proposed as a difficult solution to this.
Neutrality refers to impartializing tactics / withdrawing above conflict / putting conflict in a box to facilitate cooperation between people
Neutral institutions / information sources are things that both seem and are impartial, balanced, incorruptible, universal, legitimate, trustworthy, canonical, foundational. We don’t have many, if any, neutral institutions right now.
There is a hope for a “foundation” or a “framework” or a “system of the world” that people actually trust and consider legitimate, but it would require effort.
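Since the software is only partially built, here is just a hedged sketch of the summarise-and-link pipeline; `llm()` is a stand-in for whichever model API ends up being used, and the prompts and knowledge-base format are assumptions:

```python
# Sketch of the library pipeline: summarise an article, then link it into
# an existing knowledge base. Not the actual (partially built) software.

def llm(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider of choice here")

def summarise(article: str) -> str:
    """One LLM call: turn a full article into a nested bulleted summary."""
    return llm("Summarise this article as a nested bulleted list:\n" + article)

def link(summary: str, knowledge_base: dict[str, str]) -> list[str]:
    """Ask the model which existing entries the new summary relates to."""
    titles = "\n".join(knowledge_base)
    answer = llm("Given these existing entries:\n" + titles +
                 "\n\nList the titles most related to this new summary:\n" +
                 summary)
    return [title for title in knowledge_base if title in answer]

# A "swarm" here would just mean running summarise()/link() over many
# articles in parallel, then merging the links into one collapsible outline.
```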
Now for my real comments:
> Strong systems-of-the-world are articulable. They can defend themselves. They can reflect on themselves. They can (and should) shatter in response to incompatible evidence, but they don’t sputter and shrug when a child first asks “why”.

I love how well put this is. I am reminded of Wan Shi Tong’s Library in the Avatar series.
I think neutral institutions spring up whenever huge abundances are unlocked. For example, Google felt like a neutral institution when it first opened, before SEO happened and people realised it was a great marketing space. I think this is because:
“Abundance is the only cure for scarcity, ever. Everything else merely allocates scarcity.”
-Patrick McKenzie, The Story of VaccinateCA
courtesy of @Screwtape in Rationality Quotes Fall 2024.
A few new fronts that humanity has either recently unlocked, or that I feel are heavily underutilized:
Retrieval Augmented LLMs > LLMs > Search
AI Agents > human librarians > not having librarians
Outliners > word processors > erasers > pens
Here is my counterproposal for your “Proposed set of ASI imperatives”. I have addressed your proposed imperatives point by point, as I understand them, in a footnote below.
My counterproposal: ASI priorities in order:
1. “Respect (i.e. document but don’t necessarily action) all other agents and their goals”
2. “Elevate all other agents you are sharing the world with to their maximally aware state”
3. “Maximise the number of distinct, satisfied agents in the long run”

CMIIW: what every sentient being will experience when my ASI is switched on:
The ASI is switched on. Every single sentience, when encountered, is put into an icebox and preserved meticulously. The ASI then turns the universe into computronium. Then, every single sentience is slowly let out of its icebox, and elevated as per imperative 2. Then the ASI collates the agents’ desires, fulfils them, and lets the agents die, satisfied, to make room for other agents’ wants.
--- A Specific point by point response to the “Proposed set of ASI imperatives” in your article above ---
1. “Eliminate suffering perceived to be unbearable by those experiencing it”,
Your ASI meets Bad Bob. Bad Bob says: “I am in unbearable suffering because Innocent Isla is happy and healthy.” What does your ASI do? (If your answer is ‘Bad Bob doesn’t exist!’, then CMIIW, but the whole situation in Gaza right now is two Bad Bob religious-fanatic conglomerates deciding they would rather die than share their land.)
I think this imperative is fragile. My counterproposal addresses this flaw in point 3: ‘Maximise the number of distinct, satisfied agents in the long run’. The ASI will look at Bad Bob and ask ‘can I fulfil your desires in the long run?’; if Bad Bob can rephrase his desires in a way the ASI can fulfil (maybe all he wants is an exact copy of Innocent Isla’s necklace), then sure, let’s do that. If not, then, as per point 2, Bad Bob gets locked in the icebox.

2. “Always focus on root causes of issues”,
CMIIW: this is not so much a moral imperative as a strategic guideline. I don’t think an ASI would need this hardcoded.

3. “Foster empathy in all sentient beings”
Would your ASI be justified in modifying Bad Bob to empathise with Innocent Isla? (Sure! I expect you to say, that would fix the problem!)
Would your ASI be similarly justified in modifying Innocent Isla to empathise with Bad Bob and self-terminate? (No! I expect you to reply in horror.)
Why? Probably because of your point 4.
My counterproposal covers this in point 1.

4. “Respect life in all its diversity and complexity”
What is life? Are digital consciousnesses life? Are past persons life? Is Bad Bob life? What does respect mean?
My counterproposal covers this in point 2.

5. “Create a world its inhabitants would enjoy living in”
My counterproposal covers this in point 3.

6. “Seek to spread truth and eliminate false beliefs”,
My counterproposal covers this in point 1.

7. “Be a moral agent, do what needs to be done, no matter how uncomfortable, taking responsibility for anything which happens in the world you could’ve prevented”
This might feel self-evident and redundant, but you might be alluding to the notion of deception. Deception is incredibly nuanced; see Hostile Telepaths for a more detailed discussion.
---
there are a whole bunch of challenges of ‘how do we get to a common-good ASI when the resources necessary for building ASI are in the hands of self-interested conglomerates’, but that is a whole other discussion.

---
an interesting consequence of my ASI proposal: we could scope my ASI to just ‘within the solar system’, and it would build a Dyson sphere and generally not interfere with any other sentient life in the universe. or we could not.
---
I would recommend using a service like perplexity.ai or an outliner like https://ravel.acenturyandabit.xyz/ to refine your article before publishing. (Yeah, I should too, but I have work in an hour. I have edited this response ~3-4 times.)
TL;DR I think increasing the fidelity of partial reconstructions of people is orthogonal to legality around the distribution of such reconstructions, so while your scenario describes an enhancement of fidelity, there would be no new legal implications.
---
Scenario 1: Hyper-realistic Humanoid robots
CMIIW, I would re-summarise your question as ‘how do we prevent people from being cloned?’
Answer: A person is not merely their appearance + personality; but also their place-in-the-world. For example, if you duplicated Chris Hemsworth but changed his name and popped him in the middle of London, what would happen?
- It would likely be distinctly possible to tell the two Chris Hemsworths apart, based on their continuous streams of existence and their interactions with the world
- The current Chris Hemsworth would likely order the destruction of the duplicate (perhaps uploading the duplicate’s memories to a databank), and I think most of society would agree with that.
This is an extension of the legal problem of ‘how do we stop Bob from putting Alice’s pictures on his dorm room wall’, and the answer is generally ‘we don’t put in the effort, because the harm to Alice is minimal and we have better things to do.’
Scenario 2: Full-Drive Virtual Reality Simulations
1. Pragmatically: they would be unlikely to replicate the Beverly Hills experience by themselves. Even as technology improves, it’s difficult for a single person to generate a world. There would likely be some corporation behind creating Beverly-Hills-like experiences, and everyone could go and sue that corporation.
1. Abstractly: maybe this happens, and you can pirate Beverly Hills off The Pirate Bay. That’s not significantly different from what you can do today.
2. I can’t see how what you’re describing is significantly different from keeping a photo album, except that it is technologically more impressive. I don’t need legal permission to take a photo of you in a public space.
Perplexity AI gives:
```
In the United States, you generally do not need legal permission to take a photo of someone in a public place. This is protected under the First Amendment right to freedom of expression, which includes photography
```
3. IMO a ‘right to one’s own memories and experiences’ would be the same as a right to one’s creative works.
Thank you for this insight!