Posts I Will Not Finish On Time
Inspired by a post by Self-Embedded Agent, I’ve decided to vomit out a couple of short post blurbs. These necessarily don’t contain all the caveats, implications, or details they should, but whatever.
Scope Neglect Is Not A Cognitive Bias (or, Contra Desvousges and Kahneman and Yudkowsky and Soares and Alexander on Oily Birds)
Scope neglect is not a cognitive bias like confirmation bias. I can want there to be ≥80 birds saved, but be indifferent about larger numbers: this does not violate the von Neumann-Morgenstern axioms (nor any other axiomatic system underlying alternatives to utility theory that I know of). Similarly, I can most highly value there being exactly 3 flowers in the vase on the table (fewer being too sparse, and more being too busy). The pebble-sorters of course go the extra mile.
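A minimal sketch of why such preferences are axiomatically fine, reusing the 80-bird cap from the example above: take a capped utility function and rank lotteries by expected utility. Any preference constructed this way satisfies completeness, transitivity, continuity, and independence by construction (that is the easy direction of the vNM representation theorem), and nothing in the axioms forces u to be linear, unbounded, or even monotone:

```latex
u(n) = \min(n, 80),
\qquad
L \succsim M \iff \mathbb{E}_{n \sim L}[u(n)] \ge \mathbb{E}_{n \sim M}[u(n)]
```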
Calling scope neglect a bias presupposes that we ought to value certain things linearly (or at least monotonically). This does not follow from any mathematics I know of. Instead, it tries to sneak in utilitarian assumptions by calling their violation “biased”.
The Fragile Blessing of Unrewarded Altruism
Society free-rides on a lot of unrewarded small altruism, which might just be crucial. The source of this altruism is humans being adapted to environments where reputation mattered much more than it does in modern societies; we should therefore expect humans to care more about their reputation than their other values imply. So humans are more altruistic than expected. Among other things, these norm violations are weirdly uncommon in Western societies despite there being ~0 repercussions:
littering
air/water pollution by individuals
men approaching women on the street
playing loud music
small tax evasions
Furthermore, there are forms of altruism that are weirdly common from an egoistic perspective:
editing Wikipedia
uploading tutorials to YouTube
donating blood
organ donations
working at soup kitchens
charitable donations
I have the intuition that there are many more of these small altruisms (and a suspicious lack of norm violations), and that these are crucial for the functioning of society.
An Unreasonable Proposal
The European Union is stagnant (low birth rate, few people start companies) but has (comparatively) good governance; India has a lot of “drive”/“energy”, but bad governance. Furthermore, they are both “third powers” that want to avoid getting caught in the Thucydides trap between China and the US, and they are both very large democracies. Why not merge the two, slowly, over the century?
A few reasons against:
This is crazy
Merging two gigantic bureaucracies will just produce a horrible giga-bureaucracy
Large sudden immigration to the EU would be very destabilising
Towards the Best Programming Language for Universal Induction
Kolmogorov complexity depends on your choice of programming language, and this choice can change the relative prior probabilities assigned in Solomonoff induction. For a set P of programming languages, we might then want to find the “simplest” programming language. We can determine this via the size of the smallest interpreters: for two programming languages p₁, p₂ ∈ P, p₁ is simpler than p₂ (p₁ ≺ p₂) iff the shortest interpreter for p₁ on p₂ (i₁₂) is longer than the shortest interpreter for p₂ on p₁ (i₂₁): |i₂₁| < |i₁₂| (and p₁ ⪯ p₂ iff |i₂₁| ≤ |i₁₂|).
If this relation is transitive and complete, we simply choose the least element; if it is merely transitive, we choose the minimal elements and assign uniform probabilities over them.
If the relation is not transitive, we can still use tools from social choice theory such
as the top cycle (or rather
here the bottom cycle) to determine which programming languages to use.
(Thanks to Misha Yagudin for basically convincing me that this is indeed correct.)
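To make the selection procedure concrete, here is a minimal Python sketch; the languages and interpreter lengths below are entirely made up. It computes the minimal elements of ⪯ and, as a fallback, a brute-force bottom cycle: the smallest set of languages each of which is simpler than everything outside the set.

```python
from itertools import combinations

# Hypothetical shortest-interpreter lengths: LEN[(a, b)] = |i_ab|, the length
# of the shortest interpreter for language a written in language b.
# All languages and numbers here are made up for illustration.
LEN = {
    ("bf", "lam"): 200, ("bf", "sk"): 300,
    ("lam", "bf"): 500, ("lam", "sk"): 150,
    ("sk", "bf"): 400, ("sk", "lam"): 100,
}
LANGS = ["bf", "lam", "sk"]

def simpler(a, b):
    """a ≺ b iff |i_ba| < |i_ab|."""
    return LEN[(b, a)] < LEN[(a, b)]

# Minimal elements: languages that no other language is simpler than.
minimal = [a for a in LANGS
           if not any(simpler(b, a) for b in LANGS if b != a)]

def bottom_cycle(langs):
    """Smallest nonempty set S with every member simpler than every outsider
    (the bottom-cycle analogue of the Smith set), found by brute force."""
    for k in range(1, len(langs) + 1):
        for s in combinations(langs, k):
            outside = [b for b in langs if b not in s]
            if all(simpler(a, b) for a in s for b in outside):
                return set(s)
    return set(langs)

print(minimal)              # ['lam'] with these made-up lengths
print(bottom_cycle(LANGS))  # {'lam'}: assign it all the (here uniform) prior mass
```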
I think this relation might be uncomputable (even if we have finite-length mutual interpreters, since we’d need to show, for every shorter program, that it either doesn’t halt or is not an interpreter), but that doesn’t matter because we’re in Solomonoff-land anyway.
The General Societal Problem of Matching Things
Many problems in society fit into the slot of “matching two sets together” (see the sketch after this list):
Friends to each other
Romantic partners to each other
Jobs to workers
Living locations to tenants
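For the two-sided cases (jobs to workers, say), the classic algorithmic tool is Gale–Shapley stable matching; below is a minimal sketch, with made-up workers, jobs, and preferences.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """proposer_prefs/acceptor_prefs: dicts mapping each side's members to a
    preference-ordered list of the other side. Returns dict acceptor -> proposer."""
    # rank[a][p]: how much acceptor a likes proposer p (lower is better)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)          # proposers without a match
    next_choice = {p: 0 for p in free}   # index of next acceptor to propose to
    match = {}                           # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in match:
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:  # a prefers the new proposer
            free.append(match[a])
            match[a] = p
        else:
            free.append(p)               # rejected; will propose to the next choice
    return match

workers = {"ada": ["acme", "init"], "bob": ["init", "acme"]}
jobs    = {"acme": ["bob", "ada"], "init": ["ada", "bob"]}
print(gale_shapley(workers, jobs))  # {'init': 'bob', 'acme': 'ada'}: both get their first choice
```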
Contra Yudkowsky on Axelrodian Tolerance
In Tolerate Tolerance,
Yudkowsky argues exactly what the title says. But not punishing
non-punishers is not game-theoretically stable:
What Axelrod found is that, in most situations (involving a variety of
different costs and benefits, including the costs of helping to punish),
people have no incentive to punish cheaters. However—and this was
Axelrod’s great contribution—the model can be made to work in favor
of the good guys with one simple addition: a norm of punishing anyone
who doesn’t punish others. Axelrod called this the “meta-norm.”
— Robin Hanson & Kevin Simler, “The Elephant in the Brain” 2018, p. 64
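As a toy illustration of the incentive flip described in the quote (all payoff numbers made up): without a meta-norm, punishing is a pure cost and shirking dominates; with a meta-norm, not punishing costs even more.

```python
# Toy payoff comparison in the spirit of Axelrod's norms game; the costs
# below are made-up numbers, not Axelrod's parameters.
ENFORCEMENT_COST = 2  # cost of punishing a cheater yourself
META_PUNISHMENT = 5   # cost imposed on anyone seen not punishing

# Without a meta-norm: punishing is a pure cost, so shirking dominates
# and the norm against cheating erodes.
payoff_punish = -ENFORCEMENT_COST
payoff_shirk = 0
assert payoff_shirk > payoff_punish

# With a meta-norm: shirkers get meta-punished, flipping the incentive
# (as long as META_PUNISHMENT > ENFORCEMENT_COST).
payoff_punish_meta = -ENFORCEMENT_COST
payoff_shirk_meta = -META_PUNISHMENT
assert payoff_punish_meta > payoff_shirk_meta
```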
We should probably not base community norms on meta-norms that are not
game-theoretically stable.
I think the number of people who engage in small tax evasion, like not declaring the tips they receive as income, is higher than the number of people who edit Wikipedia: ~1 million people edit Wikipedia per year, while I’m pretty certain that >10 million people engage in small tax evasion when they receive cash directly for services that are illegible to tax authorities.