antanaclasis
SIA can be considered (IMO more naturally) as randomly sampling you from “observers in your epistemic situation”, so it’s not so much “increasing the prior” as “caring about the absolute number of observers in your epistemic situation” rather than “caring about the proportion of observers in your epistemic situation” as SSA does.
This has the same end result as “up-weighting the prior then using the proportion of observers in your epistemic situation”, but I find it much more intuitive: the latter seems overly circuitous, multiplying by population and then dividing by population (as part of taking the proportion of the reference class that you comprise), rather than just taking the number we care about (the number of observers in your epistemic situation) in the first place.
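To spell out why the population term cancels (notation mine, not from the post): suppose universe $U_i$ has prior $p_i$, total population $N_i$, and $n_i$ observers in your epistemic situation. The circuitous route up-weights by $N_i$ and then divides it right back out:

$$P(U_i \mid \text{your epistemic situation}) \propto (p_i N_i) \cdot \frac{n_i}{N_i} = p_i \, n_i,$$

which is just the direct SIA weighting by $n_i$.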
I think the point being made in the post is that there’s a ground-truth-of-the-matter as to what comprises Art-Following Discourse.
To move into a different frame which I feel may capture the distinction more clearly, the True Laws of Discourse are not socially constructed, but our norms (though they attempt to approximate the True Laws) are definitely socially constructed.
From the SIA viewpoint, the anthropic update process is essentially just a prior and an update. You start with a prior on each hypothesis (possible universe) and then update by weighting each hypothesis by the number of observers in your epistemic situation that its universe contains.
This perspective sees the equalization of “anthropic probability mass” between possible universes prior to apportionment as an unnecessary distortion of the process: after all, “why would you give a hypothesis an artificial boost in likelihood just because it posits fewer observers than other hypotheses?”.
Of course, this is just the flip side of what SSA sees as an unnecessary distortion in the other direction. “Why would you give a hypothesis an artificial boost due to positing more observers?” it says. And here we get back to the deep-seated differences in what people consider the intuitive way of doing things that underlie the whole disagreement over different anthropic methods.
On the question of how to modify your prior over possible universe+index combinations based on observer counts, the way I like to think of the SSA vs SIA methods is this: with SSA you first apportion probability mass to each possible universe and then divide that up among the possible observers within each universe, while with SIA you apportion directly among possible observers, irrespective of which possible universes they are in.
The numbers come out the same as with the formulation in the post, but this way feels more intuitive to me (as a natural way of doing things, rather than “and then we add an arbitrary weighting to make the numbers come out right”) and maybe to others.
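As a toy illustration of the two orderings (a minimal sketch with made-up numbers, not anything from the post):

```python
# Two possible universes with equal prior; assume every observer in them
# is in your epistemic situation. A contains 1 such observer, B contains 9.
# (Numbers are invented purely for illustration.)
prior = {"A": 0.5, "B": 0.5}
n_obs = {"A": 1, "B": 9}

# SSA: apportion mass to universes first (the prior), then divide each
# universe's mass among its observers. Summing back over observers, each
# universe keeps its prior, so observer counts produce no update here.
ssa_per_observer = {u: prior[u] / n_obs[u] for u in prior}
ssa_universe = {u: ssa_per_observer[u] * n_obs[u] for u in prior}

# SIA: apportion mass directly among observers, irrespective of universe;
# each observer gets weight proportional to its universe's prior, so
# universes positing more observers end up with more total mass.
z = sum(prior[u] * n_obs[u] for u in prior)
sia_universe = {u: prior[u] * n_obs[u] / z for u in prior}

print(ssa_universe)  # {'A': 0.5, 'B': 0.5}
print(sia_universe)  # {'A': 0.1, 'B': 0.9}
```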
If you’re adding the salt after you turn on the burner then it doesn’t actually add to the heating+cooking time.
To steelman the anti-sex-for-rent case, consider that after the tenant has entered into that arrangement, they could feel pressure to keep having sex with the landlord (even if they would prefer not to and would not, at that later point, choose to enter the contract) due to the transfer cost of moving to a new home. (Though this also applies to monetary rent, threatening the boundaries of consent is generally seen as more harmful than threatening the boundaries of one’s budget.)
This could also be used as a point of leverage by the landlord, e.g. to pressure the tenant to engage in sex acts they would otherwise not want to, or else be evicted (unless the contract specifies from the beginning exactly what kind of sex the payment will entail). I think many people would see such actions by the landlord as more of an infringement upon the tenant than e.g. raising the amount of monetary rent (sacredness of sex/consent).
Additionally, this could be seen as a specific manifestation of the modern trend of more general opposition to sexual relationships with a power imbalance between the participants.
(Parenthetically, I also want to thank you for writing this post, as it’s a good expression of a principle I generally agree with)
In terms of similarity between telling the truth and lying, think about how much of a change you would have to make to the mindset of a person at each level to get them to level 1 (truth).
Level 2: they’re already thinking about world models; you just need to get them to cooperate with you in seeking the truth rather than trying to manipulate you.
Level 3: you need to get across to them the idea of words as having some sort of correspondence with the actual world, rather than just being floating tribal signifiers. After doing that, you still have to make sure that they are focusing on the truth of those words, as in the level 2 case.
Level 4: the hardest of them all; you need to get across to them the idea of words having any sort of meaning in the first place, rather than just being certain patterns of mouth movements that one makes when it feels like the right time to do so. After doing that, you again still have the whole problem of making sure that they focus on truth instead of manipulation or tribal identity.
For a more detailed treatment of this, see Zvi’s https://thezvi.wordpress.com/2020/09/07/the-four-children-of-the-seder-as-the-simulacra-levels/
Re: “best vs better”: claiming that something is the best can be a weaker claim than claiming that it is better than something else. Specifically, if two things are of equal quality (and not surpassed) then both are the best, but neither is better than the other.
Apocryphally, I’ve heard that certain types of goods are regarded by regulatory agencies as being of uniform quality, such that there’s not considered to be an objective basis for claiming that your brand is better than another. However, you can freely claim that yours is the best, as there is similarly no objective basis on which to prove that your product is inferior to another (as would be needed to show that it is not the best).
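A minimal formalization of that asymmetry (my own sketch; the brands and scores are hypothetical):

```python
# "Best" = no alternative is strictly better; "better" = strictly greater.
quality = {"brand_x": 7, "brand_y": 7}  # two goods of equal quality

def is_best(item):
    return all(quality[item] >= q for q in quality.values())

def is_better(a, b):
    return quality[a] > quality[b]

print(is_best("brand_x"), is_best("brand_y"))  # True True: both are "the best"
print(is_better("brand_x", "brand_y"))         # False: neither is "better"
```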
One other mechanism that would lead to the persistence of e.g. antibiotic resistance would be when the mutation that confers the resistance is not costly (e.g. a mutation which changes the shape of a protein targeted by an antibiotic to a different shape that, while equally functional, is not disrupted by the antibiotic). Note that I don’t actually know whether this mechanism is common in practice.
Thanks for writing this nice article. Also thanks for the “Qualia the Purple” recommendation. I’ve read it now and it really is great.
In the spirit of paying it forward, I can recommend https://imagakblog.wordpress.com/2018/07/18/suspended-in-dreams-on-the-mitakihara-loopline-a-nietzschean-reading-of-madoka-magica-rebellion-story/ as a nice analysis of themes in PMMM.
It seems like this might be double-counting uncertainty? Normal EV-type decision calculations already (should, at least) account for uncertainty about how our actions affect the future.
Adding explicit time-discounting seems like it would over-adjust in that regard, with the extra adjustment (time) just being an imperfect proxy for the original one (uncertainty), when the uncertainty is all we really care about to begin with.
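A toy model of the worry (all numbers invented for illustration):

```python
# Per-period utility u_t only matters if our influence "survives" to time t,
# with probability p per step; that survival term already encodes the
# uncertainty. Layering an extra discount factor g on top adjusts for the
# same thing twice.
p = 0.95  # hypothetical per-period chance our action still affects things
g = 0.97  # explicit time-discount factor added on top
u = [1.0] * 50  # constant per-period utility stream

ev_uncertainty_only = sum((p ** t) * u_t for t, u_t in enumerate(u))
ev_double_counted = sum((p ** t) * (g ** t) * u_t for t, u_t in enumerate(u))

print(ev_uncertainty_only)  # ~18.5
print(ev_double_counted)    # ~12.5: the same uncertainty penalized twice
```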
Indeed humans are significantly non-aligned. In order for an ASI to be non-catastrophic, it would likely have to be substantially more aligned than humans are. This is probably less-than-impossible due to the fact that the AI can be built from the get-go to be aligned, rather than being a bunch of barely-coherent odds and ends thrown together by natural selection.
Of course, reaching that level of alignedness remains a very hard task, hence the whole AI alignment problem.
“I had another thing planned for this week, but turned out I’d already written a version of it back in 2010”
What is the post that this is referring to, and what prompted thinking of those particular ideas now?
I see it in a similar light to “would you rather have more or fewer cells in your body?”. If you made me choose I probably would rather have more, but only insofar as having fewer might be associated with certain bad things (e.g. losing a limb).
Correspondingly, I don’t care intrinsically about e.g. how much algae exists except insofar as that amount being too high or low might cause problems in things I actually care about (such as human lives).
Seeing the relative lack of pickup in terms of upvotes, I just want to thank you for putting this together. I’ve only read a couple of Dath Ilan posts, and this provided a nice coverage of the AI-in-Dath-Ilan concepts, many of the specifics of which I had not read previously.
My understanding of it is that there is conflict between different “types” of the mixed population based on e.g. skin lightness and which particular blend of ethnic groups makes up a person’s ancestry.
EDIT: my knowledge on this topic mostly concerns Mexico, but should still generally apply to Brazil.
That PDF seems like it is part of a spoken presentation (it’s rather abbreviated for a standalone thing). Does there exist such a presentation? If so, I was not successful in finding it, and would appreciate it if you could point it out.
To add to this, if the ranked-choice voting is implemented with a “no confidence” option (as it should be, to prevent the vote-in vote-out cycle described above), then you could easily end up in the same situation the House is currently in, where no candidate manages to beat out “no confidence”.
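A toy sketch of how that deadlock shows up in instant-runoff counting (ballot counts invented for the example):

```python
# Instant-runoff where "NC" (no confidence) appears on every ballot.
# "NC" winning the count corresponds to a deadlocked election: no actual
# candidate is seated, mirroring the House scenario described above.
from collections import Counter

ballots = [
    ["A", "NC"], ["A", "NC"],  # A's supporters
    ["B", "NC"], ["B", "NC"],  # B's supporters
    ["NC"], ["NC"], ["NC"],    # no-confidence voters
]

def irv_winner(ballots):
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        top, votes = tallies.most_common(1)[0]
        if votes * 2 > total:  # strict majority of remaining ballots
            return top
        loser = min(tallies, key=tallies.get)  # eliminate last place
        for b in ballots:
            if loser in b:
                b.remove(loser)

print(irv_winner(ballots))  # "NC": no candidate achieves a majority over it
```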