@EvanWard97
Longtermist, singularitarian. Differential progress, buying time for AI alignment; epistemic infrastructure, prior elicitation, web dev.
To maximize utility when you can play any number N of games, I believe you just need to calculate the EV (not the EU) of playing every possible strategy. Then you pass all those values through your utility function U and go with the strategy associated with the highest utility.
<Tried to retract this comment since I no longer agree with it, but it doesn’t seem to be working>
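For concreteness, here is a minimal sketch of the procedure described above (the strategies, payoffs, and utility function are all invented for illustration):

```python
import random

def expected_value(strategy, n_games, trials=10_000):
    """Monte Carlo estimate of the EV of total payoff from playing
    `strategy` (a zero-argument payoff sampler) n_games times."""
    return sum(sum(strategy() for _ in range(n_games))
               for _ in range(trials)) / trials

def utility(wealth):
    """Example concave (risk-averse) utility function."""
    return wealth ** 0.5 if wealth > 0 else 0.0

# Two toy strategies: a sure payoff vs. a long shot.
strategies = {
    "safe":  lambda: 1.0,
    "risky": lambda: 10.0 if random.random() < 0.12 else 0.0,
}

# Compute the EV of each strategy first, then pass those EVs
# through U and pick the strategy with the highest utility.
best = max(strategies,
           key=lambda name: utility(expected_value(strategies[name], n_games=100)))
print(best)
```

Note this applies U to the EV rather than taking the expectation of U over outcomes, which is exactly the distinction the comment was drawing (and the part I later stopped agreeing with).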
There are trillions of quantum operations occurring in one’s brain all the time. Comparatively, we make very few executive-level decisions. Further, these high-level decisions are often based on a relatively small set of information and are predictable given that set. I believe this implies that a person in the majority of recently created worlds makes the same high-level decisions. It’s hard to imagine us making many different decisions in any given circumstance, given the relatively ingrained decision procedures we seem to walk the Earth with.
I know that your Occam’s prior for Bob in a binary decision is .5. That is a separate matter from how many Bobs make the same decision given the same set of external information and the same high-level decision procedures inherited from the Bob in a recent (perhaps just seconds ago) common parent world.
This decision procedure does plausibly affect recently created Everett worlds and allows people in them to coordinate among ‘copies’ of themselves. I am not saying I can coordinate with far-past sister worlds. I am theorizing that I can coordinate with my selves in my soon-to-be-generated child worlds, because there is no reason to think quantum operations would randomly delete this decision procedure from my brain over a period of seconds or minutes.
Do you think making decisions with the aid of quantum-generated bits actually does increase the diversification of worlds?
You make a good point. I fixed it :)
I really appreciate this comment, and my idea might well come down to trying to avoid risk rather than maximizing expected utility. However, I still think there is something net positive about diversification. I wrote a better version of my post here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and, if you could spare the time, I would love your feedback.
I think you are right, but my idea applies more when one is uncertain about one’s expected utility estimates. I wrote a better version of my idea here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and would love your feedback.
I am glad you appreciated this! I’m sorry I didn’t respond sooner. I think you are right about the term “decision theory” and have opted for “decision procedure” in my new, refined version of the idea at https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/
I’m sorry but I am not familiar with your notation. I am just interested in the idea: when an agent Amir is fundamentally uncertain about the ethical systems that he evaluates his actions by, is it better if all of his immediate child worlds make the same decision? Or should he hedge against his moral uncertainty, ensure his immediate child worlds choose courses of action that optimize for irreconcilable moral frameworks, and increase the probability that in a subset of his child worlds, his actions realize value?
It seems that in a growing market (worlds splitting at an exponential rate), it pays in the long term to diversify your portfolio (optimize locally for irreconcilable moral frameworks).
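A toy numerical sketch of what I mean (the frameworks, actions, and numbers are all made up). Under straightforward linear aggregation the two policies come out with the same EV, so the case for diversifying is really about guaranteeing that some branches realize value, not about raising the EV itself:

```python
from itertools import product

# Toy model (frameworks, actions, and numbers are invented):
# two irreconcilable moral frameworks I give equal credence to,
# and one action that is optimal under each.
credence = {"F1": 0.5, "F2": 0.5}
value = {"act_A": {"F1": 10, "F2": 0},   # value of an action IF a framework is true
         "act_B": {"F1": 0,  "F2": 10}}

def ev_concentrated(action):
    """Every child world takes the same action."""
    return sum(credence[f] * value[action][f] for f in credence)

def ev_diversified(mix):
    """Child worlds split across actions (say, via quantum-generated
    bits) in proportion mix[a]; moral truth is the same in every branch."""
    return sum(credence[f] * mix[a] * value[a][f]
               for f, a in product(credence, mix))

print(ev_concentrated("act_A"))                       # 5.0
print(ev_diversified({"act_A": 0.5, "act_B": 0.5}))   # 5.0
# Same EV -- but under diversification, half of the child worlds realize
# value whichever framework turns out to be true, instead of all-or-nothing.
```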
I agree that QM already creates a wide spread of worlds, but I don’t think that means it’s safe to put all of one’s eggs in one basket when one suspects that their moral system may be fundamentally wrong.
I too have been lurking for a little while. I have listened to the majority of Rationality: From AI to Zombies by Eliezer and really appreciate the clarity that Bayescraft and similar ideas offer. Hello :)
It’s great to see other people thinking about and working on these ideas of efficiently eliciting preferences and very ‘subjective’ data, and building your own long-term decision support system! I’ve been pretty frustrated by the seeming lack of tooling for this. Inspired partially by Gwern’s Resorter as well, I’ve started experimenting with my own version, except my goal is to end up with random variables for cardinal utilities (at least across various metrics), and the inputs for comparisons are quickly-drawn probability distributions.
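This isn’t a description of either of our actual tools, but here is a minimal sketch of one way the pieces could fit together, assuming each quickly-drawn comparison distribution gets summarized by the mean and variance of the utility difference. It does a Kalman-style update on two Gaussian utility beliefs and drops the posterior correlation between items to keep them independent:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    mu: float    # mean of the item's cardinal utility
    var: float   # variance (my uncertainty about it)

def update_from_comparison(a: Belief, b: Belief, d_mean: float, d_var: float):
    """Kalman update for observing (u_a - u_b) as a Gaussian with mean
    d_mean and variance d_var, i.e. the quickly-drawn comparison
    distribution reduced to its first two moments."""
    s = a.var + b.var + d_var          # innovation variance
    innov = d_mean - (a.mu - b.mu)     # surprise in the comparison
    a.mu += a.var * innov / s
    b.mu -= b.var * innov / s
    a.var *= 1 - a.var / s             # both beliefs tighten
    b.var *= 1 - b.var / s

# Usage: "book_x feels ~2 utils better than book_y, but I'm unsure."
book_x, book_y = Belief(0.0, 1.0), Belief(0.0, 1.0)
update_from_comparison(book_x, book_y, d_mean=2.0, d_var=0.5)
print(book_x, book_y)  # Belief(mu=0.8, var=0.6) Belief(mu=-0.8, var=0.6)
```

Summarizing each drawn distribution by just its mean and variance is obviously lossy; a fuller version could propagate the drawn distributions themselves, e.g. by sampling.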