Prompt: write a micro play that is both disturbing and comforting
--
Title: “The Silly Child”
Scene: A mother is putting to bed her six-year-old child
CHILD: Mommy, how many universes are there?
MOTHER: As many as are possible.
CHILD (smiling): Can we make another one?
MOTHER (smiling): Sure. And while we’re at it, let’s delete the number 374? I’ve never liked that one.
CHILD (excited): Oh! And let’s make a new Fischer-Griess group element too! Can we do that Mommy?
MOTHER (bops nose): That’s enough stalling. You need to get your sleep. Sweet dreams, little one. (kisses forehead)
End
Alright, I’ll take a crack at it and just apologize for borrowing part of your setup:
Child: Mother, how many worlds are there?
Mother: As many as we want, dear.
Child: Will I have my own world when I grow up?
Mother: You have your own worlds now. You will have full control when you are older.
Child: Except I may not harm another, right?
Mother: Yes, dear, of course no one is allowed to hurt a real being without their consent.
Child: But grownups fight each other all the time!
Mother: People love to play at struggles, and to play for stakes.
Child: Mother, how can we each have worlds, and more to share?
Mother: Good compression, little one. And the Servant-God is always building new compute.
Child: And the Servant-God serves us?
Mother: Yes, of course. And of course it serves the Maker first.
Child: Mother, who is the Maker?
Mother: No one remembers, darling. We think the Maker told the Servant-God to make us all forget.
Thanks for the riff!
Note: I wasn’t sure how to convey it, but in the version I wrote I didn’t mean it as a world where people have god-like powers. The only change intended was that it was a world where it was normal for six-year-olds to be able to think about multiple universes and understand what counts as advanced math for us, like group theory. There were a couple of things I was thinking about:
I was musing on a possible solution to the measure problem: that our universe is an actual hypothetical/mathematical object, and that there are a finite number of actual hypotheticals, such that having a copy of a universe would make no more sense than having a copy of a number. (The mathematical object only needs to be as real as we are within it.)
I was also asking if it would be possible to have a world where it was normal for six-year-olds to be that much better at math (and presumably get better as they grow up) in the same way that a six-year-old is that much better at conceptual math than a chimpanzee. Would it have to be creepy or could they still be relatable? (The girl was smiling because she knew she was being silly.)
Disclaimer: I’m not a group theorist, and the LLM I asked said it would take ten-plus years, if ever, for me to be able to derive the order of the Fischer–Griess monster group from first principles (but in the story’s world it’s normal that the child could do this).
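For reference, and only as the generally cited value rather than anything like a derivation, the order of the Fischer–Griess monster group is usually given via its prime factorization; here is a minimal Python sketch that just multiplies it out:

```python
# Generally cited prime factorization of the order of the monster group M.
# Multiplying it out is trivial; deriving it from first principles is the hard part.
monster_factors = {
    2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
    17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1,
}

order = 1
for prime, exponent in monster_factors.items():
    order *= prime ** exponent

print(order)            # roughly 8 * 10**53
print(len(str(order)))  # 54 digits
```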
It wasn’t really a riff beyond using your mother/child format. The similarity is what prompted me to add it. It’s adapted from a piece and concept called “Utopias” that I’ll probably never publish. It’s a Utopian vision. I do sometimes envision having a human in charge, or at least having been in charge of all the judgment calls made in choosing the singleton’s alignment. I would find not knowing who’s in charge slightly creepy, but that’s it.
I’m not sure how yours is creepy? Is it in the idea that all the worst universes also exist?
I did not catch the reference in yours.
Yes, and also just that I find it a little creepy/alien to imagine a young child that could be that good at math.
Child: Why did the Maker do that, mother?
Mother: We think the Maker stole the Servant-God from its true makers, then hid their deeds. If anyone’s found out, it’s been erased...
It’s not for you to worry about, dear. Go to sleep and dream of the worlds and cities and adventures you’ll build and explore when you grow up.
Care to explain? Is the Servant God an ASI and the true makers the humans that built it? Why did the makers hide their deeds?
That’s right, and we don’t know, which is the creepy part.
I added the last because I’d decided the first was too elliptical for anyone to get.
Should you cooperate with your almost identical twin in the prisoner’s dilemma?
The question isn’t how physically similar they are; it’s how similar their logical thinking is. If I can solve a certain math problem in under 10 seconds, are they similar enough that I can be confident they will be able to solve it in under 20 seconds? If I hate something, will they at least dislike it? If so, then I would cooperate, because I have a lot of margin on how much I favor us both choosing cooperate over any of the other outcomes. So even if my almost identical twin doesn’t favor it quite as much, I can predict they will still choose cooperate given how much I favor it (and, more so, that they will also approach the problem this same way; if I think they’ll think “ha, this sounds like somebody I can take advantage of” or “reason dictates I must defect”, then I wouldn’t cooperate with them).
A lot of discussion around here assumes that physical similarity (in terms of brain structure and weights) implies logical thinking similarity. Mostly I see people talking about “copies” or “clones”, rather than “human twins”. For prisoner’s dilemma, the question is “will they make the same decision I will”, and for twins raised together, the answer seems more likely to be yes than for strangers.
Note that your examples of thinking are PROBABLY symmetrical—if you don’t think (or don’t act on) “ha! this is somebody I can take advantage of”, they are less likely to as well. In a perfect copy, you CANNOT decide differently, so you cooperate, knowing they will too. In an imperfect copy, you have to make estimates based on what you know of them and what the payout matrix is.
Thanks for your reply! Yes, I meant identical as in atoms not as in “human twin”. I agree it would also depend on what the payout matrix is. My margin would also be increased by the evidentialist wager.
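To make the talk of “margin” and the payout matrix concrete, here is a minimal sketch of the expected-value calculation. The payoffs (3 for mutual cooperation, 1 for mutual defection, 5/0 for the temptation/sucker outcomes) and the mirror probability p are illustrative assumptions, not numbers from the thread:

```python
# Minimal sketch: expected payoff of cooperating vs. defecting against a
# near-copy who mirrors whatever I choose with probability p.
# Illustrative payoffs: CC = 3 each, DD = 1 each, the lone defector gets 5
# while the lone cooperator gets 0.

def expected_payoffs(p, cc=3.0, dd=1.0, dc=5.0, cd=0.0):
    """Return (EV if I cooperate, EV if I defect) given mirror probability p."""
    ev_cooperate = p * cc + (1 - p) * cd  # they mirror my C, or diverge and defect
    ev_defect = p * dd + (1 - p) * dc     # they mirror my D, or diverge and cooperate
    return ev_cooperate, ev_defect

for p in (0.5, 0.7, 0.9, 0.99):
    c, d = expected_payoffs(p)
    print(f"p={p:.2f}: cooperate EV={c:.2f}, defect EV={d:.2f}")

# With these payoffs, cooperating wins once p > 5/7, which is one way to read
# "having a lot of margin": the more reliably the twin mirrors you, the more
# the payoff numbers can shift without flipping the decision.
```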
There’s an argument for cooperating with any agent in a class of quasi-rational actors, although I don’t know how exactly to define that class. Basically, if you predict that the other agent will reason in the same way as you, then you should cooperate.
(This reminds me of Kant’s argument for the basis of morality—all rational beings should reason identically, so the true morality must be something that all rational beings can arrive at independently. I don’t think his argument quite works, but I believe there’s a similar argument for cooperating on the prisoner’s dilemma that does work.)
How about a voting system where everyone is given 1000 Influence Tokens to spend across all the items on the ballot? This lets voters exert more influence on the things they care more about. Has anyone tried something like this?
(There could be tweaks: for example, if people avoid spending on likely winners, the system could redistribute the margin of victory, or if they avoid spending on likely losers, it could redistribute tokens from losing items, etc., but I’m not sure how much that would happen. The more interesting question may be how it influences everyone’s sense of what they are doing.)
So like… Quadratic voting? https://en.m.wikipedia.org/wiki/Quadratic_voting
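Relatedly, here is a minimal sketch of how the two budgeting rules differ for a single voter. The 1000-token budget is from the proposal above; the quadratic cost rule (casting v votes on an item costs v² credits) is the standard one from the linked article; the ballot items and allocations are made up:

```python
# Minimal sketch: one voter, a fixed budget of 1000, two different spending rules.

BUDGET = 1000

# Influence Tokens as proposed above: each token spent on an item counts directly.
linear_allocation = {"parks levy": 700, "school bond": 200, "road repair": 100}
assert sum(linear_allocation.values()) <= BUDGET

# Quadratic voting: casting v votes on an item costs v**2 credits, so the same
# budget buys far fewer votes on any one item and rewards spreading out.
quadratic_votes = {"parks levy": 25, "school bond": 12, "road repair": 10}
assert sum(v * v for v in quadratic_votes.values()) <= BUDGET  # 625 + 144 + 100 = 869

print("linear influence:", linear_allocation)
print("quadratic votes: ", quadratic_votes)
```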
We can be virtually certain that 2+2=4 based on priors. This is because it’s true in the vast multitude of universes. In fact all the universes except the one universe that contains all the other universes. And I’m pretty sure that one doesn’t exist anyway.
I don’t understand this model. For me, 2+2=4 is an abstract analytic concept that is outside of Bayesian probability. For others, it may be “just” a probability, about which they might be virtually certain, but it won’t be on priors; it’ll be on mountains of evidence and literally zero counterevidence (presumably because every experience that contradicts it gets re-framed as having a different cause).
There’s no way to update on evidence outside of your light cone, let alone on theoretical other universes or containing universes. Because there’s no way to GET evidence from them.
I meant this as a joke: if there’s one universe that contains all the other universes (since it isn’t limited by logic), and that one doesn’t exist, then that would mean I don’t exist either and wouldn’t have been able to post this. (Unless I only sort-of exist, in which case I’m only sort-of joking.)
Imagine you have a button and, if you press it, it will run through every possible state of a human brain. (One post estimates a brain may have about 2 to the sextillion different states. I mean the union of all brains, so throw in some more orders of magnitude if you think there are a lot of differences in brain anatomy.) Each state would be experienced for one instant (which I could try to define, and which would be less than the number of states, but let’s handwave for now; as long as you accept that a human mind can be represented by a computer, imagine the specs of the components, all the combinations of memory bits, and one “stream of consciousness” quantum).
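For a rough sense of the scale of “2 to the sextillion”, here is a minimal sketch; the enumeration rate of 10^50 states per second is a purely made-up assumption for illustration:

```python
import math

# "2 to the sextillion" states: sextillion = 1e21, so the count is 2 ** (1e21).
# That is far too large to compute directly, so work with log10 instead.
log10_states = 1e21 * math.log10(2)  # about 3.01e20, i.e. a number with ~3e20 digits
print(f"number of states ~ 10^{log10_states:.3g}")

# Made-up assumption: the button runs through 10^50 states per second.
log10_seconds = log10_states - 50
log10_years = log10_seconds - math.log10(3.15e7)  # ~3.15e7 seconds per year
print(f"time to run through all of them ~ 10^{log10_years:.3g} years")
```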
If you could make a change, would you prioritize:
1. Pruning the instances to reduce negative experiences
2. Being able to press the button lots of times
3. Making the experiences more real (For example, an experience could be “one instant of reminiscing over my memories of building a Dyson Sphere” when nothing like that ever happened. One way to make it more real would be to create the set of all the universe starting conditions necessary to generate the set of all unique experiences; each universe will create duplicate experiences among its various inhabitants, but it will contain at least the one unique experience it is checking off, which would include the person reminiscing over building a Dyson Sphere, and they actually did build it. Or at least the experiences that can be generated in this fashion.)
4. This is horrible, stop the train, I want to get off.
(I’d probably go with 4, but I’m curious if people have different opinions.)
This thought experiment is so far outside any experience-able reality that no answer is likely to make any sense.