There have been other sci-fi writers talking about AI and the singularity. Charles Stross, Greg Egan, arguably Cory Doctorow… I haven’t seen the episode in question, so I can’t say who I think they took the biggest inspiration from.
9/16ths of the people present are female Virtuists, and 2/16ths are male Virtuists. If you correctly set up the fraction of Virtuists who are male as 2/(9+2), but then mistakenly add 9 and 2 to get 12, you'd get one-sixth as your final answer. There might be other equivalent mistakes, but that seems the most likely one to lead to the answer given.
Of course, it’s irrelevant what the actual mistake was since the idea was to see if you’ll let your biases sway you from the correct answer.
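Spelling out the arithmetic behind that guess (using only the 9 female and 2 male Virtuists given above):

\[
\text{correct: } \frac{2}{9+2} = \frac{2}{11} \approx 0.18,
\qquad
\text{with the mistaken sum of } 12\text{: } \frac{2}{12} = \frac{1}{6}.
\]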
The later Ed Stories were better.
In the first scenario the answer could depend on your chance of randomly failing to resend the CD, due to tripping and breaking your leg or something. In the second scenario there doesn’t seem to be enough information to pin down a unique answer, so it could depend on many small factors, like your chance of randomly deciding to send a CD even if you didn’t receive anything.
Good point, but not actually answering the question. I guess what I’m asking is: given a single use of the time machine (Primer-style, you turn it on and receive an object, then later turn it off and send an object), make a list of all the objects you can receive and what each of them can lead to in the next iteration of the loop. This structure is called a Markov chain. Given the entire structure of the chain, can you deduce what probability you have of experiencing each possibility?
Taking your original example, there are only 2 states the timeline can be in:
A: Nothing arrives from the future. You toss a coin to decide whether to go back in time. Next state: A (50% chance) or B (50% chance)
B: A murderous future self arrives from the future. The two of you fight, and don't send anything back. Next state: A (100% chance).
Is there a way to calculate from this what the probability of actually getting a murderous future self is when you turn on the time machine?
I’m inclined to assume it would be a stationary distribution of the chain, if one exists. That is to say, one where the probability distribution of the “next” timeline is the same as the probability distribution of the “current” timeline. In this case, that would be (A: 2⁄3, B: 1⁄3). (Your result of (A: 4⁄5, B: 1⁄5) seems strange to me: half of the people in A will become killers, and they’re equal in number to their victims in B.)
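For what it's worth, that figure is easy to check numerically. A minimal sketch in Python (the two-state chain and its transition probabilities are exactly the ones listed above; the use of numpy and the variable names are just my choice of tooling):

```python
import numpy as np

# States: 0 = A (nothing arrives from the future), 1 = B (murderous future self arrives).
# P[i, j] = probability that a timeline in state i leads to a timeline in state j.
P = np.array([
    [0.5, 0.5],   # from A: coin flip decides whether to go back
    [1.0, 0.0],   # from B: the fight means nothing gets sent back
])

# A stationary distribution pi satisfies pi @ P = pi and sums to 1,
# i.e. it is a left eigenvector of P with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(abs(w - 1))])
pi /= pi.sum()

print(pi)  # approximately [0.667, 0.333], i.e. (A: 2/3, B: 1/3)
```

The same left-eigenvector calculation works for any finite chain, so it scales to messier collections of timelines than this two-state example.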
There are certain conditions that a Markov chain needs to have for a stationary distribution to exist. I looked them up. A chain with a finite number of states (so no infinitely dense CDs for me :( ) fits the bill as long as every state eventually leads to every other, possibly indirectly (i.e. it’s irreducible). So in the first scenario, I’ll receive a CD with a number between 0 and N distributed uniformly. The second scenario isn’t irreducible (if the “first” timeline has a CD with value X, it’s impossible to ever get a CD with value Y in any subsequent timeline), so I guess there needs to be a chance of the CD becoming corrupted to a different value or the time machine exploding before I can send the CD back or something like that.
Teal deer: This model works, but the probability of experiencing each outcome can easily depend on the tiny chance of an unexpected outcome. I like it a lot: it's more intuitive than NSCP, and its structure makes more sense than a branching multiverse. I may have to steal it if I ever write a time-travel story.
I wasn’t reasoning under NSCP, just trying to pick holes in cousin_it’s model.
Though I’m interested in knowing why you think that one outcome is “more likely” than any other. What determines that?
You make a surprisingly convincing argument for people not being real.
Last time I tried reasoning on this one I came up against an annoying divide-by-infinity problem.
Suppose you have a CD with infinite storage space—if this is not possible in your universe, use a normal CD with N bits of storage; it just makes the maths more complicated. Do the following:
If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.
If a CD arrives from the future, read the number on it. Call this number X. Write X+1 on your own CD and send it back in time.
What is the probability distribution of the number on your CD? What is the probability that you didn’t receive a CD from the future?
Once you’ve worked that one out, consider this similar algorithm:
If nothing arrives in your timeline from the future, write a 0 on the CD and send it back in time.
If a CD arrives from the future, read the number on it. Call this number X. Write X on your own CD and send it back in time.
What is the probability distribution of the number on your CD? What is the probability that you didn’t receive a CD from the future?
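Under the Markov-chain reading discussed elsewhere in the thread, the finite-CD version of both algorithms can be ground through numerically. A sketch in Python; note that almost everything here is an assumption made for illustration: the CD holds a value in 0..M-1, X+1 wraps around to 0, and there is a small probability eps of failing to send anything back (the tripping-over-something case). Change those assumptions and the answers change too, which is rather the point.

```python
import numpy as np

M = 8        # assumption: the CD holds a value in 0..M-1, and X+1 wraps around to 0
eps = 0.01   # assumption: small chance of failing to send anything back

NOTHING = M  # extra state: "nothing arrives from the future"

def transition_matrix(send):
    """Chain for a rule `send(received)` giving the value written on the outgoing CD."""
    P = np.zeros((M + 1, M + 1))
    for s in range(M + 1):
        out = send(None if s == NOTHING else s)
        P[s, out] = 1 - eps       # the CD gets through to the previous timeline
        P[s, NOTHING] += eps      # you trip, and the next timeline receives nothing
    return P

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalised to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(abs(w - 1))])
    return pi / pi.sum()

# First algorithm: write 0 if nothing arrived, else write X+1 (wrapping at M).
alg1 = stationary(transition_matrix(lambda x: 0 if x is None else (x + 1) % M))
# Second algorithm: write 0 if nothing arrived, else write X unchanged.
alg2 = stationary(transition_matrix(lambda x: 0 if x is None else x))

print(alg1)  # close to uniform over 0..M-1, with mass eps on "nothing"
print(alg2)  # almost all mass on 0 and "nothing"; the states 1..M-1 are transient
```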
The flaw I see is: why could the super happies not make separate decisions for humanity and the baby eaters?
I don’t follow. They waged a genocidal war against the babyeaters and signed an alliance with humanity. That looks like separate decisions to me.
And why meld the cultures? Humans didn’t seem to care about the existence of shockingly ugly super happies.
For one, because they’re symmetrists. They asked something of humanity, so it was only fair that they should give something of equal value in return. (They’re annoyingly ethical in that regard.) And I do mean equal value—humans became partly superhappy, and superhappies became partly human. For two, because shared culture and psychology makes it possible to have meaningful dialogue between species: even with the Cultural Translator, everyone got headaches after five minutes. Remember that to the superhappies, meaningful communication is literally as good as sex.
I’d say it would make a better creepypasta than an SCP. Still, if you’re fixed on the SCP genre, I’d try inverting it.
Say the Foundation discovers an SCP which appears to have mind-reading abilities. Nothing too outlandish so far; they deal with this sort of thing all the time. The only slightly odd part is that it’s not totally accurate. Sometimes the thoughts it reads seem to come from an alternate universe, or perhaps the subject’s deep subconscious. It’s only after a considerable amount of testing that they determine the process by which the divergence is caused—and it’s something almost totally innocuous, like going to sleep at an altitude of more than 40,000 feet.
They came impressively close considering they didn’t have any giant shoulders to stand on.
I think it’s more the point that some of us have more dislikable alleles than others.
Yeah, that should work.
The latter one doesn’t work at all, since it sounds rather like you’re ignoring the very advice you’re trying to give.
I agree with Wilson’s conclusions, though the quote is too short to tell if I reached this conclusion in the same way as he did.
Using several maps at once teaches you that your map can be wrong, and how to compare maps and find the best one. The more you use a map, the more you become attached to it, and the less inclined you are to experiment with other maps, or even to question whether your map is correct. This is all fine if your map is perfectly accurate, but in our flawed reality there is no such thing. And while there are no maps which state “This map is incorrect in all circumstances”, there are many which state “This map is correct in all circumstances”; you risk the Happy Death Spiral if you use one of the latter. (I should hope most of your maps state “This map is probably correct in these specific areas, and it may make predictions in other areas but those are less likely to be correct”.) Having several contradictory maps can be useful; it teaches you that no map is perfect.
Or accept that each map is relevant to a different area, and don’t try to apply a map to a part of the territory that it wasn’t designed for.
And if you frequently need to use areas of the territory which are covered by no maps or where several maps give contradictory results, get better maps.
Does it matter? People read Glenn Beck’s books; this both raises awareness about the Singularity and makes it a more “mainstream” and popular thing to talk about.
I think this conversation just jumped one of the sharks that swim in the waters around the island of knowledge.
Organ donation versus cryonics
Actually, x=y=0 still catches the same flaw; it just catches another one at the same time.
My personal philosophy in a nutshell.
Three years late, but: there doesn’t even have to be an error. The Gatekeeper still loses for letting out a Friendly AI, even if it actually is Friendly.