Does anybody know if dark matter can be explained as artificial systems based on known matter? It fits the description of a stealth civilization well, if there is no way to nullify gravitational interaction (which seems plausible). It would also explain why there is so much dark matter: most of the universe's mass was already used up by alien civs.
Jan_Rzymkowski
Overscrupulous chemistry major here. Both Harry and Snape are wrong. By the Pauli exclusion principle, an orbital can host only two electrons. But at the same time, there is no outermost orbital: valence shells are only an oversimplified description of the atom. Actually, so oversimplified that no one should bother writing it down. Speaking of the HOMOs of a carbon atom (the highest-in-energy occupied molecular orbitals), each hosts only one electron.
My problem with such examples is that they read more like Dark Arts emotional manipulation than an actual argument. What your mind hears is that if you don't believe in God, people will come to your house and kill your family; and if you believed in God, they wouldn't do that, because they'd somehow fear God. I don't see how this is anything but an emotional trick.
I understand that sometimes you need to cut out the nuance in morality thought experiments, like equating taxes to being threatened with kidnapping if you don't regularly pay a racket. But the opposite failure is creating lurid, graphic visions. Watching a loved one be raped is not as bad as losing a loved one, but it creates a much stronger psychological effect, targeted at emotional blackmail.
Can anybody point me to what the choice of interpretation changes? From what I understand it is just that, an interpretation, so Copenhagen and MWI make identical predictions and falsification isn't possible. But for some reason MWI seems to be highly esteemed on LW. Why?
A small observation of mine. While watching out for the sunk cost fallacy, it's easy to go too far and assume that repeating the same purchase is the rational thing. Imagine you bought a TV and on the way home you dropped it, destroying it beyond repair. Should you just go buy the same TV, since the cost is sunk? Not necessarily: when you were buying the TV the first time, you were richer by the price of the TV. Since you are now poorer, spending that much money might not be optimal for you.
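The wealth effect here can be sketched numerically. A toy model, assuming a concave (logarithmic) utility of wealth; all the numbers and names below are my hypothetical illustration, not part of the original comment:

```python
import math

def utility(wealth):
    # Toy assumption: logarithmic (concave) utility of wealth,
    # so each dollar matters more the poorer you are.
    return math.log(wealth)

def buys_tv(wealth, price, tv_value):
    # Buy iff the enjoyment of the TV (in utils) outweighs the
    # utility lost by parting with the money.
    return tv_value > utility(wealth) - utility(wealth - price)

# Hypothetical numbers: $5,000 savings, a $1,500 TV worth 0.5 utils.
print(buys_tv(5000, 1500, 0.5))  # True: buying was rational at first
print(buys_tv(3500, 1500, 0.5))  # False: after the loss, it no longer is
```

The sunk cost really is sunk; what changes the decision is that the second purchase starts from a lower wealth level, where the same price costs more utility.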
Big thanks for pointing me to Sleeping Beauty.
It is a solution to me; it doesn't feel like suffering, just as a few minutes of teasing before sex doesn't feel that way.
What I had in mind isn't a matter of manually changing your beliefs, but rather making an accurate prediction of whether or not you are in a simulated world (one about to diverge from the "real" world), based on your knowledge of the existence of such simulations. It could just as well be that you asked your friend to simulate 1000 copies of you at that moment and to teleport you to Hawaii as 11 AM strikes.
By “me” I mean this particular instance of me, the one feeling that it sits in a room and making this promise, which might of course be a simulated mind.
Now that I think about it, it seems to be a problem with a cohesive definition of identity and of the notion of “now”.
Anthropic measure (magic reality fluid) measures what the reality is—it’s like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.
It doesn’t look like a helpful notion and seems very tautological. How do I observe this anthropic measure—how can I make any guesses about what the outside observer would see?
Even though you can make yourself expect (probability) to see a beach soon, it doesn’t change the fact that you actually still have to sit through the cold (anthropic measure).
Continuing: how do I know I'd still have to sit through the cold? Maybe I am in my simulated past; in the hypothetical scenario that's a very down-to-earth assumption.
Sorry, but the above doesn't clarify anything for me. I may accept that the concept of probability is out of scope here, that Bayesianism doesn't work for guessing whether one is in a certain simulation, but I don't know if that's what you meant.
Bayesian conundrum
What is R? LWers use it very often, but a Google search doesn't provide any answers; not surprising, as it's only one letter.
Also: why is it considered so important?
I’d say the only requirement is spending some time living on Earth.
Thanks, I'll get to sketching drafts. But it'll take some time.
Tips for writing philosophical texts
There's also an important difference in their environments. The underwater world (oceans, seas, lagoons) seems much poorer. There are no trees underwater to climb, no branches or sticks to use for tools; you can't use gravity to devise traps; there's no fire; much simpler geology; little prospect for farming; etc.
Or, conversely, the Great Filter doesn't prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!
And I'm serious here. The zoo hypothesis seems very conspiracy-theory-y, but generalised curiosity is one of the requirements for developing a civ capable of galaxy colonisation, a powerful enough civ can sacrifice a few star systems for research purposes, and it seems that the most efficient way of simulating biological evolution or civ development is actually having a planet develop on its own.
It's not impossible that human values are themselves conflicted. The sole existence of an AGI would "rob" us of that, because even if the AGI refrained from doing all the work for humans, it would still be "cheating": the AGI could do all of it better, so human achievement is still pointless. And since we may not want to be fooled (made to think that this is not the case), it is possible that in that regard even the best optimisation must result in loss.
Anyway, I can think of at least two more ways. The first is creating games, vastly simulating the "joy of work". The second, my favourite, is humans becoming part of the AGI; in other words, the AGI sharing parts of its superintelligence with humans.
The PD is not a suitable model for MAD. It would be if a pre-emptive attack guaranteed the opponent's utter destruction and eliminated the threat. But that's not the case: even with a carefully orchestrated attack, there is a great chance of retaliation. Since the military advantage of a pre-emptive attack is not preferred over the absence of war, this game doesn't necessarily point to the defect-defect scenario.
This could probably be better modelled with some form of iterated PD, with the number of iterations and the values of outcomes depending on decisions made along the game. Which, I guess, would be non-linear.
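Even in the one-shot case, the point that retaliation removes the dominance of striking first can be sketched with toy payoffs (the numbers are my own illustration, not a serious calibration):

```python
# Toy payoffs for one side deciding between a pre-emptive strike
# and holding back, with probability p that the attacked side
# manages to retaliate anyway.
P_WIN = 10     # successful strike, opponent destroyed, no retaliation
P_PEACE = 5    # neither side attacks
P_WAR = -100   # mutual destruction

def expected_strike(p_retaliate):
    # Expected payoff of striking first.
    return (1 - p_retaliate) * P_WIN + p_retaliate * P_WAR

print(expected_strike(0.0))  # 10: with guaranteed success, striking dominates
print(expected_strike(0.1))  # -1: a mere 10% retaliation chance makes peace better
```

Unlike in the classic PD, where defecting dominates regardless of the other player's move, here any retaliation probability above roughly 5% (in this toy model) makes holding back the better option.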
It wasn't my intent to give a compelling definition. I meant to highlight which features of the internet I find important and novel as a concept.
You seem to be bottom-lining. Earlier you gave cold reversible-computing civs a reasonable probability (and doubt); now you seem to treat them as an almost sure scenario for civ development.