I took it.
LessWrong is sci-fi. Check what’s popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
It is true that people have written unrealistic books about these things. People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.
Who is to say there even are concepts that the human mind simply can’t grasp? I can’t visualize in n-dimensional space, but I can certainly understand the concept
The human mind is finite, and there are infinitely many possible concepts. If you’re interested in the limits of human intelligence and the possibilities of artificial intelligence, you might want to read The Hanson-Yudkowsky Debate.
Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be?
Drexler wrote a PhD thesis which probably answers this. For discussion on LessWrong, see Is Molecular Nanotechnology “Scientific”? and How probable is Molecular Nanotech?.
Counterexample: P(3^^^...3, with n “^”s) = 1/2^n, and P(anything else) = 0. This is normalized because the geometric series 1/2 + 1/4 + 1/8 + ... sums to 1. You might have been thinking of the fact that if a probability distribution on the integers is monotone decreasing (i.e. if P(n) > P(m) then n < m), then P(n) must decrease faster than 1/n. However, a complexity-based distribution will not be monotone, because some big numbers are simple while most of them are complex.
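Spelled out (just a restatement of the counterexample, writing the “^”s as Knuth up-arrows):

$$P(m) = \begin{cases} 2^{-n} & \text{if } m = 3\underbrace{\uparrow\cdots\uparrow}_{n}3 \text{ for some } n \ge 1,\\ 0 & \text{otherwise,} \end{cases} \qquad \sum_{n=1}^{\infty} 2^{-n} = 1.$$

For a monotone decreasing distribution, by contrast, n·P(n) ≤ P(1) + ... + P(n) ≤ 1, so P(n) ≤ 1/n.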
Those are the probabilities that both halves of a pair of photons are transmitted, so you can’t determine them without the information from both detectors. The distribution at each individual detector doesn’t change, it’s the correlation between them that changes.
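For concreteness, a toy calculation using the textbook formulas for polarization-entangled photon pairs (my own gloss on the setup, not anything quoted from the thread): the marginal at either detector is always 1/2, and only the joint probabilities move with the relative polarizer angle.

```python
# Toy calculation for polarization-entangled photon pairs (textbook formulas,
# assumed here rather than quoted): with polarizers at angles a and b,
# P(both transmitted) = 1/2 * cos^2(a - b).  The marginal at either detector is
# always 1/2; only the joint probabilities depend on the other polarizer's angle.
import math

def joint_probs(a, b):
    """Return P(both pass), P(only A passes), P(only B passes), P(neither)."""
    d = a - b
    both = 0.5 * math.cos(d) ** 2
    only_a = 0.5 * math.sin(d) ** 2
    return both, only_a, only_a, both   # by symmetry: only_b = only_a, neither = both

for b_deg in (0, 30, 60, 90):
    both, only_a, only_b, neither = joint_probs(0.0, math.radians(b_deg))
    marginal_a = both + only_a          # what detector A sees on its own
    print(f"b = {b_deg:2d} deg   P(both) = {both:.3f}   P(A transmitted) = {marginal_a:.3f}")
```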
A’ doesn’t become A″ by catching up to him, he becomes A″ when he uses his time machine to jump back 3 hours.
There would be three babies for 6 hours, but then the youngest two would use their time machines and disappear into the past.
A″ doesn’t cease to exist. A’ “ceases to exist” because his time machine sends him back into the past to become A″.
You don’t need a time machine to go forward in time—you can just wait. A″ can’t leave everything to A’ because A’ will disappear within three hours, when he goes back to become A″. If A’ knows A wasn’t reminded, then A’ can’t remind A. The other three Harrys use their Time-Turners to go backwards and close the loop. You do need both forward and backward time travel to create a closed loop, but the forward time travel can just be waiting; it doesn’t require a machine.
Do you also choose not to chew gum in Eliezer’s version of Solomon’s Problem?
The nice part about modal agents is that there are simple tools for finding the fixed points without having to search through proofs; in fact, Mihaly and Marcello wrote up a computer program to deduce the outcome of the source-code-swap Prisoner’s Dilemma between any two (reasonably simple) modal agents. These tools also made it much easier to prove general theorems about such agents.
Would it be possible to make this program publicly available? I’m curious about how certain modal agents play against each other, but struggling to calculate it manually.
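Not the program Mihaly and Marcello wrote, but a minimal sketch of one way such fixed points can be computed, as I understand the trick: fully modalized agents can be model-checked on a finite GL Kripke chain, where “provable” means “true at every earlier world”, and the actions stabilize once the chain is deeper than the agents’ modal rank. The agent names below are the standard ones from the robust-cooperation writeups.

```python
# A toy sketch (not the actual Mihaly/Marcello program): fully modalized agents
# evaluated on a finite GL Kripke chain.  Worlds are 0, 1, 2, ...; world k sees
# every earlier world, and "provable(phi)" is modeled as "phi holds at all
# earlier worlds" (vacuously true at world 0).  For agents of bounded modal
# rank the actions stabilize after a few worlds; the stabilized pair is the outcome.

def fairbot(my_hist, opp_hist):
    """Cooperate iff 'the opponent cooperates with me' is provable."""
    return all(opp_hist)

def cooperatebot(my_hist, opp_hist):
    return True

def defectbot(my_hist, opp_hist):
    return False

def play(agent1, agent2, depth=10):
    """Iterate down the Kripke chain and return the stabilized actions."""
    hist1, hist2 = [], []
    for _ in range(depth):
        a1 = agent1(hist1, hist2)   # each agent only looks at earlier worlds,
        a2 = agent2(hist2, hist1)   # so both actions can be computed together
        hist1.append(a1)
        hist2.append(a2)
    return hist1[-1], hist2[-1]

print(play(fairbot, fairbot))       # (True, True): mutual cooperation, via Löb
print(play(fairbot, defectbot))     # (False, False)
print(play(fairbot, cooperatebot))  # (True, True)
```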
If you can prove a contradiction, defect.
Should this be “If you can prove that you will cooperate, defect”? As it is, I don’t see how this prevents cooperation with Cooperatebot, unless the agent uses an inconsistent system for proofs.
It’s true that if you can prove that your opponent will cooperate counterfactual-if you cooperate and defect counterfactual-if you defect, then you should cooperate. But we don’t yet have a good formalization of logical counterfactuals, and the reasoning that cooperates with CooperateBot just uses material-if instead of counterfactual-if.
If there is a feasible pseudorandom generator that is computationally indistinguishable from true randomness, then randomness is indeed not necessary. However, the existence of such a pseudorandom generator is still an open problem.
Constructively, (not ((not A) and (not B))) is weaker than (A or B). While you could call the former “A or B”, you then have to come up with a new name for the latter.
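A quick way to see the asymmetry, sketched in Lean 4 (my own illustration): the direction from (A or B) to (not ((not A) and (not B))) goes through constructively, while the converse needs a classical step.

```lean
-- (A ∨ B) → ¬(¬A ∧ ¬B) is constructive: no classical axioms needed.
theorem or_to_weak (A B : Prop) (h : A ∨ B) : ¬(¬A ∧ ¬B) :=
  fun ⟨na, nb⟩ => h.elim na nb

-- The converse direction needs excluded middle (here via proof by contradiction).
theorem weak_to_or (A B : Prop) (h : ¬(¬A ∧ ¬B)) : A ∨ B :=
  Classical.byContradiction fun hn =>
    h ⟨fun a => hn (Or.inl a), fun b => hn (Or.inr b)⟩
```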
The Metamorphosis of Prime Intellect. The chapters aren’t in chronological order; the bootstrapping and power leveling happen in chapters two and four.
No. To get the 1⁄3 probability you have to assume that she would be just as likely to say what she says if she had 1 boy as if she had 2 (and that she wouldn’t say it if she had none). In your scenario she’s only half as likely to say what she says if she has one boy as if she has two boys, because if she only has one there’s a 50% chance it’s the one she’s just given birth to.
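To make the two assumptions concrete, here is a small enumeration (the announcement rules below are my paraphrase of the two scenarios, since the exact wording isn’t quoted here):

```python
# Enumerating the two likelihood assumptions behind the 1/3 and 1/2 answers.
from fractions import Fraction

# Equally likely families of two children; treat the second child as the newborn.
families = [('B', 'B'), ('B', 'G'), ('G', 'B'), ('G', 'G')]

def p_two_boys(says):
    """P(two boys | she makes the announcement), given P(announcement | family)."""
    joint = {f: Fraction(1, 4) * says(f) for f in families}
    return joint[('B', 'B')] / sum(joint.values())

# Assumption behind 1/3: she says it whenever she has at least one boy.
at_least_one_boy = lambda f: Fraction(1 if 'B' in f else 0)

# The birth scenario: she says it only if the newborn (second child) is a boy,
# which is half as likely when she has exactly one boy.
newborn_is_boy = lambda f: Fraction(1 if f[1] == 'B' else 0)

print(p_two_boys(at_least_one_boy))  # 1/3
print(p_two_boys(newborn_is_boy))    # 1/2
```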
I took it.
Correspondence of beliefs to reality being desirable is no closer to being a tautology than financial institutions being on the sides of rivers, undercover spies digging tunnels in the ground, or spectacles being drinking vessels.
It is hard to tell whether anyone took this seriously—but it seems that an isomorphic argument ‘proves’ that computer programs will crash—since “almost any” computer program crashes. The “AGI Apocalypse Argument” as stated thus appears to be rather silly.
I don’t see why this makes the argument seem silly. It seems to me that the isomorphic argument is correct, and that computer programs do crash.
He’s not talking about impossibility
I know Owen was not talking about impossibility, I brought up impossibility to show that what you thought Owen meant could not be true.
both of which involve moving faster than light.
Moving from B to A slower than the speed of light does not involve moving faster than light.
It shouldn’t. Moving from B to A slower than light is possible*, moving from A to B faster than light isn’t, and you can’t change whether something is possible by changing reference frames.
*(Under special relativity without tachyons)
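A small numerical check of that last point (my own sketch, in units where c = 1): the sign of the interval Δt² − Δx² is unchanged by a Lorentz boost, so a slower-than-light trip from B to A stays slower than light in every frame.

```python
# Numerical sketch (units with c = 1): the sign of the interval dt^2 - dx^2 is
# unchanged by a Lorentz boost, so a slower-than-light trip in one frame is
# slower than light in every frame; no boost turns it into faster-than-light travel.
import math

def boost(dt, dx, v):
    """Lorentz-boost the separation (dt, dx) into a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

dt, dx = 5.0, 3.0                      # a timelike (slower-than-light) separation
for v in (-0.9, -0.5, 0.0, 0.5, 0.9):
    bdt, bdx = boost(dt, dx, v)
    print(f"v = {v:+.1f}   dt' = {bdt:6.3f}   interval = {bdt**2 - bdx**2:.3f}")
# The interval prints as 16.000 in every frame, and dt' stays positive.
```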
What’s the difference between
and
?