Fat Cock Bias (NSFW)
If you were to read only one paper, make it the one by Michael Duff.
I have read this paper, and frankly I am a bit annoyed because of the time I wasted. There is basically no scientific content, just the usual physics blogosphere vitriol and gossip, now in a journal article. Where can I read about string theory, as opposed to the politics of string theory?
If you were ever wondering why Congress has a 95% incumbency rate despite an approval rating in the high teens, this study may be worth a read.
That also has something to do with polarized districts, which is often attributed to gerrymandering. As Swing Districts Dwindle, Can a Divided House Stand? − 538
Also, surely you mean “interstellar”? I was only thinking of interstellar travel for now, assuming intergalactic is impossible or whatever.
When you look at it from a Fermi paradox perspective, you have to be able to account for many hundreds of millions of years of expansion, because there can be many civilizations that are that much older than us. We are talking about some crazy thing that is supposed to be able to consume a galaxy with almost-optimal speed. I don’t expect galaxy boundaries to stop it completely, neither by intention nor by necessity. I am not even sure that it has to treat intergalactic space as the long boring travel between the rare interesting parts. Maybe all it really needs is empty space.
0.999c would get you a lag behind the light of 100 years over galaxy-scale distances, which is on the same order of magnitude as the time between detectability and singularity (looks like < 200 years for us).
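A back-of-the-envelope check of that lag figure; a minimal sketch, where the ~100,000 light-year galaxy-scale distance is my own assumed input, not something from the thread:

```python
# How far behind its own light does a 0.999c expansion wave fall?
# Assumption: a galaxy-scale distance of ~100,000 light-years.
distance_ly = 100_000          # light-years (assumed)
v = 0.999                      # expansion speed as a fraction of c

light_time = distance_ly       # years for light to cross that distance
wave_time = distance_ly / v    # years for the 0.999c wave to cross it
lag = wave_time - light_time   # how much earlier the light arrives

print(f"lag = {lag:.0f} years")  # roughly 100 years
```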
Interesting point.
How would one eat a star without slowing down, even in principle?
Note that I speculated about photons as a substrate. Maybe major reorganization of atoms is unnecessary, and it can just fill the space around the star, and utilize the star as a photon source.
First, the cited paper is from 1994, and was updated 18 years later only to commemorate the Mayan calendar doomsday. Katja’s thesis does indeed cite this paper, so the red flag of a diseased discipline can be safely lowered.
Second, it is the favorite hobby of many physicists to spot some place in another field (biology, sociology etc.) where some concept from physics (percolation, self-organized criticality etc.) can be applied, and rush there without reading any of the already existing literature. This habit of physicists can be annoying even in itself for the practitioners of the given field. But then another physicist comes along, finds that the first physicists did not properly cite the literature, and deems the field a diseased discipline? Ouch, that must be painful to hear. :)
I am not a physicist, so I didn’t and couldn’t do the calculations, but I don’t really believe that classic probes can reach .999c. They would be pulverised by intergalactic material. Even worse, literal .999c would not be fast enough for this fancy “hits us before we know it” filter idea to work. As I explained in some of the above-quoted threads, my bet would definitely be on the things you called “spaceships out of light”. A sufficiently advanced civilisation might switch from atoms to photons as its substrate. The only resource it would extract from the volume of space it consumes would be negentropy, so it wouldn’t need any slowing down or seeds. Again, I am not a physicist. I discussed this with some physicists, and they were sceptical, but their objections seemed to be of the engineering kind, not the theoretical kind, and I’m not sure they sufficiently internalized “don’t bet against the engineering skill of a superintelligence”.
For me, one source of inspiration for this light-speed expansion idea was Stanisław Lem’s “His Master’s Voice”, where precisely tuned radio waves are used to catalyse the formation of DNA-based life on distant planets. (Obviously that’s way too slow for the purposes we discuss here.)
Here are a couple of scattered short LW comments where I discussed this possibility and considered counterarguments and implementations.
By the way, I think nihilism often gets short-changed around here. Given that we do not actually have at hand a solution to ontological crises in general or to the specific crisis that we face, what’s wrong with saying that the solution set may just be null? Given that evolution doesn’t constitute a particularly benevolent and farsighted designer, perhaps we may not be able to do much better than that poor spare-change-collecting robot? If Eliezer is worried that actual AIs facing actual ontological crises could do worse than just crash, should we be very sanguine that for humans everything must “add up to moral normality”?
I really like the robot metaphor, and I fully agree with the kind of nihilism you describe here. Let me note, though, that nihilism is a technically precise but potentially misleading name for this world view. I am an old-fashioned secular humanist when it comes to 2012 humans. I am a moral nihilist only when I have to consider the plethora of paradoxes that come with the crazy singularity stuff we like to discuss here (most significantly, substrate-independence). Carbon-based 2012 humans already face some uncomfortable edge cases (e.g. euthanasia, abortion, animal rights), but with some introspection and bargaining we can and should agree on some ground rules. I am a big fan of such ground rules; that’s why I call myself an old-fashioned humanist. On the other hand, I think our morality simply does not survive the collision with your “ontological crisis”. After the ontological crisis forces itself on us, it is a brand new world, and it becomes meaningless to ask what we ought to do in this new world. I am aware that this is an aesthetically deeply unsatisfying philosophical position, so I wouldn’t accept it if I had some more promising alternatives available.
If that was the case it would not be a utility monster. It would be a bunch of people piloting a giant robot that is capable of birthing more people. A utility monster is supposed to be one distinct individual.
Your ethical theory is in deep trouble if it depends on a notion of ‘distinct individual’ in any crucial way. It is easy to imagine scenarios where there is a continuous path from robot-piloting people to one giant hive mind. (Kaj wrote a whole paper about such stuff: Coalescing minds: Brain uploading-related group mind scenarios) Or we can split brain hemispheres and give both of them their own robotic bodies.
Nice game, BUT. The Mac version caused me some pain. I chose the default fullscreen settings, and I couldn’t quit the application for minutes. No quit button that I could find. No drop-down menu bar. Cmd-tab stopped working. Mission Control (F3) stopped working. WTF? Finally I figured out that alt-cmd-esc still works, and then I could quit the game.
So you’d prefer to have the discussion about it on OB? I think there are many good reasons to have it here.
EDIT: Replying to a private message, I wrote a bit more about my reasoning: “As I wrote in the post, the link was just apropos to start a discussion about the issue, here, on LW. I contemplated whether to post it as a [LINK] or as a more generic “open-ended question to the audience”. I decided on [LINK] because that more clearly reflects that there’s no original content in my post. Should I change it to a more generic title? Do you disagree with posting even that?”
[LINK] Was Intrade being manipulated?
I love uncertainty. In many situations I’d rather try something just to see what happens. I’m the character that gets killed first in every horror movie, but that’s fine with me, since life is not generally like a horror movie.
That study is extremely interesting, but its central claim is disputed. Here it is claimed that when humans get to practice as much as Ayumu, they can reach his level:
http://www.springerlink.com/content/h842v2702r60u481/
The second part of his study involved an offended hospital challenging Rosenhan to send pseudopatients to its facility, whom its staff would then detect. Rosenhan agreed and in the following weeks out of 193 new patients the staff identified 41 as potential pseudopatients, with 19 of these receiving suspicion from at least 1 psychiatrist and 1 other staff member. In fact Rosenhan had sent no one to the hospital.
Conducting an important scientific experiment, teaching a painful lesson to those who challenge your authority, and showing your good sense of humor. Without any work whatsoever. It must have felt good.
Here you are: Best of short rationality quotes 2009-2012. I created it with a one-line modification of the script I used here: Best of Rationality Quotes, 2011 Edition. The threshold is 400 characters including XML markup. The usernames for the newer quotes are missing; I’ll fix this for the 2012 Edition.
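For anyone curious about the filtering step, here is a rough sketch of how such a script can work; the JSON input file and its field names (body_html, karma, author) are stand-ins I made up for illustration, not the actual script or its data source:

```python
import json

MAX_LEN = 400  # the "short quote" threshold, counted on the raw XML markup

# Assumed input: a local dump of Rationality Quotes comments, one dict per quote,
# with "body_html", "karma" and "author" keys (hypothetical names, not the real script's).
with open("rationality_quotes.json") as f:
    quotes = json.load(f)

# Keep only quotes whose marked-up body fits under the threshold,
# then rank the survivors by karma, highest first.
short_quotes = [q for q in quotes if len(q["body_html"]) <= MAX_LEN]
short_quotes.sort(key=lambda q: q["karma"], reverse=True)

for q in short_quotes[:50]:
    print(q["karma"], q.get("author", "[unknown]"))
    print(q["body_html"])
    print()
```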
Prelec’s formal results hold for large populations, but the method held up well experimentally with 30-50 participants.
Wait, wait, let me understand this. It’s the robust knowledge aggregation part that held up experimentally, not the truth serum part, right? In this experiment the participants had very few incentives to game the system, and they didn’t even have a full understanding of the system’s internals. In contrast, prediction markets are supposed to work even if everybody tries to game them constantly.
The ‘truth serum’ property of the method is only proved for infinite populations. Intuitively it seems quite clear to me that for small populations, the method can be gamed easily. Do you know of any results on the robustness of the method regarding population size when there is incentive to mislead?
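To make the small-population worry concrete, here is a toy sketch of the scoring rule as I understand it from Prelec’s 2004 paper; the respondents, their predictions and the alpha weight below are all invented, and you can play with the toy numbers to see how much a single answer moves the empirical frequencies that everyone is scored against:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian Truth Serum scores, as I read Prelec (2004).

    answers:     (n,) ints, each respondent's endorsed option in 0..m-1
    predictions: (n, m) floats, each respondent's predicted population frequencies
    alpha:       weight on the prediction-accuracy term
    """
    n = len(answers)
    m = predictions.shape[1]
    eps = 1e-9  # avoid log(0) for empty categories or zero predictions

    x_bar = np.bincount(answers, minlength=m) / n        # empirical answer frequencies
    log_y_bar = np.log(predictions + eps).mean(axis=0)   # log of geometric-mean predictions

    scores = np.empty(n)
    for r in range(n):
        k = answers[r]
        info = np.log(x_bar[k] + eps) - log_y_bar[k]     # "surprisingly common" bonus
        pred = np.sum(x_bar * (np.log(predictions[r] + eps) - np.log(x_bar + eps)))  # prediction penalty
        scores[r] = info + alpha * pred
    return scores

# Toy yes/no question with five respondents (all numbers invented for illustration).
answers = np.array([0, 0, 0, 1, 1])
predictions = np.array([
    [0.7, 0.3],
    [0.6, 0.4],
    [0.8, 0.2],
    [0.4, 0.6],
    [0.5, 0.5],
])
print(bts_scores(answers, predictions).round(3))
```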
Top quote contributors by total karma score collected:
1223 RichardKennaway
854 gwern
735 Rain
715 MichaelGR
637 Eliezer_Yudkowsky
618 GabrielDuquette
584 Alejandro1
560 anonym
551 Konkvistador
529 Jayson_Virissimo
485 lukeprog
458 NancyLebovitz
415 RobinZ
408 Yvain
374 CronoDAS
350 James_Miller
342 Tesseract
310 Grognor
308 Alicorn
297 DSimon
283 Stabilizer
280 peter_hurford
270 Nominull
258 billswift
253 Oscar_Cunningham
244 arundelo
239 Eugine_Nier
235 Kutta
234 Thomas
224 [deleted]
191 roland
189 Kaj_Sotala
178 Maniakes
177 J_Taylor
160 katydee
160 cousin_it
155 Will_Newsome
151 djcb
150 Nic_Smith
148 MinibearRex