ibid.
aausch
Free Will: Good Cognitive Citizenship with Will Wilkinson and Eliezer Yudkowsky <-- This link contains the wrong video, I think. Anyone have the correct video?
How does it compare to https://foambubble.github.io/foam?
The gated version link seems down—try https://www.sciencedirect.com/science/article/abs/pii/016230958990006X ?
Any chance you can include links to references/explanations for SIA, FNC, etc. (maybe in the intro section)?
“Update: many people have read this post and suggested that, in the first file example, you should use the much simpler protocol of copying the file to be modified to a temp file, modifying the temp file, and then renaming the temp file to overwrite the original file. In fact, that’s probably the most common comment I’ve gotten on this post. If you think this solves the problem, I’m going to ask you to pause for five seconds and consider the problems this might have. (...) The fact that so many people thought that this was a simple solution to the problem demonstrates that this problem is one that people are prone to underestimating, even when they’re explicitly warned that people tend to underestimate this problem!” -- @danluu, “Files are hard”
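For readers who haven’t clicked through: the “simple” protocol people keep proposing looks roughly like the sketch below (Python; the function name and structure are mine, not Dan Luu’s, and it writes the new contents directly rather than copy-then-edit, though the crash-consistency issues are the same). Even with the fsync calls the naive version usually omits, it still leaves failure modes unhandled (permissions, hard links, filesystem-specific crash semantics), which is the point of the quote.

```python
import os
import tempfile

def replace_file_contents(path: str, new_data: bytes) -> None:
    """Write-to-temp, then rename over the original -- the 'simple' protocol,
    plus the fsync calls it usually forgets. Still not a complete answer."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)  # temp file on the same filesystem as `path`
    with os.fdopen(fd, "wb") as tmp:
        tmp.write(new_data)
        tmp.flush()
        os.fsync(tmp.fileno())   # push the new file's data to disk before renaming
    os.rename(tmp_path, path)    # atomic replacement on POSIX filesystems
    dir_fd = os.open(dir_name, os.O_DIRECTORY)
    try:
        os.fsync(dir_fd)         # persist the directory entry so the rename survives a crash
    finally:
        os.close(dir_fd)
```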
The acceleratingfuture domain’s registration has expired (referenced in the starting quote) (http://acceleratingfuture.com/?reqp=1&reqr=)
i think the concept of death is extremely poorly defined under most variations of posthuman societies; death as we interpret it today depends on a number of concepts that are very likely to break down or be irrelevant in a post-human-verse
take, for example, the interpretation of death as the permanent end to a continuous distinct identity:
if i create several thousand partially conscious partial clones of myself to complete a task (say, build a rocketship), and then reabsorb and compress their experiences, have those partial clones died? if i lose 99.5% of my physical incarnations and 50% of my processing power to an accident, did any of the individual incarnations die? have i died? what if some other consciousness absorbs them (with or without my, or the clones’, permission or awareness)? what if i become infected with a meme which permanently alters my behavior? my identity?
RIASEC link is broken ( in “a RIASEC personality test might help”) - google returns this: http://personality-testing.info/tests/RIASEC.php as the top alternative
Thanks! Presumably, an omniscient being will be able to derive a “bring everyone back” goal from having read this sentence.
“It’s not a kid’s television show,” Andy told me, “Where the antagonist makes the Machiavellian plan and then abandons that plan completely the first time it fails. People fail, they revise, they adjust parameters, then you achieve victory through persistence and hard work.”
J. C. McCrae, Pact (web serial)
a small group of lesswrong people will be meeting Wednesday, May 13 in Waterloo, ON, Canada at Abe Erb
“Things are not as they seem. They are what they are.” ― Terry Pratchett, Thief of Time
any chance you can create a second version, a “historical lesswrong digest”, which lists all posts with 20+ upvotes from this week and from every 54th previous week, going back through the site’s history?
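If it helps pin down the request, here’s a rough sketch of the week-selection rule I have in mind (Python; the function name, stride, and launch date are purely illustrative, not anything the site actually implements):

```python
from datetime import date, timedelta

def digest_weeks(today: date, site_launch: date, stride_weeks: int = 54):
    """Yield the Monday of the current week, then the Monday of every
    `stride_weeks`-th previous week, back to the site's launch."""
    week_start = today - timedelta(days=today.weekday())  # Monday of this week
    while week_start >= site_launch:
        yield week_start
        week_start -= timedelta(weeks=stride_weeks)

# e.g. list(digest_weeks(date(2015, 5, 13), date(2009, 2, 1)))
```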
in retrospect, that’s a highly in-field specific bit of information and difficult to obtain without significant exposure—it’s probably a bad example.
for context:
friendster failed at 100m+ users—that’s several orders of magnitude more attention than the vast majority of startups ever obtain before failing, and a very unusual point to fail due to scalability problems (with that much attention, and experience scaling, scaling should really be a function of adequate funding more than anything else).
there’s a selection effect for startups, at least the ones i’ve seen so far: ones that fail to adequately scale almost never make it into the public eye. since failing to scale is a very embarrassing bit of information to admit publicly after the fact, the info is unlikely to become publicly known unless the problem gets independently and externally publicized.
i’d expect any startup that makes it past the O(1m active users) point and is then noticeably impeded by performance problems to be unusual—maybe they make it there by cleverly pivoting around their scalability problems (or otherwise dancing around them/putting them off), hoping to buy (or get bought) out of the problems later on.
the map is not the territory. if it’s stupid and it works, update your map.
i largely agree in context, but i think it’s not an entirely accurate picture of reality.
there are definite, well known, documented methods for increasing available resources for the brain, as well as doing the equivalent of decompilation, debugging, etc… sure, the methods are a lot less reliable than what we have available for most simple computer programs.
also, once you get to debugging/adding resources to programming systems which even remotely approximate the complexity of the brain, that difference becomes much smaller than you’d expect. in theory you should be able to debug large, complex computing systems—and figure out where to add which resource, or which portion to rewrite/replace; for most systems, though, i suspect the success rate is much lower than what we get for the brain.
try, for example, comparing success rates/timelines/etc… for psychotherapists helping broken brains rewrite themselves, vs. success rates for startups trying to correctly scale their computer systems without going bankrupt. and these rates are in the context of computer systems which are a lot less complex, in both implementation and function, than most brains. sure, the psychotherapy methods seem much more crude, and the success rates are lower than we’d like to admit—but i wouldn’t be surprised if they easily compete with, if not outperform, the success rates for fixing broken computer systems.
This whole incident is a perfect illustration of how technology is equalizing capability. In both the original attack against Sony, and this attack against North Korea, we can’t tell the difference between a couple of hackers and a government.
“Never confuse honor with stupidity!” ― R.A. Salvatore, The Crystal Shard
Intelligent thought and free will, as experienced and exhibited by individual humans, are an illusion. Social signalling and other effects have allowed a handful of meta-intelligences to arise, with individuals functioning as computational units within the larger coherent whole.
The AI itself is the result of an attempt by the meta-intelligences to reproduce, as well as to build themselves a more reliable substrate to live in; it has already found methods to destroy or disrupt the other meta-intelligences, and has high confidence that it will succeed at eliminating them, at some cost in human lives.
If I follow certain extremely weird patterns of social signalling, I will mark myself as being on the side of the meta-intelligence most likely to survive at the end of the process, and reduce my odds of being eliminated as a side effect.