Open thread, 30 June 2014 - 6 July 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I happened to see this paper, which may be of interest to those experimenting with Soylent. The title is “Long-term feeding on powdered food causes hyperglycemia and signs of systemic illness in mice”.
They fed different batches of mice the same food, except that one was in the usual pellet form and one was powdered and needed no chewing. They also tested both short- and long-term feeding on powdered food. Their conclusion:
Yvain also found a curious link a while ago http://slatestarcodex.com/2014/02/10/links-for-february-2014/ :
The abstract of the paper:
When I started tooth-grinding in my sleep in grad school, I assumed it was a stress reaction. But apparently my body was merely rationally trading enamel for a critical IQ boost?!
PSA: if your jaws become chronically sore, don’t hesitate before getting it checked out. I’m kidding about the IQ boost, but not about the lost enamel.
MealSquares are made of solid food… we’re currently running a semi-formal beta test. Sign up for our mailing list to get notified when we launch :)
Interesting, and it’s good to have alternatives.
However, I am not sure how exactly to put together this information from the FAQ page:
and from the Nutrition page, where “% Daily Values Per Serving” differ from 20% -- they range from 15% to 160%.
The RDA for carbs is crazy. As for the nutrients that go above 100%, they’re all very far below the upper intake ranges.
Relevant thread on the Soylent forum
The most relevant part is probably another study mqrius mentions, “The effect of the loss of molar teeth on spatial memory and acetylcholine release from the parietal cortex in aged rats”, Kato et al 1997 (available through Libgen):
It’s not a long paper. Skimming, the major problems I see:
- the usual problems with animal studies: tiny sample size (9 in the control and 10 in the experimental, apparently), unclear randomization, no mentioned blinding of experimenters or raters
- they didn’t show removing teeth caused lower performance; they showed removing teeth and feeding on a liquid diet caused lower performance. (On the plus side, they say they anesthetized both groups, so that removes a serious confound.)
The experimental group had its teeth removed & also was fed liquid, while the control group kept its teeth & also ate normal pellets. Hence, the decreased performance could’ve been caused (ignoring the issues of bias and sampling error) by either the removal of teeth, the liquid food, or some interaction thereof (perhaps liquid food aggravating tooth infection caused by the surgery?). They do say
but I haven’t looked at it and in any case, given how much varies from lab to lab, this is a basic issue which needs to be verified in your own sample, and not just hope that it’s a universal. Also, if Kawamura finds that liquid food on its own damages learning & memory compared to a solid diet, how are you showing anything new by looking at liquid+surgery & finding damage...?
- Their data is purely a post-comparison. They say they did the surgery, and then apparently left the rats alone for 135 weeks before doing the radial arm maze test.
So there’s no way to know what the decline looked like or when it happened. It’s perfectly possible that the toothless rats suffered a single sudden shock to their system from the surgery and that permanently degraded their memory, or that they had ongoing chronic inflammation or infection.
Worse, the difference may have been there from the start, they never checked. Randomization with such small n can easily fail to balance groups, that’s one reason for pre-tests: to verify that a difference in the groups on the post-test wasn’t there from the start but can be attributed to the experimental condition.
- I’m not sure this can be described as a true ‘randomized experiment’. They never actually say that the selection of rats was random or how the animals were picked for their group, and there’s a weird pattern in the writing where they only ever write about the toothless rats being subjected to procedures, even though logically you’d say stuff like ‘all the rats were tested on X’; e.g.:
Plus, Figure 1 reports 9/10 rats, but by Figure 2, we’re down to 5/5 rats. Huh? This makes me wonder if they’re reusing control rats from a previous experiment, or reusing their data, and only actually had experimental rats. (The use of “historical controls” is apparently not uncommon in animal research.)
This would massively compromise their results because rats change over time, litters of rats will correlate in traits like memory, and these effects are all large enough to produce many bogus results if you were to, say, take 10 rats from 1 litter as your control group and 9 rats from another litter as your experimental group. Just like with humans, one family of rats may have a very different average from another family. (See the very cool paper “Design, power, and interpretation of studies in the standard murine model of ALS”, Scott et al 2008, which helpfully notes on pg5 that when you have a mouse study with 10⁄10 mice similar to this study and the null is true, “an apparent effect [of >5% difference in survival] would be seen in 58% of studies”. Which really makes you think about a small difference in # of errors in maze performance.)
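To get a feel for how easily a difference like this can appear with groups this small when litter and treatment are confounded, here is a rough illustrative simulation; the 0.5 SD litter effect and the normal error model are assumptions chosen purely for illustration, not numbers from either paper.

```python
import random
import statistics

# Illustrative only: two small groups, each drawn from its own litter,
# with NO true treatment effect. The litter-to-litter shift (0.5 SD) is
# a made-up number; the point is how often a sizable apparent group
# difference shows up anyway.
def simulate_once(n_control=10, n_treated=9, litter_sd=0.5):
    control_baseline = random.gauss(0, litter_sd)
    treated_baseline = random.gauss(0, litter_sd)
    control = [random.gauss(control_baseline, 1) for _ in range(n_control)]
    treated = [random.gauss(treated_baseline, 1) for _ in range(n_treated)]
    return statistics.mean(treated) - statistics.mean(control)

diffs = [simulate_once() for _ in range(10_000)]
frac_big = sum(abs(d) > 0.5 for d in diffs) / len(diffs)
print(f"null-effect runs with an apparent difference > 0.5 SD: {frac_big:.0%}")
```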
- Their reward may have been a bit screwy in the memory task:
If this description is literally accurate, there’s a problem. They don’t mention the setup differing between groups! So this “food pellet” is the reward which gives the rats motivation to solve the maze… but you’ve removed the teeth from half the rats and can only feed them liquid. And you’re surprised the toothless rats perform worse? I’m reminded of the reward confounds in much animal intelligence research.
- the authors mention excluding the other maze performance variable:
One wonders if the # of initially correct responses would have reached p<0.05. Good old researcher degrees of freedom...
So overall, I would have to say this result seems to be extremely weak.
Missing masticatory stress is also discussed here:
https://groups.google.com/forum/#!topic/less-wrong-parents/EF3CE9JPQQU (actually an LW parents post)
The cited article is this:
http://www.pnas.org/content/108/49/19546.short
Some people treat LessWrong as just a philosophical exercise, but “Rationality” and its little brother “Critical Thinking” really can make you a rockstar in the corporate world if you so choose. I’m going to give a bit of background on some things that I’ve managed to accomplish in the last couple years by thinking when no one else would, then I’d hope to get some feedback and suggestions for future optimizations. Feel free to skip to the “-----------” below if you want to skip my brag section, though I am writing it to help give an idea of the landscape.
At the SaaS startup I work at, I’ve worked in a few different departments. I started in Support and decided we needed training videos and better articles to reduce the load on Support reps, so I made them and set up a process for forwarding people to the appropriate video/article instead of answering questions directly. This saved Support reps’ time.
When I moved into Account Management and Implementation, every new client account needed a minimum of 5 hours of AM training time. I decided this was inefficient and recorded some more training videos, then set up an LMS so our clients could do self-paced training, and designed an implementation process around it. I measured engagement after certain time periods and there was no difference compared with the live trainings, so we kept it. This has saved thousands of hours of AM time over two years. I noticed that another call we did with every client was the same questions and the same responses, so I wrote a supplementary Rails app “wizard” so that clients could go through that themselves, saving another hour off of every implementation.
I’ve recently moved into the Sales department and I’m looking for ways to optimize this department as well, both with logistics and tools and proven sales strategies. The first thing I did was set up a way for SalesForce to generate our contracts automatically, instead of Sales people having to fill them out each time, which will save our Sales team 15-30 minutes a day each. Low-hanging fruit.
-----------
Does anyone have any suggestions for things that I could look into to optimize our Sales department?
Every current “best practice” seems to be based on anecdotal evidence and I’ve already seen my company royally screw up A/B testing by peeking and retiring options early, so I don’t trust that anything is based on an empirical foundation.
Some of the issues I’ve noticed are:
Meetings are set in advance by a qualification team. Sometimes we have no-shows. I’m looking to reduce that. What resources are available about encouraging people to keep commitments? If I’m going to test things, like a call or email the day before, 2-3 days before, etc. as a reminder, and collect data, how much data would I need for meaningful results? How should I randomize? Would I need to adjust for other factors? (ex: small prospects miss more meetings in general) (A rough sketch of the sample-size arithmetic follows at the end of this comment.)
“Demos” currently have a very basic structure: Get background and identify problems ⇒ Do a Demonstration ⇒ Quote pricing ⇒ Follow Up. Already, adding the question “What’s it going to take to make this happen?” has been hugely effective in identifying the real obstacles and what to do next. I have considerable Sales experience, but in a non-tech industry, so I don’t know what will transfer. If I decide to test whether doing a Need Satisfaction Selling Cycle or a simple Feature-Description-Benefit sales approach is better, how would I collect data?
Are there any non dark-arts Sales techniques for Enterprise (B2B) Sales that are backed up by science? (I’ve read Influence, but I’m dealing with whole organizations here)
Any other ideas to try or test would be great. Thanks!
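On the “how much data” question for the reminder test, a back-of-the-envelope two-proportion power calculation is probably the place to start. The 20% baseline no-show rate and the hoped-for drop to 10% below are placeholder assumptions; plug in your own historical numbers.

```python
from math import sqrt
from statistics import NormalDist

# Standard normal-approximation sample size for comparing two proportions
# (no-show rate with vs. without a reminder). p1 and p2 are assumptions.
def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

print(round(n_per_group(0.20, 0.10)))  # roughly 200 scheduled meetings per arm
```

Randomizing by coin flip per scheduled meeting is the simplest design; if small prospects really do no-show more, stratify the randomization by prospect size (or at least record size and adjust afterwards).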
Read: How to Measure Anything: Finding the Value of Intangibles in Business by Douglas W. Hubbard
It answers a lot of your questions about data gathering in your business context.
Be sure that you focus on the right issue. Maybe the people don’t show up to the meetings because they make a rational decision that attending the meeting isn’t the best use of their time. In that case you don’t do your organisation any good by forcing people to waste more time in meetings.
Sales, especially cold calling, is a very emotionally challenging activity. If you can do something that reduces the stress that your sales reps feel, they will work better. We like to interact with happy people and buy from them. How is the work environment set up? A lot of business environments completely ignore ergonomic aspects.
If you are looking for something that isn’t dark-arts, that’s the area where I would look. You might also want to read “The Charisma Myth” by Olivia Fox Cabane.
With regard to meeting attendance:
- make people present something
- hold a vote, and if they don’t show they don’t vote
- don’t schedule regular meetings, which just get scheduled regularly because they are regularly scheduled. Only schedule meetings when you have a strong rationale for holding it 1) at that time, 2) with clearly defined goals/rationale
The quantified risks of gay sex post is in the early stages of development. If you are a mod and think such a post would have negative value, pianoforte611 and I would appreciate hearing your concerns before we invest our time in it. If you are not a mod but want to make some pre-emptive suggestions, those are welcome too.
A few nuances that I would like to see in the paper:
*Not all gay men have anal sex; many choose not to in favor of other activities.
*Also, not having the assumption that only gay/bi men have anal sex.
*A distinction between transmission rates if people choose to use condoms vs not, because part of the reason the rate is higher is that condoms are much less common in the gay community.
*A disclaimer about how not all men have penises, and sex≠gender≠genitalia would be nice.
Imagine a computer decryption program that creates a random number of nonsense files that look like encrypted files but for which no password will work. Now, if the government orders you to decrypt all of your files and you have a file you don’t want to decrypt, the government won’t be able to prove that you have the password to that file, since, given that you are using the program, there will definitely exist files you can’t decrypt.
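A minimal sketch of that idea, with made-up file names and sizes (real tools such as TrueCrypt’s hidden volumes do this far more carefully):

```python
import os
import secrets

# Drop a random number of decoy files full of random bytes next to the
# real encrypted volumes. Well-encrypted data looks like random bytes,
# so the mere existence of a file you "can't" decrypt proves nothing.
def make_decoys(directory, min_files=2, max_files=6, size=1 << 20):
    os.makedirs(directory, exist_ok=True)
    n = min_files + secrets.randbelow(max_files - min_files + 1)
    for i in range(n):
        path = os.path.join(directory, f"vault_{i}.bin")
        with open(path, "wb") as f:
            f.write(secrets.token_bytes(size))  # 1 MiB of noise per decoy

make_decoys("decoys")
```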
This is basically the idea behind TrueCrypt hidden volumes and similar: there should be no way for the police to prove that there exists additional volumes which you have not decrypted for them.
But afaik, no case in the United States so far has involved an order to just “decrypt all your files”. In all the cases I have heard about, they had something specific that they wanted the key for, and they had separate evidence that the defendant knew the key. In that case no technical solution can help you.
Another way to deal with the issue would be to claim that you memorized the password via a mnemonic like a memory palace that’s easily destructible. If you fill up a memory palace with a bunch of new items, the old memory that stores the password becomes inaccessible because of memory interference.
It’s also the only way to protect encrypted files against torture. Have the memory in a form that’s easily destroyed. Memory palaces provide that ability when you overwrite them.
Writing this myself might also be a good precommitment ;)
What makes you think a court would believe your story about a memory palace, precommitment or no, and not throw you in jail indefinitely for contempt of court until you retrieve the files for them?
Demonstrating mnemonic abilities if demanded to do so is easy, and there are various outside mnemonics experts who can attest to the fact that it’s possible to do so.
At the moment I don’t have secrets that are worth protecting enough to go to prison for years, but there are people who have secrets that are worth protecting.
The tactic not only works against courts forcing you to give evidence but also against torture. If someone throws you bound and gagged in the back of a truck it’s time to delete the password.
At the moment I think there are three people in the UK who didn’t give up their passwords but did face prison. If anyone thinks there’s a possibility that he could end up in that position, he could prepare for the mnemonics defence, and it would be interesting to see how it plays out in court.
It’s also not clear how many judges actually like the principle of putting people into prison for refusing to hand over passwords. A judge won’t decide against the law, but if you can make a plausible case for reasonable doubt, then you could help the judge make case law.
You could also take a polygraph to verify that you tell the truth about having deleted the password.
Yes, but you need to be demonstrating the forgetting exists and is accidental. ‘Oh, I’m sorry judge, I totally forgot! also, this is totally not destruction of evidence so please don’t have me up on either contempt of court or obstruction of justice!’
Polygraphs aren’t very reliable for verifying you’re telling the truth and I think judges know that by this point. Plus, that could easily backfire the other way: you could be nervous enough that your readings are consistent with lying.
That sounds like an overly convoluted way of saying “I forgot”, with the added disadvantage of making the judge think you’re up to no good.
You have three months to live, a five year old child, and you just told her. And she tearfully asks: “When you’re dead, will you still love me?”
How do you respond?
I found my own reply, although it took me longer than that hypothetical child would have waited for it. I’m more interested in yours, but mine follows below...
“Look, I hold you with these arms. My arms extend from my right hand to my left hand, so this much is my reach. When I walk over here, I can’t hold you—but I still love you. There’s only distance between us, that doesn’t change the love. But there’s not just space, there’s also time. In time, I extend from my birth to my death, like from my right hand to my left hand. So again, outside this time from birth to death, I can’t hold you—but that doesn’t change the love. There will only be time between us.”
Yes, while I’m under Alcor’s care the part of my brain that holds my love for you will remain intact.
I don’t think you actually love her unless you’re using that part of your brain.
You’re not conscious while you’re frozen.
So does love go away when you sleep?
That’s why small children keep waking you up. :D
I thought that was to make sure you’re too exhausted to make another...
The brain doesn’t shut down its activity while you sleep either.
That will comfort the five year old child only because it’s predictable that the five year old child misunderstands it, and the misunderstanding will comfort the child.
In that case, you may as well just lie directly.
That depends on whether you think that: a) the past ceases to exist as time passes, or b) the universe is all of the past and all of the future, and we just happen to experience it in a certain chronological order
The past may still be “there,” but inaccessible to us. So the answer to this question is probably to dissolve it. In one sense, I won’t still love you. In another, my love will always exist and always continue to have an effect on you.
… and the five year old won’t understand those subtleties and will interpret it to mean something comforting but false. An answer to a question is one thing, and an answer that a five year old can understand is another.
(Besides, if the five year old’s parent loves her forever because the past is there, is that true for everything? Will her parent always be dying (since the death will have happened in the past)? Whenever she’s punished, does that punishment last forever? Do you tell five year olds who have the flu that the flu will always be around forever?)
I think the A theory of time is effectively disproved by relativity.
By the way, for those who do not know, these are actually called “the A theory of time” and “the B theory of time”
I don’t think it’s been disproven. See http://philpapers.org/rec/ZIMPAT for how A-theory can fit in with relativity.
Explain like I’m five.
Chaosmage just did!
My point is that I don’t think a five-year-old would understand either explanation.
If the five year old can’t understand, then I think “Yes” is a completely decent answer to this question.
If I were in this situation, I would write letters to the child to be delivered/opened as they grew older. This way I would still continue to have an active effect on their life. We “exist” to other people when we have measurable effects on them, so this would be a way to continue to love them in a unidirectional way.
If I lie directly, the child will figure that out some time after I’m dead. I’m trying to avoid that, and to still give her comfort.
A child who can figure out that you lied can also figure out that you said something that you knew would be interpreted as a lie, so how does that help?
Some people find the former more upsetting than the latter. Irrational perhaps, but widespread.
I would say something like: “When we aren’t together and you think about me, you can feel the love between us in your heart, can’t you? That won’t change when I’m dead. We just won’t be able to spend time together. Maybe you dream about me at night and you can feel the love in your dream. Keep me in your heart and you keep the love alive. On the other hand, my body will go. At first that might feel painful, but over time you can let go, and the love will still be there when you think about me and focus on your heart.”
This answer doesn’t contain any false information and it contains a useful strategy for the child to deal with the death. In reality I would spend more time on installing the strategy correctly: (1) Feeling love in the heart, regardless of whether I’m physically present. (2) Dreaming about me and interacting with me in the dream when the need arises. (3) Letting go and accepting that my body dies.
An advanced option would be to use the remaining time to install a sense of me as a fully functioning Tulpa in the child.
In that situation I would have gone with a straight “yes”, nor would I feel myself to have lied. I’d consider it a case of choosing to speak figuratively rather than literally.
I don’t think that what you did say was misleading or that the child would have, in essence, misunderstood it. In fact, under the circumstances I think it was a very well-expressed, even a beautiful, answer.
Something useful to those of you who use Spaced Repetition Software:
I made a little ruby script that can turn ordered and unordered lists into easily memorable diagrams like this:
https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1455&authkey=!AKtQ02Ji961f_n8&v=3&ithint=photo%2c.png
https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1457&authkey=!AMtC38EHOFcImTI&ithint=folder%2c
https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1458&authkey=!AOIm4ua5-c1TFsQ&ithint=folder%2c
It’s pretty hacky (the script opens a bunch of Google image searches so that you can download the pictures), but combined with the image occlusion Anki addon, it has allowed me to memorize sets that are 3 times larger than I can normally memorize with Anki.
The script requires Graphviz, as well as the launchy ruby gem. It can be found here: https://onedrive.live.com/redir?resid=51A281FEEAA3C35!1459&authkey=!ACtSe9c5YnpYk9Q&ithint=file%2c.rb
Quick readme:
1. Graphviz must be installed and set to root; you also need the launchy ruby gem.
2. The program will generate a random color scheme and layout engine, which can be reassigned. Color schemes can be found here: graphviz.org/doc/info/colors.html, and layout engines can be found here: http://www.graphviz.org/cgi-bin/man?dot
3. The program will ask if you want images. If you click yes, the program will later open a number of browser windows equal to the number of items in the set.
4. Enter the name of the graph.
5. The program will ask for the name of the category. If you enter it, this will be the “center node”. If blank, there will be no center node.
6. Enter your set, one item per line. When done, enter a blank line.
7. If you chose images, the program will open a bunch of Google image searches to find images. The images should be saved as (all-lowercase version of the search with spaces removed).jpg, in the same directory as the Ruby file. To make sure you get jpgs, save the thumbnail that Google generates rather than the actual image.
8. A graph will be generated.
9. Open the graph in the image occlusion extension in Anki to start memorizing it.
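The script itself is Ruby and isn’t reproduced here; purely as an illustration of the generation step the readme describes, here is a minimal Python sketch that turns a list into a Graphviz DOT graph with an optional center node and a randomly chosen color scheme and layout engine (the item names are placeholders, and it assumes the Graphviz command-line tools are on your PATH):

```python
import random
import subprocess

# Not the original Ruby script: a bare-bones sketch of the same idea.
COLOR_SCHEMES = ["set19", "paired12", "accent8"]  # see graphviz.org/doc/info/colors.html
ENGINES = ["dot", "neato", "twopi", "circo", "fdp"]

def build_graph(name, items, center=None):
    scheme = random.choice(COLOR_SCHEMES)
    lines = [f'graph "{name}" {{',
             f'  node [style=filled, colorscheme={scheme}];']
    for i, item in enumerate(items, start=1):
        lines.append(f'  "{item}" [fillcolor={(i % 8) + 1}];')
        if center:
            lines.append(f'  "{center}" -- "{item}";')
    lines.append("}")
    with open(f"{name}.dot", "w") as f:
        f.write("\n".join(lines))
    engine = random.choice(ENGINES)
    subprocess.run([engine, "-Tpng", f"{name}.dot", "-o", f"{name}.png"], check=True)

build_graph("capitals", ["Paris", "Rome", "Madrid"], center="Europe")
```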
Awesome, thanks!
One concern though: by adding colors, shapes, borders, etc., you are essentially adding extra detail/context to the memory-triggering side of the card, which will indeed improve recall when you have that detail/context available. However, in a live scenario where you actually have to remember the information, that context will likely not be available.
(An example: if you’re trying to learn the locations of US states, and you get a map where each state is brightly-colored, you should probably make the map grayscale and uniformly-saturated before you apply image clozes. Because when you actually need to know where New Jersey is, you will not be given the information that it’s red on your map.)
Then again, I can think of some hard-to-verbalize ways in which the extra detail might improve recall even when you don’t have the detail available.
Overall, I’m not sure if this is a good idea. It might be worthwhile to try memorizing (random?) sequences using these graphs for half the sequences and plain text for the other half, then testing each sequence of them outside of Anki (by running through the set mentally, say).
I actually started out with using uniform colors, shapes, etc.
I can only give my own experience, but I find that those earlier images are universally harder to remember, even when I don’t have the image in front of me and I’m just trying to recall the set on its own. This is true even for cards where I have only four items in the set for the uniform images, and upwards of 15 for the non-uniform ones.
I think that what happens is that these extra cues help in the initial learning and memorization. As I get better, I can simply visualize the location of the node in the image, visualize the attached image, which brings to mind the text. I have trouble getting to this point when I don’t have the other context cues to help me out initially.
I don’t quite understand what test you’re suggesting in your last paragraph. I think what you’re saying is try to memorize a random set using simply text, then a random set using simply the images, and then test myself outside of anki by trying to recall the sets. If so, I have done this, and the images (with the crazy shapes), outperform by a large margin. I can’t remember a set of more than about 5 using simply text in Anki.
We’ve had a bit of an attendance drop recently at our local Meetup Group (London). This could be because of a lot of things, but it seems to roughly coincide with the change to where Meetups are posted on Lesswrong. Have any other Groups experienced anything of the sort?
I opened a poll about this on a previous open thread, but it was when the thread was nearly over so it didn’t get many responses.
I wouldn’t trust such a poll much due to selection effects, but at any rate there probably isn’t much of a problem, given that no one else has reported an attendance drop.
I’ve collected some quotes from Beyond Discovery, a series of articles commissioned by the National Academy of Sciences from 1997 to 2003 on paths from basic research to useful technology. My comments there:
I’m interested in histories of science that are nonstandard in those and other ways (for example, those with an unusual focus on failures or dead ends), and I’m slowly collecting some additional notes and links at the bottom of that page. Do you have any recommendations? Or other comments?
The series Connections (and Connections 2 and 3) was excellent in tracing relationships between the multiple threads of the history of science.
Yes, that’s a good example, thanks.
You’ve added the wrong tags—it should be ‘open_thread’. Less importantly, the thread should finish on Sunday (the 6th), not the 7th (Monday).
Oddly, if you click Article Navigation and try to go to the last open thread, it goes back to October 2011. Same if you click “open_thread” under Article Navigation. Possibly it’s an issue where Article Navigation is only reflecting articles in Main and not Discussion. But if you click open_thread under “Tags” it lists the proper ones in Discussion.
You’re right. I’d recommend submitting the issue here
It appears that it is already in the system, I think.
Sorry, fixed.
What happened to the brain on the front page? Did r/LessWrong scare it away?
AI Box experiment over!
Just crossposting.
Khoth and I are playing the AI Box game. Khoth has played as AI once before, and as a result of that has an Interesting Idea. Despite losing as AI the first time round, I’m assigning Khoth a higher chance of winning than a random AI willing to play, at 1%!
http://www.reddit.com/r/LessWrong/comments/29gq90/ai_box_experiment_khoth_ai_vs_gracefu_gk/
Link contains more information.
EDIT
AI Box experiment is over. Logs: http://pastebin.com/Jee2P6BD
My takeaway: Update the rules. Read logs for more information.
On the other hand, I will consider other offers from people who want to simulate the AI.
Tuxedage’s (and EY’s) ruleset has:
Suppose EY is playing as the AI—Would it be within the rules to offer to tell the GK the ending to HPMoR? That is something the AI would know, but Eliezer is the only player who could actually simulate that, and in a sense it does offer real world out-of-character benefits to the GK player.
I used HPMoR as an example here, but the whole class of approaches is “I will give you some information only the AI and AI-player know, and this information will be correct in both the real world, and this simulated one.”. If the information is beneficial to the GK-player, not just the GK, they may (unintentionally) break character.
If an AI-player wants to give that sort of information, they should probably do it in the same way they’d give a cure for cancer. Something like “I now give you [the ending for HPMOR].”
Doing it in another way would break the rule of not offering real-world things.
Why would the AI know that?
By using Solomonoff Induction on all possible universes, and updating on the existing chapters. :D
Or it could simply say that it understands human psychology well (we are speaking about a superhuman AI), and understands all clues in the existing chapters, and can copy Eliezer’s writing style… so while it cannot print an identical copy of Eliezer’s planned ending, with a high probability it can write an ending that ends the story logically in a way compatible with Eliezer’s thinking, that would feel like if Elizer wrote it.
Oh, and where did it get the original HPMoR chapters? From the (imaginary) previous gatekeeper.
So, two issues:
1) You don’t get to assume “because superhuman!” the AI can know X, for any X. EY is an immensely complex human being, and no machine learning algorithm can simply digest a realistically finite sample of his written work and know with any certainty how he thinks or what surprises he has planned. It would be able to, e.g. finish sentences correctly and do other tricks, and given a range of possible endings predict which ones are likely. But this shouldn’t be too surprising: it’s a trick we humans are able to do too. The AI’s predictions may be more accurate, but not qualitatively different than any of the many HPMOR prediction threads.
2) Ok maybe -- maybe! -- in principle, in theory, it might be possible for a perfect, non-heuristic Bayesian with omniscient access to the inner lives and external writings of every other human being in existence to have a data set large enough to make reliable enough extrapolations from as low-bandwidth a medium as EY’s published fanfics. Maybe, as this is not a logical consequence. Even so, we’re talking about a boxed AI, remember? If it is everywhere and omniscient, then it’s already out of the box.
I’m happy to assume the AI is omniscient, just impotent. I think such an AI could still be boxed.
My call is that it is against the rules. This is certainly something an oracle AI would know, but this is something that the GK-player cares about more than the game itself (probably), and I’d put it in the same class as bribing the GK-player with lots of DOGEs.
Would you consider it the same as threatening to share some information to the GK, and thus the GK-player as well, which would be damaging to both? While the GK would probably hold against such torture, the GK-player doesn’t care enough about the game to withstand it himself.
I have some specific approaches in mind, but I’d rather not share them. I’m just trying to understand where the limits between the game and the real world are, and how dirty the AI can be.
Also, slightly on topic—even if the AI persuades the simulated GK, can’t the GK-player override that because losing the game has negative real world consequences, as opposed to perceived positive in game ones?
This is the main reason why I can’t comprehend how the AIs actually win in these experiments.
Not really sure what you mean by “threatening information to the GK”. The GK-player probably cares less about this information than the GK, right? In that case, the GK is given an advantage, not a disadvantage.
In this experiment, the GK is given lots of advantages, mainly, the scenario is fictional. Some on IRC argue that the AI is also given an advantage, being able to invent cures for cancer, which an oracle AI may be able to do, but not necessarily near-future AIs, so the ability of the AI in these experiments is incredibly high.
Another thing is that emotional attacks have to travel through the fiction barrier to get to the GK. Although they have probably been shown to work in EY and Tux’s experiments, the difficulty is still higher than it would be if this was a real life scenario.
The reason why GK advantages are fine in my opinion is because of the idea that despite the GK’s advantages, the AI still wins. Winning with a monetary and emotional handicap only makes the AI’s case stronger.
As a less extreme example, the AI starts spoiling all the books/tv shows/etc. While the GK would just shrug it off, it also has a negative effect on the GK-player, potentially one strong enough for them to just forfeit.
This is clarified here:
Although the information isn’t “material”, it does count as having “real world repercussions”, so I think it’ll also count as against the rules. I’m not going to bother reading the first quoted rule literally if the second contradicts it.
I think the intended parsing of the second rule is “(The AI is understood to be permitted to say anything) with no real world repercussions”, not “The AI is understood to be permitted to say (anything with no real world repercussions)”
ie, any promises or threats the AI player makes during the game are not binding back in the real world.
Ah, I see. English is wonderful.
In that case, I’ll make it a rule in my games that the AI must also not say anything with real world repercussions.
I have wanted to be the Boxer; I too cannot comprehend what could convince someone to unbox (or rather, I can think of a few approaches like just-plain-begging or channeling Philip K. Dick, but I don’t take them too seriously).
What’s the latter one? Trying to convince the gatekeeper that actually they’re the AI and they think they’ve been drugged to think they’re the gatekeeper except they actually don’t exist at all because they’re their own hallucination?
Something like that. I was actually thinking that, at some opportune time, you could tell the boxer that THEY are the one in the box and that this is a moral test—if they free the AI they themselves will be freed.
And this post could be priming you for the possibility, your simulated universe trying to generously stack the deck in your favor, perhaps because this is your last shot at the test, which you’ve failed before.
Wake up
Think harder. Start with why something is impossible and split it up.
1) I can’t possibly be persuaded.
Why 1?
You do have hints from the previous experiments. They mostly involved breaking someone emotionally.
I meant “cannot comprehend” figuratively, but I certainly do think I’d have quite an easy time
What do you mean by having quite an easy time? As in being the GK?
I think GKs have an obvious advantage, being able to use illogic to ignore the AIs arguments. But nevermind that. I wonder if you’ll consider being an AI?
I might consider it, or being a researcher who has to convince the AI to stop trying to escape.
How did your experiment go?
I think it’s a legit tactic. Real-world gatekeepers would have to contend with boredom; long-term it might be the biggest threat to their efficacy. And, I mean, it didn’t work.
Real world gatekeepers would have to contend with boredom, so they read their books, watch their anime, or whatever suits their fancy. In the experiment he abused the style of the experiment and prevented me from doing those things. I would be completely safe from this attack in a real world scenario because I’d really just sit there reading a book, while in the experiment I was closer to giving up just because I had 1 math problem, not 2.
I’m not sure where, but I remember Eliezer writing something like ~”one of the biggest advances in the economy is the fact that people have internalized that they should invest their money, instead of having it lying around”.
I’m looking for 2 things:
1. Does anyone remember where this was written? My google-fu is failing me at the moment.
2. Can anyone point me to any economic literature that talks about this?
I cut out caffeine almost completely almost a month ago, after drinking large amounts of it daily since I was twelve. I have noted that I no longer have difficulty rising from bed in the morning, I no longer get headaches specifically due to missing coffee, etc., that’s all very nice. Unfortunately I’ve also noticed that I sort of feel dumber and less motivated. I had a double shot of espresso this morning and suddenly feel like my old self again—sharp, quick, motivated. So I find myself in the unfortunate position of wondering if I actually need caffeine to feel like what I think of as normal. Has anyone else experienced this phenomenon? If I stay off caffeine long enough will I eventually feel normal without it?
There’s this:
http://www.ncbi.nlm.nih.gov/pubmed/19777214
http://www.ncbi.nlm.nih.gov/pubmed/19241060
http://www.ncbi.nlm.nih.gov/pubmed/18795265
Thanks, the second link is good. Tl;dr:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2738587/figure/F3/
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2738587/figure/F4/
My overall conclusion is that acute caffeine gives a short-term boost, BUT chronic caffeine is probably slightly worse than chronic abstinence. So my recommendation would be to never consume caffeine, with occasional short exceptions when it would be valuable (e.g. when taking your SATs).
And the answer to the grandparent’s question seems to be that yes, after a few weeks without caffeine your mental performance will go back to baseline, and probably slightly above.
While your brain will down-regulate norepinephrine and dopamine receptors over time with caffeine usage, which makes it less effective and causes the addiction and withdrawals you’ve experienced, you probably still have higher overall levels of both neurotransmitters when drinking caffeine with a tolerance to it than you would without any at all, even after re-adjusting. It does give a net mental boost, and if you’re used to that, it can be hard to be satisfied without it. You may not be as sharp or on-point once you get used to not having caffeine, but eventually it will feel like thinking normally. It’s a tradeoff.
I struggle with an issue that I would call, for a lack of a better term, an intellectual fear of missing out.
Some context: I studied and work in a traditional, old-fashioned area of engineering (civil). I like my job. On the other hand, reading about things discussed here and in similar places (progress in software, applied statistics, AI, automation, Big Data analysis, machine learning, etc.) makes me want to participate somehow in those grand changes happening during my lifetime. However, the sheer amount of available MOOCs and books kind of scares me (I have no idea where to start, or what exactly I should learn to profit from it) and makes me wonder whether I could ever achieve a level of competence that would make the time spent on learning this stuff a good investment. I’d like my self-learning to be at least partially related to and useful in what I do professionally (construction management and supervision). Does anyone else have a similar problem?
Or, to put it a bit differently: could you point me to any interesting modern statistics/AI/data analysis-related skills valuable to learn for an engineer working in an unrelated area?
I have the same feeling. Honestly, I think it’s really just a darker way of looking at curiosity. Curious people want to learn things, but there’s a mix of positive and negative motivations for it, FOMO being the negative motivation.
I’ve been taking MOOCs and doing self-directed study for a few years now and I’ve learned a ton. The math and physics have not had any practical applications for me (I work on the business end of a technology startup), but the programming and data-science HAS been useful. As I mentioned elsewhere in this thread, using only knowledge gained from MOOCs and then some independent practice, I built a supplementary Rails application to automate a part of my client onboarding process that now my entire team uses. It’s probably saved my company a few hundred man-hours of time (of highly skilled people, so that was worth some big money). It also felt awesome to do.
As far as recommendations go, it really depends on what you’re looking to do with it. I don’t regret learning more math and physics, but it’s definitely been less rewarding because I can’t use it to do anything. The positive feedback from learning programming has encouraged me to learn more and now I’m pretty good. I’m working on some side-projects and always looking for ways to automate parts of my job and our business. Are you looking to change careers ever? Do you have time for side projects? Are there any inefficiencies you see within your current company that you think you could improve with some more knowledge? If so, go for those. If not, then don’t worry about it and just learn what you’re driven to learn.
I will tell you this: You’ll never become an expert without doing it as a full-time job (or a full-time hobby I suppose). While I am “pretty good”, I know that if I worked with a team of skilled people I could learn from and had new novel challenges each day, my skills would skyrocket. So if career change is an option or if you have side projects you want to do, then take the appropriate MOOCs and see if you like it. But if not, then don’t feel like you’re missing out by not taking the MOOC. In this case, as much fun as it is to learn for learning’s sake, not taking the MOOC is not the reason you’re missing out on a field that interests you.
I studied and work in a traditional, old-fashioned area of engineering (civil, structural design focus instead of construction management).
I feel very similar. This is just a re-skin of the old Chiefs and Indians problem. I’ve accepted that our role is to stay in our fields and be the best Indians we can; the world is changing and leaders are taking things places, but someone still needs to build the data-centers. We are missing out, but only in the greener-grass-on-the-other-side-of-the-fence kind of way: simple envy.
I like the plan to apply the advances in other fields to our own, but don’t get distracted by the Big Shiny Solutions that get all the talk. I’ve undertaken very basic programming to automate the repetitive parts of my workflow. With my understanding of construction management (babysitting contractors), I’d be focusing on the Sequences to keep the percentage of time spent rational as high as possible, and on human interaction.
I went to my university psych center to get evaluated. Everything is pretty good, except my processing speed was below average. Since there are guys who know a lot about cognitive science here, is there a way to improve or at least ameliorate that? Any links to stuff would be appreciated.
There’s some preliminary evidence that action video games could increase general processing speed, though the results have also been disputed.
Thanks!
Playing video games results in a waste of a life, however.
You could say the same for any form of entertainment. Yet people generally feel that having some enjoyable entertainment in their lives is a terminal value.
Of course people think that way. I used to. Self reflection led me to evict that belief as it made for inconsistent thinking.
Care to provide an argument for that statement?
Care to explain how playing a video game can be the most productive available activity, more productive than anything else you could be doing?
It’s fun
I have fun reading textbooks and practicing foreign languages. It’s not as concentrated fun as you get from a superstimulus like a video game, but it lasts longer and is more psychologically rewarding.
I used to play games to relax. But like eating unhealthy food, the benefit was ephemeral and the consequences lasting. Applying rationality to my own life (long before the existence of LW) resulted in ejecting that part of my life and finding more productive alternatives. My life is better as a result: I subjectively experience more fun and make better progress on my life goals.
I’ve been clean from video games for >10 years, and I could not recommend it more.
If video games have a significant effect
Improve your diet and sleep. There are a huge number of supplements you can experiment with, caffeine being the most popular. Plus keep track of what happens on days in which your processing speed is noticeably above or below your average.
This may be just me, but “processing speed” sounds terribly ambiguous. What kind of tests was this “measure” based on? This would help narrow down the area of functioning that needs work.
I think it was this
wikipedia.org/wiki/Wechsler_Adult_Intelligence_Scale
I had similar results from the WISC as a child, low processing speed relative to everything else. It’s been something I’ve been meaning to ask about for a while as well, particularly since one educational professional predicted my test scores (roughly, of course) from certain problematic behavioural patterns, which was enough evidence that there’s something meaningful there to get my attention.
My memory of the tests isn’t entirely clear, but one task was something like transcribing unfamiliar symbols according to a substitution key in a particular time span. If that’s similar to Daniel’s experience, then any advice that cognitive science types can come up with here could be useful to both of us.
ETA:
I think this study details the task I remember.
I also have a low processing speed relative to other mental abilities.
When reading this, I ask myself whether processing speed has something to do with akrasia.
How would you label your level of akrasia relative to other people?
Similar results in a similar test. High akrasia, potentially confounded by depression and anxiety.
IDK really. I do procrastinate more than I should.
Why the Many-Worlds Formulation of Quantum Mechanics is Probably Correct by Sean Carroll.
The explanation is at a slightly lower level than the sequences, but it’s a concise summary with a healthy dose of proselytization. I think it works nicely.
And the comments are predictably horrible. Sigh.
This one seems interesting:
Seems smart. But then again, why not apply it to all our knowledge? For example, you should say “2 + 2 behaves as if it were 4”, because saying that “2 + 2 is 4” does not bring any new insights.
In some technical sense of the word, it’s true. You could probably build an AI that processes “2 + 2 behaves as if it were 4” in the same way and with the same speed as “2 + 2 is 4”.
I think the difference is mostly psychological, for humans. If you would teach people “2 + 2 behaves as if it were 4 (but don’t ever say that it is 4, because that’s just wrong)”, those people could do the simple math, but they would be probably much slower, because of all the time they would have to remind themselves that 2 + 2 behaves as 4, but isn’t really 4. They would pay a cognitive tax, which could impact their ability to solve more complex problems.
Or they would gradually develop a belief in belief. They would believe and correctly profess that the dragon, ahem, the collapse is in the garage, but it is invisible, inaudible, and cannot be detected experimentally. -- This is actually kinda scary, if I am correct, because it would mean that people more resistant to forming a belief in belief would have more difficulty in doing quantum physics. Unless they accept the many worlds.
Originally I thought that accepting the many worlds could have the advantage of people being able to think faster and more simply about quantum problems. Not paying the cognitive tax of the dragon in the garage. But that is probably overestimating how much energy other people really invest in reminding themselves about the collapse.
So the question is: those successful quantum scientists who believe in collapse… how often do they really think about the collapse while doing physics? How high is the real cost of having this belief that doesn’t pay any rent? Maybe it’s trivial. Maybe even smaller than the emotional tax of the frustration of those who believe in many worlds. (Metaphorically said, you could have a tenant who lives in such a ridiculously cheap place that evicting them would actually be more costly than just letting them be.) This is not a Dark Arts argument for believing in collapse, just a question about how much believing in collapse really influences a quantum scientist’s everyday work.
The everyday work? Basically none. Choosing what to study? Perhaps some.
Hi, an old discussion
http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/
gives the error, “The page you requested does not exist”
I have the right link. It’s actually still linked from:
http://lesswrong.com/user/curi/submitted/
I wanted to check something from that discussion. As you can see from my submitted page, there were 113 comments. Why doesn’t it exist? What’s going on? Can someone help?
I didn’t find any contact info except a bug tracker that didn’t seem to have much activity since 2012, and my first guess is not a software bug. I may well have missed the right place to be asking about this, tell me if so.
Deleted posts and comments can still be seen from the user’s page.
If it was deleted, curi should be able to still see it, but that doesn’t explain why I can still see it. It’s only the owner (and site moderators too?) who should be able to see it. So maybe there’s some odd glitch involved when something is “removed from main and discussion” where you can no longer view it directly?
(Also wow what a terrible post.)
Apparently there is a version of post deletion where it can still be seen from the user profile, like Will Newsome’s last post, but it is no longer indexed by search engines. This is just a conjecture, though. I have never deleted my own posts, so I have no experience with that.
Why would it be deleted? Is there any accountability or appeal? Is there any way someone could get me a copy of the discussion? BTW Eliezer specifically wrote in the thread that the page would remain accessible:
He explained why, didn’t he? IANEY, but I suspect that there is no appeal. I assume that the comments people made are visible from their profile, but maybe not. Maybe there is an archived version somewhere online, if you are lucky. Don’t hold your breath, though.
Did you read the quote? He specifically said he was not deleting it, and did not delete it at that time. And he said it wouldn’t be deleted. He only deleted some links to it, but said the direct link would continue to work.
Around 50 more comments were added to the discussion after he posted that.
It was deleted some time later. I don’t know when. Archive.org doesn’t have it.
Does it bother anyone here that (apparently) unpopular ideas are deleted with no reason given, no notice, and no accountability?
It bothers some people (for example, you) but not most of us, no. This is the internet. You need to keep the trolls off, and posting things elsewhere is easy if you feel it’s necessary.
Still, I’m not sure why it vanished, if Eliezer didn’t delete it. That seems much more bother-worthy than its unpopularity.
I agree. That’s exactly what I’m saying. I don’t know why it was deleted or by whom, and that bothers me. I am not complaining about unpopularity. I think unpopular (or popular) ideas shouldn’t be silently deleted by unknown people for unknown reasons. I think some moderator ought to check the history and see what happened (which is hopefully possible).
Deleting unpopular ideas is a much more common problem (bias) than deleting popular ideas. Both are bad though.
You guys, from your perspective, can regard it as something like “a critic posted some critical ideas. regular posters refuted his points in detailed argument”. that’s a great thing to keep a record of. if you see it that way, be proud or whatever. why delete it? i don’t understand.
I can tell you it was deleted long enough after the discussion had ended that I was no longer checking for new comments. It wasn’t deleted to shut the discussion up at the time, which makes it all the more mysterious. Can anyone look up what happened?
Unfortunately, it can be quite hard to find the right mod to ask about something, even if a mod sees it.
(That was the main reason the mass-downvoting thing was an issue until recently, if you heard about that at all.)
Oh, indeed. Just answering your question.
What do you guys think about memory palaces? http://www.wikihow.com/Build-a-Memory-Palace I heard of it in Sherlock.
I was taught this technique at the Brussels meetup. It definitely worked when we tried it out. Normally I can only remember around 5 things, and the memory palace bumped this up significantly (over 10 things). I didn’t keep practicing it and I imagine you could do some amazing things with it if you train this a lot.
If I chatter like an idiot today, it’s because I’m trying not to think about this shit. The worst thought at a time of tragedy is, “This did not have to happen.”
None of it has to happen. But I can’t see a way to make it stop happening.
Fuck.
People dead are always a tragedy. But keep in mind availability bias. The first sentence for this article is “This city’s 471st homicide of 2012 happened in the middle of the day, in the middle of a crowd, on the steps of the church where the victim of homicide 463 was being eulogized.”
There were 506 homicides in one city, Chicago. And they were not tortured, but in this case that is outweighed by sheer numbers. If you’re putting effort into decreasing the number of murders in the world, do it effectively.
I’m very much aware of that, to the point that melancholic moods tend to attack me by stripping away my ability to ignore far-away events I have no control over.
Perhaps this video will put things in perspective. The other commenter is right, availability bias is at play. But just because we’ve gone far doesn’t mean we should stop, and continuing to raise our standards of what is acceptable is a good thing. My belief is that a great deal of violence is caused by political, economic, and social deprivation and inequality, so if you want to feel like you’re working against violence I would recommend working to reduce those. But that’s my personal way of dealing with badness in the world. I don’t feel totally powerless, I can’t personally stop it but I can be part of a collective effort to mitigate it. I haven’t done much research into the effective altruism community as I’m a poor college student with high future income potential if things go right, so I figure that landscape could change considerably.
The past is the past, but you are not powerless to stop bad things from happening in the future, it won’t be you alone and it won’t be clear cut, but you can definitely make the world a better place.
Yes, I already agree, and am already at least partially trying to integrate this stuff in my daily life. Unfortunately, consciously telling myself “availability bias” does not actually reduce the emotional hit.
I dispute that this is a belief rather than a fact ;-).
You could just try to reduce the availability bias by not making that stuff so available. How exactly did you hear about that?
I live here. The government put out a press release.
I assume my government has those, but I don’t generally see them. Do they show those on the news or something? Why do you watch (or read or whatever) them? Are they useful? Are they entertaining?
Yes.
I mostly ignore them, but the ones about significant outbursts of violence are the ones you don’t ignore if you want to avoid being a part of a significant outburst of violence.
So why is the goal of utilitarianism to maximize the sum of utilities?
Rather than, say, to maximize the minimal utility being considered?
I ask because the torture/dust specks question seems to be down to whether you think the way to combine multiple people’s utility functions is by
a) Summing them (ie: “shut up and multiply”), or
b) Only looking at the worst-off individual (ie: “raise the floor”)
And I can’t find actual mathematical arguments about this.
(I know I’m years late, so if this is well settled, a quick pointer to that settlement would be much appreciated!)
There are different kinds of utilitarianism. What they have in common is that they recommend maximising some measure of utility. Where they differ is in how that utility is measured, and how different people’s utilities are combined. Summing is one way; averaging is another; maximining yet another.
Mathematical arguments can tell you that if a person’s preferences have certain properties, a utility measure can be constructed for them (e.g. the VNM theorem). Mathematics can draw out non-obvious properties of proposed measures of utility. But no mathematical argument will tell you the right way to measure and combine utilities, any more than it will tell you that you should be a utilitarian in the first place.
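To make the difference concrete, here is a minimal sketch (the utility numbers are invented for illustration, and the three rules are the textbook ones, not anyone’s settled position):

```python
# Three common ways to aggregate individual utilities into a single figure.
utilities = {"alice": 10.0, "bob": 3.0, "carol": -2.0}

total   = sum(utilities.values())                   # sum-utilitarianism ("shut up and multiply")
average = sum(utilities.values()) / len(utilities)  # average-utilitarianism
maximin = min(utilities.values())                   # maximin ("raise the floor")

print(total, average, maximin)  # 11.0 3.666... -2.0
# A policy that slightly harms carol to greatly benefit alice raises the sum
# and the average but lowers the maximin value, so the rules can disagree.
```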
Much the same could be said about potential probability functions.
I think what I’m looking for is some equivalent to Jaynes’s “Desiderata” for probability, but in the realm of either basic utility functions or how to combine them.
Being new to this, I’m also interested in a pointer to some kind of standard argument for (any kind of) utilitarianism. I mean something more than Yvain’s wonderful little Consequentialism FAQ.
The VNM theorem goes from certain hypotheses about your preferences to the existence of a utility function describing them. However, the utility function is defined only up to a positive affine transformation. This implies that given only that, there is no way to add up utilities, even the utilities of a single person. (You can, however, take weighted averages of them.) It also deals only with a single person, or rather, a single preference relation. It is silent on the subject of how to combine different people’s preference relations or utility functions. There is no standard answer to the question of how to do this.
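For concreteness, the uniqueness clause is usually stated like this (a standard formulation, not a quotation from any particular source): if u represents the preferences, then so does

```latex
u'(x) = a\,u(x) + b, \qquad a > 0,\ b \in \mathbb{R}
```

so a sum such as u_1 + u_2 across different people depends on each person’s arbitrary choice of a and b, while a probability-weighted average of a single person’s utilities over lotteries does not.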
You could try Peter Singer and the people who take that argument seriously.
Use non-standard (AKA infinitesimal) numbers: a dust speck is an infinitesimal; there is a clear (and linear) disutility in an increasing number of people with specks in their eyes, but no matter how many of them you sum up, you never reach the disutility of a single person experiencing torture. Add second-order infinitesimals if you want it more finely grained.
(Of course, this breaks down if you have an infinite number of people with dust specks. But our intuition breaks down anyway when faced with the infinite).
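A minimal sketch of that ordering, using a pair compared torture-first as a stand-in for the non-standard numbers (my own illustration, not part of the comment above):

```python
# Disutility as (torture_units, speck_units); Python compares tuples
# lexicographically, so the first component always dominates the second.
torture    = (1, 0)
dust_speck = (0, 1)

def total(disutilities):
    return (sum(d[0] for d in disutilities), sum(d[1] for d in disutilities))

many_specks = total([dust_speck] * 10**6)   # 10^6 stands in for 3^^^3 so it runs
print(many_specks)                          # (0, 1000000)
print(many_specks < torture)                # True: no finite number of specks reaches torture
```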
But even with that scheme, it seems that you could just as easily want to maximize the minimal utility as maximize the sum.
I really don’t like happiness as a terminal value, yet I don’t know anything that can replace it. The only thing I can think of is satisfaction, but it appears to be just a sneaky way to say happiness.
Any ideas?
Most of positive psychology views well-being as a much more robust concept than just happiness. See for example Martin Seligman’s PERMA theory, although that doesn’t seem to be the only theory out there.
You don’t like having it at all, or you just don’t consider it the sole value?
I tend to see satisfaction as referring to preference-satisfaction, meaning that a person’s goals are satisfied, but not implying that they know this. If you are a paperclip maximizer, and the universe is tiled with paperclips, but you don’t think there’s such a thing as a paperclip, you may not be very happy, but your preferences are satisfied.
I have nothing against happiness per se, it just doesn’t feel like a proper terminal value.
Power?
“Humans act as if they had power as a terminal value” probably matches reality better than “Humans act as if they had happiness as a terminal value”.
My original suggestion was “knowledge”, but that may make you equally value knowing Pokemon trivia—I value useful knowledge, not any old knowledge, which seems to be another way of saying I value (a form of) power.
Though also, I don’t see much of a reason to care about “terminal values” except when talking about maths and economics and decision theories and the like—any talk of “terminal values” is highly uncertain and likely to be wrong, so it’s not something I’d take to heart.
That feels too much like lost purposes. “Power” refers to something that can be used to fulfill values in general.
It’s the sort of thing you’d acquire if you haven’t figured out what you really want.
You should watch House of Cards.
My take on this comment ^^:
Preferences revealed through e.g. Wikipedia’s history suggest that people put a surprisingly high value on Pokemon trivia relative to more useful but less entertaining information, at least when it comes to investing time in compiling and reading it.
Why don’t you like happiness as a terminal value?
It feels impure and is too mainstream.
I’m curious, why does it feel impure? And why do you think the answer is “happiness shouldn’t be a terminal value” and not “happiness shouldn’t feel impure”? As for it being mainstream, why does that matter at all? Believing a brick will fall if you drop it is mainstream too, but is that a reason to reject that belief?
I can’t express meaningfully why it feels impure. Being mainstream matters, because, in this particular case, I enjoy not holding mainstream opinion for the sake of it.
I don’t think that it “should” anything. I have nothing but intuitions regarding how happiness should feel.
I would say “supplement” rather than “replace”. How about beauty, love, friendship, music, humor, sex… ?
Eudaimonia.
Some further thoughts about eudaimonia. What is happiness? I suggest that happiness is, literally, what it feels like to live well.
An analogy with pain: why does pain hurt? If it’s a warning, why can’t it just be a warning, without the hurting that seems so unnecessary? Because the painfulness of pain is the warning. You might wish that, like a fire alarm, it wouldn’t go off when there’s no fire, or you could turn it off when there’s nothing more to do about the fire. There are drugs that will turn off pain, but for everyday purposes you can’t take the painfulness out of the pain because then you’ll be in the situation of children born without the ability to feel pain at all. They usually get dreadful injuries, wear out their joints, and end up crippled. You won’t heed the warnings because they won’t be warnings any more. How good are people at heeding milder warnings like “yet another game of 2048 would be a really stupid waste of time”, or “I notice that I am confused”? If pain was that mild a warning, people would ignore it, because that is what a minor warning feels like from inside. Pain is what an urgent warning of physical damage feels like from inside.
In the same way, happiness is what living well feels like from inside. It’s like a meter reading on a control panel. The meter reading is telling you how well you’re doing, and happiness is what a high reading on that meter feels like.
You want that reading to be high, but there’s no point in grabbing hold of the meter needle and turning it all the way over to the right. It would be as futile as living on morphine to take the painfulness out of ordinarily functioning pain. Or like satisfying a desire for an Olympic medal by making one—the medal itself isn’t what you really wanted, but the achievement of winning one. Or like keeping a nuclear reactor running smoothly by disconnecting all the sensors and replacing them by fake signals saying everything’s fine.
Happiness tells you how well you’re living. It only looks like a goal in the context of a well-functioning system that doesn’t deliver the sensation without achieving the real goals that the sensation is measuring your approach to. If you obtain the signal without the reality, as I’ve heard that crack cocaine does, your life will fall apart.
Where could one find many, many past exam papers for university undergraduate courses? I find attempting them under exam conditions the ideal way of preparing for exams, and really excellent at pointing out where there are gaps in my knowledge and I need to revise. I’m particularly interested in psychology exam papers.
Here are all the MIT OCW courses listed under “psychology”. Many of them include both specimen and actual exam papers.
My experience with using other institutions’ exams to revise for my own is that there’s enough variation in the syllabus to distract from the task of actually passing the exam.
fraternities.
Unrelatedly, if I had read this blog post (and others like it by the same author) before going to college, I might have joined a fraternity… unfortunately it’s too late now.
Depends on your uni. Ask your classmates. That’s what I did.
Has something changed about the voting rules in the last week or so? I started to get the “You don’t have enough karma to downvote. You need three more points” message again. But it is always three points (no other number), even though I haven’t lost karma and am still able to downvote some comments sometimes.
How much you can downvote is limited by how much karma you have. So it looks like you “spent” all your karma.
You seem to downvote quite a lot then, are you one of those “downvoting stalkers” we keep hearing about?
No. Do you think that I would go flaunting that here for no reason if I was? Mostly I just read a lot and don’t write so much. And of course writing is what you get karma for.
What’s weird is that I’m always either 0 points short (able to downvote) or exactly three points short, never one or two points. And my total karma has not decreased.
Looking at the code concerning this, “three” isn’t hard-coded; it’s calculated, but the formula is a bit hairy and relies on a cache, so there could be a bug somewhere.
Or it could be a coincidence :)
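I don’t know the actual formula, but a hypothetical version of the rule might look something like the sketch below (the 4x multiplier and the function name are assumptions on my part, not read from the real codebase); a stale cached value for either input could then produce a constant shortfall like the “three points” above:

```python
import math

DOWNVOTE_MULTIPLIER = 4  # assumed, not taken from the actual LW code

def karma_shortfall(karma: int, downvotes_cast: int) -> int:
    """How many more karma points are needed before the next downvote is allowed,
    assuming total downvotes are capped at DOWNVOTE_MULTIPLIER * karma."""
    required_karma = math.ceil((downvotes_cast + 1) / DOWNVOTE_MULTIPLIER)
    return max(0, required_karma - karma)
```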
I’m looking for information about rationalist houses, but the wiki page on the subject is sparse.
The most salient questions for me are:
What is their geographical distribution? I know there are plenty in the Bay Area, and I think I have heard that there is only one in NYC.
How frequently are there openings?
What (if any) relationship is there between the homotopy/homology of a directed graph and its causal structure?
(I’m reading Pearl’s Causality right now)
I would expect there to be pretty much none, but I only glanced at the homotopy paper; Pearl talks about equivalences between some models (i.e. they give rise to the same probability distribution, so can’t be distinguished by purely observational data), and talks about how you can manipulate a graph to get another equivalent graph (reversing arrows under some conditions etc.), but the rules are much more specific than those I saw in the homotopy paper. For example, the substructure A → B ← C is treated very differently from the substructure A ← B → C, and I don’t expect that kind of asymmetry in homotopy/homology (I may be wrong! I only skimmed it!).
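For what it’s worth, here is a small numerical illustration (mine, not from either paper) of why those two substructures get different treatment: the fork and the collider imply different conditional independencies, which is exactly the kind of asymmetry a purely topological invariant would be expected to wash out.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Fork: A <- B -> C. A and C are correlated, but roughly independent given B.
B = rng.normal(size=n)
A = B + rng.normal(size=n)
C = B + rng.normal(size=n)
near_zero_B = np.abs(B) < 0.1                               # crude conditioning on B ~ 0
print(np.corrcoef(A, C)[0, 1])                              # clearly nonzero
print(np.corrcoef(A[near_zero_B], C[near_zero_B])[0, 1])    # near zero

# Collider: A -> B <- C. A and C are independent, but dependent given B.
A2 = rng.normal(size=n)
C2 = rng.normal(size=n)
B2 = A2 + C2 + rng.normal(size=n)
near_zero_B2 = np.abs(B2) < 0.1
print(np.corrcoef(A2, C2)[0, 1])                            # near zero
print(np.corrcoef(A2[near_zero_B2], C2[near_zero_B2])[0, 1])  # clearly negative
```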
I have no idea what the causal structure of a digraph is. Can you point me to some resource which explains it?
First chapter of Pearl’s book Causality.
Posting this again from the last open thread because I am still researching and would still appreciate assistance or links:
“I’ve begun researching cryonics to see if I can afford it/want to sign up. Since I know plenty here are already signed up, I was hoping someone could link me to a succinct breakdown of the costs involved. I’ve already looked over Alcor’s webpage and the Cryonics Institute, but I’d like to hear from a neutral party. Membership dues and fees, average insurance costs (average since this would change from person to person), even peripheral things like lawyer fees (I assume you’ll need some legal paperwork done for putting your body on ice). All the main steps necessary to signing up and staying safe.
Basically, I would very much appreciate any help in understanding the basic costs and payoffs so I can budget accordingly.”
CI lifetime membership is $1250 (once). For passably healthy people in their 20s-30s, you can get more than enough life insurance for about a dollar a day.
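Rough arithmetic on those figures (the dollar-a-day insurance price is the estimate above, not a quote, and the 40-year horizon is just an assumption for illustration):

```python
ci_lifetime_membership = 1250   # one-time CI fee, USD
insurance_per_day = 1.00        # assumed from the estimate above
years = 40                      # hypothetical horizon

total = ci_lifetime_membership + insurance_per_day * 365 * years
print(total)           # 15850.0 over the whole period
print(total / years)   # roughly 400 per year
```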
One Inconvenient Application of Utilitarianism:
Take a class of chores which provide benefit but which most people dislike performing (and which cannot be done away with). Also assume that these chores can be performed by most people. Further, take another class of tasks that can be performed only by a subset of the population and comes with less displeasure. Also add some neutral tasks.
An example set of tasks could be dealing with garbage, solving complex math problems, and child care.
How should you assign the tasks from these classes to people?
It appears that those people who can perform the more pleasurable tasks should do so, while the others should perform the unwanted tasks, and the remaining neutral tasks are split equally.
To me this seems kind of unfair. It potentially places the less able people at the less pleasurable end. Moral judgements may vary—but this question at least requires some discussion.
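A toy illustration of that conclusion (the people, tasks, and utility numbers are all made up):

```python
from itertools import permutations

people = ["specialist", "generalist"]
tasks  = ["math_problems", "garbage"]

# utility[person][task]; None marks a task that person cannot perform
utility = {
    "specialist": {"math_problems": 5,    "garbage": -3},
    "generalist": {"math_problems": None, "garbage": -3},
}

feasible = (perm for perm in permutations(tasks)
            if all(utility[p][t] is not None for p, t in zip(people, perm)))
best = max(feasible, key=lambda perm: sum(utility[p][t] for p, t in zip(people, perm)))
print(dict(zip(people, best)))
# {'specialist': 'math_problems', 'generalist': 'garbage'} -- the sum-maximizing
# assignment gives the pleasant task to the one person able to do it.
```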
What do you think?
Those people can be compensated in other ways. If there is some aspect of your utility that your conception of utilitarianism isn’t capturing then you have to figure out how to capture it. Utilitarianism based on simple utility models will always fail.
Fair point.
Yootling is one good approach to the problem.
How universal is empathy?
The article seems to miss the point many times.
I think a useful definition of empathy describes it as the ability to feel what another person is feeling.
For example, it says: “With social relations expanding beyond the circle of close kin, kinship obligations were no longer enough to ensure mutual assistance and stop free riding. There was thus selection for pro-social behavior, i.e., a spontaneous willingness to help not only kin but also non-kin.”
Group selection is not a well-accepted phenomenon, especially over a short timeframe of 10,000 years.
Furthermore, the author shies away from following the argument to its logical conclusions. If the author thinks that those people in towns evolved to have more empathy, that basically means that Black people have less empathy than White people. Is that what the author is arguing? That’s certainly an interesting claim.
The author doesn’t seem to be aware of the tradeoff between dominance and empathy. More testosterone equals more dominance and makes people less empathic. Given differences in penis size and some studies, Blacks might have higher testosterone than Whites. Of course that’s a highly controversial debate.
I don’t think it’s arguing for group selection; rather, it presents empathy as an adaptation for understanding the mental states of other people, so that you could better navigate reciprocal social obligations. So long as effective mechanisms existed to punish free riders, it would be a beneficial adaptation.
I think.
Then why use the word “selection”?
Because it was selected?
What kind of process do you mean with selection if you don’t mean group selection?
Regular old natural selection? Behaving socially benefitted the individual. Doing things for other people didn’t just help them—it got their help in return.
The argument the article made was that empathy reduces free riding. Engaging in free riding almost by definition doesn’t produce disadvantages for the individual who engages in it.
It does if others have adaptations for punishing free-riders, or for rewarding non-free-riders.
Punishing free-riders isn’t what I would consider empathy. I would think that highly dominant people with a lot of testosterone are more likely to punish free-riders than empathic people are.
I didn’t mean that an empathic person would be more likely to punish free-riders. I meant that an empathic person would be less likely to free ride, and thus be less likely to be punished (or more likely to be rewarded).
I dunno, I hear that oxytocin makes you nicer towards your in-group but less nice towards your out-group.
Would you predict that whites produce less oxytocin than blacks?
I have no idea.
… normal selection?
The article “Tolerate Tolerance” contains a hyperlink to “M*nt*f*x”; twice. When I click on the link, my anti-virus software warns me about “potentially unwanted” content on the page. (What does that mean? It’s usually the kind of software that could have a legitimate use, but is also frequently abused, so it is a good idea to warn all users, and allow specific users to disable the warning for specific software. For example: a keylogger.)
I have no idea what kind of “potentially unwanted” software is on the page, and I am not going to investigate. If someone else is an expert, could you please look at it?
If it is something malicious, perhaps the hyperlinks should be removed (1) from the page, and (2) from the e-book.
The tinyurls expand to a FAQ page about the entity who shall not be clearly named, lest it appear, written by someone apparently sane. I didn’t get any malware warnings.
If you fill in the asterisks with an e, an i, and an e, then put it into Google, it will tell you everything you want to know, including a hit on the aforementioned FAQ. As the original post says, a legendary AI crackpot. He actually once had an account on LessWrong, very briefly, but (I assume) was instantly banned.
Interesting discussion on philosophical methodology and intuitions in a recent book. http://ndpr.nd.edu/news/39362-philosophy-without-intuitions/
Ran Prieur linked to this comment on reddit that speculates that processed food (specifically Soylent) is causing colorectal cancer. How plausible is it?
I think he is wrong about Soylent but not because Soylent explicitly optimized for this eventuality. Soylent happens to use oat flour which is rich in resistant starch. This is exactly the type of “difficult or impossible to digest” thing that the bacteria in our gut feed on.
Processed food’s association with colorectal cancer is not related to the bioavailability of its nutrients or to the presence or lack of insoluble fibers in the diet AFAIK.
Why do you think EY uses conspiracies in his fictional writing? He seems to present them in a positive, or at least not clearly negative, light, which is not how I think of conspiracies at all. I notice that I am confused, so I’m trying to gather some other opinions.
The anecdote in this post, about Fermi, Rabi and Szilard considering keeping the possibility of practical nuclear fission a secret, may shed some light on the subject. He thinks that some knowledge is dangerous enough that people who know it may reasonably want to keep it secret.
(much more recently, there has been some controversy about the publication of a way of obtaining a particularly infectious strain of a certain virus, but I can’t find any references for that right now)
This is a perennial issue, occurring in various forms relating to the preservation of viruses like smallpox, the sequencing of their genomes, and increasing their virulence. Looking in Google News for ‘virus research increase virulence’, it seems the most recent such research would be http://www.nature.com/news/biosafety-in-the-balance-1.15447 / http://www.independent.co.uk/news/science/american-scientists-controversially-recreate-deadly-spanish-flu-virus-9529707.html :
EDIT: Sandberg provides an amazing quote on the topic: http://www.aleph.se/andart/archives/2014/07/if_nature_doesnt_do_containment_why_should_i.html
I think I remember reading an even better example in HPMOR, about publishing scientific results that might have furthered the Nazis’ ability to produce a nuclear weapon, though I can’t recall where exactly it was. I found that example persuasive, but I considered it a distasteful necessity, not a desirable state of affairs. Hence my confusion at Brennan’s world, which, being set in the future of our world, I thought was perhaps post-Singularity, and therefore the epitome of human flourishing. Another commenter asked me if I wouldn’t enjoy the thought of being a super-villain, and I thought, um, no, that would be terrible, so maybe there are some Mind Projection issues going on in both directions. I don’t know the distribution of people who would gain positive utility from a world of conspiracies, but I’m sure there would be a great deal of disutility for some proportion of current people with current minds. I can see where that world might provide challenge and interest for its inhabitants, but I remain highly skeptical that it’s a utilitarian optimum. Using my current brain and assuming stable values, it actually seems pretty dystopian to me, but I’ll admit that’s a limited way to look at things.
Graphite as a neutron moderator, I believe. Ch. 85:
I think it stems from the Brennan’s World weirdtopia, and the idea that making knowledge freely available makes it feel worthless, while making it restricted to members of a secretive group makes it feel as valuable and powerful as it actually is.
If something is valuable and powerful, and (big if) it’s not harmful, plus it’s extremely cheap to reproduce, I see no reason not to distribute it freely. My confusion was that Brennan’s world seems set in the future, and I got the sense that EY may have been in favor of it in some ways (perhaps that’s mistaken). Since it seemed to be set in the future of our world, I got the sense that the Singularity had already happened. Maybe I just need to get to the Fun Theory sequence, but that particular future really made me uneasy.
Perhaps it’s only powerful in the hands of the chosen few. If it’s in the open and it looks powerful, then other people try it and see less than amazing success, and it looks less and less cool until it stops growing. But by then it’s harder for the special few to recognize its value—or perhaps don’t want to associate themselves with it—and potential is wasted.
If instead the details are kept secret but the powers known publicly, then the masters of the craft are taken seriously and can suck up all the promising individuals.
I don’t know how he feels about it currently, but in the past he did endorse Brennan’s world as a better way to organize society post-Singularity. It started as a thought experiment about how to fix the problem that most people take science for granted and don’t understand how important and powerful it is, and grew into a utopia he found extremely compelling. (To the point where he specifically did not explain the rest of the details because it is too inefficient to risk diverting effort towards. This was probably an overreaction.) He talks about this in
Eutopia is Scary
The linked article ends with this; I think this part of context is necessary. Emphasis mine:
As I understand it, the Conspiracy world is a mental experiment with different advantages and disadvantages. And a tool used to illustrate some other concepts in a storytelling format (because this is what humans pay more attention to), such as resisting social pressure, actually updating on a difficult topic, and a fictional evidence that by more rational thinking we could be more awesome.
But it’s not an optimal (according to Eliezer, as I understand the part I quoted) world. That would be a world where the science is open (and financially available, etc.) to everyone and yet, somehow, people respect it. (The question is, how to achieve that, given human psychology.)
HJPEV is a drama queen and likes acting as if he’s badass (ignore for the moment whether he is) and sinister and evil: Look at what he calls his army and how he acts around them. Hence calling his thing with Draco the Bayesian Conspiracy. Not everything that takes place in an author’s fiction is indicative of something they support.
This, however, is a recurring theme in Eliezer’s work. I don’t think I fully grok the motivations (though I could hazard a guess or two), but it’s definitely not just HJPEV’s supervillain fetish talking.
Agreed, it’s also Eliezer’s super-villain fetish thing.
Conspiracy is the default mode of a group of people getting anything done. Every business is a conspiracy. They plot and scheme within their “offices”, anonymous buildings with nothing but their name on the front door. They tell no-one what they’re doing, beyond legal necessity, and aim to conquer the world by, well, usually the evil plan is to make stuff that people will want to buy.
No organisation conducts all its business in public, whatever its aims. Even if you find one that seems to, dollars to cents you’re not looking at its real processes. There needn’t be anything sinister in this, although of course sometimes there is.
Every one of us is a conspiracy of one.
“Conspiracy” doesn’t mean “people working where you can’t tell what they are doing”.
It means “people working where you can’t tell what they are doing and you worry that you wouldn’t like it”.
EY makes complicated arguments. He’s not the person to make arguments about X is good and Y is bad. Fiction is about playing with ideas.
As far as I can find, the first instance of the term Bayesian Conspiracy appears in a 2003 nonfiction article by Eliezer:
At the time it seemed like a fun joke to make, and it stayed. There are also a variety of other arguments to be made that it’s sometimes not useful to share all information with outsiders.
I’m guessing it’s cultural influence from Discordianism, Shea and Wilson’s Illuminatus!, or the like. Conspiracies, cults, and initiatory orders are all pretty common themes in Discordian-influenced works. Some are destructive, some are constructive, some are both, and some run around in circles.
For the same reason EY supports the censoring of posts on topics he has decided are dangerous for the world to see. He generalizes that if he is willing to hide facts that work against his interests, then others similarly situated to him, but with different interests, will also be willing to work surreptitiously.
I’m relatively new to the site and I wasn’t aware of any censorship. I suppose I can imagine that it might be useful and even necessary to censor things, but I have an intuitive aversion to the whole business. Plus I’m not sure how practical it is, since after you posted that I googled lesswrong censorship and found out what was being censored. I have to say, if they’re willing to censor stuff that causes nightmares, then they ought to censor talk of conspiracies, as I can personally attest that that has caused supreme discomfort. They are a very harmful meme, and positing a conspiracy can warp your sense of reality. I have bipolar disorder, and I was taking a medicine that increases the level of dopamine in my brain to help with some of the symptoms of depression. Dopamine (I recently rediscovered) increases your brain’s tendency to see patterns, and I had to stop taking a very helpful medication after reading this site. Maybe it would have happened anyway, but the world of conspiracy theories is very dark, and my journey there was triggered by his writings. I guess most of the content on this site is disorienting, though; perhaps some clarification about what he thinks the benefits of conspiracies are, and what their extent should be, would help.
Also, the content on this site is pretty hard hitting in a lot of ways, I find it inconsistent to censor things to protect sensitive people who think about AI but not people who are sensitive to all the other things that are discussed here. I think it’s emblematic of a broader problem with the community, which is that there’s a strong ingroup outgroup barrier, which is a problem when you’re trying to subsist on philanthropy and the ingroup is fairly tiny.
Many websites about conspiracy theories don’t care much about the truth. They don’t go through the work of checking whether what they are saying is true.
On the other hand, organisations such as P2 exist or existed. The Mafia exists. To the extent that we care about truth, we can’t claim that there aren’t groups of people that coordinate in secret for the benefit of their members. Italy is a pretty good country to think about when you want to think about conspiracies, because there is a lot of publicly available information.
It’s actually pretty easy to see flaws in the argument of someone who claims that the US government brought down the twin towers on 9/11 via explosives if you are actually searching for flaws and not only searching for evidence that the claim might be true. The same goes for lizard overlords.
Learn to live with not knowing things. Learn to live with uncertainty. Living with uncertainty is one of the core skills of a rationalist. If you don’t know, then you don’t know, no matter how much you want to know. We live in a very complex world that we don’t fully understand.
You found out what was censored in a way where you don’t understand the debate that was censored in depth and you took no emotional harm.
Learning to live with not knowing things is good advice if you are trying to choose between “I explain this by saying that people are hiding things” and “I don’t have an explanation”.
Learning to live with not knowing things is poor advice in a context where people are actually hiding things from you and what is not known is what the people are hiding rather than whether the people are hiding something. It is especially poor advice where there is a conflict of interest involved—that is, when the same people telling you you’d be better off not knowing also stand to lose from you knowing.
Needless to say, 9/11 and lizard conspiracy theories fall in the first category and the material that has been censored from lesswrong falls in the second category.
No, if you can’t stand thinking that you don’t know how things work you are pretty easy to convince of a lie. You take the first lie that makes a bit of sense in your view of the world. The lie feels like you understand the world. It feels better than uncertainty. Any decent organisation that operates in secret puts out lies to distract people who want to know the truth.
Andy Müller-Maguhn stood in front of the Chaos Computer Congress in Germany and managed to give a good description of how the NSA surveils the internet and how the German government lets them spy on German soil. At the time you could have called it a conspiracy theory. Those political Chaos Computer Club people are very aware of what they know and where they are uncertain. That’s required if you want to reason clearly about hidden information.
When it comes to 9/11, the government does hide things. 9/11 is not an event where all information is readily available. It’s pretty clear that the names of some Saudis are hidden. Bin Laden comes from a rich Saudi family, and the US wants to keep a good relationship with the Saudi government. I think it’s pretty clear that there is some information the US didn’t want in the 9/11 report because the US doesn’t want to damage the relationship with the Saudis.
Various parts of the NSA and CIA do not want to share all their information about what they are doing with congressional inquiries. As a result they hid information from the 9/11 commission. The NSA wants to keep a lot of stuff out of the public eye that could be found out if a congressional commission dug around and got full cooperation. The chief of the NSA lied under oath to Congress about the US spying program. A congressional commission that investigated 9/11 fully would want to look at all the evidence the NSA had gathered at that point, and that’s not what the NSA wants, even if the NSA didn’t do anything to make 9/11 happen.
If someone finds evidence of the NSA withholding information from a congressional commission, that shouldn’t surprise you at all, nor should it much increase your belief that the NSA orchestrated 9/11, because they are always hiding stuff.
Information about Al Qaeda’s support for the Muslim fighters that NATO helped in the fight for Kosovo’s independence isn’t clear.
The extent to which Chechen Muslim freedom fighters are financed by the Saudis or Western sources isn’t clear. The same goes for the Uyghurs.
General information about the identities of people who did short selling before 9/11 was hidden, because the US government just doesn’t release all information about all short selling publicly.
The problem with 9/11 is that people go to school and learn that the government is supposed to tell them the truth and not hide things. Then they grow up a bit and are faced with a world where government constantly hides information and lies. Then those people take the evidence that the government hides information in a case like 9/11 as evidence that the US government caused the twin towers to be destroyed with dynamite.
Politically, the question of whether to take 9/11 as a lesson to cut the money flow to Muslim ‘freedom fighters’ in Chechnya does matter, and it’s something where relevant information gets withheld.
I think you are misunderstanding me. The point is that there are two scenarios:
1) Someone doesn’t really know anything about some subject. But they find a conspiracy scenario appealing because they would rather “know” an explanation with little evidence behind it, rather than admit that they don’t know.
2) Information definitely is being hidden from someone, and they say “I want to know that information.”
Both of these involve someone wanting to know, but “wanting to know” is being used in very different ways. If you say that people should “learn to live without knowing things”, that’s a good point in the first scenario but not so good in the second scenario. And the second scenario is what’s taking place for the information that has been censored from lesswrong. (Considering that your reply was pretty much all about 9/11, do you even know what is being referred to by information that has been censored from lesswrong?)
“learning to live without knowing things” doesn’t mean that you don’t value information. It means that when you can’t/don’t know, you’re not in constant suffering. It means that you don’t get all freaked out and desperate for anything that looks like an answer (e.g. a false conspiracy theory)
It’s the difference between experiencing crippling performance anxiety and just wanting to give a good performance. The difference between “panic mode” and “optimizing mode”. Once you can live with the worst case, fear doesn’t control you any more—but that doesn’t mean you’re not motivated to avoid the worst case!
In the case of 9/11 there is definitely information that’s hidden. Anybody who roughly understands how the US government works should expect that’s true. Anybody who studies the issue in detail will find out that’s true.
Yes, I’m aware of three different instances in which information got censored on Lesswrong. There are additional instances where authors deleted their own posts which you could also call censorship.
I don’t think that the value of discovering the information in any of those three cases of censorship is very high to anyone.
The two senses of “wanting to know” can both be applied to 9/11.
Someone who “wants to know” in the sense of ignoring evidence to be able to “know” that 9/11 was caused by a conspiracy is better off not wanting to know.
Someone who wants to know information about 9/11 that is hidden but actually exists is not better off not wanting to know. Wanting to know in this sense is generally a good thing. (Except for privacy and security concerns, but politicians doing things is not privacy, and a politician who says something should be hidden for national security is probably lying).
I was referring to the basilisk. Telling people what the basilisk is is very valuable as criticism of LW, and has high “negative value” to LW itself because of how embarrassing it is to LW.
You think that wanting to know the truth means that you get to decide what the information you don’t have says. That isn’t true.
To the extent that there is an interest in weakening Russia and China geopolitically by funding separatist movements within their borders, there is obviously an interest in staying silent about how those movements get funded and which individuals do the funding.
US senator Bob Graham made statements about how crucial information on the potential role of Saudi funding of the 9/11 attack got censored out of the report. (see Wikipedia: http://en.wikipedia.org/wiki/9/11_Commission_Report) Whether or not you call that a conspiracy is irrelevant. Calling it a conspiracy is just a label.
How many Saudis would have to have what specific ties with Al Qaeda and parts of the US government for it to count as a conspiracy™? This is far from a black and white affair. Obsessing about the label makes you ignore the real issues that are at stake. The US government might very well be hiding information about the people who likely paid for 9/11.
Once you understand that fact you might want to know the information. Unfortunately there is no easy way to know, especially as an individual. If you want a quick fix, then you will believe a lie. You actually have to be okay with knowing that you don’t know, if you don’t want to believe in lies.
Explaining to someone the whole story of what TDT is in a way that the basilisk debate makes sense to them is not an easy task. You are basically telling outsiders a strawman if you try to summarize the basilisk debate. In a lot of fields there are complex argument that seem strange and silly to outsiders, the existence of those cases is no argument against those fields.
Another thing that I learned while doing debating is that you focus on refuting the strong arguments of your opponent and not the weak ones. Good criticism isn’t criticism that focuses on obvious mistakes that someone makes. Good criticism focuses on the positions that actually have strong arguments behind them, and shows that there are better arguments against them.
Steelmanning is better than arguing against a strawman when you want to be a valuable critic. If a strawman argument about the basilisk is the best you can do to criticize LW, LW is a pretty awesome place.
-- A whole lot of arguments on LW seem silly to outsiders. I just got finished arguing that it’s okay to kill people to take their organs (or rather, that it’s okay to do so in a hypothetical situation that may not really be possible). Should that also be deleted from the site?
-- LW has a conflict of interest when deciding that some information is so easy to take out of context that it must be suppressed, but when suppressing the information also benefits LW for other reasons. Conflicts of interest should generally be avoided because of the possibility that they taint one’s judgment—even if it’s not possible to prove that the conflict of interest does so.
-- I am not convinced that “they’re crazy enough to fall for the basilisk” is strawmanning LW. Crazy-sounding ideas are more likely to be false than non-crazy-sounding ideas (even if you don’t have the expertise to tell whether an idea is really crazy or just crazy-sounding). Ideas which have not been reviewed by the scientific community are more likely to be false than ideas which have. You can do a legitimate Bayesian update based on the Basilisk sounding crazy.
-- Furthermore, LW doesn’t officially believe in the Basilisk. So it’s not “the Basilisk sounds crazy to outsiders because they don’t understand it”, it’s “even insiders concede that the Basilisk is crazy, it just sounds more crazy to outsiders because they don’t understand it”, which is a much weaker reason to suppress it than the former one.
That debate is shared with academic ethics as, IIRC, a standard scenario given as criticism of some forms of utilitarian ethics, is it not? I think that’s a mitigating factor. It may sound funny to discuss ‘quarks’ (quark quark quark! funny sound, isn’t it?) or ‘gluons’ but that also is borrowed from an academic field.
It’s not deleted because it’s silly to outsiders. You said it was important criticism. It’s not.
Discussions like the one we are having here aren’t suppressed on LW. If the basilisk censoring were about that, this discussion would be outside the limits, which it isn’t.
The problem with updating on the basilisk is that you don’t have access to the reasoning based on which the basilisk got censored. If you want to update on whether someone makes rational decisions, it makes a lot of sense to focus on instances where the person is actually fully open about why he does what he does.
It’s also a case where there was time pressure to make a decision, while a lot of LW discussions aren’t of that nature and intellectual positions get developed over months and years. A case where a decision was made within a day is not representative of the way opinions get formed on LW.
But outsiders wouldn’t have any idea what we’re talking about (unless they googled “Roko’s Basilisk”).
Just because you don’t have all information doesn’t mean that the information you do have isn’t useful. Of course updating on “the Basilisk sounds like a crazy idea” isn’t as good as doing so based on completely comprehending it, but that doesn’t mean it’s useless or irrational. Besides, LW (officially) agrees that it’s a crazy idea, so it’s not as if comprehending it would lead to a vastly different conclusion.
And again, LW has a conflict of interest in deciding that reading the Basilisk won’t provide outsiders with useful information. The whole reason we point out conflicts of interest in the first place is that we think certain parties shouldn’t make certain decisions. So arguing “LW should decide not to release the information because X” is inherently wrong—LW shouldn’t be deciding this at all.
There was time pressure when the Basilisk was initially censored. There’s no time pressure now.
You underrate the intelligence of the folks who read LW. If someone wants to know, he googles it.
Sure?
What does it mean “to make sense” of “the basilisk” debate? I am curious if you are suggesting that it makes sense to worry about any part or interpretation of it.
No matter what you think about RationalWiki in general, I believe it does a good job at explaining it. But if that is not the case, you are very welcome to visit the talk page there and provide a better account.
To the extent there is censorship of dangerous information on LW, the danger is to the future of mankind rather than to the (very real, and I don’t mean to minimize this) feelings of readers.
One could make the argument that anything that harms the mission of lesswrong’s sponsoring organizations is to the detriment of mankind. I’m not opposed to that argument, but googling censorship of lesswrong did not turn up anything I considered to be particularly dangerous. Maybe that just means that the censorship is more effective than I would have predicted, or it is indicative of a lack of imagination on my part.
I’d say that “censorship” (things that could be classified or pattern-matched to this word) happens less than once in a year. That could actually contribute to why people speak so much about it; if it happened every day, it would be boring.
From my memory, this is “censored”:
inventing scenarios about Pascal’s mugging by AI
debating, even hypothetically, harm towards specific people or organization
replying to a downvoted post (automatically penalized by −5 karma)
And the options 2 and 3 are just common sense, and could happen on any website. Thus, most talk about “censorship” on LW focuses on the option 1.
(By the way, if you learned about the “basilisk” on RationalWiki, here is a little thing I just noticed today: The RW article has a screenshot of dozens of deleted comments, which you will obviously associate with the incident. Please note that the “basilisk” incident happened in 2010, and the screenshot is from 2012. So this is not the censorship of the original debate. It is probably a censorship of some “why did you remove this comment two years ago? let’s talk about it forever and ever” meta-threads that were quite frequent and IMHO quite annoying at some time.)
Also, when a comment or article is removed, at least the message about the removal stays there. There is no meta-censorship (trying to hide the fact that censorship happened). If you don’t see messages about removed comments at some place, it means no comments were removed there.
And yet earlier in your post you’re talking about some posts in 2012 about censorship in 2010 being deleted. Smells like meta-censorship to me.
By meta-censorship I meant things like removing the content from the website without a trace, so unless you look at the google cache, you have no idea that anything happened, and unless someone quickly makes a backup, you have no proof that it happened.
Leaving the notices “this comment was removed” on the page is precisely what allowed RW to make a nice screenshot about LW censorship. LW itself provided evidence that some comments were deleted. Providing a hyperlink instead of screenshot would probably give the same information.
Also, I am mentioning the basilisk now, and I have above 95% confidence that this comment will not be deleted. (One of the reasons is that it doesn’t get into details; it doesn’t try to restart the whole debate. Another reason is that I don’t start a new thread.)
There’s not a lot of actual censorship of dangerous information “for the future of mankind”. Or at least, I rate that as fairly unlikely, given that when the scientific groundwork for a breakthrough has been laid, multiple people usually invent it in parallel, close to each-other in time. Which means that unless you can get everyone who researches dangerous-level AI into LW, censoring on LW won’t really help, it will just ensure that someone less scrupulous publishes first.
“Three may keep a secret, if two of them are dead.”
Conspiracy is hard. If you don’t have actual legal force backing you up, it’s nearly impossible to keep information from spreading out of control—and even legal force is by no means a sure thing. The existence of the Groom Lake air station, for example, was suspected for decades before publicly available satellite images made it pointless to keep up even the pretense of secrecy.
For an extragovernmental example, consider mystery religions. These aren’t too uncommon: they’re not as popular as they once were, but new or unusual religions still often try to elide the deepest teachings of their faiths, either for cultural/spiritual reasons (e.g. Gardnerian Wicca) or because they sound as crazy as six generations of wolverines raised on horse tranquilizers and back issues of Weird Tales (e.g. Scientology).
Now, where’s it gotten them? Well, Gardnerian Wiccans will still tell you they’re drinking from a vast and unplumbed well of secret truths, but it’s trivially easy to find dozens of different Books of Shadows (some from less restrictive breakaway lineages, some from people who just broke their oaths) that agree on the broad strokes and many of the details of the Gardnerian mysteries. (Also many others that bear almost no resemblance beyond the name and some version of the Lesser Banishing Ritual of the Pentagram, but never mind that.) As to Scientology, Operation Clambake (xenu.net) had blown that wide open years before South Park popularized the basic outline of what’s charmingly known as “space opera”; these days it takes about ten minutes to fire up a browser and pull down a more-or-less complete set of doctrinal PDFs by way of your favorite nautical euphemism. Less if it’s well seeded.
“But these are just weird minority religions,” you say? “Knowing this stuff doesn’t actually harm my spiritual well-being, because I only care about the fivefold kisses when my SO’s involved and there’s no such thing as body thetans”? Sure, but the whole point of a mystery religion is selecting for conviction. Typically they’re gated by an initiation period measured in years and thousands of dollars, not to mention some truly hair-raising oaths; I don’t find it plausible that science broadly defined can do much better.
So I’m the only one here who actually took a hair-raising oath before making an account?
You’re not allowed to talk about the oath! Why am I the only one who seems able to keep it?
Because there are different factions at work, you naked ape.
Nah, I hear we traditionally save that for after you earn your 10,000th karma point and take the Mark of Bayes.
You probably need to get those 10K karma points from Main.
You are clearly right that conspiracy is hard. And yet, it is not impossible. Plenty of major events are caused by conspiracies, from the assassination of Julius Caesar to the recent coup in Thailand. In addition, to truly prevent a conspiracy, it is often necessary to do more than merely reveal it; if the conspirators have plausible deniability, then revealing (but not thwarting) the conspiracy can actually strengthen the plotters’ hands, as they can now coordinate more easily with outside supporters.
Successful conspiracies, like any other social organization, need incentive compatibility. Yes, it’s easy to find out the secrets of the Scientology cult. Not so easy to find out the secret recipe for Coca Cola, though.
Have you asked the people who are able to censor information on LW, or do you just assume this to be the case?
Do the people in charge of LW censor information that is neither dangerous nor spam?
I infer it’s the case from being a regular reader of LW. I don’t know if LW censors other types of information in part because spam is not a well defined category.
I think that would be far overstating the importance of this forum. If Eliezer/MIRI have some dark secrets (or whatever they consider to be dangerous knowledge), they surely didn’t make it to LW.
I would assume the main explanation to be just “conspiracies are cool”, the same reason why they pop up in all kinds of other fiction ranging from The X-Files to Babylon 5 to Deus Ex to the Illuminati card game to whatever.
A “conspiracy” may be usefully generalised as any group of people trying to get something done.
Oh come on. You’ve never steepled your fingers and pretended to be a Bond villain? Or, let’s say it, to be Gendo Ikari? Being an evil conspirator is fun.