But surely, good sir, common sense says that you should defect against CooperateBot in order to punish it for cooperating with DefectBot.
I thought you said people should tolerate tolerance :)
What if you had a script that hid downvotes? Or one that used an unknown algorithm to hide all downvotes and also some upvotes, so that a lack of visible upvotes could still mean you really got upvoted? Or maybe one that added fake downvotes, so that you would never have certain knowledge of having really been downvoted? Etc. Personally, I like your commenting and would enjoy more of it.
So what’s the program? Is it the one that runs every Turing machine of length up to 100 for BusyBeaver(100) steps, and gets the number BusyBeaver(100) by running a hardcoded copy of the machine that achieves it? That would be of length 100 + c for some constant c, but maybe you didn’t think the constant was worth mentioning.
Would a chess program that keeps a table of all the lines on the board, tracking whether each one is empty, and that uses that table as part of its move-choosing algorithm qualify? If not, I think we might be into qualia territory when it comes to making sense of how exactly a human is recognizing the emptiness of a line and that program isn’t.
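For concreteness, here’s a minimal sketch in Python of the kind of table I mean (the representation, per-rank and per-file piece counts, is just my own illustration, not taken from any real engine):

```python
# Sketch: track how many pieces sit on each rank and file, so
# "is this line empty?" is a constant-time lookup.

class LineTable:
    def __init__(self):
        self.rank_count = [0] * 8  # pieces currently on each rank
        self.file_count = [0] * 8  # pieces currently on each file

    def add_piece(self, rank, file):
        self.rank_count[rank] += 1
        self.file_count[file] += 1

    def remove_piece(self, rank, file):
        self.rank_count[rank] -= 1
        self.file_count[file] -= 1

    def is_rank_empty(self, rank):
        return self.rank_count[rank] == 0

    def is_file_empty(self, file):
        return self.file_count[file] == 0

table = LineTable()
table.add_piece(rank=0, file=4)  # say, a king on e1
print(table.is_rank_empty(0))    # False
print(table.is_file_empty(2))    # True
```

The point is that “is this line empty?” becomes a single table lookup the move-choosing code can consult, rather than something re-derived from the squares each time.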
Obviously, to really put the idea of people having bounded utility functions to the test, you have to forget about it solving problems of small probabilities and incredibly good outcomes, and focus on its most unintuitive consequences. For one, having a bounded utility function means caring arbitrarily little about the differences between sufficiently good outcomes. And all the outcomes could be certain, too. You could come up with all kinds of thought experiments involving purchasing huge numbers of years of happy life, or some other good, for a few cents. You know all of this, so I wonder why you don’t talk about it.
Also I believe that Eliezer thinks that an unbounded utility function describes at least his preferences. I remember he made a comment about caring about new happy years of life no matter how many he’d already been granted.
(I haven’t read most of the discussion in this thread or might just be missing something so this might be irrelevant.)
programming resources
Since you’ve mentioned this before, here’s an offhand idea for how to maybe get some: put an announcement in the sidebar or banner asking for developers (and maybe noting that LW is open source, so it’s fine to ask people to work for free), visible on every page, linking to a page with your list of wanted features and instructions for how to get involved. There could be a bunch of potential developers who don’t even know LW needs them, since the subject has only come up in some comment threads. Maybe you guys have already thought of this or know of a reason it wouldn’t work; just wanted to put it out there.
It tells us something bad about CFAR, right? But if they didn’t offer refunds, wouldn’t you be saying that that tells us something bad about them too?
That’s awesome, thanks.
I would have easily won that game (and maybe made a quip about free will when asked how...). All you need is some memorized secret randomness. For example, a randomly generated password that you’ve memorized, but you’d have to figure out how to convert it to bits on the fly.
Personally I’d recommend going to random.org, generating a few hexadecimal bytes (which are pretty easy to convert to both bits and numbers in any desired range), memorizing them, and keeping them secret. Then you’ll always be able to act unpredictably.
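For concreteness, the conversion might look like this in Python (the hex string is a made-up example, not a real secret, and the modulo step is slightly biased unless the range is a power of two):

```python
# Converting a few memorized hex bytes into bits and into a number in a
# desired range. The secret below is a made-up example; a real one should
# come from random.org or similar and never be shared.

secret_hex = "3fa91c"

# Hex to bits: each hex digit is exactly 4 bits.
bits = bin(int(secret_hex, 16))[2:].zfill(len(secret_hex) * 4)
print(bits)  # 001111111010100100011100

# Bits to a number in range [0, n): take some bits and reduce modulo n.
# (Modulo introduces a small bias unless n is a power of two.)
n = 6
print(int(bits[:8], 2) % n)  # 63 % 6 == 3
```

In practice you’d do this arithmetic in your head, of course; the code is just to show the conversion is mechanical.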
Well, unpredictably to a computer program. If you want to be unpredictable to someone who’s good at reading your next move from your face, you’d need some way to not know your next move before making it. One way would be to run something like an algorithm that generates the binary expansion of pi in your head, delaying the calculation of the next bit until the best moment. Of course, you wouldn’t actually choose pi, but something less well-known and preferably easier to calculate. I don’t know any such algorithms, and I guess if anyone knows a good one, they’re not likely to share. But if it were something like a pseudorandom bitstream generator that takes a seed, it could be shared, as long as you didn’t share your seed. If anyone’s thought about this in more depth and is willing to share, I’m interested.
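To illustrate the shareable-generator idea, here’s a toy sketch in Python, with constants I picked for easy mental arithmetic rather than quality; a small linear congruential generator like this is nowhere near cryptographically secure, so a determined observer could reconstruct it from enough outputs:

```python
# A tiny linear congruential generator: the update rule can be shared and
# even run in your head, as long as the seed stays secret. Constants are
# illustrative only; this is not a vetted or secure choice.

def mental_prng_bits(seed, n):
    """Yield n pseudorandom bits; keep the seed secret."""
    state = seed
    for _ in range(n):
        state = (state * 21 + 7) % 97  # simple enough for mental arithmetic
        yield state % 2  # emit the low bit of the state

print(list(mental_prng_bits(seed=42, n=10)))
```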
A couple of years ago I deliberately used that strategy of reading an article again and again on successive days to grasp some hard sigfpe posts and decision theory posts here on LW. For some of them it took several days, and some of them I never understood, but on the whole it worked very well. I always thought the reason it works is the sleep between sessions. (I still think this is a very useful technique but haven’t used it much, due to general akrasia.)
He could have just been talking about trolling in the abstract. And even if not, after reading a bit of his history, his “trolling”, if any, is at most at the level of rhetorical questions. I’m not really a fan of his commenting, but if he’s banned, I’d say “banned for disagreement” will be closer to the mark as a description of what happened than “banned for trolling”, though not the whole story.
I’d love to see some blind testing of this brainwave stuff to see whether it’s more than placebo.
Doesn’t seem too hard to do. Just do a blind comparison of genuine binaural beats, carefully crafted to induce a state of concentration or whatever, against random noise or misadjusted binaural beats. It probably requires two people, though: the tester, and someone other than the tester to create the audio files and hand them over without saying which is which. The tester should preferably be a binaural beats virgin: they should never have heard binaural beats before.
Something along the lines of the above would probably work, but I haven’t thought about the experimental protocol in detail. If someone actually goes ahead with this, obviously they’re gonna have to flesh it out and agree on a more precise protocol.
Personally, I couldn’t be the tester because I’ve listened to binaural beats before and might recognize them. I might be able to be the fake audio file creator, but I’d have to look into it more to make sure I can create something that doesn’t accidentally have binaural beats in it (something like the sketch below).
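For what it’s worth, generating the files looks straightforward. Here’s a sketch assuming numpy; the file names, frequencies, and the identical-tones control are all just my guesses at a reasonable setup:

```python
# Sketch: write a genuine binaural-beat file (slightly different pure tones
# in each ear) and a control file (identical tones, so no beat). Frequencies
# and durations are illustrative choices, not a vetted protocol.

import wave
import numpy as np

def write_stereo_tones(filename, left_hz, right_hz, seconds=60, rate=44100):
    t = np.arange(int(seconds * rate)) / rate
    left = 0.5 * np.sin(2 * np.pi * left_hz * t)
    right = 0.5 * np.sin(2 * np.pi * right_hz * t)
    samples = (np.stack([left, right], axis=1) * 32767).astype(np.int16)
    with wave.open(filename, "wb") as f:
        f.setnchannels(2)   # stereo: binaural beats need separate ears
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(rate)
        f.writeframes(samples.tobytes())

write_stereo_tones("genuine.wav", 200.0, 210.0)  # 10 Hz beat
write_stereo_tones("control.wav", 200.0, 200.0)  # no beat
```

The genuine file plays slightly different pure tones in each ear (the beat only exists inside your head), while the control plays the same tone in both; whether that’s a good enough control is exactly the kind of protocol detail to agree on in advance.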
Well? Go ahead.
The key steps are whether on a molecular level, no more than one original person has been mapped to one frozen brain
Maybe I’m missing something, but even with cremation, on a molecular level probably no more than one person gets mapped to one specific pile of ash, because it would be a huge coincidence if cremating two different bodies ended up creating two identical piles of ash.
Maybe it’s just that volunteers who will actually do any work are hard to find. Related.
Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.
After I had read your post but before I had read IlyaShpitser’s comment, I thought that the particular model with a single integer chain was in fact a model of first-order arithmetic, so the post was definitely misleading to me in that respect.
Thanks for the link to workflowy. Here are some thoughts on it.
Problems:
If I use it to keep a todo/projects list, I can’t assign priorities or dates to sort and filter by. (Hashtags aren’t enough for this.)
Can’t have the same item linked from two different places. It’s a tree, not a graph. I’ve read some people say they’ve exported their brains to workflowy, but your brain is more of a graph than a tree. (Duplication isn’t enough for this, because changes to the duplicate aren’t applied to the original.)
Regardless, it’s definitely better than the mess of text files I’m currently using.
The tagging and searching functionality seems to make it possible to implement the “I’m bored and have 20 minutes, tell me what to do” function that another commenter talked about. Just tag the relevant subtasks with #nextaction or something, and then search for that tag later.
For decision-theoretic reasons, the dark lords of the matrix give superhumanly good advice about social organization to religious people, in ways that look from inside the simulation like supernatural revelations; non-religious people don’t have access to these revelations, so when they try to do social organization it ends in disaster.
Obviously.
If you’re willing to say “X” whenever you believe X, then if you say “I believe X” but aren’t willing to say “X”, your statement that you believe X is actually false (your unwillingness to say “X” shows you don’t believe it). But in conversations, the rule that you’re willing to say everything you believe doesn’t hold.
RSS feeds for users’ comments seem to be broken by the update to how they display on the page. To see how, just look at e.g. http://lesswrong.com/user/Yvain/overview/.rss : it contains a bunch of comments from people other than Yvain. This is pretty annoying; hope it’s fixed soon. I’m subscribed to tens of users’ comment feeds and it’s the main way I read LW. Today all of those feeds got a bunch of spurious updates from the other people’s comments now shown on everyone’s comments page.
Also, some months back there was another change to user pages that broke all my RSS feeds too; I had to resubscribe to everyone’s /user/theirname/comments page where I had previously subscribed to the /user/theirname page. I wish updates would never break RSS feeds; I’m sure I’m not the only one who makes significant use of them.