Could you be a bit more detailed? That way everyone can learn from your experience.
wobster109
Yes, I designed them, and they were verified by GK's engineers. The individual nanobots are all connected to GPS and get up-to-date information from the CDC. This sort of detail is how I lost tons of time. ^^
I know that in real life that would be akin to letting the AI out of the box. However, Mr. Eliezer's basic rules say it doesn't count. ^^
What value do you assign to your leisure time when deciding if something is worth your time? For example, do I want to spend 2 hours building something, or should I hire someone to do it? It feels incorrect to use my hourly pay, because if I save time on a Sunday, I'm not putting that time to work. I'm probably surfing the internet or going to the gym, the sort of things people generally do in leisure time. It has value to me, but not as much as an hour of work. What do you suggest?
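One way to frame the question is as a simple comparison: hire out the task whenever the quote is less than the value of the leisure hours it saves. This is just a sketch, and the leisure-fraction parameter (how much a leisure hour is worth relative to a work hour) is exactly the unknown being asked about — the numbers below are illustrative assumptions, not an answer:

```python
def should_hire(hours_saved, quote, work_wage, leisure_fraction=0.5):
    """Return True if hiring out the task beats doing it yourself.

    hours_saved: leisure hours you'd spend doing the task yourself
    quote: what the contractor charges
    work_wage: your hourly pay at work
    leisure_fraction: assumed value of a leisure hour relative to a work hour
    """
    leisure_value = hours_saved * work_wage * leisure_fraction
    return quote < leisure_value

# Example: 2 hours of your Sunday vs. a $40 quote, at a $50/hr wage.
# Valuing leisure at half wage: 2 * 50 * 0.5 = $50, so $40 is worth paying.
print(should_hire(2, 40, 50))  # prints True
```

The whole question reduces to picking `leisure_fraction`; everything else is arithmetic.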
I played as AI in AI Box, and it was generally frustrating all around.
I kind of agree upon reflection. Tuxedage’s ruleset seems tailored for games where there is money on the line, and in that case it feels very unfair to say GK can leave right away. GK would be heavily incentivized to leave immediately, since that would get GK’s charity a guaranteed donation.
In Tuxedage's rule set, if the gatekeeper leaves before 2 hours, it counts as an AI win. So it's a viable strategy. However...
I am sure that it would work against some opponents, but my feeling is it would not work against people on Less Wrong. It was a good try though.
I’ll try hard! ^^
I went to a random forum somewhere and posted for an opponent. GK responded with an email address, and we worked out the details via email. We’ll be holding our round in a secret, invite-only IRC channel.
It looks like if you offer to play as AI, you’ll have no trouble finding an opponent. Tuxedage said in one of his posts that there are 20 gatekeeper players for each AI player.
However... since I encountered GK on a different forum, not LW, I insisted on having a third party interview GK as a safety measure. I have known people who were vengeful or emotionally fragile, and I wanted to take no chances there.
I agree with JoshuaZ. I find your solution to be a severe hindrance in real life. I am the SQL expert on my team, and my (male) coworker is the surgeries expert, and my (male) colleague across the hall is the infectious diseases expert. We all work together to make the best product possible. How can we get anything done if we are segregated by gender?
I don’t see why I need “implicit agreement from all men”. My ideas have merit because they reduce medical errors and save lives. Real-life results are the judge of that, not men. I also do not see why I need “agreement from all women”. They are not my coworkers, and they are free to live their lives as they wish. That said, I am a developer in a project meeting at a tech company. Safe to say, I want to be treated as an equal.
Finally, I don’t see what contributing to a great company has to do with “acting like men” or “pretending to be men”. My goal isn’t to “eradicate femininity”; it is to make a great product that will help people. If you think that is inherently masculine, then you’ll have to explain. So why don’t you start by telling me what “masculine” and “feminine” mean to you?
On Sunday at 11 AM Eastern and 8 AM Pacific*, I will be playing a round of AI Box with a person who wishes to remain anonymous. I will be playing as AI, and my opponent will be playing as Gatekeeper (GK). The loser will pay the winner $25, and will also donate $25 to the winner’s charity of choice. The outcome will be posted here, and maybe a write-up if the game was interesting. We will be using Tuxedage’s ruleset with two clarifications:
GK must read and make a reasonable effort to understand AI’s text, but does not need to make an extraordinary effort to understand things such as heavily misspelled text or intricate theoretical arguments.
The monetary amount will not be changed after the game is concluded.
The transcript will not be made public, sorry. We are looking for a neutral third party who will agree beforehand to read and verify the transcript. Preferably someone who has already played in many games, who will not have their experience ruined by reading someone else’s transcript.
I habitually give the Eastern and Pacific times. This does not mean GK is in one of those two time zones.
I agree 100%. That’s why we have a limit of half an hour each day, no bonus points for doing more. Our last contest was “logging the most steps with Fitbit”, and it ended up wasting a lot of time with no health benefits. Lesson learned!
I'm going to give you some advice as a professional woman. I very deeply resent when male colleagues compete with each other to put on a display for women. This goes for social contexts (rationalists' meetups) in addition to professional contexts (work meetings). When women are trying to talk about code or rationality or product design, the men, rather than thinking about their contributions, are preoccupied with "projecting male presence and authority". What does male presence even mean? Why does authority have anything to do with men, instead of, you know, being the most knowledgeable about the topic?
I’ll tell you how it comes across. It comes across as focusing on the other men and ignoring the women’s contributions. Treating the men as rivals and the women as prizes. Sucky for everyone all around. Instead of teaching boys to be “sexually attractive”, why don’t you teach them to include women in discussions and listen to them same as anyone else? Because we’re not evaluating your sons for “sexual attractiveness”. We’re just trying to get our ideas heard.
Discord and I both grew up in a math contest culture, so every month or so we pick something we "should" be doing and turn it into a contest. This month it's getting a half hour of exercise a day. At the end of the month the winner "wins" something from the loser. It's been surprisingly effective. I go to the gym about 5x as often as before. Works best with a daily cap. If there's no daily cap, then it turns into an arms race (such as 5 hours of exercise in one day). This sort of thing, that we "should" do but like to hand off to tomorrow-self, really is a lot easier when you can do it with someone. Someone who will call you and taunt you. :)
Also, could you (or anyone) clarify the part of the ruleset requiring the gatekeeper to give "serious consideration"? For example, if I am gatekeeper and the AI produces a complicated physics argument, do I have to try hard to understand physics that is beyond my education? (That interpretation is hard to imagine.) Or if the AI produces an argument, can the gatekeeper change the topic without responding to it? (Of course assuming the gatekeeper has actually read it and is not intentionally avoiding understanding it, by reading it backwards or something.)
Since you’ve played it so many times, who do you recommend playing as for your first game?
Agree 100%. Wait But Why is very accessible. Previous posts have focused on the Fermi Paradox, procrastination, sentience/consciousness, religion, and immortality. It reads like a very friendly, very accessible Less Wrong.
Can it be non-LW material? I found this to be an excellent no-background-needed introduction to AI. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
I think this is what Anna was getting at when she encouraged me to be a wealthy donor rather than an AI researcher. It’s hard to give up the idea of being Michelangelo, being remembered for centuries in history books. But he wouldn’t’ve managed without his patrons.
Wait But Why wasn’t too popular last time around, but I find the site really interesting, so I’m trying again. I don’t agree with everything there, but I do honestly think it’s interesting to read through. Here we go!
Discussion topic of the week from a few weeks back: How long would you live if given the arbitrary choice? http://waitbutwhy.com/table/how-long-would-you-live-if-you-could-choose-any-number-of-years
This interesting article turned up on Wait But Why: http://waitbutwhy.com/2014/10/religion-for-the-nonreligious.html#comment-264276
A lot of it reads a lot like stuff on here. Here’s a quote: “On Step 1, I snap back at the rude cashier, who had the nerve to be a dick to me. On Step 2, the rudeness doesn’t faze me because I know it’s about him, not me, and that I have no idea what his day or life has been like. On Step 3, I see myself as a miraculous arrangement of atoms in vast space that for a split second in endless eternity has come together to form a moment of consciousness that is my life…and I see that cashier as another moment of consciousness that happens to exist on the same speck of time and space that I do. And the only possible emotion I could have for him on Step 3 is love.”
In real life the AI is presumed to be smart enough to design nanobots that would do their own thing. It’s a direct example from Mr. Eliezer’s rules.