Since we’re talking about Tulpas, I feel obligated to mention that I have one. In case anyone wants anecdata.
Tuxedage
Why does being able to speak Japanese imply geekdom?
pm’ed.
I’ll actually take you up on your offer. I pledge to send you $150 if I don’t sign up for cryonics within four years’ time.
Congratulations on completing the entire thing!
I personally dislike the change, but I trust that you guys have changed your names for a reason. I think I may be reacting negatively due to the nostalgia factor.
I do apologize for the lack of logs (I’d like to publish them, but we agreed beforehand not to), and I admit you have a valid point: it’s entirely possible that this experiment was faked. But if I really wanted to fake the experiment in order to convince people about the dangers of failing gatekeepers, wouldn’t it have been better for me to say I had won? After all, I lost this experiment.
This is one of the few that have been published: http://lesswrong.com/lw/9ld/ai_box_log/
Yeah unfortunately the logs are secret. Sorry.
yeah
I think for a superintelligence it would be a piece of cake to hack a human
although I guess I’m Cpt. Obvious for saying that here :)
accolade, I actually have no idea what the consensus is, now that the experiment was won by EY
We should do a poll or something
absolutely. I’m surprised that hasn’t been done yet
Poll: Do you think a superintelligent AGI could escape an AI-Box, given that the gatekeepers are highly trained in resisting the AI’s persuasive tactics, and that the guards are competent and organized? [pollid:394]
I attempted the AI Box Experiment (and lost)
Very well written; congratulations on having the courage to speak on this in front of very religious people.
Obligatory “took the survey” post.
Utilitarianism Subreddit
I agree; it must be done for science!
It would be an interesting experiment on the prevalence of the halo effect in the LW community.
It’s surprising how much information relevant to the Singularity Glenn Beck is familiar with, including people like Luke, Anna, Eliezer, and Michael.
Could Glenn be part of LessWrong? Is there a possibility that Glenn Beck has read the LW Sequences?
″...smiling down from their huge piles of utility...” Oh my God.
Similarly. My previous beliefs about Glenn Beck pointed towards a devout Christian fundamentalist. I would not have expected him to support the Singularity, much less take it seriously. It seems I have to update my beliefs quite a bit.
When you talk about “AI research”, are you specifically referring to narrow AI or strong AI? These two fields are very different, with very different requirements and skillsets.
If you happen to be referring to Strong AI, aka Friendly AGI, check out “So you want to be a Seed AI Programmer”, assuming you haven’t already seen it. http://web.archive.org/web/20101227203946/http://www.acceleratingfuture.com/wiki/So_You_Want_To_Be_A_Seed_AI_Programmer
Thanks for the feedback.
The term x-rationalist is generally used to describe LessWrong Rationalists, in the style of Eliezer Yudkowsky. It is used to differentiate them from other “ordinary” Rationalists, such as the atheist Rationalist community, etc.
Frankly, I have no idea where the term originated. It seems to pop up every once in a while on LW, though.
Without going into detail: overall, my use of Tulpas has benefited me more than it has hurt me, although it did hurt me somewhat in my early childhood, when I would accidentally create Tulpas without realizing they were part of my imagination (and instead imagined them to come from an external source). It’s very difficult to say whether the same would apply to anyone else, since Your Mileage May Vary.
I also suspect creating Tulpas comes significantly more easily to some people than others, and this may affect the cost-benefit analysis. Tulpas come very naturally to me; as I’ve mentioned, my first Tulpa was completely accidental, and I did not even realize it was a Tulpa until a year or two later. On the other hand, I’ve read posts from people on /r/Tulpa who have spent hours daily trying to force Tulpas without actually managing to create them. If I had to spend an hour every day to obtain a Tulpa, I wouldn’t bother; there’s no way I’m willing to sacrifice that much time for one. But the fact that I can will a Tulpa into existence relatively easily helps.
Another variable that may affect whether having a Tulpa is worth it is whether you have social desires that are nearly impossible to satisfy through non-Tulpa outlets such as meatspace friends. In my case, I do, and I satisfy these desires through Tulpas rather than forcing another human being to conform to my expectations. This also improves my ability to relate to others in real life, since I more easily accept their imperfections. I suspect that if you’re cognitively similar to me, you may benefit from Tulpas. I can’t think of anything else right now, but if you ask something more specific, it may trigger more thoughts on the matter.