Rational Animations’ main writer and helmsman
Writer
The King and the Golem—The Animation
Update:
That Alien Message—The Animation
I think it would be very interesting to see you and @TurnTrout debate with the same depth, preparation, and clarity that you brought to the debate with Robin Hanson.
Edit: Also, tentatively, @Rohin Shah because I find this point he’s written about quite cruxy.
The world is awful. The world is much better. The world can be much better: The Animation.
Rational Animations’ intro to mechanistic interpretability
For me, perhaps the biggest takeaway from Aschenbrenner’s manifesto is that even if we solve alignment, we still face an incredibly thorny coordination problem between the US and China: each is massively incentivized to race ahead and develop military power using superintelligence, putting both countries and the rest of the world at immense risk. And I wonder whether, having seen this in advance, we can sit down and solve this coordination problem in ways that offer a better expected outcome than the “race ahead” strategy, without risking a short period of incredibly volatile geopolitical instability in which both nations develop, and possibly use, never-before-seen weapons of mass destruction.
Edit: although I can see how attempts at intervening in any way and raising the salience of the issue risk making the situation worse.
S-Risks: Fates Worse Than Extinction
How people stopped dying from diarrhea so much (& other life-saving decisions)
Rational Animations offers animation production and writing services!
Noting that additional authors still don’t carry over when the post is a cross-post, unfortunately.
I’d guess so, but with AGI we’d go much, much faster. The same goes for everything you’ve mentioned in the post.
Turn everyone hot
If we can do that due to AGI, almost surely we can solve aging, which would be truly great.
He was looking for someone in Japan with experience with guns in games, so he searched Twitter and found someone posting gun-reloading animations.
Having interacted with animation studios and being generally pretty embedded in this world, I know that many studios are doing similar things, such as Twitter callouts if they need some contractors fast for some projects. Even established anime studios do this. I know at least two people who got to work on Japanese anime thanks to Twitter interactions.
I hired animators through Twitter myself, using a similar process: I see someone who seems really talented → I reach out → they accept if the offer is good enough for them.
If that’s the case for animation, I’m pretty sure it often applies to video games, too.
Thank you! And welcome to LessWrong :)
The comments under this video seem okayish to me, but maybe that’s because I’m calibrated on worse stuff under past videos, which isn’t necessarily very good news for you.
The worst I’m seeing is people grinding their own axes, which isn’t necessarily indicative of misunderstanding. But there are also regular commenters who are leaving pretty good comments:
The other comments I see range from amused and kinda joking about the topic to decent points overall. These are the top three in terms of popularity at the moment:
The True Story of How GPT-2 Became Maximally Lewd
Stories of AI takeover often involve some form of hacking. This seems like a pretty good reason to use (maybe relatively narrow) AI to improve software security worldwide. Luckily, the private sector should cover much of this out of financial self-interest.
I also wonder if the balance of offense vs. defense favors defense here. Usually, recognizing is easier than generating, and this could apply to malicious software as well. We may end up with excellent AI antiviruses devoted to the recognizing part, while AI attackers would have to do the generating part.
[Edit: I’m unsure about the second paragraph here. I’m feeling better about the first paragraph, especially given slow multipolar takeoff and similar, not sure about fast unipolar takeoff]
Also I don’t think that LLMs have “hidden internal intelligence”
I don’t think Simulators claims or implies that LLMs have “hidden internal intelligence” or “an inner homunculus reasoning about what to simulate”, though. Where are you getting it from? This conclusion makes me think you’re referring to this post by Eliezer and not Simulators.
The two Gurren Lagann movies cover all the events of the series and, based on my recollection, should be better animated. Also from memory: the first has a pretty central take on scientific discovery, while the second is more about ambition and progress, though both probably have at least a bit of each. It’s not by chance that some e/accs have profile pictures inspired by that anime. I feel like people here might disagree with part of the message, but I think it speaks pretty forcefully to issues we care about here. (Also, it was cited somewhere in HPMoR, but for humor.)