Would you mind publishing the protocol?
Dr. Birdbrain
It has been 3 months, is there an update?
I like (and recommend) creatine. It has a long record in the research literature, and its effects on exercise performance are well known. More recent research is finding cognitive benefits—anecdotally I can report I am smarter on creatine. It also blunts the effects of sleep deprivation and improves blood sugar control.
I strongly recommend creatine over some of the wilder substances recommended in this post.
Actually I think the explicit content of the training data is a lot more important than whatever spurious artifacts may or may not hypothetically arise as a result of training. I think most of the AI doom scenarios that say “the AI might be learning to like curly wire shapes, even if these shapes are not explicitly in the training data nor loss function” are the type of scenario you just described, “something that technically makes a difference but in practice the marginal gain is so negligible you are wasting time to even consider it.“
The “accidental taste for curly wires” is a steel-man version of the paperclip maximizer as I understand it. Eliezer doesn’t actually think anybody will be stupid enough to say “make as many paper clips as possible”; he worries that somebody will set up the training process in some subtly incompetent way, and that the resulting AI will aggressively lie about the fact that it likes curly wires until it is released, having learned to hide from interpretability techniques.
I definitely believe alignment research is important, and I am heartened when I see high-quality, thoughtful papers on interpretability, RLHF, etc. But then I hear Eliezer worrying about absurdly convoluted scenarios of minimal probability, and I think wow, that is “something that technically makes a difference but in practice the marginal gain is so negligible you are wasting time to even consider it”, and it’s not just a waste of time, he wants to shut down the GPU clusters and cancel the greatest invention humanity ever built, all over “salt in the pasta water”.
Something that I have observed about interacting with ChatGPT is that if it makes a mistake, and you correct it, and it pushes back, it is not helpful to keep arguing with it. An argument in the chat history essentially serves as a prompt for more argumentative behavior. It is better to start a new chat and, the second time around, explain the task in a way that avoids the initial misunderstanding.
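The same principle applies if you are scripting against the model rather than using the chat interface. Here is a minimal sketch (assuming the OpenAI Python SDK; the model name and the prompt are purely illustrative) of starting over with a reworded prompt instead of appending corrections to an already-argumentative history:

```python
# Minimal sketch: start a fresh conversation with a clearer task description
# instead of piling corrections onto an argumentative chat history.
from openai import OpenAI

client = OpenAI()

def fresh_attempt(task_description: str) -> str:
    """Send the task as the first message of a brand-new chat."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": task_description}],
    )
    return response.choices[0].message.content

# If the first attempt misunderstood the task, reword the task to preempt that
# misunderstanding and ask again from scratch, rather than arguing in the old thread.
print(fresh_attempt("Summarize the abstract below in exactly three bullet points: ..."))
```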
I think it is important that as we write letters and counter-letters we keep in mind that every time we say “AI is definitely going to destroy humanity”, and this text ends up on the internet, the string “AI is definitely going to destroy humanity” very likely ends up in the training corpus of a future GPT, or at least can be seen by some future GPT that is allowed free access to the internet. All the associated media hype and podcast transcripts and interviews will likely end up in the training data as well.
The larger point is that these statistical models are in many ways mirrors of ourselves and the things we say, especially the things we say in writing and in public forums. The more we focus on the darkness, the darker these statistical mirrors become. It’s not just about Eliezer’s thoughtful point that the AI may not explicitly hate us nor love us but destroy us anyway. In some ways, every time we write about it we are increasing the training data for this possible outcome, and the more thoughtful and creative our doom scenarios, the more thoughtfully and creatively destructive our statistical parrots are likely to become.
Creatine is an incredibly powerful intervention. It improves energy management and decreases fatigue at the cellular level, in every single one of your cells. I expect you will notice cognitive enhancements as well as improved energy. However, I personally experience sleep disruptions if I take it on days when I do not exercise. Your mileage may vary, just a note that you might experiment with how much you take and how often you take it if you experience sleeplessness, night sweats, and/or vivid nightmares :)
In my personal experience, cutting carbs makes strength training harder. Strength training relies on the anaerobic (glycolytic) pathway, which runs on glucose and glycogen, not ketones. Technically your body can use gluconeogenesis to make glucose (and slowly refill glycogen) from non-carbohydrate sources, mostly protein and the glycerol from fat, but this is not a fast process, and in my experience it feels pretty bad the longer you try to rely on it. I recommend checking out Dr. Mike Israetel on YouTube, specifically his video series “Fat Loss Made Easy” and “Muscle Building Made Easy”.
I wear my backpack on my front rather than my back, and hug it as I run.
I started doing this after a trip to Tokyo, during which it was brought to my attention that it was rude of me to get on the subway with my backpack on my back, where it becomes a hazard to the people around me because I cannot see what it is doing behind me.
I like Habitica.
I don’t know enough about your situation to say anything productive. I know that the PhD journey can be confusing and stressful. I hope you are able to have constructive conversations with the profs at your PhD program.
I wonder if it in fact provides useful orientation?
Sometimes people seem clueless just because we don’t understand them, but that doesn’t mean they are in fact clueless.
Does this framework actually explain how diffusion of responsibility works?
This framework explicitly advises ICs to slack off and try to attain “political playing cards” in an attempt to leapfrog their way into senior management. I wouldn’t consider that to be a valuable form of orientation.
In the absence of a desire to become part of the “sociopath class”, the model seems to advise ICs to accept their role and do the bare minimum, seemingly discouraging them from aspiring to the “clueless” middle-management class, which the framework treats as a regression from the IC position. That doesn’t seem like valuable career advice to me.
I don’t see how it is useful. Mostly, it seems to be an emotional appeal on multiple levels (“your manager is clueless, the C-suite contains sociopaths”) that also preys on people’s insecurities (“you are a loser, in the sense of the article; be embarrassed about aspiring to higher impact as a middle manager, since that is a regression to cluelessness”).
I generally agree that a certain amount of cynicism is needed to correctly function in society, but this particular framework seems to be excessively cynical, inaccurate, and its recommendations seem counterproductive.
It seems to me that the SCL framework is unnecessarily cynical and negative. When I look out at my company and others, the model seems neither accurate nor useful.
The framework suggests that an IC/loser can get promoted to senior management/sociopath by underperforming and “acquiring playing cards”. I have never seen anybody get promoted from IC to senior management, much less by first underperforming. I have of course heard anecdotes of underperforming ICs who get promoted to middle management, but I have never heard of the leapfrog to senior management. Also, I don’t believe it’s possible to fail your way into middle management under the highly ritualistic management practices of the megacorps of Silicon Valley. In fact, the promotion processes seem specifically designed to prevent people from failing their way up the ladder.
When I first read about this framework on Venkatesh Rao’s blog, I got the distinct impression that the appeal of the framework was not its accuracy or even usefulness, but that he was essentially negging his entire audience. Most of his audience are either ICs or young professionals in middle management, or worse, ICs who aspire to become middle managers. The framework literally says that ICs are losers and that wanting to become a middle manager is a regression. Is he equating sociopaths with entrepreneurs, playing to Silicon Valley’s lionization of entrepreneurs?
There are many things happening here, but I strongly suspect that the appeal of the framework has more to do with making the target audience feel embarrassed about themselves, or with playing to existing cultural tropes, than with the framework being accurate or useful.
Introduction to Reinforcement Learning
Thanks—could you elaborate on what was fixed? I am a newbie here. Was it something I could have seen from the preview page? If so, I will be more careful to avoid creating unnecessary work for mods.
Scott Alexander’s “Meditations on Moloch” presents several examples of PD-type scenarios in which honor/conscience-style mechanisms fail. Generally, honor and conscience simply provide additive terms to the entries of the payoff matrix. These mechanisms can shift the Nash equilibrium (N.E.) to a different location, but they don’t guarantee that the resulting N.E. will not produce negative effects of other kinds. This post was mainly meant to provide a (hopefully) intuitive explanation of N.E.
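To make the additive-term point concrete, here is a minimal sketch (the payoff numbers and the size of the guilt penalty are illustrative assumptions, not taken from Scott’s post): adding a conscience penalty for defecting shifts the pure-strategy N.E. of a Prisoner’s Dilemma from mutual defection to mutual cooperation, but nothing about the construction guarantees that the shifted equilibrium is benign in general.

```python
# Minimal sketch: an additive "conscience" term shifts the pure-strategy Nash equilibrium.
import itertools

def pure_nash_equilibria(payoffs):
    """Return all pure-strategy N.E. of a 2x2 game.
    payoffs[(i, j)] = (row player's payoff, column player's payoff); higher is better."""
    equilibria = []
    for i, j in itertools.product((0, 1), repeat=2):
        row_best = payoffs[(i, j)][0] >= max(payoffs[(k, j)][0] for k in (0, 1))
        col_best = payoffs[(i, j)][1] >= max(payoffs[(i, k)][1] for k in (0, 1))
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

# Standard Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
pd = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}
print(pure_nash_equilibria(pd))  # [(1, 1)] -- mutual defection

# Conscience modeled as an additive guilt penalty of 3 for choosing to defect.
guilt = 3
pd_with_conscience = {
    (i, j): (pd[(i, j)][0] - guilt * i, pd[(i, j)][1] - guilt * j) for (i, j) in pd
}
print(pure_nash_equilibria(pd_with_conscience))  # [(0, 0)] -- mutual cooperation
```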
A Google search suggests Desoxyn might just be a brand of pharmaceutical-grade meth.