My website is here.
As a general rule, I try to minimise my phone screen time and maximise my laptop screen time. I can do every “productive” task faster on a laptop than on my phone.
Here are some object-level things I do that I find helpful and that I haven’t yet seen discussed.
Use a very minimalist app launcher on my phone, one that makes searching for apps a conscious decision.
Use a greyscale filter on my phone (which is hard to turn off), as this makes doing most things on my phone harder.
Every time I get a notification I didn’t need to get, I instantly disable it. This also generalises to unsubscribing from emails I don’t need to receive.
What is the error message?
Yep, this sounds interesting! My suggestion for anyone wanting to run this experiment would be to start with SAD-mini, a subset of SAD with the five most intuitive and simple tasks. It should be fairly easy to adapt our codebase to call the Goodfire API. Feel free to reach out to me or @L Rudolf L if you want assistance or guidance.
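For concreteness, the adapter would just be a prompt-to-completion callable along the lines of the sketch below. Treat it as a rough sketch: the `goodfire` client usage is my best guess at their SDK, the model name is a placeholder, and how exactly SAD consumes a provider should be checked against our codebase and the Goodfire docs.

```python
import os

import goodfire  # Goodfire SDK; exact interface is an assumption, check their docs


def goodfire_generate(prompt: str, model: str = "meta-llama/Meta-Llama-3.1-8B-Instruct") -> str:
    """Prompt -> completion callable of the kind an eval harness can wrap (placeholder model name)."""
    client = goodfire.Client(api_key=os.environ["GOODFIRE_API_KEY"])
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # NOTE: response structure assumed to mirror OpenAI-style chat completions.
    return response.choices[0].message.content
```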
How do you know what “ideal behaviour” is after you steer or project out your feature? How would you differentiate between a feature with sufficiently high cosine sim to a “true model feature” and the “true model feature” itself? I agree you can get some signal on whether a feature is causal, but would argue this is not ambitious enough.
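(For reference, by “project out your feature” I mean an ablation along these lines; a minimal sketch with tensor names of my choosing:)

```python
import torch


def project_out(acts: torch.Tensor, feature_dir: torch.Tensor) -> torch.Tensor:
    """Remove the component of each activation vector along feature_dir.

    acts: (batch, d_model) activations; feature_dir: (d_model,) candidate feature direction.
    """
    d = feature_dir / feature_dir.norm()  # unit vector
    coeffs = acts @ d                     # (batch,) projection coefficients
    return acts - coeffs[:, None] * d     # activations with the direction ablated
```

The question is how to score the model’s behaviour after this intervention without already knowing the ground-truth feature.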
Yes, that’s right; see footnote 10. We think that Transcoders and Crosscoders are directionally correct, in the sense that they leverage more of the model’s functional structure via activations from several sites, but agree that their vanilla versions suffer from similar problems to regular SAEs.
Also related to the idea that the best linear SAE encoder is not the transpose of the decoder.
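A toy illustration of that point (setup entirely mine): when decoder feature directions overlap, reading codes off with the decoder transpose is lossy, whereas the least-squares optimal linear read-off (the pseudoinverse) recovers them.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_feats, n_samples = 8, 5, 100

# Toy decoder: unit-norm but non-orthogonal feature directions (columns).
D = rng.normal(size=(d_model, n_feats))
D /= np.linalg.norm(D, axis=0, keepdims=True)

# Sparse non-negative codes and the activations they decode to.
f_true = rng.random((n_feats, n_samples)) * (rng.random((n_feats, n_samples)) < 0.3)
X = D @ f_true

f_transpose = D.T @ X            # "encoder = decoder transpose"
f_pinv = np.linalg.pinv(D) @ X   # optimal linear read-off in the least-squares sense

print("decoder-transpose error:", np.abs(f_transpose - f_true).mean())  # > 0 due to interference
print("pseudoinverse error:    ", np.abs(f_pinv - f_true).mean())       # ~0 (D has full column rank)
```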
An LW feature that I would find helpful is an easy-to-access list of all links cited by a given post.
Agreed that this post presents the altruistic case.
I discuss both the money and status points in the “career capital” paragraph (though perhaps should have factored them out).
your image of a man with a huge monitor doesn’t quite scream “government policymaker” to me
In fact, this mindset gave me burnout earlier this year.
I relate pretty strongly to this. I think almost all junior researchers are incentivised to ‘paper grind’ for longer than is correct. I do think there are pretty strong returns to having one good paper for credibility reasons; it signals that you are capable of doing AI safety research, and thus makes it easier to apply for subsequent opportunities.
Over the past 6 months I’ve dropped the paper grind mindset and am much happier for it. Notably, were it not for short-term grants, where needing to visibly make progress is important, I would have made this update sooner. Another take I have is that if you have the flexibility to do so (e.g. by already having stable funding, perhaps via being a PhD student), front-loading learning seems good. See here for a related take by Rohin. Making progress on hard problems requires understanding things deeply, in a way that making progress on easier problems (the kind you could complete during e.g. MATS) might not.
You might want to stop using the Honey extension. Here are some shady things they do, beyond the usual:
Steal affiliate marketing revenue from influencers (who they also often sponsor), by replacing the genuine affiliate referral cookie with their own.
Deceive customers by deliberately withholding the best coupon codes while claiming to have found the best coupon codes on the internet; partner businesses control which coupon codes Honey shows consumers.
any update on this?
thanks! added to post
UC Berkeley has historically had the largest concentration of people thinking about AI existential safety. It’s also closely coupled to the Bay Area safety community. I think you’re possibly underrating Boston universities (i.e. Harvard and Northeastern, as you say the MIT deadline has passed). There is a decent safety community there, in part due to excellent safety-focussed student groups. Toronto is also especially strong on safety imo.
Generally, I would advise prioritising advisors with aligned interests over universities (this relates to Neel’s comment about interests), though intellectual environment does of course matter. When you apply, you’ll want to name, in your statement of purpose, some advisors you might want to work with.
Is there a way for UK taxpayers to tax-efficiently donate (e.g. via Gift Aid)?
Agreed. A related thought is that we might only need to be able to interpret a single model at a particular capability level to unlock the safety benefits, as long as we can make a sufficient case that we should use that model. We don’t care inherently about interpreting GPT-4, we care about there existing a GPT-4 level model that we can interpret.
Tangentially relevant: this paper by Jacob Andreas’ lab shows you can get pretty far on some algorithmic tasks by just training a randomly initialized network’s embedding parameters. This is in some sense the opposite to experiment 2.
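For intuition, the setup amounts to something like the sketch below (the toy architecture is mine, not the paper’s): keep every weight frozen at its random initialisation and optimise only the embedding table.

```python
import torch
import torch.nn as nn


class TinyModel(nn.Module):
    """Toy stand-in for a randomly initialised network."""

    def __init__(self, vocab_size=1000, d_model=64, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.backbone = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, n_classes)
        )

    def forward(self, tokens):  # tokens: (batch, seq)
        return self.backbone(self.embed(tokens).mean(dim=1))


model = TinyModel()

# Freeze everything, then unfreeze only the embedding parameters.
for p in model.parameters():
    p.requires_grad = False
for p in model.embed.parameters():
    p.requires_grad = True

optimiser = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
logits = model(torch.randint(0, 1000, (4, 16)))  # forward pass works as usual
```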
I don’t think it’s great post age 60 actually, as compared with a regular pension; see my reply. The comment on asset tests is useful though, thanks. Roughly, LISA assets count towards many such tests, while pensions don’t. More details here for those interested: https://www.moneysavingexpert.com/savings/lifetime-isas/
karpathy reviews sleep trackers: https://karpathy.bearblog.dev/finding-the-best-sleep-tracker/