Hello!
I’m David. I’m a philosophy PhD student and longtime LessWrong/Overcoming Bias/SSC/rationalish-sphere lurker. This is me finally working up the strength to beat back my commenting anxiety! I discovered LW sometime in high school; my reading diet back then consisted of a lot of internet and not much else, and I just stumbled onto here on my own.
Right now I’m really interested in leveling up my modern math understanding and in working up to writing on AI safety/related topics.
Welcome, David! What sort of math are you looking to level up on? And do you know what AI safety/related topics you might explore?
My medium-term math goal is to pick up some algebra and analysis. I’ve heard from people with math backgrounds that those are good basics if you’re interested in modern math. My roadmap from here to there is to finish David Lay’s Linear Algebra textbook plus an equivalent textbook for calculus (which I haven’t done any of since high school), and then move on to intro real analysis and intro abstract algebra textbooks. So far, I’ve found self-studying math very rewarding, and it stays self-motivating as long as I’m not starved for time.
Lately I’ve been reading up on persuasion tools/AI “social superpowers.” It’s an intrinsically interesting idea: if accessible and powerful persuasion tools proliferate, then in the medium-term future, following the best arguments you can find (even reading around broadly) could cease to be a reliable route to holding the most accurate possible views. If GPT-n gets really good at generating arguments that convince people, it might become dangerous (with regard to preserving your terminal values and sanity) to read around on the unfiltered internet. So this seems like a cool thing to think more about.