I used to be an early-retirement fanatic, which I half-jokingly called “effective egotism”. I took enormous quality-of-life hits to maximize my savings rate, which was extremely high; these days I spend my money more freely. I also took some time off during COVID and found that not working doesn’t suit me (something I probably should have discovered before devoting my twenties to early retirement), so I’ll probably remain a professional programmer until I’m obsolete or a paperclip.
I try to help with alignment where I can: I purchase ads for AI-risk podcasts, and I’ve occasionally gotten a charismatic, alignment-pilled person I know onto mildly popular podcasts to raise AI-risk awareness, though given timelines these efforts look slightly more pathetic to me than they did last year. I used to organize meetups with Altman every year, but my timelines dropping to 5–10 years has made me less enthusiastic about OpenAI’s behavior, and that is one of the reasons I stopped.
As for things that will happen before full AGI: I’ve started occasionally writing short stories (the models starting to understand humor was quite an update), as I think human writers will be obsolete very soon, so now is the time to produce old-fashioned art while it still has a chance of being worth reading to someone.
I regret that I did not become an expert in AI and cannot contribute directly to alignment, but then again, I’m not very high-g, so I doubt I would have succeeded even if I had tried my very hardest.
As an aside, if anyone reading this is a talented developer, Conjecture (https://www.conjecture.dev/) is hiring. (Edit: it seems they are no longer hiring.) I’ve interacted enough with the people involved to know they are sincerely trying to tackle the problem while avoiding safety-washing and navel-gazing.
Even if you think the odds are low, “going out fighting” doesn’t have to look like a miserable trek. It can look like joining a cool startup in London (getting a visa is pretty seamless) with very bright people who share much of your worldview and love programming. It can look like being among “your people,” working on very important problems for a historically astounding salary. If you’re lucky enough to be able to join such a fight, consider that it may be an enjoyable one!
If I had the brains/chops to contribute that’s where I would want to be working right now.
I had hoped to write too, someday, even if, given the odds, it would likely have been more for my own self-aggrandisement than for actual financial gain. But right now, I think it would be a rather large waste of time to embark on writing a novel of any length, because I have more immediately satisfying ways of passing the time, and certainly of making money.
When I feel mildly sad about that, I remind myself that I consume a great deal more media than I could ever produce, and since my livelihood isn’t at stake, it’s a net win for me to live in a world where GPT-N can produce great works of literature, especially the potential to just ask it to produce bespoke works for my peculiar tastes.
Maybe in another life my trajectory could have resembled Scott Alexander’s, although realistically he’s probably a better doctor and writer than I am or could ever be, haha. I still wish I’d had the chance to try without believing it to be even less fruitful.
LessWrong is also still hiring.