Well, I don’t know that I’m a particularly good example to emulate but my responses to updating to a strong belief in a future of radical change in < ~15 years include:
quitting my comfy mainstream ML engineer job to do full-time AI safety research, despite this meaning a lot less money and security, and additional challenges like having to work alone instead of in a good-feeling team environment. It also means putting extra willpower into self-motivating despite the lack of clear, objective, rapid feedback loops.
stopping putting money into my 401k, and no longer imagining I’m going to have anything like the normal retirement my grandparents and parents had
adjusting some of my stock investments
having tumultuous internal debate around whether to go forward with my plans to have a child (I lean yes, but it feels a lot trickier to be confident now than it did 5 years ago)
sleeping less well at night
becoming more anxious and acquiring corresponding tics and bad habits
spending more time and energy actively combating this anxiety with better mental-health practices: extra exercise, more time in nature, more frequent brief work breaks to play with my dog, and active measures to improve my sleep (a CPAP machine, which helps a lot, plus simple stuff like a better mattress and light-blocking curtains)
being prepared to move my family to a new country in an emergency (passports up to date and such)
trying to intentionally put energy into social connections I could depend on in an emergency situation.
buying a home in a rural area near my extended family, with backup stores of food & fuel, wood stove for heat, generator, solar panels, electric and non-electric car, tools, various supplies, etc.
writing down plans and thoughts about what I currently think are the best ways to handle a wide variety of futures. I’ve already spent a lot of my life trying to imagine weird ways the future might go and trying to acquire a very broad base of skills, so that’s not much of a change. One more recent change is a focus on planning and skill development around how I might navigate a complex digital world while making good use of AI assistants. In a multipolar world with a mix of aligned and unaligned proto-AGI that might or might not FOOM, what actions would benefit both you and humanity?
Can you come up with a set of questions you could ask an agent over the internet, using a text-based medium, to determine whether that agent is human or AI, and how aligned with your interests it is? What sorts of questions would be harder to fake answers to?
trying to actively think and worry less about things with a time horizon longer than 15 years. For example, practicing getting rid of habits like feeling guilty for failing to recycle personal waste when it’s inconvenient. The landfill near my home won’t run out of space in the next 15 years no matter how little I recycle! This is a silly worry! But there are so many little silly habits in my brain, and getting rid of them isn’t easy.
Anyone who thinks they have even remotely the sort of competence that could help build aligned AI should work on that, even if they think they aren’t of the highest caliber or aren’t seeing immediate positive results from their initial attempts. I’m definitely of the opinion that the more people we can get working on it the better (provided the fumbling of the worst doesn’t hinder the best thinkers).
Even seemingly non-technical tasks can be quite useful, even if you can only afford to do them in your spare time. For instance, I think writing down your moral intuitions on a wide variety of subjects could be quite valuable. The more samples we have of human moral intuitions the better, for a variety of alignment-related uses. Also, if we do hit a strong AGI singularity, it could be really useful for your personal morals and desires to be represented in the datasets used to train and align the AGI! That makes it much more likely that the resulting world will match your desires.
Here’s a paper with some good examples of the sorts of questions it is useful for you to come up with and write down your personal answers to: https://arxiv.org/abs/2008.02275
another related paper: https://psyarxiv.com/tnf4e/
I can relate to so many of your points. I too am getting less sleep, planning a well-stocked rural estate, and stopping my 401k contributions.
The point about social connections makes a lot of sense, but that’s the hardest one for me. Ideally I’d have connections with people who share my view of the future and want to prepare together. I have my large family, but a larger community would be better.
I disagree with “sleeping less well at night”.
I think if you’re able to sleep well (if you can handle the logistics/motivation around it, or if sleeping well is a null action with no cost), it will be a win after a few days (or at most a few weeks).
I don’t think Nathan was suggesting that “sleeping less well at night” is a desirable response to the situation, merely that it’s a response that they’ve developed, probably against their conscious will. Similarly to the next one on the list, “becoming more anxious and acquiring corresponding ticks and bad habits”.
Ah, I thought it was “I’m going to sacrifice sleep time to get a few extra hours of work.” My bad.