I don’t want to die, but I also want to actually live. The future is inherently uncertain, so if I were to accept a decreased quality of life (no more driving) to better my chances of surviving to ASI, I had better have a strong intuition that ASI will come.
My AI timelines are short (2035 median, right-tailed distribution), but I also drive and take on riskier activities around covid because I like being in person with my friends and family. The fact that I don’t know what a post-singularity world would look like helps me feel comfortable taking these risks; it seems almost anything could happen post-singularity, good (hopefully) and weird.
My main worry about decreasing my quality of life by not driving / decreasing risk is the AI alignment problem. I can imagine a counterfactual world where AI is created but not aligned, where, if I were to take heavy precautions, I would suffer a decreased quality of life for many years just to die to a rogue agent anyway.
If I am to take this seriously and hyperbolically discount my life, I would need to be more assured that AI is align-able, and I am too new on my journey into AI to feel comfortable talking about that yet. It may seem myopic to take these risks considering the odds, but everything is uncertain, and I could always die one day before ASI anyway.
I live in a more rural place, however; if you live in a big city where walking everywhere is feasible, your quality-of-life calculations change.
The future is inherently uncertain, so if I were to accept a decreased quality of life (no more driving) to better my chances of surviving to ASI, I had better have a strong intuition that ASI will come.
Yeah, I definitely hear ya. I have these feelings too. But at the same time, I think it’s in violation of Shut Up and Multiply.
My main worry about decreasing my quality of life by not driving / decreasing risk is the AI alignment problem. I can imagine a counterfactual world where AI is created but not aligned, where, if I were to take heavy precautions, I would suffer a decreased quality of life for many years just to die to a rogue agent anyway.
I hear ya here too. It’s one of the main places that affects the conclusion, I think. My reasoning for expecting a post-singularity year to have positive utility is that it seems like a place where it’d make sense to adopt the opinion of the experts I quoted in the article. (And that it’s less depressing.)
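To make the Shut Up and Multiply point concrete, here's a toy sketch of the kind of expected-value comparison at stake. Every number in it is a made-up placeholder assumption (probabilities, utilities, years), not a figure from the article or the thread; the point is only the shape of the calculation.

```python
# Illustrative only: a toy expected-value comparison between keeping driving
# and giving it up, under the assumption that post-ASI life-years are hugely
# valuable. All numbers are placeholder assumptions.

p_asi = 0.5                   # assumed chance aligned ASI arrives at all
p_die_driving = 0.01          # assumed chance of dying in a crash before ASI
utility_normal_year = 1.0     # baseline utility of one pre-ASI year
utility_post_asi_year = 10.0  # assumed utility of one post-ASI year
post_asi_years = 1000         # assumed extra years if you survive to aligned ASI
qol_penalty_no_driving = 0.1  # assumed per-year utility lost by not driving
years_until_asi = 10          # assumed years between now and ASI

# Option A: keep driving (full quality of life now, small extra chance of dying first).
ev_driving = (
    years_until_asi * utility_normal_year
    + (1 - p_die_driving) * p_asi * post_asi_years * utility_post_asi_year
)

# Option B: stop driving (lower quality of life now, slightly better survival odds).
ev_no_driving = (
    years_until_asi * (utility_normal_year - qol_penalty_no_driving)
    + p_asi * post_asi_years * utility_post_asi_year
)

print(f"EV keep driving: {ev_driving:.1f}")
print(f"EV stop driving: {ev_no_driving:.1f}")
```

Under these (made-up) numbers the small quality-of-life hit is swamped by even a 1% shift in the chance of reaching an enormously valuable post-singularity future, which is the sense in which continuing to drive can be "in violation of Shut Up and Multiply." Of course, if post-singularity years have low or negative expected utility (the unaligned case above), the comparison flips.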