I’m pretty concerned: I’m trying to prevent the AI catastrophe that will likely kill me.
Also, my rationalist housemate Daniel Filan often reminds me of his basic belief that doing 30 minutes of exercise a few times a week has an expected return of something like 10 hours of life or whatever (I forget the details). He definitely reminds me of this a bunch.
Also, right now I’m pretty excited about figuring out how many micromorts I spend on different things and getting used to calculating with them (for diet and exercise, as well as things in the reference class of walking through shady places at night or driving without a seatbelt). Now that I’ve gotten lots of practice with microcovid estimates, I can do this sort of thing much more easily.
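To make that concrete, here’s a minimal sketch of the kind of conversion I have in mind; the 10-micromort activity and the 50 years of remaining life are made-up illustrative numbers, not researched estimates:

```python
# Minimal sketch of micromort arithmetic (illustrative numbers only).
# A micromort is a one-in-a-million chance of death.
MICROMORT = 1e-6

def expected_hours_lost(micromorts: float, remaining_life_years: float) -> float:
    """Expected hours of remaining life lost by taking on this many micromorts."""
    remaining_hours = remaining_life_years * 365 * 24
    return micromorts * MICROMORT * remaining_hours

# e.g. a hypothetical activity costing 10 micromorts, with ~50 years of expected life left:
print(f"{expected_hours_lost(10, 50):.1f} expected hours of life lost")  # ~4.4 hours
```

(Once an activity has a price in micromorts, converting it into expected hours of life is a one-line multiplication, much like working with a microcovid budget.)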
>I’m pretty concerned: I’m trying to prevent the AI catastrophe that will likely kill me.
That was one of my top guesses, and I’m definitely not implying that longevity is a higher or equal priority to AI alignment; it’s not. I’m just saying that after AI alignment and maybe rationality itself, not dying [even if AGI doesn’t come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say, above 10%), but since AGI is the greatest risk, all resources should be focused on it? If the latter, do you believe that should be the strategy of the community as a whole, or just of those working on AGI alignment directly?
[Exercising 30 minutes a few times a week is great, and I’m glad your housemate pushes you to do it! But, well, it’s like not going to big concerts in Feb 2020: it’s basic sanity that most regular people would also know to follow. Hell, it’s literally been the FDA’s advice for decades.]
I’ll go out there and say it: longevity is a higher priority than AI alignment. I think this community got nerd sniped on AI alignment and it is simply against the social norms here to prioritize differently.
There’s no need for rhetorical devices like “I’ll go out there and say it”. Please.
Also, the force of norms looks weak to me in this place; it’s a herd of cats, so that explanation makes little sense. Also, it’s fine to state your understanding of a topic without describing everyone else as “nerd sniped”; no one will needle you for your conclusion. Also, there’s little point to commenting if you only state your conclusion; the conclusion by itself is uninteresting, and we’re looking to learn from the thought process behind it.

It’s not a rhetorical device, though? The OP wrote as if it were an open-and-shut case that needed no argumentation at all. I simply wrote that I am taking the other side.
I mean, the field of AI has been around for ~70 years, and it looks to me like we’re more than halfway through the route to AGI. So even if we got full life extension today, it wouldn’t have that much impact for that many people.
Well, about 55 million people die per year, most of them from aging, so solving it for everyone today, versus say 50-60 years later with AGI, would save 2-3 billion people for potentially indefinite, very very long lives. This definitely counts as “much impact for many people” in my book.
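A quick back-of-the-envelope version of that, where the share of deaths attributable to aging and the AGI timeline are rough illustrative assumptions rather than researched figures:

```python
# Back-of-the-envelope: lives affected if aging were solved now vs. ~50-60 years from now.
# All inputs are rough illustrative assumptions.
deaths_per_year = 55e6        # ~55 million global deaths per year
fraction_from_aging = 2 / 3   # assumed share of deaths attributable to aging
years_until_agi = 55          # assumed midpoint of the 50-60 year scenario

lives_saved = deaths_per_year * fraction_from_aging * years_until_agi
print(f"~{lives_saved / 1e9:.1f} billion lives")  # ~2.0 billion
```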
But also, what’s the probability that we will indeed get AGI in the next 50 or 70 years? I mean, I know it’s a hotly debated topic, so I’m asking for your personal best estimate.
Sure, it’s a lot compared to most activities, but it’s not a lot compared to the total number of people who could live in the future lightcone. You have to be clear about what you’re comparing to when you say something is large.
My estimate? Oh I dunno. The future is hard to predict, and crazy shit happens by default. But currently I’d be more surprised if it didn’t happen than if it did. So more than 50%, for 50 years. Also more than 50% for 30 years. My guess is there’s a lot of very scalable and valuable products to be made with ML, which will put all the smart people and smart money in the world into improving ML, which is a very powerful force. Shrug. I’d have to think more to try to pin it down more.
>I’m pretty concerned: I’m trying to prevent the AI catastrophe that will likely kill me.
On a personal level, it seems quite unlikely that any individual can meaningfully alter the risk of an existential catastrophe enough for their own efforts to be justified selfishly. Put another way, I think it makes sense to focus on preventing existential risks, but not as a means of preventing one’s own death.
One optimistic explanation is that rationalists care more about AI risk because it’s an altruistic pursuit. That’s one possible way of answering OP’s question.
I decide both my actions and, to varying extents, the actions of people like me.
On a gut level, I also refuse to live in a world where people like me do nothing about AI risk for your reason of low expected individual impact, because that feels cowardly. (TBC this is a rebuke of that reason, not of you)
A high enough P(death from AI) screens off the benefits of many other interventions. If I thought myself 90% likely to die to AI before age 50, then I wouldn’t care much about living to 90 instead of 80.
>On a personal level, it seems quite unlikely that any individual can meaningfully alter the risk of an existential catastrophe enough for their own efforts to be justified selfishly.
I think this depends a lot on 1) time discounting 2) whether you think there will be anything like impact certificates / rewards for helping in the future. That is, it may be the case that increasing chance of positive singularity by 1/million is worth more than your natural lifespan in EV terms (while, of course, mattering very little for most discount rates). And if you think the existence of Earth is currently worth like 2 quadrillion dollars (annual world GDP * 20), and you can increase probability of survival by a millionth, and you’ll be compensated something like a thousandth of the value you provided, then you’re looking at $2M in present value.
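Spelled out, that last calculation looks like this; the figures are the same rough assumptions as in the comment above, not researched numbers:

```python
# Sketch of the expected-value arithmetic above; all figures are the rough
# assumptions from the comment, not researched estimates.
annual_world_gdp = 100e12                # ~$100 trillion
value_of_earth = annual_world_gdp * 20   # the "annual world GDP * 20" valuation
delta_p_survival = 1e-6                  # you shift survival odds by one in a million
compensation_share = 1e-3                # you capture ~a thousandth of the value created

present_value = value_of_earth * delta_p_survival * compensation_share
print(f"${present_value / 1e6:.0f}M")  # ~$2M
```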