>I’m pretty concerned, I’m trying to prevent the AI catastrophe happening that will likely kill me.
That was one of my top guesses, and I’m definitely not implying that longevity is a higher or equal priority than AI alignment - it’s not. I’m just saying that after AI alignment and maybe rationality itself, not dying [even if AGI doesn’t come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say, above 10%), but since AGI is the greatest risk, all resources should be focused on it? If the latter, do you believe it should be the strategy of the community as a whole, or just of those working on AGI alignment directly?
[Exercising 30 min a few times a week is great, and I’m glad your housemate pushes you to do it! But, well, it’s like not going to big concerts in Feb 2020 - it’s basic sanity most regular people would also know to follow. Hell, it’s literally the FDA advice and has been for decades.]
I’ll go out there and say it: longevity is a higher priority than AI alignment. I think this community got nerd sniped on AI alignment and it is simply against the social norms here to prioritize differently.
There’s no need for rhetorical devices like “I’ll go out there and say it”. Please.
Also, the force of norms looks weak to me in this place - it’s a herd of cats - so that explanation makes little sense. Also, it’s fine to state your understanding of a topic without describing everyone else as “nerd sniped”; no one will needle you for your conclusion. Also, there’s little point in commenting if you only state your conclusion - the conclusion is uninteresting; we’re looking to learn from the thought process behind it.
It’s not a rhetorical device though? The OP said:
>I’m pretty concerned, I’m trying to prevent the AI catastrophe happening that will likely kill me.
He wrote as if that was an open-and-shut case that needed no argumentation at all. I simply wrote that I am taking the other side.
I mean, the field of AI has been around ~70 years, and it looks to me like we’re more than halfway along the route to AGI. So even if we got full life extension today, it wouldn’t have that much impact for that many people.
Well, about 55 million people die per year, most of them from aging, so solving it for everyone today vs. say 50-60 years later with AGI would have saved 2-3 billion potentially indefinite, very very long lives. This definitely counts as “much impact for many people” in my book.
But also, what’s the probability that we will indeed get AGI in the next 50 or 70 years? I mean, I know it’s a hotly debated topic, so I’m asking for your personal best estimate.
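[A quick back-of-the-envelope sketch of that 2-3 billion figure - not from the original comment, and assuming a flat ~55 million deaths per year and a 50-60 year gap until AGI, which is obviously a simplification:]

```python
# Rough check of the "2-3 billion lives" estimate above.
# Assumptions (simplified): deaths stay flat at ~55M/year, and full life
# extension via AGI arrives 50-60 years from now instead of today.
deaths_per_year = 55_000_000

for years_until_agi in (50, 60):
    lives = deaths_per_year * years_until_agi
    print(f"{years_until_agi} years: ~{lives / 1e9:.1f} billion deaths in the gap")

# 50 years: ~2.8 billion deaths in the gap
# 60 years: ~3.3 billion deaths in the gap
```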
Sure, it’s a lot compared to most activities, but it’s not a lot compared to the total number of people who could live in the future lightcone. You have to be clear about what you’re comparing to when you say something is large.
My estimate? Oh, I dunno. The future is hard to predict, and crazy shit happens by default. But currently I’d be more surprised if it didn’t happen than if it did. So more than 50% for 50 years. Also more than 50% for 30 years. My guess is there are a lot of very scalable and valuable products to be made with ML, which will put all the smart people and smart money in the world into improving ML, which is a very powerful force. Shrug. I’d have to think more to pin it down further.