I misunderstood you then, sorry. But the pandemic analogy still feels a bit off. So you basically mean that, after living with the pandemic for a while, we just accepted it, because we’d rather face a higher risk of death than keep spending more time inconvenienced?
First, I haven’t looked at the data, but it would be interesting to know how many more people in total are dying because of COVID. With a 99% survival rate this is not exactly tuberculosis in the 19th century, so I’d guess the difference is pretty small.
Second, people are doing what they can about it. Maybe not as efficiently as possible, but society is fully aware and committed to it. Very unfortunately, the situation regarding AI safety is quite the opposite.
And worst of all, AI safety is a thousand times more concerning than COVID.
It seems to me that either you’re deeply underestimating the former, or overestimating the latter.
I mean, someone tells you “AGI is coming in a year” and all you wanna do is survive that year so that maybe you can get uploaded? That’s like saying that the exam is next week and all you wanna do in the meantime is make sure you don’t forget your pen so you can maybe score an A.
Basically, in the near term, given that “AGI is coming in a year” and there is nothing you can do about it (assuming you are not a professional AI/ML researcher), there is a small but non-zero chance of benefiting from the AI’s superhuman capabilities instead of being turned into paperclips… so you may want to pay some moderate daily inconvenience to increase your chances of surviving until that moment.
AGI coming in 30 years makes this strategy unworkable. COVID showed us that the actual timeframe of voluntary inconvenience for most people is months, not decades. A year is already stretching it.
I’m sorry, but doing nothing seems unacceptable to me. There are some on this forum who have some influence over AI companies, and those people could definitely do something. As for the general public, I believe that if enough people took AI safety seriously, so that we could make our politicians take it seriously, things would change.
So there would definitely be a need to do something. Especially because, unfortunately, this is not the friendly-AI/paperclipper dichotomy that most people here present by considering only x-risk and not worse outcomes. I can imagine someone accepting death, because we’ve always had to accept it, but not something worse than death.
Some people could definitely do something. I would not delude myself into thinking that I am one of those people, no matter how seriously I take AI safety.
Could be. I’ll concede that the probability that the average person couldn’t effectively do anything is much higher than the opposite. But imo some of the probable outcomes are so dire that doing nothing is just not an option, regardless. After all, if plenty of average people actually decided to do something, something could get done. A bit like voting: one vote achieves nothing, many can achieve something.
If only it were a bit like voting, where everyone’s vote counted equally, or at least close to it. Right now there is basically nothing you can do to help alignment research unless you are a researcher. They have money, they have talent… It’s not even like voting blue in a deep red state, or vice versa: there you are at least adding to the vote statistics, something that might someday change the outcome of another election down the road. Here, given the setup, you are past the event horizon, and there will be no reprieve. You may die in the singularity, or emerge into another universe, but there is no going back.