But isn’t humanity already killing itself? Maybe an AI is our last chance to survive?
No, the population is growing. Spending a few additional decades on AI safety research would likely improve our chances of survival. Of course, listening to AI safety researchers, and not just AI researchers from some random university, matters as well.
I’ve read quite a bit about this area of research, and I haven’t found a clear solution anywhere. There is only one point everyone agrees on: as intelligence increases, our ability to control the system declines, while its capabilities and the associated risk grow.
Yes, according to current knowledge, most AGI designs are dangerous. Speaking to researchers could give one of them the chance to explain why your particular design is dangerous.