I’ve read quite a bit about this area of research. I haven’t found a clear solution anywhere. There is only one point that everyone agrees on: as intelligence increases, the possibility of control declines at the same rate as the capabilities and the risk rise.
Yes, according to current knowledge most AGI designs are dangerous. Speaking to researchers could help; one of them might be able to explain to you why your particular design is dangerous.