Great work, it’s wonderful to know that Google DeepMind isn’t dismissing AI risk, and is willing to allocate at least some resources towards it.
When you have lists of problems and lists of solutions, could you quantify their relative importance? Even a very rough guess would be better than nothing.

Without the relative importances, it's hard to tell whether we agree or disagree. We might both say "I believe in X, Y and Z, and they are very important," without realizing that one of us weights them 80% X, 10% Y, 10% Z, while the other weights them 5% X, 10% Y, 85% Z.