Yudkowsky apparently counsels ignoring the ticking as well—here:
Until you can turn your back on your rivals and the ticking clock, blank them completely out of your mind, you will not be able to see what the problem itself is asking of you. In theory, you should be able to see both at the same time. In practice, you won’t.
I have argued repeatedly that the ticking is a fundamental part of the problem—and that if you ignore it, you just lose (with high probability) to those who are paying their clocks more attention. The “blank them completely out of your mind” advice seems to be an obviously bad way of approaching the whole area.
It is unfortunate that getting more time looks very challenging. If we can’t do that, we can’t afford to dally around very much.
Yudkowsky apparently counsels ignoring the ticking as well
Yes, and that comment may be the best thing he has ever written. It is a dilemma. Go too slow and the bad guys may win. Go too fast, and you may become the bad guys. For this problem, the difference between “good” and “bad” has nothing to do with good intentions.
Another analysis is that there are at least two types of possible problem:
One is the “runaway superintelligence” problem—which the SIAI seems focused on;
Another type of problem involves the preferences of only a small subset of humans being respected.
The former problem has potentially more severe consequences (astronomical waste), but an engineering error like that seems pretty unlikely—at least to me.
The latter problem could still have some pretty bad consequences for many people, and seems much more probable—at least to me.
In a resource-limited world, too much attention on the first problem could easily contribute to running into the second problem.