Ok, so just to make sure I understand your position:
(a) Without friendliness, “foominess” is dangerous.
(b) Friendliness is hard—we can’t use existing academic resources to solve it, as that would take too long. We need a pocket super-intelligent optimizer to solve this problem.
(c) We can’t make partial progress on the friendliness question with existing optimizers.
Is this fair?
“Yes” to (a), “no” to (b) and (c).
We can definitely make progress on Friendliness without superintelligent optimizers (see here), but we can’t make some non-foomy process (say, a corporation) Friendly in order to test our theories of Friendliness.
Ok. I am currently diagnosing the source of our disagreement as me being more agnostic than you about which AI architectures might succeed. I am willing to consider the kinds of minds that resemble modern messy non-foomy optimizers (e.g., communities of competing/interacting agents) as promising. That is, “bazaar minds,” not just “cathedral minds.” Given this agnosticism, I see value in “straight science” that worries about arranging possibly stupid/corrupt/evil agents in useful configurations that are not stupid/corrupt/evil.