We aren’t implicitly assuming (1) in this post. (Although I agree there will be economic pressure to expand the use of powerful AI, and this adds to the overall risk).
I don’t understand what you mean by (2). I don’t think I’m assuming it, but can’t be sure.
One hypothesis: that AI training might (implicitly, through human iteration on algorithms?) involve a pressure toward compute-efficient algorithms? Maybe you think this is a reason to expect consequentialism? I'm not sure how that would relate to the training being domain-specific, though.