How do you suspect your beliefs would shift if you had any detailed model of intelligence?
Consider trying to imagine a particular wrong model of intelligence and seeing what it would say differently?
(not sure this is a useful exercise and we could indeed try to move on)
For what it is worth, I tried this exercise, and found that it did suggest 1) that hard takeoff seems relatively more plausible, and 2) that designing nano-tech or doing science definitely involves Consequentialism.