I’ve noticed that part of me likes to dedicate disproportionate cognitive cycles to the question: “If you surgically excised all powerful AI from the world, what political policies would be best to decree, by your own lights?”
The thing is, we live in a world with looming powerful AI. It's at least not consequentialist to spend a bunch of cognitive cycles honing your political views for a world we're not in. I further notice that my default justification for thinking about sans-AI politics a lot is consequentialist… so something's up here. I think some part of me has been illegitimately putting its thumb on an epistemic scale.