I think there’s still some highly technical apparent-contradiction-resolution to do in the other direction: in a monist physical universe, you can’t quite say, “only Truth matters, not consequences”, because that just amounts to caring about the consequence of there existing a physical system that implements correct epistemology: the map is part of the territory.
To be clear, I think almost everyone who brings this up outside the context of AI design is being incredibly intellectually dishonest. ("It'd be irrational to say that—we'd lose funding! And if we lose funding, then we can't pursue Truth!") But I want to avoid falling into the trap of letting the forceful rhetoric I need to defend against bad-faith appeals-to-consequences obscure my view of actually substantive philosophy problems.