I am still reading. I’m inclined to agree with you that if some sort of moral realism is correct and if some demonstrably-godlike being tells you “X is good” then you’re probably best advised to believe it. I don’t understand how you get from there to the idea that we should be studying the universe like physicists looking for answers to moral questions; so far as I know, all even-remotely-credible claims to have encountered godlike beings with moral advice to offer have been (1) from people who weren’t proceeding at all like physicists and (2) very unimpressive evidentially.
I think it’s no more obvious that increasing the intelligence of whatever part of reality is under your control is good than that (say) preventing suffering is good, and since we don’t even know whether there are any effects that go on for ever, it seems rather premature to declare that only such effects matter.
I think it’s obvious (assuming any sort of moral realism, or else taking “obligations” in a suitably relativized way) that the moral obligations on an agent depend on the physical facts about the universe, and you don’t need to consider exotic things like godlike beings to discover that. If you’re driving along a road, then whether you have an obligation to brake sharply depends on physical facts such as whether there’s a person trying to cross the road immediately in front of you.
(You wrote 2^K where you meant 2^-K. I assume that was just a typo.)