It’s not just a matter of pace; this perspective also implies a certain prioritization of the questions.
For example, as you say, it’s important to conclude soon whether animal welfare is important. (1) But if we preserve the genetic information that creates new animals, we preserve the ability to optimize animal welfare in the future, should we at that time conclude that it is important. (2) If we don’t, then later concluding it’s important doesn’t get us much.
It seems to follow that preserving that information (either in the form of a breeding population, or some other form) is a higher priority, on this view, than proving that animal welfare is important. That is, for the next century, genetics research might be more relevant to maximizing long-term animal welfare than ethical philosophy research.
Of course, killing off animals is only one way to (hypothetically) irreversibly fail to optimize the future. Building an optimizing system that is incapable of correcting its initially mistaken terminal values—either because it isn’t designed to alter its programming, or because it has already converted all the mass-energy in the universe into waste heat, or whatever—is another. There are many more.
In other words, there are two classes of questions: the ones where a wrong answer is irreversible, and the ones where it isn’t. Philosophical work to determine which is which, and to get a non-wrong answer to the former ones, seems like the highest priority on this view.
===
(1) Not least because humans are already having an impact on it, but that’s beside your point.
(2) By “conclude that it’s important” I don’t mean adopting a new value; I mean becoming aware of an implication of our existing values. I don’t reject adopting new values, either, but I’m explicitly not talking about that here.