By “moral anti-realist” I just meant “not a moral realist”. I’m also not a moral objectivist or a moral universalist. If I were trying to use my understanding of philosophical terminology (which isn’t something I’ve formally studied and is thus quite shallow) to describe my viewpoint, then I believe I’d be a moral relativist, subjectivist, semi-realist ethical naturalist. Or if you want a more detailed exposition of the approach to moral reasoning that I advocate, then read my sequence AI, Alignment, and Ethics, especially the first post. I view designing an ethical system as akin to writing “software” for a society (so not philosophically very different from creating a deontological legal system, but now with the addition of a preference ordering and thus an implicit utility function). I view the design requirements for this as being specific to the current society (so I’m a moral relativist) and to human evolutionary psychology (making me an ethical naturalist). And I see these design requirements as constraining, but not so constraining as to have a single unique solution (or, more accurately, optimizing an arbitrarily detailed understanding of them might actually yield a unique solution, but that’s an uncomputable problem whose inputs we don’t have complete access to, and whose output would be unusably complex, so in practice I’m happy to just satisfice the requirements as hard as is practical), so I’m a moral semi-realist.
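To make the “preference ordering, and thus an implicit utility function” point concrete, here’s a minimal toy sketch (my own illustrative example with made-up outcome names, not anything from the sequence): any total ordering over a finite set of outcomes induces a rank-based utility function, and satisficing then just means accepting the first option that clears a threshold, rather than hunting for the unique optimum.

```python
# Toy illustration only: a total preference ordering over a finite set of
# outcomes induces a (rank-based) utility function, and "satisficing" means
# taking any option whose utility clears a threshold rather than
# exhaustively optimizing over all options.

# A society's preference ordering, best-to-worst (hypothetical outcome names).
ordering = ["flourishing", "stable", "stagnant", "collapsed"]

# Rank-based utility: earlier in the ordering -> higher utility.
utility = {outcome: len(ordering) - i for i, outcome in enumerate(ordering)}

def satisfice(options, threshold):
    """Return the first option that is 'good enough', instead of
    searching for the unique maximum."""
    for option in options:
        if utility[option] >= threshold:
            return option
    return None  # nothing meets the bar

print(utility)                               # {'flourishing': 4, 'stable': 3, ...}
print(satisfice(["stagnant", "stable"], 3))  # 'stable'
```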
Please let me know if any of this doesn’t make sense, or if you think I have any of my philosophical terminology wrong (which is entirely possible).
As for meta-philosophy, I’m not claiming to have solved it: I’m a scientist & engineer, and frankly I find most moral philosophers’ approaches that I’ve read very silly, and I am attempting to do something practical, grounded in actual soft sciences like sociology and evolutionary psychology, i.e. something that explicitly isn’t Philosophy. [Which is related to the fact that my personal definition of Philosophy is basically “spending time thinking about topics that we’re not yet in a position to usefully apply the scientific method to”, which thus tends to involve a lot of generating, naming, and cataloging hypotheses without any ability to do experiments to falsify any of them. I expect that learning how to build and train minds will turn large swaths of what used to be Philosophy, relating to things like the nature of mind, language, thinking, and experience, into actual science where we can do experiments.]