A shower thought I once had, intuition-pumped by MIRI’s / Luke’s old post on turning philosophy to math to engineering: if metaethicists really were serious about resolving their disputes they should contract a software engineer (or something) to help implement on GitHub a metaethics version of Table 2, where rows would be moral dilemmas like the trolley problem and columns would be ethical theories, and then accept that real-world engineering solutions tend to be “dirty” and inelegant remixes plus kludgy optimisations to handle edge cases. In exchange, the table would clarify what the SOTA was and guide “metaethical innovation” much better, like a qualitative multi-criteria version of AI benchmarks.
I gave up on this shower thought for various reasons. Partly I was obviously naive and hadn’t really engaged with the metaethical literature in any depth; partly I ended up thinking that disagreements on doing good might run ~irreconcilably deep; and partly I noticed that Rethink Priorities had already done the sophisticated v1 of a subset of what I had in mind, and nobody really cared enough to change what they did. (In my more pessimistic moments I’d also invoke the “diseased discipline” accusation, but that may be unfair and outdated.)
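(For concreteness, here’s a minimal sketch of the kind of table I had in mind, in Python; the dilemmas, theories, and verdicts below are illustrative placeholders I made up, not settled outputs of any theory.)

```python
# Hypothetical sketch of a "metaethics benchmark": rows are moral
# dilemmas, columns are ethical theories, and each cell records the
# theory's verdict plus a one-line rationale. All entries below are
# illustrative placeholders, not settled philosophy.
from dataclasses import dataclass


@dataclass
class Verdict:
    action: str     # what the theory recommends
    rationale: str  # a one-line justification


BENCHMARK: dict[str, dict[str, Verdict]] = {
    "trolley_problem": {
        "act_utilitarianism": Verdict(
            "pull the lever", "five lives outweigh one"),
        "kantian_deontology": Verdict(
            "don't pull", "don't use a person as a mere means"),
    },
    "transplant_surgeon": {
        "act_utilitarianism": Verdict(
            "contested", "naive aggregation says harvest; rule-level patches say don't"),
        "kantian_deontology": Verdict(
            "don't harvest", "violates the patient's autonomy"),
    },
}


def compare(dilemma: str) -> None:
    """Print every theory's verdict on one dilemma, side by side."""
    for theory, verdict in BENCHMARK[dilemma].items():
        print(f"{dilemma} | {theory}: {verdict.action} ({verdict.rationale})")


if __name__ == "__main__":
    compare("trolley_problem")
```

“Metaethical innovation” on this picture would then look like adding rows (harder edge cases) and refining cells, with the kludgy remixes living in the code that reconciles them.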
if metaethicists really were serious about resolving their disputes they should contract a software engineer (or something) to help implement on GitHub a metaethics version of Table 2
There is a progression from philosophy to maths to engineering, but this sounds like you’re anxious to skip straight to the engineering. As the old adage goes: engineering must be done; this is engineering; therefore this must be done.
If the LLM is just spitting out random opinions it found on r/philosophy, how is this useful? If we want a bunch of random opinions, we can check r/philosophy ourselves.
This plan sounds like a rush to engineer something without the philosophy, resulting in entirely the wrong thing being produced.
and then accept that real-world engineering solutions tend to be “dirty” and inelegant remixes plus kludgy optimisations to handle edge cases,
Because the tricky thing here isn’t making an algorithm to produce the right answer, but deciding what the right answer is.
Suppose I had an algorithm that could perfectly predict what Joe Public would think about any ethics dilemma, given one minute to think. Is this algorithm a complete solution to metaethics? No.
I unironically love Table 2.