I’d be interested in any specific examples of things AI workers can learn from philosophy at the present time. There has been at least one instance in the past: AI workers in the 1960s should have read Wittgenstein’s discussion of games to understand a key problem with building symbolic logic systems that have an atomic symbol correspond to each dictionary word. But I can’t think of any other instances.
Timeless decision theory, from what I understand of it, bears a remarkable resemblance to Kant’s Categorical Imperative. I’m re-reading Kant right now (it’s been half a decade), but my primary recollection is that the categorical imperative boils down to “make decisions not on your own behalf, but as though you were deciding for all rational agents in your situation.”
Some related criticisms of EDT are weirdly reminiscent of Kant’s critiques of other moral systems based on predicting the outcome of your actions. I say “weirdly reminiscent of” rather than “reinventing” intentionally, but I try not to be too quick to dismiss older thinkers.
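To make the resemblance concrete, here is a minimal toy sketch (my own illustration, not TDT’s actual formalism; the payoff numbers and function names are made up): in a symmetric Prisoner’s Dilemma, an agent that treats the other player’s choice as fixed defects, while an agent that evaluates each option as though every rational agent in the same situation chooses it cooperates.

```python
# Toy symmetric Prisoner's Dilemma. Payoff to a player given (my_move, other_move);
# "C" = cooperate, "D" = defect. The numbers are standard-looking but hypothetical.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def causal_choice(opponent_move):
    """Treat the other player's move as fixed and pick whichever of my moves
    does better against it (defection dominates either way)."""
    return max(("C", "D"), key=lambda my: PAYOFF[(my, opponent_move)])

def universalized_choice():
    """Kant/TDT-flavoured reasoning in this toy setting: assume every rational
    agent in my (symmetric) situation makes the same choice I do, and evaluate
    each option as if it were played on both sides."""
    return max(("C", "D"), key=lambda move: PAYOFF[(move, move)])

if __name__ == "__main__":
    print(causal_choice("C"), causal_choice("D"))  # -> D D
    print(universalized_choice())                  # -> C
```

The second function is of course a caricature of both Kant and TDT; the point is only that “decide as though deciding for everyone in your position” changes the answer in the same direction both frameworks suggest.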
AI workers in the 1960s should have read Wittgenstein’s discussion of games to understand a key problem with building symbolic logic systems that have an atomic symbol correspond to each dictionary word.
Can you elaborate on this? It sounds fascinating. I confess I can’t make heads or tails of Wittgenstein.
Wittgenstein, in his discussion of games (specifically, his idea that concepts are delineated by fuzzy “family resemblance”, rather than necessary and sufficient membership criteria) basically makes the same points as Eliezer does in these posts.
Representative quotes:
Consider for example the proceedings that we call “games”. I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? -- Don’t say: “There must be something common, or they would not be called ‘games’” -- but look and see whether there is anything common to all. -- For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. To repeat: don’t think, but look! -- …
And the result of this examination is: we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities.
I can think of no better expression to characterize these similarities than “family resemblances”; for the various resemblances between members of a family: build, features, colour of eyes, gait, temperament, etc. etc. overlap and criss-cross in the same way. -- And I shall say: ‘games’ form a family...
“All right: the concept of number is defined for you as the logical sum of these individual interrelated concepts: cardinal numbers, rational numbers, real numbers, etc.; and in the same way the concept of a game as the logical sum of a corresponding set of sub-concepts.”
—It need not be so. For I can give the concept ‘number’ rigid limits in this way, that is, use the word “number” for a rigidly limited concept, but I can also use it so that the extension of the concept is not closed by a frontier. And this is how we do use the word “game”. For how is the concept of a game bounded?
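To tie this back to the symbolic-AI point above, here is a toy sketch (my own illustration; the exemplars, feature sets, and threshold are all made up) of why a system that assigns one fixed definition to the dictionary word “game” has trouble with a cluster concept, whereas a family-resemblance test does not.

```python
# Toy contrast between two ways a symbolic system might represent the concept
# "game". The feature sets, exemplars, and threshold are hypothetical, chosen
# only to illustrate the structural point.

EXEMPLARS = {
    "chess":          {"rules", "competitive", "skill", "winning_and_losing"},
    "patience":       {"rules", "skill", "luck", "winning_and_losing"},
    "ring_a_ring":    {"rules", "amusement", "physical"},
    "olympic_sprint": {"rules", "competitive", "skill", "physical", "winning_and_losing"},
}

# 1) Necessary-and-sufficient membership: one fixed definition that every game
#    must satisfy. Whatever core we pick, some clear cases fall outside it.
REQUIRED = {"competitive", "winning_and_losing"}

def is_game_by_definition(features):
    return REQUIRED <= features

# 2) Family resemblance: no single shared core; something counts as a game if
#    it overlaps enough with some existing member of the family.
def is_game_by_resemblance(name, features, threshold=2):
    others = (f for k, f in EXEMPLARS.items() if k != name)
    return any(len(features & f) >= threshold for f in others)

if __name__ == "__main__":
    for name, feats in EXEMPLARS.items():
        print(name, is_game_by_definition(feats), is_game_by_resemblance(name, feats))
    # The definitional test wrongly rejects patience and ring_a_ring;
    # the resemblance test accepts all four.
```

Any fixed REQUIRED set plays the role of the atomic symbol’s membership criteria; the resemblance test is a crude stand-in for the “complicated network of similarities overlapping and criss-crossing” in the quote.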
Moral philosophy in general is under-appreciated in FAI discussion in this community.
The LW Metaethics Sequence is to solving actual moral dilemmas as inventing Peano arithmetic is to inventing artificial intelligence. In short, an important and insightful first step, but hardly a conclusive resolution of the outstanding issues.
But if we want Friendly AI, we need to be able to tell it how to resolve moral disputes somehow. I have no idea if recent moral philosophy (post-1980) has the solutions, but I feel that even folks around here underestimate the severity of the problems implied by the Orthogonality Thesis.
Could you please be more specific and give one example of an actual moral dilemma that is solved by moral philosophy and could serve as a useful lesson for metaethics?