Somewhat tangential, but… what thinking-algorithm would lead to fabricated options popping up often? Some of the examples in the post just involve incomplete and/or wrong models, but I don’t think that’s the whole story.
Here’s one interesting model: fabricated options naturally pop up in relaxation-based search algorithms. To efficiently search for a solution to a problem, we “relax” the problem by ignoring some of the constraints. We figure out how to solve the problem without those constraints, then we go back and figure out how to satisfy the constraints. (At a meta-level, we also keep track of roughly how hard each constraint is to satisfy—i.e. how taut/slack it is—in order to figure out which constraints we can probably figure out how to satisfy later if we ignore them now. There’s even a nice duality which lets us avoid an infinite ladder of meta-levels here: solutions are the constraints on constraints, just as proofs are the counterexamples of counterexamples.)
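A minimal sketch of the relax-and-repair idea (my own toy illustration, not anything from the post; the constraints, slackness scores, and threshold are all invented):

```python
# Toy sketch of relaxation-based search: ignore the constraints judged slack
# enough to "figure out later", solve the relaxed problem, then go back and
# check whether the ignored constraints can actually be satisfied.

def relax_and_search(candidates, constraints, slackness, threshold=1.0):
    """constraints: predicates candidate -> bool.
    slackness[c]: rough guess of how easy c will be to satisfy later."""
    kept = [c for c in constraints if slackness[c] < threshold]      # taut: keep
    ignored = [c for c in constraints if slackness[c] >= threshold]  # slack: relax
    for cand in candidates:
        if all(c(cand) for c in kept):                # solve the relaxed problem
            repaired = all(c(cand) for c in ignored)  # then try to "repair"
            # If repair fails, cand is a fabricated option: it solves the
            # relaxed problem, but the ignored constraints don't hold for it.
            # (A real search would backtrack or modify cand at this point.)
            return cand, repaired
    return None, False

# Usage: look for a multiple of 6; treat "also greater than 40" as slack.
divisible_by_6 = lambda x: x % 6 == 0
greater_than_40 = lambda x: x > 40
print(relax_and_search(range(1, 50),
                       [divisible_by_6, greater_than_40],
                       {divisible_by_6: 0.2, greater_than_40: 2.0}))
# -> (6, False): the relaxed search returns 6, then the repair check fails
#    (a better search would go back and try 42 instead).
```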
To the extent that this model applies, fabricated options are practically useful as a search strategy. A "useful" fabricated option is one for which we can more easily solve the problem by (1) figuring out how to solve the problem using the fabricated option, and then (2) figuring out how to achieve the fabricated option (or something close to it, or something which can solve the original problem in a similar way, etc). On the other hand, an "unhelpful" fabricated option would be one which cannot be achieved (and nothing similar can be achieved, and there does not exist any achievable solution which would solve the original problem in a similar way, etc).
Personally, I would frame this in terms of tautness/slackness of the constraints: ignoring a very taut constraint is unhelpful, because it’s really hard to actually relax that constraint. For instance, coordination, interfaces, and the ability to recognize true expertise all tend to be very taut constraints in practice; options which ignore those are probably not going to be helpful. On the other hand, options which assume I can make anything at Home Depot magically appear in my living room are more likely to be helpful; I can usually do something pretty similar to that.
In the price gouging example, an economist would model this as a very literal supply constraint, and (market) prices directly quantify the tautness of that constraint. Outlawing price gouging is basically pretending that one can make the constraint no-longer-taut by outlawing the visible signal of tautness.
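A toy numeric version of "prices quantify the tautness of the supply constraint" (made-up numbers, a deliberately crude linear-demand model): the market-clearing price is roughly the marginal value of one more unit of supply, and capping the price hides that signal without making supply any less scarce.

```python
# Toy model with made-up numbers: fixed emergency supply, linear demand.
# The market-clearing price is (roughly) the shadow price of the supply
# constraint: what buyers would pay for one more unit, i.e. how taut it is.

def demand(price, a=100.0, b=2.0):
    """Quantity demanded at a given price: q = a - b * price."""
    return max(a - b * price, 0.0)

supply = 40.0                              # units available in the emergency

clearing_price = (100.0 - supply) / 2.0    # solve a - b*p == supply for p
print("market-clearing price:", clearing_price)           # 30.0
print("demand at that price:", demand(clearing_price))    # 40.0 == supply

# A price cap hides the signal but leaves the constraint just as taut:
capped_price = 10.0
print("shortage under the cap:", demand(capped_price) - supply)  # 40.0 unmet
```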
Yes. The attention given to price gouging could instead be given to programs to alleviate supply constraints during emergencies: for example, government-sponsored stockpiles and airlifts, municipal or statewide disaster insurance used to purchase such services from a private company, incentives for private citizens to stockpile, sponsorship of excess or rapid ramp-up production capacity, and so on.
My thoughts: fabricated options are propositions derived using syllogisms over syntactic or semantic categories (or, more probably, over more specific psycholinguistic categories which have not yet been fully enumerated, e.g. objects of specific types, mental concepts which don't ground to objects, etc.). These syllogisms may have worked reasonably well in the ancestral environment, where more homogeneity existed over the physical properties of the grounded meanings of items in these categories.
There are some propositions in the form “It is possible for X to act just like Y but not be Y” which are physically realizable and therefore potentially true in some adjacent world, and other propositions which are not. Humans have a knack for deriving new knowledge using syllogisms like the ones above, which probably functioned reasonably well — they at least improved the fitness of our species — in the ancestral environment where propositions and syllogisms may have emerged.
The misapplication of syllogisms happens when agents don’t actually understand the grounded meanings of the components of their syllogism-derived propositions — this seems obvious to me after reading the responses of GPT-3, which has no grounded understanding of words and understands how they work only in the context of other words. In the Twin Earth case, you might argue that the one fabricating the XYZ water-like chemical does not truly understand what H2O and XYZ are, but has some understanding at least of how H2O acts as a noun phrase.
I am extremely confused by your comment, probably due to my own lack of linguistic knowledge.
(This whole reply should be seen as a call for help)
What I got is that fabricated options come from people "playing with word salad to form propositions" without fully understanding the implications of the words involved.
(I tried to generate an example of “propositions derived using syllogisms over syntactic or semantic categories”, but I am way too confused to write anything that makes sense)
Here are two questions: how does your model differ from/relate to johnswentworth's model? Is john's model a superset of yours? My understanding is that johnswentworth's model says our algorithm relaxed some constraints, while yours specifically says that we relaxed the "true meaning" of the words (so the word "water" no longer requires a specific electronic configuration, or a melting/boiling point of exactly 0/100; "water" now just means something that feels like water and is transparent).
Sorry about that, let me explain.
“Playing with word salad to form propositions” is a pretty good summary, though my comment sought to explain the specific kind of word-salad-play that leads to Fabricated Options, that being the misapplication of syllogisms. Specifically, the misapplication occurs because of a fundamental misunderstanding of the fact that syllogisms work by being generally true across specific categories of arguments[1] (the arguments being X, Y above). If you know the categories of the arguments that a syllogism takes, I would call that a grounded understanding (as opposed to symbolic), since you can’t merely deal with the symbolic surface form of the syllogism to determine which categories it applies to. You actually need to deeply and thoughtfully consider which categories it applies to, as opposed to chucking in any member of the expected syntax category, e.g. any random Noun Phrase. When you feed an instance of the wrong category (or kind of category) as an argument to a syllogism, the syllogism may fail and you can end up with a false proposition/impossible concept/Fabricated Option.
My model is an example of johnswentworth’s relaxation-based search algorithm, where the constraints being violated are the syllogism argument properties (the properties of X and Y above) that are necessary for the syllogism to function properly, i.e. yield a true proposition/realizable concept.
I suggested above that these categories could be syntactic, semantic, or some mental category. In the case that they are syntactic, a “grounded” understanding of the syllogism is not necessary, though there probably aren’t many useful syllogisms that operate only over syntactic categories.
Thanks for your clarifications! They cleared up all of my written confusions. Though I have one major confusion that I am only able to pinpoint after your reply: from the wiki, I understand a syllogism to be one of the 24 (out of 256) two-premise deduction forms that are always valid, but you seem to be saying that a syllogism is not what I think it is. You said "… a fundamental misunderstanding of the fact that syllogisms work by being generally true across specific categories of arguments", so syllogisms do not work universally with any words substituted into them, and only work when a specific category of words is used? If so, can you provide an example of a syllogism generating a false proposition when the wrong category of words is used?
Glad I could clear some things up! Your follow-up suspicion is correct: syllogisms do not work universally with any words substituted into them, because syllogisms operate over concepts and not syntax categories. There is often a rough correspondence between concepts and syntax categories, but only in one direction. For example, the collection of concepts that refer to humans taking actions can often be described/captured in verb phrases, but not all verb phrases represent humans taking actions. In general, for every syntax category (except for closed-class items like "and") there are many concepts and concept groupings that can be expressed as that syntax category.
Going back to the Wiki page, the error I was trying to explain in my original comment happens when choosing the subject, middle, and predicate (SMP) for a given syllogism (let's say, one of the 24[1]). The first example I can think of concerns the use of the modifier "fake," but let's start with another syllogism first:
All cars have value.
Green cars are cars.
Green cars have value.
This is a true syllogism; there's nothing wrong with it. What we've done is found a subset of cars, green cars, and inserted them as the subject into the minor premise. However, a naive person might think that the actual trick was that we found a syntactic modifier of cars, "green," and inserted the modified phrase "green cars" into the minor premise. They might then make the same mistake with the modifier "fake," which does not (most of the time[2]) select out a subset of the set it takes as an argument. For example:
All money has value.
Fake money is money.
Fake money has value.
Obviously the problem occurs in the minor premise, "Fake money is money." The counterfeit money that exists in the real world is in fact not money. But the linguistic construction "fake money" bears some kind of relationship to "money" such that a naive person might agree to this minor premise while thinking, "well, fake money is money, it's just fake," or something like that.
Though when I say syllogism I’m actually referring to a more general notion of functions over symbols that return other symbols or propositions or truth values.
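One way to cash out "functions over symbols" here, as a hypothetical sketch (the concept extensions below are invented purely for illustration): treat the Barbara-style form as a function whose precondition is a genuine subset relation between concepts, which a purely syntactic caller can silently violate.

```python
# Hypothetical sketch: a syllogism as a function over *concepts* (modeled here
# as sets of things in the world), not over noun phrases.  The form
# "All M have P; all S are M; therefore all S have P" only yields a true
# conclusion when S really is a subset of M -- the grounded check that a
# purely syntactic speaker skips.

# Invented extensions, purely for illustration:
cars       = {"red car", "green car"}
green_cars = {"green car"}                      # "green" picks out a subset
money      = {"dollar bill", "euro note"}
fake_money = {"monopoly bill", "forged note"}   # "fake" does not

def barbara(S, M, conclusion_if_minor_premise_holds):
    """Apply the form only if the minor premise ("all S are M") is grounded."""
    if S <= M:                                  # genuine subset relation
        return conclusion_if_minor_premise_holds
    return "minor premise is false -- the conclusion is a fabricated option"

print(barbara(green_cars, cars, "green cars have value"))
print(barbara(fake_money, money, "fake money has value"))
```

The point isn't the set machinery; it's that the argument-category check happens over groundings, not over the surface strings "green cars" and "fake money".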
Actually, it's contextual; some fake things are still those things.
This seems related: words aren't type safe.
My experience with that behavior has been:
1: have a desired outcome in mind.
2: consider the largest visible difference between that outcome and the currently expected one
3: propose a change that is expected to alter the world in a way that results in that difference no longer being visible
4: if there are still glaring visible differences between the expected future and the desired one, iterate until there are no visible differences.
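A rough sketch of that loop (my own paraphrase, with hypothetical helper names): the termination test only consults what is visible, which is exactly what lets it halt on a world that merely looks like the desired one.

```python
# Rough sketch of the loop above (hypothetical helpers, my own paraphrase).
# Because the stopping condition only checks *visible* differences, the loop
# can happily terminate on a world that merely looks like the desired one.

def close_visible_gaps(world, desired, visible_differences, propose_patch,
                       max_iters=100):
    """visible_differences(world, desired) -> list of gaps, largest first.
    propose_patch(world, gap) -> a world where that gap is no longer visible."""
    for _ in range(max_iters):
        gaps = visible_differences(world, desired)  # step 2
        if not gaps:                                # step 4: nothing visible left
            return world
        world = propose_patch(world, gaps[0])       # step 3: hide the biggest gap
    return world
```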
For example, people who see homeless encampments in public parks wish that they were not reminded of income inequality. They propose courses of action which assign blame to individual homeless people for the lack of housing, justifying forcibly removing them from the public areas. Those courses of action are expected to make the world look (to the people making the proposals) exactly like one in which there is no income inequality, therefore they implement sweeps.
In many political spheres, solving problems by making everyone shut up about them (and making non-issues problems by getting people to continuously mention them) actually works.
(Feels more like “going deeper” than being on a tangent. Very useful comment imo.)