The promise of mind-reading techniques is seductive, whether it comes from a former FBI analyst or one of Paul Ekman’s microexpression-reading human lie detectors. I become aware of this cottage industry during every trial-by-media, where suspicion piles upon someone not yet charged with murder.
I have to admit I am skeptical that anyone has such an amazing power to see through the facade of a stranger and determine, with greater-than-chance accuracy, whether they are telling the truth. Doubly so because I am someone who is constantly misinterpreted. I have to manage my gestures and facial expressions because my confusion is often misread as disagreement; my approval for disapproval; even a simple statement like “I’m not hungry right now” is wrongly generalized as not liking the particular cuisine… and not that I just don’t want to eat anything right at this moment.
However, if placed under the microscope by one of these former FBI body language experts, would I feel an intense sense of validation? Would I exclaim “yes, I feel seen, heard… you get me!”?
I have no doubt some people are more perceptive about emotional nuances than others: film and theatre actors who are trained to observe and mimic, people who have grown up in abusive or emotionally unstable households and are hypersensitive to small changes in the mood of others (which of course may make them prone to more ‘false positives’ and paranoia), and of course mentalists like cold readers and palmists.
However, being more emotionally perceptive doesn’t necessarily mean you can tell if someone is lying—or that a particular statement is false—especially if that person is especially good at appearing truthful, or if, like me, their natural body language and expressions don’t convey what you’d expect.
What I have greater faith in is that, given even a small but emblematic example of a person’s extemporaneous speech, you could derive an accurate portrait of their personality and worldview, in the same way that an accent can help you pinpoint a person’s geographical and economic origin (think of comedies like The Nanny that play on this convention). Harry Shearer once explained that to play Richard Nixon he channeled Jack Benny, believing that Nixon’s persona, and particularly his way of telling jokes, was consciously or unconsciously modelled on Benny’s. Likewise, Vladimir Putin’s distinctive gait has been attributed to a prenatal stroke, or his subordinates, including Dmitry Medvedev, are said to have “copied the boss”; the more persuasive explanation is that they all picked up the habit from watching Soviet spy films as youngsters and wanting to emulate the hero.
The kinds of films, television, role models, books, music, and lyrics that someone has absorbed would also influence, or at least be indicative of, their worldview. Given enough of these tells, while I am not sure you could tell whether someone is a murderer, you could certainly gain accurate insight into their worldview: the mental models they have about the world, what they value, what their ethics system is like, etc.
The amount of information you can extract about a person from a written transcript—information they aren’t aware they are sharing—is probably startling, but rarely or predictably at the “he’s a murderer” level.
On the scale between “pseudoscience that provides either completely random results or exactly what its operator wants to hear” and “always provides the correct answer”, there are some uncomfortable points at which we will probably arrive first, such as “provides the correct answer 99% of the time” (and if you land in the unlucky 1%, you are screwed, because no one is going to believe you) or “provides the correct answer for neurotypical people” (and if you are an autist, you are screwed).
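The 99% point is worth unpacking with a base-rate sketch (hypothetical numbers, not a claim about any real technique): even a detector that is right 99% of the time will be wrong about half the time it flags someone, if actual liars are rare among those tested.

```python
# Base-rate sketch with hypothetical numbers: a "99% accurate" detector
# flagging people drawn from a population where lying is rare.
sensitivity = 0.99   # P(flagged | lying)
specificity = 0.99   # P(cleared | truthful)
prevalence  = 0.01   # P(lying) among those tested

p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_lying_given_flag = sensitivity * prevalence / p_flagged
print(f"P(actually lying | flagged) = {p_lying_given_flag:.2f}")  # ~0.50
```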
I am someone who is constantly misinterpreted… even a simple statement like “I’m not hungry right now” is wrongly generalized as not liking the particular cuisine
Similar here. My facial expression is always set to “neutral”, and my statements, no matter how simple and literal, are often creatively interpreted. And I guess I am sufficiently unusual that heuristics like “let’s assume that he thinks/feels what an average person would think/feel in this situation” also fail. It took me a long time to understand myself to the level where I can explain things about myself verbally, but when I do, people usually find it implausible and try to find some hidden meaning behind my words.
So… a machine that could read my thoughts could feel validating. Assuming it does so correctly. But there is also a chance it would provide correct answers for most people, and incorrect answers for the few unusual ones.
On the scale between “pseudoscience that provides either completely random results or exactly what its operator wants to hear” and “always provides the correct answer”, there are some uncomfortable points at which we will probably arrive first, such as “provides the correct answer 99% of the time” (and if you land in the unlucky 1%, you are screwed, because no one is going to believe you) or “provides the correct answer for neurotypical people” (and if you are an autist, you are screwed).
I’m afraid I need you to rephrase or elaborate on what you meant by this—are you saying you are aware of a technique or method that is right 99% of the time or thereabouts? Or are you saying human variability makes such a technique impossible for anything but the most narrow populations? Or have I likely (and, in a meta way, appropriately) completely missed the point? What do you think, more generally—as I explicate in the second half—of revelations about a person’s internalized belief structures, including their heroes and related moral system, but also of the idea of idiolect as a symptom of their thinking and model of the world, even if it is not a mechanism for directly ascertaining their personal belief in this or that specific statement?
Oops, I actually misinterpreted one part—when you wrote “if placed under the microscope”, my brain interpreted this literally, as if you were talking about a hypothetical future version of “mind reading” that would involve examining your neurons under a microscope and probably interpreting the results using an AI.
What I meant is that people usually think about these things in “yes or no” categories. For example, if you asked people whether existing lie detectors work, the most frequent answers would probably be “of course yes, why would they use them otherwise?” or “of course not, it’s total bunk”.
There I didn’t mean to make a statement about lie detectors per se, but about how people think about technologies when you ask them. They think the correct answer is either “yes” or “no”, even if it is something complicated like “sometimes” or “yes, but with exceptions”. If the popular belief happens to be an unqualified “yes”, and you happen to be the exception, you are screwed.
I believe the current “mind-reading” techniques like Paul Ekman’s are hit and miss. That they probably often work with typical people in typical situations, but fail when something unusual happens. (Someone may be scratching their nose because they are lying, but sometimes the nose is just itchy for a completely unrelated reason. Or the person is lying, but in a different way than you assume. Or is just generally uncomfortable, maybe thinking “this is true, but they seem unlikely to believe me”.)
Practically, “films, television, and role models, books, music and lyrics that someone has absorbed” are an enormous amount of data, especially for people who consume a lot of media. Maybe someone who reads one book in ten years and only watches mainstream TV could be modeled this way. But if you asked me for a list of the books I have read and the movies I have seen, I could probably remember only a small fraction of them. How specifically is Paul Ekman going to find out which movies I have downloaded from the internet, watched in private, then deleted and forgotten about? Is he really going to read the LW Sequences, and Worm, and other extremely long texts written for tiny subcultures, just to get a better model of me? No way. However, with the help of an AI, processing the tons of text could become feasible. (The problem of figuring out who read what still remains.)
You probably could figure out some things; for example, you could notice that people are, maybe unknowingly, repeating some sentences or argument structures from some book or movie heroes. Then it would make sense to assume that they somehow identify with the heroes and the values they represent. But even then, people sometimes adopt values differently than they were originally meant. For example, someone could be a 99% Randian, but believe that Rand made a mistake about one specific topic. But maybe that topic is relevant to the thing the mind-reader is trying to figure out. It is also possible that someone was a Randian (sorry for repeating the same example) in the past, is not anymore, but still keeps some verbal mannerisms. In other words, you might figure out the sources that influenced the person… but not whether the person adopted that perspective wholesale or just partially, nor how their opinions developed over time.
This will probably work better for some kinds of people, and worse for others. For example, many people, when they join a movement, adopt 100% of the movement’s beliefs, because they really strongly want to belong. But other people are just like “yeah, they make some really good points, but are mistaken about many things”. Modelling the latter would be much more difficult. I assume that being difficult to model will positively correlate with intelligence and autism. (As in: intelligent people have more complex and therefore less predictable models of the world; autists care less about being compatible with their group, and therefore knowing which groups they belong to has less predictive power.)
I am overwhelmingly confident that analysis of the kinds of narratives a particular person spins, including what tropes they evoke—even if you’re not previously familiar with the tropes—would reveal a lot about their worldview, their ethical structure, and the assumptions and models they have about people, institutions, and the general patterns they believe underlie the world.
An oversimplified example is a person who clearly has a “victim mentality” and an obsession with the idea of attractiveness: they always use sentence structures (e.g. “they stopped me”) and narratives where other people have inhibited, bullied, envied, or actively sought to stifle the person telling the story, and these details disproportionately reference people’s faces and figures, using words like “ugly”, “hot”, “skinny”, etc. It is not necessary to know what films, books, or periodicals they read.
I think you would get the set of topics, but not necessarily the right idea about how exactly those topics apply to the current situation. To use your example, if someone’s speech patterns revolve around the topic of “bullying”, it might mean that the person was bullied 50 years ago and still didn’t get over it, or that the person is bullied right now, or perhaps that someone they care about is bullied and they feel unable to help them. (Or could be some combination of that; for example seeing the person they care about bullied triggered some memories of their own experience.)
Or if someone says things like “people are scammers”, it could mean that the person is a scammer and therefore assumes that many other people are the same, or it could mean that the person was scammed recently and now experiences a crisis of trust.
This reminds me of the anime Psycho-Pass, where a computer system detects how mentally deranged people are...
...and sometimes fails to distinguish between perpetrators and their victims, who also “exhibit unusual mental patterns” during the crime; basically committing the fundamental attribution error.
Anyway, this sounds like something that could be resolved empirically, by creating profiles of a few volunteers and then checking their correctness.
To use your example, if someone’s speech patterns revolve around the topic of “bullying”, it might mean that the person was bullied 50 years ago and still didn’t get over it
Yes. Which is invaluable information about how they see the world currently. How is that not the ‘right idea’, if that is how they continue to mentally represent events?
Your ‘people are scammers’ example is irrelevant; what is important is whether they constantly bring in tropes or examples that imply deception. They may never use the words ‘scammer’ or ‘mistrustful’, or make a declaration like ‘no one has integrity’. The pattern is what I’m talking about.
Examples of how not to write a paragraph are surprisingly rare
Epistemic Status: one person’s attempt to find counter-examples blew apart their own (subjective) expectations
I try to assemble as many examples of how not to do something as I do ‘gold standard’ or best-practice examples of how the same task should be done. The principle is similar to what Plutarch wrote: medicine to produce health must examine disease, and music to create harmony must investigate discord.
However, when I went looking for examples of how not to write—in particular, poorly written paragraphs—I was surprised by how rare they were. There are a great many okay paragraphs on the internet and in books, but very few so unclear or confusing that they qualify as ‘bad’ paragraphs.
In my categorization, paragraphs can be great, okay, or bad.
Okay paragraphs are the most numerous; they observe the rule of thumb—keep one idea to one paragraph. To be an ‘okay’ paragraph and rise above ‘bad’, all a paragraph needs to do is successfully convey at least one idea. Most paragraphs I found do that.
What elevates great paragraphs above okay ones is that they do an especially excellent job of conveying at least one idea. There are many qualities they may exhibit, including persuasiveness, the appearance of insight, and brevity and simplicity in conveying an otherwise impenetrable or ‘hard to grasp’ idea.
In some isolated cases a great paragraph may actually clearly and convincingly communicate disinformation or a falsehood. I believe there is much more to learn about the forms paragraphs take from a paragraph that conveys a falsehood convincingly than from a paragraph that clearly conveys what is generally accepted as true.
What was surprising is how hard it is to find examples that invert the principle—a paragraph intended to convey a truthful idea but hard to understand would be a bad paragraph in my categorization. Yet, despite actively looking for ‘bad paragraphs’, I struggled to find any that were truly confusing or hopeless at conveying a single idea. This experience is especially surprising to me because it challenges a few assumptions or expectations I had:
Assumption 1 - that people who have mistaken or fringe beliefs are disproportionately incapable of expressing those beliefs in a clear and intelligible form. I expected that, looking at the least popular comments on Reddit, I would find many stream-of-consciousness rants that failed to convey ideas. These were far less common than rants that at least conveyed intent and meaning intelligibly.
Assumption 2 - that, as a whole, people need to learn to communicate better. I must reconsider: it appears that on the transmission side, they already communicate better than I expected (counter-counterpoint: the 1% rule).
Assumption 3 - the adage that good writing = good thinking. Perhaps not; it seems you can write clearly enough to be understood, yet that doesn’t mean your underlying arguments are strong or your thinking more ‘intelligent’.
Assumption 4 - that I’m merely a below-average communicator. It appears that if everyone is better than I expected, then I’m much further below average than I expected.
I have no takeaway or conclusion from this highly subjective observation, hence why it is a quick take and not a post. But I will add my current speculation:
My current theory is “I wasn’t looking in the right places”. For example, I ignored much academic and research literature because the writers’ ability to convey an idea is often difficult to assess without relevant domain knowledge, as such texts are seldom made for popular consumption. Likewise, I’m sure there are many tea-spilling image boards where more impenetrable stream-of-consciousness rants might be found.
My second theory is pareidolia: perhaps I highly overrate my comprehension and reading skills because I’m a ‘lazy reader’ who fills in intention and meaning that is not there?
A difficulty for common understanding here is that you’re talking about “good” or “bad” paragraphs in the absolute, but didn’t define “good” or “bad” by some objective standard, so you’re relying on your own understanding of what’s good or bad. If you were defining good or bad relatively, you’d look at 100 paragraphs and post the worst 10 as bad. I’d be interested in seeing the worst paragraphs you found, some 50th-percentile ones, and the best; then I’d tell you if I have the same absolute standards as you.
Caveat: I know little to nothing about the architecture of such things; please take this as naive user feedback if you wish, or ignore it.
Just now I was asking the Meta AI chatbot how to do an ‘L-cut’ using the Android Instagram app. For quite a few messages it hallucinated instructions on how to ‘overlap’ two video tracks when editing a reel, before it finally admitted that no such ability exists in the Instagram app.
My grossly negligent mistake was assuming that an LLM with Meta branding would have current or accurate knowledge of Meta properties and products.
However, imagine that there were two versions of the Instagram app, one with this feature and one without—why didn’t it ask “Just to check, what kind of phone are you using?” (which would also allay fears of ‘spying’ and invasion of privacy) and then, and only then, give me advice or apologetically inform me that such a feature is not available? (In other branded/commercial chatbots, wouldn’t this be an opportunity for an up-sell?[1])
Maybe I’m not prompting LLMs in the right way, but they almost never ask me questions—and certainly never clarifying questions. They make an assumption and produce a slab of text with a boilerplate “let me know if this helps?” tacked on the end. I assume this is a conscious choice and not a limitation of the technology. To test my observation I entered the same prompt into three different LLMs:
I just asked Claude Haiku 3 “When was Rosberg’s first Grand Prix?” to see if it assumes I mean Keke Rosberg or his son, Nico Rosberg. Claude apologizes for not having information on Nico Rosberg’s first Grand Prix… and then goes on to state that “Formula One has had multiple drivers named Rosberg compete over the years. More details would be needed to determine the exact driver and their debut race.”
What I find interesting is that not one sentence in the answer takes the form of an interrogative!
Next, I asked GPT-4o mini the exact same question; it confidently assumes which Rosberg I mean: “Nico Rosberg’s first Formula One Grand Prix was at the 2006 Bahrain Grand Prix, which took place on March 12, 2006. He raced for the Williams team.”
Microsoft Copilot makes the same assumption, with the traditional boilerplate question tacked on: “Nico Rosberg made his Formula One debut at the 2006 Bahrain Grand Prix. He went on to have a successful career, culminating in winning the World Drivers’ Championship in 2016. Is there anything else you’d like to know about his career or Formula One in general? [F1 car emoji, smoke emoji]”
Meta AI makes the same assumption and gives a lot more unsolicited information, but it does mention his father: “Nico Rosberg’s first Grand Prix was the 2006 Bahrain Grand Prix. This marked the beginning of his Formula One career, which would eventually see him drive for Williams from 2006 to 2009 and then Mercedes from 2010 to 2016. During his time with Mercedes, Rosberg won the Formula One Drivers’ championship in 2016, a feat that made him and his father, Keke Rosberg, the second father-son pairing to achieve this impressive milestone.” Not that it is relevant, but Graham and Damon Hill were the first.
This is not an exhaustive sample by any means, but it does confirm my naive observation that LLMs do not ask clarifying questions or use interrogatives to better shape their responses to my needs.
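For what it’s worth, here is a minimal sketch of how this probe could be automated, assuming the official Anthropic and OpenAI Python SDKs with API keys in the environment; counting question marks is only a crude proxy for whether a model asks anything back:

```python
# Send one ambiguous question to two models and check whether either
# reply contains an interrogative. Assumes ANTHROPIC_API_KEY and
# OPENAI_API_KEY are set in the environment.
from anthropic import Anthropic
from openai import OpenAI

PROMPT = "When was Rosberg's first Grand Prix?"

claude_reply = Anthropic().messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=200,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

gpt_reply = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

for name, reply in [("Claude", claude_reply), ("GPT", gpt_reply)]:
    # Crude proxy for a clarifying question: any "?" in the reply.
    print(f"{name}: {reply.count('?')} question mark(s)")
```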
I imagine such a commercial hellscape would look a little like this:
“I was just wondering why I wasn’t earning any Ultrabonus points with my purchases.”
“Before we continue, could you tell me: do you have an Overcharge Co. Premium savings account, or an Overcharge Co. Platinum savings account?”
“Uhh, I think it is a Premium.”
“I’m so sorry. If you have an Overcharge Co. Premium savings account, then you will not be able to enjoy our Overcharge Co. Ultrabonus points loyalty system. However, you may be surprised that for only a small increase in account fee, you too can enjoy the range of rewards and discounts offered with the Overcharge Co. Ultrabonus points loyalty system. Would you like to learn more?”
These sorts of behavioral choices are determined by the feedback given by the people who train the AI. Nothing to do with the AI’s architecture or fundamental inclinations.
So the question to ask is, “Why do all the AI companies seem to think it’s less ideal for the AI to ask clarifying questions?”
One part of the reason is that it’s a lot easier to do single-turn reinforcement. It’s hard to judge whether a chatbot’s answer is going to end up being helpful if its current turn consists of just a clarifying question.
Yes, I assumed it was a conscious choice (of the company that develops an AI) and not a limitation of the architecture. Although I am confused by the single-turn reinforcement explanation: while it may increase the probability of any individual turn being useful, as my interaction over the hallucinated Instagram feature attests, it makes conversations far less useful overall unless the model happens to correctly ‘guess’ what you mean.
I have some hard data that disproves that I’m a realist, but suggests I’m a pessimist (i.e. more than 50% of the time, my best expectations of personal outcomes are worse than the actual outcome). Now what?
The hard data is a handful of specific predictions: for example, a prediction about a financial investment (where performance consistently exceeded my expectations) over which I had no control, and things like an exam where I did have control (although an instructor noted that I performed worse on the exam than on a practice exam because of ‘nerves’).
Arguably the most immediate step is “just get more data: make sure this isn’t an aberration, or at least see in which specific types of predictions or expectations you’re more prone to systematic pessimism”.
Sure, and then what? Should I change my investment style to allow more upside? If it means revising my models of the world—what shape does that take? What beliefs or habits are the cause of my pessimism?
Those sound like the right kinds of questions to ask, but without more concrete data on which questions your predictions were off, and by how much, it is hard to give any better advice than: if your gut judgement tends to be 20% off after considering all evidence, move the number 20% up.
Personally, my partner and I have a similar bias, but only about ourselves, so making predictions together on things like “The application for xyz will succeed; Y will read, be glad about, and reply to the message I send them” can be helpful in cases where there are large disagreements.
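The “move the number 20% up” heuristic could be made concrete with a small prediction log. This is a toy sketch with hypothetical numbers, not a calibration methodology:

```python
# Log predictions vs. outcomes on a 0-1 scale, estimate the average
# miss, and nudge the next gut estimate by it. Numbers are hypothetical.
predictions = [0.40, 0.30, 0.55, 0.20]  # what I expected
outcomes    = [0.60, 0.45, 0.70, 0.50]  # what actually happened

bias = sum(o - p for p, o in zip(predictions, outcomes)) / len(predictions)

def corrected(gut_estimate: float) -> float:
    """Shift a new gut estimate by the historical average miss."""
    return min(1.0, max(0.0, gut_estimate + bias))

print(f"average miss: {bias:+.2f}")          # positive = pessimism
print(f"gut 0.35 -> corrected {corrected(0.35):.2f}")
```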
Previously I thought that if you ask better questions, you will get better at solving problems. However, questions are the shadows or reflections cast by the actual framing of the problem. If you have a well-framed problem, you will naturally ask better questions. If you haven’t framed the problem well, you will ask bad questions.
Bad questions are still useful because they are a signal that you are “barking up the wrong tree” or that you need to reformulate the problem.
What marks a bad question and therefore signals a framing of the problem that is unconducive to solving it?
There are probably myriad ways a question can be bad. It appears that most questions which signal a failure to frame a problem well are vague. Imagine someone who wants to become an Academy Award-winning cinematographer asks, “Has anyone ever won an Academy Award for Cinematography without going to film school?” The answer is of course “yes”, especially in the early days of the award. But it is not a useful question: it doesn’t narrow down which actions this aspiring cinematographer should take or avoid, nor clarify which factors will most impede or expedite their journey. It is only useful in that it shows they are not asking useful questions, and therefore their entire formulation of the problem needs work. Better questions are more useful questions.
Much like measures, better questions are ones that influence decisions—if a change in the answer to a question doesn’t change your decision, then it’s not a useful question.
Popular wisdom encourages us to ask open-ended questions, especially those which ask “why?” or “how?”. This holds for seeking advice, having discussions with experts, or building consensus; but even in these circumstances, better questions tend to be specific (i.e. asking for vague help, “How can I be a great cinematographer?”, versus asking for specific advice, “How did Michael Ballhaus light the nightclub scene in Under the Cherry Moon? How does it differ from his colour nightclub cinematography in Lola? Why did he make those decisions?”). However, open-ended questions may not be better questions in the absence of an expert to ask, however specific they may be.
It is less Socratic, more something out of Yes, Minister, in that I don’t know what I don’t know—so if I ask myself rather than an expert “Why does this problem persist?”, all I can answer with is a sort of tentative guess, or what I believe is not a likely answer, whereas an expert may be able to plug my knowledge gaps.
I am undecided whether this means why/how questions are potentially better suited for assessing our knowledge, or at least our confidence in our knowledge concerning the framing of the problem; but in the absence of an expert, they are not particularly useful.
Counterpoint: what about circumstances where the questions appear to be “good” or “better” questions but you’re still solving the wrong problem? They are good only for the problem you are mistakenly trying to solve.
DON’T write instructions like that, instead try this...
Instructions headed with “Don’t...”, “Stop doing this, but instead...”, or “When you find yourself [operative verb], try to...” tend to be more useful and actionable for me than non-refutative instructions. Or, to get meta:
Don’t start instructions with the operative verb; instead begin with “Don’t [old habit], instead… [operative verb and instruction]” or “Stop [old habit] and [operative verb and instruction]”.
I find I’m terrible at making an instruction, piece of advice, or note actionable, because it is exceedingly difficult to find suitable cues, situations, or contexts in which to use them. This is further complicated by the struggle to remember the instruction correctly in the ‘fog of war’, as it were.
For example, Nassim Nicholas Taleb notes that people are so prone to “overcausation” that you can get most people to become loquacious by simply asking “why?” (others say ‘why’ can come off as too accusatory and ‘how come?’ is more polite). I may like to see how true this is, but now I need to find a situation to use it in… uhhh… hmmm… okay, next time someone gives a one-word response about their weekend. Sure… now how can I remember it? In the panicky situation where a conversation grows quiet, how can I remember to ask “why?”?
Provided that an instruction or note beginning with “stop...” or “don’t...” does in fact describe a habit you have or a recurring situation you continue to encounter, there is already a cue you can recognize.
For example, often when I hit an impasse while brainstorming, I will absentmindedly check Instagram or a news website or here. That is a cue, and I can say “Don’t check Instagram; instead write down a brief description of the next step in your brainstorming process.”
To test Taleb’s observation, I’d do well to think of something I often do or notice when a conversation peters out, something like “don’t say ‘haha yeah’, ask ‘why?’” (and trust that I have the sense not to implement this robotically and ask ‘why?’ as a non sequitur).
So my advice to myself: Don’t write instructions or notes that begin with “try to...” or “you should...”—or even “write instructions that begin with refutations”—but instead use “Don’t… but instead…” as a template.
Problem solving with verbs: this came up when I was trying to think about how to better frame questions of the form “How should I X?”
When outlining stories or screenplays I find action or visual verbs immeasurably useful in creating more interesting scenes. Instead of “Joe talks to Bill”, he can negotiate, remonstrate, beg, plead, mock, pontificate, etc., each of which makes the scene much more specific. “Maria goes to the store” is too vague: she may meander to the store, sprint to the store, or even search for the store. These action verbs not only give us a sense of the character’s intentions and allow us to (appropriately enough for film) imagine visually how it plays out, but are more interesting and useful.
Why doesn’t the same apply to practical questions?
At the risk of going meta, take the question “Should I make a short film?”. There are surprisingly few action verbs that describe being a filmmaker[1]: other than ‘make’, off the top of my head you can film, direct, or “do a pitchya”[2]. If you want to be more specific about how it will be done, you can say you will improvise a film, or that you will “go vérité”. I’m sure you can use denominal verbs based on directors with distinct processes:
“I’m going to Altman this film” (i.e. lots of overlapping dialogue)
“I’m going to Malick this film” (i.e. lots of improvisation in natural light)
“I’m going to Maysles this film” (i.e. “direct cinema”[3]- long form documentary filmmaking with an observational “fly on the wall” approach to shooting)
“I’m going to Kapadia/Morgen this film” (i.e. a documentary assembled from archival material)
It kind of works with the question “How should I get to the party?”—rather than “get”, I can drive, walk, carpool, rideshare, or bike. I may even opt to describe my entrance: I can sneak in, or I can explode in and make an entrance… Indeed, if I choose to sneak in, then I may opt to arrive on foot or by rideshare so no one notices my car.
Yes, there are a plethora of department-specific verbs—you can lens or shoot a film, score it, colour it, mix it, dub it, cue it, do foley, light it, dress it, design it, (location) scout it, produce it, cut or edit it, etc.
I noticed in interviews with Classic Hollywood directors Raoul Walsh and John Ford that they don’t say “films” or “movies” but “pictures”, pronounced more like “pitch-ya”, as in “motion picture”.
Most people, when they say cinéma vérité, mean “direct cinema”—the classic device of cinéma vérité is the vox pop. The proverbial ‘man on the street’ is pulled in front of the camera—often looking down the barrel, already an artificial and performative situation—to give an honest opinion or barometer of public feeling.
I did consider adding “Kubrick it” as an example, but I couldn’t decide whether “do a lot of takes and wait for something strange or weird to happen as the actors get exhausted/bored” was sufficiently identifiable as a filmmaking process. Many directors do a lot of takes; Chaplin did a lot of takes. Doing a lot of takes doesn’t make you Kubrick, whereas there is something unusual and distinct about the way Altman handled scenes with many characters.
The key here is that the verb should describe both the manner and the means by which the task is done. Going or getting to a party or store is too vague. Making or shooting a film tells me nothing about the style, genre, or logistics of filming.
What is the functional difference between Agency and having social power? This is likely a question that reflects my ignorance of the connotations of ‘Agency’ in Rationalist circles. When people say “he’s a powerful man in this industry”, does that imply he is greatly Agentic? Can one be Agentic without having social power? Is one the potential and the other the actuality?
I’ll need some clarification: Does that mean that someone who habitually starts new processes or projects but seldom is able to finish them or see them through to completion has lots of (Rationalist sense) Agency?
But also, does that mean that in a hypothetical organization where one person has the means to veto any decision others make, but seldom exercises that veto despite very easily being able to, the veto-holder would not be Agentic?
No. Initiative is, well, it’s an ordinary English word with a generally understood meaning. Pulled from the web:
“The ability to assess and initiate things independently”, “the power or opportunity to act or take charge before others do”, “the ability to use your judgment to make decisions and do things without needing to be told what to do”, synonyms “ambition, action, enterprise, drive, spirit, aggressiveness, vigor, hustle, energy, go, gumption, grit, spunk, assertiveness” etc. I think that paints a pretty clear picture.
This is what I have always understood by the word “agency” in the LW-sphere, at least when applied to people. The LW coinages “agenty” and “agentic” mean having agency in that sense.
So habitually starting things and letting them wither doesn’t cut it, and neither does nominally having some role but never executing it. It’s an inner quality that by its nature must manifest in outward actions.
The word “Agency” also has specific other, more technical uses. Here it is in philosophy, where it means something distantly similar but far broader. It’s a “porridge word” (Edward de Bono’s coinage), a hazy concept with little content that, like porridge, takes up the shape of whatever container it is put in. “Fake explanations” often consist of calling the thing to be explained by a porridge word.
Then there is “Agency” in the context of AIs having it, or being Agents. This is something that I don’t think the users of the word understand themselves. They’re trying to project human agency in the sense described above onto these giant weight matrices without having a non-mentalistic characterisation of the phenomenon they’re trying to find there. Not knowing what you’re looking for makes it difficult to find. From time to time I’ve suggested that control systems, hierarchically organised in a specific way, are the concept they need, but haven’t got much traction.
Thank you for taking the time to try and give me a broad overview of the different nuances of the word, unfortunately here the student has failed the teacher. I’m still very confused.
I previously understood the porridge sense of agency (tangent—I like that phrase ‘porridge word’; it reminds me of Minsky’s ‘suitcase words’) to be “an entity that has influence or can effect change”. Here on LW I had been led to believe it just means acting, verging on thoughtlessly, which I understood to be because acting is the only way to catalyze change (i.e. change towards one’s goals).
So habitually starting things and letting them wither doesn’t cut it, and neither does nominally having some role but never executing it. It’s an inner quality that by its nature must manifest in outward actions.
I failed to explain my confusion. It’s not so much “letting them wither”; let me put it another way: if you are in a bunker, there’s an armed conflict overhead, and therefore the smartest thing to do is “nothing” by staying put in the bunker, are you acting agentically? The only things you can initiate at that point are unnecessary risks.
Likewise, I don’t mean nominally having some role. Not nominally but actually having the means, the power, the authority, the social status, the freedom from negative repercussions, and the knowledge—but choosing not to exercise them because they evaluate it as not being worthwhile. They could initiate changes, but they rarely see the need: not from fear or reluctance, but from weighing up the pros and cons. Are they being agentic?
Agency here is not “change for the sake of change” but presumably “acting in a way that materializes the agent’s goals”, and that requires initiative, analogous to Aristotle’s kinoun (efficient) cause—the carpenter who takes the initiative of making wood into a table. However, the connotations of spunk, hustle, ambition, etc., and generally acting with energy and enthusiasm towards goals—knowing that these are not golden tickets to success (necessary factors? Probably. Sufficient? Probably not.)—leave me confused about what this quality is describing.
You’re looking at edge cases in order to understand the concept. I think looking at the centre works better than mapping out the periphery, which was my reason for giving those definitions and synonyms of “initiative”. If someone is in a situation where circumstances forestall any effective action, then to ask whether they are being “agentic” in doing nothing is like asking whether an unheard falling tree makes a sound.
I’m afraid I just have to give up on understanding what Agency means then. Thank you for trying though.
If someone is in a situation where circumstances forestall any effective action, then to ask whether they are being “agentic” in doing nothing is like asking whether an unheard falling tree makes a sound.
Unlike initiative—because you can take initiative and it may not deliver the intended results, but it’s still initiative. Whereas is being Agentic a potential or an actuality? I don’t know.
Agency has little to do with social power. It’s kind of hard to describe agency, but it’s characterized by deliberateness: carefully and consciously thinking about your goals as well as having conscious models for how they help you achieve your goals, in contrast to unthinkingly adhering to a routine or doing what everyone else is doing because it is what everyone else is doing. Also has some aspect of being the kind of person who does things, who chooses action over inaction.
So by that definition, would you consider trickster-archetype characters (you can see why I have been wondering) like Harpo Marx or Woody Woodpecker—who appear to be very impulsive, albeit not bound by routines or by doing what everyone else is doing because everyone else is doing it—to lack Agency, because they are highly reactive and don’t plan?
Let me write out my current assumptions as it might make it easier to correct them:
Analysis Paralysis is not Agentic because while it involves carefulness and consciously plotting moves towards goals, it lacks action towards them.
Hedonic and Impulsive activity is not agentic because while it does involve action towards one’s goals, it lacks careful planning.
Agency, then, is making plans and acting upon them irrespective of whether one is able to see them through to completion, provided one has the intention, will, and forethought.
“Babbling Better”—this is a work in progress and still requires more thinking.
In short: I need a methodology, or at least heuristics, for identifying the “right problem” to solve, and for noticing when I am solving the “wrong problem”. Better problem framing leads to better and more focused answers to questions, and hopefully to the eventual resolution of problems. I’ve come across two techniques: the Five Whys, to understand problems better, and using adverbs of manner to babble more constructively.
So far:
It is easy to babble; babies do it. It is still quite easy to babble comprehensible but wrong sentences, such as LLM hallucinations. Your pruning is only as good as your babble.
With regards to problem solving, low-quality babble doesn’t contribute to resolving the problem. For example, say the problem is “camera autofocus doesn’t focus on eyes”; a low-quality “babble” answer might be “burn a stick of incense and pray to Dionysus”. The acts themselves are feasible and the sentence is comprehensible, but any desired change in the camera’s autofocus performance will be pure coincidence.
My hunch is that to babble better not only do you need better methods of babbling, but you need to better understand what goals you are trying to babble towards. And that requires better understanding of why the problem is a problem.
Five Whys on yourself: asking “why do I think this is a problem?” to at least five levels.
Not to be mistaken for the burger joint, the “Five Whys” technique was apparently invented at the Toyota Corporation as a system for uncovering the root causes of production faults.
The choice of “why” falls into a broader pattern which takes me back to documentary filmmaking and interviewing: you learn more through open-ended questions, usually those where the key interrogative is “why” or “how”, than through closed-ended questions. The latter, as Wittgenstein pointed out, basically seek to affirm or negate a proposition or conditional: “Do you like him?” “Is he still there?” “Would you call that green or turquoise?”
If I am a manager or investigator trying to ascertain the cause of a fault on a production line, open-ended questions make sense, since I will not be in possession of all known or knowable facts. This still holds if I am a novice, or just someone asking an expert for help in achieving some goal. If I ask an experienced cinematographer “how would that scene be lit?”, even if they don’t know specifically, they have a large body of experience and knowledge from which they could probably make useful guesses about how to replicate the effect.
If I intend to ask an expert for advice, I can’t give them the responsibility of figuring out the kind of help I need. The better I can define the problem myself, the better and more informative the question I can ask them. Be too vague about your problem and you can only hope to get generic responses like “be confident”.
It seems ridiculous though, doesn’t it? Socratic, or even something out of Yes, Minister: why should I ask myself open-ended questions if I don’t know what I don’t know? While I may not understand the problem, what I can do is at least explain why it’s a problem and how I see it. And one effective way to do that, I’ve found, is the Five Whys technique.
It is often exceedingly difficult to know what the right problem to solve is; what we have a better chance of defining is why we perceive it as a problem and why we expect it to cause conflict.
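A toy sketch of what this self-interrogation loop might look like mechanically (the value is in the answers, not the code):

```python
# Minimal Five-Whys loop: each answer becomes the thing you
# interrogate next; the final answer is a candidate root framing.
problem = input("State the problem: ")
chain = [problem]
for i in range(5):
    chain.append(input(f"Why {i + 1}: why is '{chain[-1]}' the case? "))
print("\nCandidate root framing:", chain[-1])
```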
To-do: add more techniques to my arsenal to better define problems… the step before babbling.
Adverbs and Creativity? Strategically Efficaciously Productively Babbling
I have recently come across a technique for higher-quality babble, at least for creative purposes. It is as simple as employing an adverb of manner to modify a verb. This is a minor variation on a technique mime artists use to create a character—you take a situation or process like “make breakfast” and do it with an attitude: happy, hungover, lovelorn.
It is surprisingly easy to come up with scenarios and even stories with arcs—goals, conflict, and comedic pay-offs, complete with a character who has distinct mannerisms—by just cycling through adverbs. Compare these three adverbs: grumpily, overzealously, nervously.
He bartends grumpily—he tries to avoid eye contact with customers, sighs like a petulant teenager when he does make eye contact, slams down glasses, spills drinks, wears a constant scowl, and waves customers away dismissively. Even a simple glass of beer he treats like one of the labours of Herakles.
He bartends overzealously—he invites customers to the bar, he slams down glasses too, he spills them, he accidentally breaks glasses in his zeal but always with a smile on his face; he’s more than happy to do a theatrical shake of the mixer, even throw it, though it doesn’t quite stick its landing. He’s always making a chef’s kiss about any cocktail the customer asks for.
He bartends nervously—he doesn’t realize when a customer is trying to order, giving a “who, me?” reaction; he scratches his head a lot, takes his time, fumbles with bottles and glasses, and even takes back drinks and starts again.
These scenarios appear to “write themselves” for the purposes of short pantomime bits. This is the exact type of technique I have spent years searching for.
To-do: does this technique of better babbling through adverbs of manner apply to non-creative applications? If not, then develop a methodology, or at least heuristics, for identifying the right problem and noticing a “wrong problem”.
Update (October 2024): it is interesting looking back on this 8 months later, as I think I have just hit upon a means of “babbling better”. I intend to revise and go into detail about this method after a period of actually trying it out. It’s certainly not original; it vaguely resembles Amazon’s method of writing memos and speculative press releases for a new proposal, and it uses your ‘internal simulator’.
In brief, the way I employ this new method is to take the first kneejerk ‘babble’ or solution to the problem that I come up with. Then I try to write a speculative narrative in which this solution or action delivers a satisfactory or worthwhile result, being very methodical about the causation. This is not, I stress, a prediction or prognostication. What I find is that by writing a speculative narrative, and making it as convincing as possible to myself, I am forced to explicate my framework and mental model around the problem: my hunches, suspicions, assumptions, beliefs, fears, hopes, observations, knowledge, and reasoning—much of which I may not be consciously aware of.
With the framework explicated, I can now go about babbling. But it will be much more targeted and optimized based on my expectations, knowledge, and the framework in general.
Some (not yet confirmed) secondary bonuses of this method:
- It fights analysis paralysis: instead of babbling for breadth, it forces thinking about causation and consequences.
- It is inherently optimistic, as you’re forcing yourself to write a structured argument for why this could or would work.
- Having explicated your framework, you may be able to verify specific hunches or assumptions that you weren’t previously aware were influencing your thinking.
One caveat: why a satisfactory narrative, and why not a best-case scenario? I think a best-case scenario will assume a lot of coincidence and serendipity, and as a means of reflecting on and explicating your mental model or framework of the problem it is less informative. For that reason, causative words and phrases like “because”, “owing to”, “knowing that… it follows that…”, and “for this reason” should be abundant.
To think about: Shannon information and cataloguing ‘rushes’ from a documentary. This is not about the actual amount of entropy in any given frame of uncompressed video, but rather the entropy of all the metadata from all the footage.
Eisensteinian film theory was an attempt to marry the Marxist dialectic with film editing. The “highest” type of film cut was the “intellectual montage”; the bone-to-nuclear-satellite cut in 2001: A Space Odyssey is perhaps the most iconic example in film history. Eisenstein himself used the more on-the-nose approach of showing crowds of protesters being mowed down by Tsarist troops interspliced with footage of animals being slaughtered in an abattoir.
The dialectic of cuts—the juxtaposition between image A and image B, as in the Kuleshov experiment, where the actor appearing to look at either soup or a corpse lying in state changes the emotion viewers infer in the actor—is a critical film-language technique.
Documentary rushes of similar thematic content—e.g. “Shot 1: mid shot, children playing” and “Shot 2: mid shot, different children playing”—have lower entropy; “Shot 1: mid shot, children playing” and “Shot 87: close up, old man smiling” have higher entropy. We want to avoid homogeneous sets.
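To make the intuition concrete, here is a minimal sketch that treats each shot’s log line as a bag of metadata labels and computes the Shannon entropy of a candidate set; the shot descriptions are the hypothetical ones above:

```python
import math
from collections import Counter

def metadata_entropy(labels):
    """Shannon entropy (bits) of a bag of shot-metadata labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical shot logs: similar pairing vs dissimilar pairing.
homogeneous   = ["mid shot", "children", "playing",
                 "mid shot", "children", "playing"]
heterogeneous = ["mid shot", "children", "playing",
                 "close up", "old man", "smiling"]

print(metadata_entropy(homogeneous))    # ~1.58 bits, lower entropy
print(metadata_entropy(heterogeneous))  # ~2.58 bits, higher entropy
```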
The problem for a film editor, especially an observational documentary editor or someone working with archive material (think of the films of Brett Morgen and Asif Kapadia), is that every time you create a sequence you have to watch all of the material again, hoping to find the dialectic or invent a narrative that combines at least two shots.
Binary Search algorithms are also relevant here.
CLIP and visual semantic networks can automate part of the search if the editor has something specific in mind. I want to cultivate serendipity—unforeseen juxtapositions.
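One crude way to approximate that serendipity, sketched below, is to invert the usual similarity search: embed one representative still per shot with CLIP and surface the least similar pairs. Filenames are hypothetical, and this assumes the sentence-transformers package:

```python
# Rank shot pairs by cosine *distance* between CLIP embeddings,
# surfacing the least similar pairings as candidate juxtapositions.
from itertools import combinations

import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")
frames = ["shot_001.jpg", "shot_002.jpg", "shot_087.jpg"]  # hypothetical stills
embs = model.encode([Image.open(f) for f in frames],
                    normalize_embeddings=True)

# Lowest cosine similarity first = most "distant" pairings.
pairs = sorted(
    combinations(range(len(frames)), 2),
    key=lambda ij: float(np.dot(embs[ij[0]], embs[ij[1]])),
)
for i, j in pairs[:3]:
    print(frames[i], "+", frames[j])
```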
The promise of mind reading techniques whether it is a former FBI analyst or one of Paul Ekman’s microexpression reading human lie detectors. I become aware of this cottage industry during every trial-by-media where suspicion piles upon someone not yet charged with murder.
I have to admit I am skeptical that anyone has such an amazing power to see through the facade of a stranger and with a greater-than-chance determine if they are telling the truth or not. Doubly so because I am someone who is constantly misinterpreted, I have to manage my gestures and facial expressions because my confusion is often misread as disagreement; my approval for disapproval; even a simple statement like “I’m not hungry right now” is wrongly generalized as not liking the particular cuisine… and not that I just don’t want to eat anything right at this moment.
However if placed under the microscope by one of these former FBI body language experts would I feel a intense sense of validation ? Would I exclaim “yes, I feel seen, heard… you get me!”?
I have no doubt some people are more perceptive about emotional nuances than others: film and theatre actors who are trained to observe and mimic, people who have grown up in abusive or emotionally unstable households and are hyper sensitive to small changes in the mood of others (which of course may make them prone to more ‘false positives’ and paranoia), and of course mentalists like cold readers and palmists.
However being more emotionally perceptive doesn’t necessarily mean you can tell if someone is lying—or a particular statement is false, especially if that person is especially good at telling the truth, or like me—their natural body language and expression doesn’t express what you’d expect.
What I have greater faith in is that given even a small but emblematic example of a person’s extemporaneous speech you could derive an accurate personality and world-view portrait of them. In the same way that an accent can help you pinpoint the geographical and economic origin of a person (think of comedies like The Nanny that play up on this convention). Harry Shearer once explained that to play Richard Nixon he channeled Jack Benny—believing that Nixon’s persona and particularly his way of telling jokes was consciously or unconsciously modelled on that of Benny. Likewise Vladimir Putin’s distinctive gait has been attributed to a prenatal stroke, or that his subordinates including Dmitry Medvedev have “copied the boss”, the more persuasive explanation is that they all picked up the habit from watching Soviet Spy films as youngsters and wanting to emulate the hero.
The kinds of films, television, and role models, books, music and lyrics that someone has absorbed would also influence or at least be indicative of their world view. Given enough of these tells, while I am not sure that you could tell if someone is or isn’t a murderer, you could certainly gain a accurate insight into their worldview, the mental models they have about the world, what they value, what their ethics system is like etc. etc.
How much information can you extract about a person from a written transcript that they aren’t aware they are sharing is probably startling, but rarely or predictably “he’s a murderer” level.
On the scale between “pseudoscience that provides either completely random results or exactly what its operator wants to hear” and “always provides the correct answer”, there are some uncomfortable points where we probably get first, such as “provides the correct answer 99% of the time” (and with the 1% chance you are unlucky, and you are screwed because no one is going to believe you) or “provides the correct answer for neurotypical people” (and if you are an autist, you are screwed).
Similar here. My face expression is always on “neutral”, and my statements, no matter how simple and literal, are often creatively interpreted. And I guess I am sufficiently unusual, so heuristics like “let’s assume that he thinks/feels what an average person would think/feel in this situation” also fail. It took me a lot of time to understand myself to the level where I can explain things about myself verbally, but when I do, people usually find it implausible and try to find some hidden meaning behind my words.
So… a machine that could read my thoughts could feel validating. Assuming it does so correctly. But there is also a chance it would provide correct answers for most people, and incorrect answers for the few unusual ones.
I’m afraid I need you to rephrase or elaborate on what you meant by this—are you saying, aware of a technique or method which is right 99% of the time or thereabouts. Or are you saying human variability makes such a technique impossible for anything but the most narrow populations? Or have I likely (and in a meta-way appropriately) completely missed the point? What do you think of more generally—as I explicate in the second half—revelations about a person’s internalized belief structures, including their hero’s and related moral system, but also the idea of idiolect being a symptom of their thinking and model of the world even if it is not a mechanism for directly ascertaining their personal belief in this or that specific statement?
Oops, I actually misinterpreted one part—when you wrote “if placed under the microscope”, my brain interpreted this literally, as if you were talking about a hypothetical future version of “mind reading” that would include checking your neurons by a microscope and probably interpreting the results using an AI.
What I meant is that people usually think about these things in “yes or no” categories. For example, if you asked people whether existing lie detectors work, the most frequent answers would probably be “of course yes, why would they use them otherwise?” or “of course not, it’s total bunk”.
There I didn’t mean to make a statement about lie detectors per se, but about: this is how people think about technologies when you ask them. They think the correct answer is either “yes” or “no”, even if it is something complicated like “sometimes” or “yes, but with exceptions”. If the popular belief happens to be an unqualified “yes”, and you happen to be the exception, you are screwed.
I believe the current “mind-reading” techniques like Paul Ekman’s are hit and miss. That they probably often work with typical people in typical situations, but fail when something unusual happens. (Someone may be scratching their nose because they are lying, but sometimes the nose is just itchy for a completely unrelated reason. Or the person is lying, but in a different way than you assume. Or is just generally uncomfortable, maybe thinking “this is true, but they seem unlikely to believe me”.)
Practically, “films, television, and role models, books, music and lyrics that someone has absorbed” are an enormous amount of data, especially for people who consume a lot of these media. Maybe someone who reads one book in ten years and only watches the mainstream TV could be modeled this way. But if you asked me to give you a list of books I have read and the movies I have seen, I could probably remember only a small fraction of them. How specifically is Paul Ekman going to find out which movies I have downloaded from internet, watched in private, then deleted and forgot about them? Is he really going to read the LW Sequences, and Worm, and other extremely long texts written for tiny subcultures, just to get a better model of me? No way. -- However, with the help of an AI processing the tons of texts could become feasible. (The problem of figuring out who read what still remains.)
You probably could figure out some things, for example you could notice that people are, maybe unknowingly, repeating some sentences or argument structures from some book or movie heroes. Then it would make sense to assume that they somehow identify with the heroes and the values they represent. But even then, people sometimes adopt some values differently than they were originally meant. For example, someone could be a 99% Randian, but believe that Rand made a mistake about one specific topic. But maybe the topic is relevant to the thing the mind-reader is trying to figure out. Also it’s possible that someone was a Randian (sorry for repeating the same example) in the past, but is not anymore, but still keeps some verbal manners or something. In other words, you might figure out the sources that influenced the person… but not whether the person has adopted that perspective wholesale or just partially, and how their opinions developed over time.
This will probably work better for some kinds of people, and worse for others. For example, many people, if they join a movement, adopt 100% of the movement’s beliefs, because they really strongly want to belong. But other people are just like “yeah, they make some really good points, but are mistaken about many things”. Modelling the latter would be much more difficult. I assume that being difficult to model will positively correlate with intelligence and autism. (As in: intelligent people have more complex and therefore less predictable models of the world; autists care less about being compatible with their group, and therefore knowing which groups they belong to has smaller predictive power.)
I am overwhelmingly confident that analysis of the kinds of narratives a particular person spins, including what tropes they evoke—even if you’re not previously familiar with the tropes—would reveal a lot about their worldview, their ethical structure, and the assumptions and models they have about people, institutions, and the general patterns they believe underlie the world.
An oversimplified example is a person who clearly has a “victim mentality” and an obsession with the idea of attractiveness: they always use sentence structures (e.g. “they stopped me”) and narratives where other people have inhibited, bullied, envied, or actively sought to stifle the person telling the story, and these details disproportionately make reference to people’s faces and figures, using words like “ugly”, “hot”, “skinny”, etc. It is not necessary to know what films, books, or periodicals they read.
I think you would get the set of topics, but not necessarily the right idea about how exactly those topics apply to the current situation. To use your example, if someone’s speech patterns revolve around the topic of “bullying”, it might mean that the person was bullied 50 years ago and still didn’t get over it, or that the person is bullied right now, or perhaps that someone they care about is bullied and they feel unable to help them. (Or could be some combination of that; for example seeing the person they care about bullied triggered some memories of their own experience.)
Or if someone says things like “people are scammers”, it could mean that the person is a scammer and therefore assumes that many other people are the same, or it could mean that the person was scammed recently and now experiences a crisis of trust.
This reminds me of the anime Psycho-Pass, where a computer system detects how mentally deranged people are...
...and sometimes fails to distinguish between perpetrators and their victims, who also “exhibit unusual mental patterns” during the crime; basically committing the fundamental attribution error.
Anyway, this sounds like something that could be resolved empirically, by creating profiles of a few volunteers and then checking their correctness.
Yes. Which is invaluable information about how they currently see the world. How is that not the ‘right idea’, if that is how they continue to mentally represent events?
Your ‘people are scammers’ example is irrelevant; what is important is whether they constantly bring in tropes or examples that imply deception. They may never use the words ‘scammer’ or ‘mistrustful’, or make a declaration like ‘no one has integrity’. The pattern is what I’m talking about.
Examples of how not to write a paragraph are surprisingly rare
Epistemic Status: one person’s attempt to find counter-examples blew apart their own (subjective) expectations
I try to assemble as many examples of how not to do something as ‘gold standard’ or best-practice examples of how the same task should be done. The principle is similar to what Plutarch wrote: medicine to produce health must examine disease, and music to create harmony must investigate discord.
However, when I tried to examine how not to write—in particular, examples of poorly written paragraphs—I was surprised by how rare they were. There are a great many okay paragraphs on the internet and in books, but very few that were so unclear or confusing as to be examples of ‘bad’ paragraphs.
In my categorization, paragraphs can be great, okay, or bad.
Okay paragraphs are the most numerous; they observe the rule of thumb of keeping one idea to one paragraph. To be an ‘okay’ paragraph and rise above ‘bad’, all a paragraph needs to do is successfully convey at least one idea. Most paragraphs I found do that.
What elevates great paragraphs above okay paragraphs is that they do an especially excellent job of conveying at least one idea. There are many qualities they may exhibit, including persuasiveness, the appearance of insight, and brevity and simplicity in conveying an otherwise impenetrable or ‘hard to grasp’ idea.
In some isolated cases a great paragraph may actually clearly and convincingly communicate disinformation or a falsehood. I believe there is much more to learn about the forms paragraphs take from a paragraph that conveys a falsehood convincingly than from a paragraph that clearly conveys what is generally accepted as true.
What was surprising is how hard it is to find examples that invert the principle: a paragraph that is intended to convey an idea that is truthful but is hard to understand would be a bad paragraph in my categorization. Yet, despite actively looking for examples of ‘bad paragraphs’, I struggled to find any that were truly confusing or hopeless at conveying one single idea. This experience is especially surprising to me because it challenges a few assumptions or expectations that I had:
Assumption 1 - people who have mistaken or fringe beliefs are disproportionately incapable of expressing those beliefs in a clear and intelligible form. I expected that, looking at the least popular comments on Reddit, I would find many stream-of-consciousness rants that failed to convey ideas. These were far less common than rants that at least conveyed intent and meaning intelligibly.
Assumption 2 - that, as a whole, people need to learn to communicate better. I must reconsider; it appears that on the transmission side they already communicate better than I expected (counter-counterpoint: the 1% rule).
Assumption 3 - the adage that good writing = good thinking. Perhaps not; it would seem that you can write clearly enough to be understood, yet that doesn’t mean your underlying arguments are strong or your thinking is more ‘intelligent’.
Assumption 4 - that I’m merely a below-average communicator. It appears that if everyone is better than I expected, then I’m much further below average than I expected.
I have no takeaway or conclusion from this highly subjective observation, hence why it is a quick take and not a post. But I will add my current speculation:
My current theory for why is “I wasn’t looking in the right places”. For example, I ignored much academic and research literature because the writers’ ability to convey an idea is often difficult to assess without relevant domain knowledge, as those texts are seldom written for popular consumption. Likewise, I’m sure there are many tea-spilling image boards where more stream-of-consciousness rants of greater impenetrability might be found.
My second theory is pareidolia: perhaps I highly overrate my comprehension and reading skills because I’m a ‘lazy reader’ who fills in intention and meaning that is not there?
Interesting thoughts, ty.
A difficulty for common understanding I see here is that you’re talking of “good” or “bad” paragraphs in the absolute, but you didn’t define a “good” or “bad” paragraph by some objective standard, so you’re relying on your own understanding of what’s good or bad. If you were defining good or bad relatively, you’d look at 100 paragraphs and post the worst 10 as bad. I’d be interested in seeing the worst paragraphs you found, some 50th-percentile ones, and the best; then I’d tell you whether I have the same absolute standards as you.
Why don’t LLMs ask clarifying questions?
Caveat: I know little to nothing about the architecture of such things; please take this as naive user feedback if you wish, or ignore it.
Just now I was asking the Meta AI chatbot how to do an ‘L-cut’ using the Android Instagram app. For quite a few messages it hallucinated instructions on how to ‘overlap’ two video tracks when editing a reel, before finally admitting that no such ability exists in the Instagram app.
My grossly negligent mistake was assuming that an LLM with Meta branding would have current or accurate knowledge of Meta properties and products.
However, imagine that there are two versions of the Instagram app, one that has this feature and one that doesn’t—why didn’t it ask “Just to check, what kind of phone are you using?” (which, also, would allay fears of ‘spying’ and invasion of privacy) and then, and only then, give me advice or apologetically inform me that such a feature is not available? (In other branded/commercial chatbots, wouldn’t this be an opportunity for an upsell?[1])
Maybe I’m not prompting LLMs in the right way, but they almost never ask me questions, and certainly never clarifying questions. They make an assumption and produce a slab of text with a boilerplate “let me know if this helps?” tacked on the end. I assume this is a conscious choice and not a limitation of the technology.
To test my observation, I entered the same prompt into several different LLMs:
I just asked Claude 3 Haiku “When was Rosberg’s first Grand Prix?” to see if it assumes I mean Keke Rosberg, or his son, Nico Rosberg. Claude apologizes for not having information on Nico Rosberg’s first Grand Prix… and then goes on to state that “Formula One has had multiple drivers named Rosberg compete over the years. More details would be needed to determine the exact driver and their debut race.”
What I find interesting is that not one sentence in the answer takes the form of an interrogative!
Next, I asked GPT-4o mini the exact same question, to which it confidently assumes which Rosberg I mean: “Nico Rosberg’s first Formula One Grand Prix was at the 2006 Bahrain Grand Prix, which took place on March 12, 2006. He raced for the Williams team.”
Microsoft Copilot makes the same assumption, with the traditional tacked-on boilerplate question:
”Nico Rosberg made his Formula One debut at the 2006 Bahrain Grand Prix. He went on to have a successful career, culminating in winning the World Drivers’ Championship in 2016. Is there anything else you’d like to know about his career or Formula One in general? [F1 car emoji, smoke emoji]”
Meta AI makes the same assumption, and gives a lot more unsolicited information, but it does mention his father:
”Nico Rosberg’s first Grand Prix was the 2006 Bahrain Grand Prix. This marked the beginning of his Formula One career, which would eventually see him drive for Williams from 2006 to 2009 and then Mercedes from 2010 to 2016. During his time with Mercedes, Rosberg won the Formula One Drivers’ Championship in 2016, a feat that made him and his father, Keke Rosberg, only the second father-son pairing to achieve this impressive milestone.”
Not that it is relevant, but Graham and Damon Hill were the first.
This is not an exhaustive sample by any means, but does confirm my naive observation that LLMs do not ask clarifying questions or use interrogatives in order to better shape their responses to my needs.
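For anyone who wants to rerun this beyond a handful of chat windows, here is a minimal sketch of the same test done programmatically. It assumes the official OpenAI and Anthropic Python SDKs, API keys already set in the environment, and model IDs that were current at the time of writing; the question-mark check is only a crude proxy for “asked a clarifying question”.

```python
# Send the same deliberately ambiguous prompt to two models and check
# whether either reply contains any question at all.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = "When was Rosberg's first Grand Prix?"  # ambiguous: Keke or Nico?

openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY

gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

claude_reply = anthropic_client.messages.create(
    model="claude-3-haiku-20240307",  # assumed model ID; may need updating
    max_tokens=300,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

for name, reply in [("gpt-4o-mini", gpt_reply), ("claude-3-haiku", claude_reply)]:
    asks_question = "?" in reply  # crude proxy for an interrogative
    print(f"{name} asks a question: {asks_question}\n{reply}\n")
```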
I imagine such a commercial hellscape would look a little like this:
”I was just wondering why I wasn’t earning any Ultrabonus points with my purchases”
“Before we continue, could you tell me, do you have an Overcharge Co. Premium savings account, or an Overcharge Co. Platinum savings account?”
”Uhh I think it is a Premium.”
″I’m so sorry. If you have an Overcharge Co. Premium savings account, then you will not be able to enjoy our Overcharge Co. Ultrabonus points loyalty system. However, you may be surprised that for only a small increase in account fee, you too can enjoy the range of rewards and discounts offered with the Overcharge Co. Ultrabonus points loyalty system. Would you like to learn more?”
These sorts of behavioral choices are determined by the feedback given by the people who train the AI. Nothing to do with the AI’s architecture or fundamental inclinations.
So the question to ask is, “Why do all the AI companies seem to think it’s less ideal for the AI to ask clarifying questions?”
One part of the reason is that it’s a lot easier to do single-turn reinforcement. It’s hard to judge whether a chatbot’s answer is going to end up being helpful if its current turn consists of just a clarifying question.
Yes, I assumed it was a conscious choice (of the company that develops an AI) and not a limitation of the architecture. Although I am confused by the single-turn reinforcement explanation: while it may increase the probability of any individual turn being useful, as my interaction over the hallucinated Instagram feature attests, it makes conversations far less useful overall unless the model happens to correctly ‘guess’ what you mean.
I have some hard data that disproves I’m a realist, but suggests I’m a pessimist (i.e. more than 50% of the time my best expectations of personal outcomes are worse than the actual outcome). Now what?
The hard data is a handful of specific predictions: for example, a prediction about a financial investment (where performance consistently exceeded my expectations), where I had no control over the performance, and things like an exam where I did have control (although an instructor noted that I performed worse on the exam than on a practice exam because of ‘nerves’).
Arguably the most immediate step is “just get more data: make sure this isn’t an aberration, or at the least see which specific types of predictions or expectations you’re more prone to systematic pessimism about”.
Sure, and then what? Should I change my investment style to allow more upside? If it means revising my models of the world, what shape does that take? What beliefs or habits are the cause of my pessimism?
Sounds like the right kind of questions to ask, but without more concrete data on which predictions were off and by how much, it is hard to give any better advice than: if your gut judgement tends to be 20% off after considering all evidence, move the number 20% up.
Personally, my partner and I have a similar bias, but only about ourselves, so making predictions together on things like “Application for xyz will succeed” or “Y will read, be glad about, and reply to the message I send them” can be helpful in cases where there are large disagreements.
Previously I thought that if you ask better questions then you will get better at solving problems. However, questions are the shadows or reflections cast by the actual framing of the problem. If you have a well-framed problem, you will naturally ask better questions. If you haven’t framed the problem well, then you will ask bad questions.
Bad questions are still useful because they are a signal that you are “barking up the wrong tree” or that you need to reformulate the problem.
What marks a bad question and therefore signals a framing of the problem that is unconducive to solving it?
There are probably a myriad of ways a question can be bad. It appears that most of the questions which signal a failure to frame a problem well are vague. Imagine someone who wants to become an Academy Award-winning cinematographer asks “has anyone ever won an Academy Award for Cinematography without going to film school?” The answer is of course “yes”, especially in the early days of the award. But it is not a useful question, in that it neither narrows down which actions this aspiring cinematographer should take or avoid, nor clarifies which factors will most impede or expedite their journey. It is only useful in that it shows they are not asking useful questions, and therefore their entire formulation of the problem needs work. Better questions are more useful questions.
Much like measures, better questions are ones that influence decisions: if a change in the answer to a question doesn’t change your decision, then it’s not a useful question.
Popular wisdom encourages us to ask open ended questions, especially those which ask “why?” or “how?”.
While this is true for seeking advice, having discussions with experts, or building consensus, better questions even in these circumstances tend to be specific (i.e. asking for vague help, “how can I be a great cinematographer?”, versus asking for specific advice, “how did Michael Ballhaus light the nightclub scene in Under the Cherry Moon? How does it differ from his colour nightclub cinematography in Lola? Why did he make those decisions?”). However specific they may be, though, open-ended questions may not be better questions in the absence of an expert to ask.
It is less Socratic, more something out of Yes, Minister, in that I don’t know what I don’t know—so if I ask myself, rather than an expert, “Why does this problem persist?”, all I can give is a tentative guess, or an answer I already believe to be unlikely. Whereas an expert may be able to plug my knowledge gaps.
I am undecided whether this means why/how questions are potentially better suited for assessing our knowledge, or at least our confidence in our knowledge, concerning the framing of the problem—but that in the absence of an expert they are not particularly useful.
Counterpoint: what about the circumstances where the questions appear to be “good” or “better” questions but you’re still solving the wrong problem? They are good for the problem you are mistakenly trying to solve.
DON’T write instructions like that, instead try this...
Instructions headed “Don’t...”, “Stop doing this, but instead...”, or “when you find yourself [operative verb], try to...” tend to be more useful and actionable for me than non-refutative instructions. Or, to get meta:
I find I’m terrible at making an instruction, a piece of advice, or a note actionable, because it is exceedingly difficult to find suitable cues, situations, or contexts in which to use them. This is further complicated by the struggle to remember the instruction correctly in the ‘fog of war’, as it were.
For example, Nassim Nicholas Taleb notes that people are so prone to “overcausation” that you can get most people to become loquacious by simply asking “why?” (others say ‘why’ can come off as too accusatory, and ‘how come?’ is more polite). I may like to see how true this is, but now I need to find a situation to use it in… uhhh… hmmm… okay, next time someone gives a one-word response about their weekend. Sure… now how can I remember it? In the panicky situation where a conversation grows quiet, how can I remember to ask “why?”?
Provided that an instruction or note that begins with “stop...” or “don’t...” does in fact describe a habit you have or a recurring situation you continue to encounter, there is already a cue you can recognize.
For example, often when I hit an impasse while brainstorming, I will absentmindedly check Instagram, a news website, or here. That is a cue, and I can say “Don’t check Instagram; instead, write down a brief description of the next step in your brainstorming process.”
To test Taleb’s observation, I’d do well to think of something I often do or notice when a conversation peters out, something like “don’t say ‘haha yeah’, ask ‘why?’” (and trust I have the sense not to implement that robotically and ask ‘why?’ as a non sequitur).
So my advice to myself: don’t write instructions or notes that begin with “try to...”, “you should...”, or even “write instructions that begin with refutations”, but instead use “Don’t… but instead…” as a template.
Problem solving with Verbs:
This came up when I was trying to think about how to better frame questions with the form “How should I X?”
When outlining stories or screenplays I find action or visual verbs immeasurably useful in creating more interesting scenes. Instead of “Joe talks to Bill” he can negotiate, remonstrate, beg, plead, mock, pontificate etc. Each of which makes the scene much more specific. “Maria goes to the store” is too vague, she may either meander to the store, sprint to the store, or even search for the store. These action verbs not only give us a sense of the character’s intentions and allow us to (appropriately enough for film) imagine visually how it plays out, but are more interesting and useful.
Why doesn’t the same apply to practical questions?
At the risk of going meta, take the question “Should I make a short film?”. There are surprisingly few action verbs that describe being a filmmaker[1]; other than ‘make’, off the top of my head, you can film, direct, or “do a pitchya”[2]. If you want to be more specific about how it will be done, you can say you will improvise a film, or you can say you will “go verité”. I’m sure you could coin denominal verbs based on directors with distinct processes.
It kind of works with the question “How should I get to the party?”—rather than “get”, I can drive, walk, carpool, rideshare, or bike. I may even opt to describe my entrance: I can sneak in, or I can explode in and make an entrance… Indeed, if I choose to sneak in, then I may opt to arrive on foot or rideshare so no one notices my car.
Yes, there are a plethora of department-specific verbs—you can lens or shoot a film, score it, colour it, mix it, dub it, cue it, do foley, light it, dress it, design it, (location) scout it, produce it, cut or edit it, etc.
I noticed in interviews with Classic Hollywood directors Raoul Walsh and John Ford that they don’t call them “films” or “movies” but “pictures”, pronounced more like “pitch-ya”, as in “motion picture”.
Most people, when they say Cinéma vérité, mean “Direct Cinema”. The classic device of Cinéma vérité is the vox pop: the proverbial ‘man on the street’ is pulled in front of the camera, often looking down the barrel (already an artificial and performative situation), to give an honest opinion or a barometer of public feeling.
I did consider adding “Kubrick it” as an example, but I couldn’t decide if “do a lot of takes and wait for something strange or weird to happen as the actors get exhausted/bored” was sufficiently identifiable as a filmmaking process. Many directors do a lot of takes; Chaplin did a lot of takes; doing a lot of takes doesn’t make you Kubrick. However, there is something unusual and distinct about the way Altman handled scenes with many characters.
The key here is that the verb should describe both the manner and the means by which the task is done. Going or getting to a party or store is too vague. Making or shooting a film tells me nothing about the style, genre, or logistics of filming.
What is the functional difference between Agency and having social power? This is likely a question that reflects my ignorance of the connotations of ‘Agency’ in Rationalist circles.
When people say “he’s a powerful man in this industry” does that imply he is greatly Agentic? Can one be Agentic without having social power? Is one the potential and the other the actuality?
“Agency” is rationalist jargon for “initiative”, the ability to initiate things.
I’ll need some clarification:
Does that mean that someone who habitually starts new processes or projects, but seldom is able to finish them or see them through to completion, has lots of (Rationalist-sense) Agency?
But also, does that mean that in a hypothetical organization where one person has the means to veto any decision others make, but seldom exercises that veto despite very easily being able to, the veto-holder would not be Agentic?
No. Initiative is, well, it’s an ordinary English word with a generally understood meaning. Pulled from the web:
“The ability to assess and initiate things independently”, “the power or opportunity to act or take charge before others do”, “the ability to use your judgment to make decisions and do things without needing to be told what to do”, synonyms “ambition, action, enterprise, drive, spirit, aggressiveness, vigor, hustle, energy, go, gumption, grit, spunk, assertiveness” etc. I think that paints a pretty clear picture.
This is what I have always understood by the word “agency” in the LW-sphere, at least when applied to people. The LW coinages “agenty” and “agentic” mean having agency in that sense.
So habitually starting things and letting them wither doesn’t cut it, and neither does nominally having some role but never executing it. It’s an inner quality that by its nature must manifest in outward actions.
The word “Agency” also has specific other, more technical uses. Here it is in philosophy, where it means something distantly similar but far broader. It’s a “porridge word” (Edward de Bono’s coinage), a hazy concept with little content that, like porridge, takes up the shape of whatever container it is put in. “Fake explanations” often consist of calling the thing to be explained by a porridge word.
Then there is “Agency” in the context of AIs having it, or being Agents. This is something that I don’t think the users of the word understand themselves. They’re trying to project human agency in the sense described above onto these giant weight matrices without having a non-mentalistic characterisation of the phenomenon they’re trying to find there. Not knowing what you’re looking for makes it difficult to find. From time to time I’ve suggested that control systems, hierarchically organised in a specific way, are the concept they need, but haven’t got much traction.
Thank you for taking the time to try and give me a broad overview of the different nuances of the word, unfortunately here the student has failed the teacher. I’m still very confused.
I previously understood the porridge sense of agency (tangent: I like that phrase ‘porridge word’; it reminds me of Minsky’s ‘suitcase word’) to be “an entity that has influence or can effect change”. Here on LW I have been brought to believe it just means acting, verging on thoughtlessly, which I understood since acting is the only way to catalyze change (i.e. change towards one’s goals).
I failed to explain my confusion. It’s not so much “letting them wither”; let me put it another way: if you are in a bunker, there is an armed conflict overhead, and therefore the smartest thing to do is “nothing”, i.e. staying put in the bunker, are you being agentic/acting agentically? The only things you can initiate at that point are unnecessary risks.
Likewise, I don’t mean nominally having some role. I mean actually having the means, the power, the authority, the social status, the freedom from negative repercussions, and the knowledge to exercise the means, but choosing not to because they evaluate it as not being worthwhile. They could initiate changes, but they rarely see the need, not from fear or reluctance, but from weighing up the pros and cons. Are they being agentic?
Agency here is not “change for the sake of change” but presumably “acting in a way that materializes the agent’s goals”, and that requires initiative, analogous to Aristotle’s kinoun (efficient) cause: the carpenter who takes the initiative of making wood into a table. However, the connotations of spunk, hustle, ambition, etc., of generally acting with energy and enthusiasm towards goals (knowing that these are not golden tickets to success: necessary factors? probably; sufficient? probably not) leave me confused about what this quality is describing.
You’re looking at edge cases in order to understand the concept. I think looking at the centre works better than mapping out the periphery, which was my reason for giving those definitions and synonyms of “initiative”. If someone is in a situation where circumstances forestall any effective action, then to ask whether they are being “agentic” in doing nothing is like asking whether an unheard falling tree makes a sound.
I’m afraid I just have to give up on understanding what Agency means then. Thank you for trying though.
Unlike initiative, because you can take initiative and have it fail to deliver the intended results, yet it is still initiative. Whereas with being Agentic, is it a potential or an actuality? I don’t know.
Agency has little to do with social power. It’s kind of hard to describe agency, but it’s characterized by deliberateness: carefully and consciously thinking about your goals, as well as having conscious models of how your actions help you achieve those goals, in contrast to unthinkingly adhering to a routine or doing what everyone else is doing because it is what everyone else is doing. It also has some aspect of being the kind of person who does things, who chooses action over inaction.
So by that definition, would you consider trickster-archetype characters (you can see why I have been wondering) like Harpo Marx or Woody Woodpecker, who appear to be very impulsive, albeit not bound by routines or by what everyone else is doing, to lack Agency, because they are highly reactive and don’t plan?
Let me write out my current assumptions as it might make it easier to correct them:
Analysis Paralysis is not Agentic because while it involves carefulness and consciously plotting moves towards goals, it lacks action towards them.
Hedonic and Impulsive activity is not agentic because while it does involve action towards one’s goals, it lacks careful planning.
Agency, then, is making plans and acting upon them, irrespective of whether one is able to see them through to completion, provided one has the intention, the will, and the forethought.
Is that correct?
“Babbling Better”: this is a work in progress and still requires more thinking
In short: I need a methodology, or at least heuristics, for identifying the “right problem” to solve, and for noticing when one is solving the “wrong problem”. Better problem framing leads to better and more focused answers to questions, and hopefully the eventual resolution of problems. I’ve come across two techniques: the Five Whys, to understand problems better, and adverbs of manner, to babble more constructively.
So far:
It is easy to babble, babies do it. It is still quite easy to babble comprehensible but wrong sentences, such as LLM hallucinations. Your pruning is only as good as your babble.
With regard to problem solving, low-quality babble doesn’t contribute to resolving the problem. For example, let’s say the problem is “camera autofocus doesn’t focus on eyes”; a low-quality “babble” answer might be “burn a stick of incense and pray to Dionysus”. The acts themselves are feasible and the sentence is comprehensible, but any desired change in the camera’s autofocus performance will be pure coincidence.
Yet sometimes low-quality babble appears to be high-quality babble, because we are simply not solving the right problem and the babble appears perfectly suited to the wrong one. Especially if incentives are involved.
My hunch is that to babble better, not only do you need better methods of babbling, but you also need to better understand what goals you are babbling towards. And that requires a better understanding of why the problem is a problem.
5 Whys on yourself: asking “why do I think this is a problem?” to at least 5 levels
Not to be mistaken for the Burger joint. The “Five Whys” technique was apparently invented at the Toyota Corporation as a system for uncovering the root causes of production faults.
The choice of “why” falls into a broader pattern which takes me back to documentary filmmaking and interviewing: you learn more through open-ended questions, usually those where the key interrogative is “why” or “how”, than through closed-ended questions. The latter, as Wittgenstein pointed out, basically seek to affirm or negate a proposition or conditional: “Do you like him?” “Is he still there?” “Would you call that green or turquoise?”
If I am a manager or investigator trying to ascertain the cause of a fault on a production line, open-ended questions make sense, since I will not be in possession of all known or knowable facts.
This still holds if I am a novice, or just someone enquiring of an expert for help in achieving some goal. If I ask an experienced cinematographer “how would that scene be lit?”, even if they don’t know specifically, they have a large body of experience and knowledge that means they could probably make useful guesses about how to replicate the effect.
If I intend to ask an expert for advice, I can’t give them the responsibility of figuring out the kind of help I need. The better I can define the problem myself, the better and more informative the question I can ask them. Be too vague about your problem and you can only hope to get generic responses like “be confident”.
It seems ridiculous though, doesn’t it? Socratic, or even something out of Yes, Minister: why should I ask myself open-ended questions if I don’t know what I don’t know? While I may not understand the problem, what I can do is at least explain why it’s a problem and how I see it. And one effective way I’ve found to do that is the Five Whys technique.
It is often exceedingly difficult to know what the right problem to solve is, what we may have a better chance of defining is why we perceive it as a problem and why we expect it to cause conflict.
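The questioning loop itself is mechanical enough to sketch. A toy illustration, assuming nothing beyond the standard library; the five-deep “why is that a problem?” prompt is the technique, everything else here is my own scaffolding:

```python
# Five Whys applied to yourself: state the problem, then ask "why is
# that a problem?" up to five levels deep, keeping the chain of answers.
def five_whys(problem: str, depth: int = 5) -> list[str]:
    chain = [problem]
    for level in range(1, depth + 1):
        answer = input(f"Why ({level}/{depth}) is this a problem: {chain[-1]!r}? ")
        if not answer.strip():
            break  # ran out of reasons; the last entry is the candidate root cause
        chain.append(answer.strip())
    return chain

if __name__ == "__main__":
    for step in five_whys("Camera autofocus doesn't focus on eyes"):
        print(" ->", step)
```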
To-do: add more techniques to my arsenal to better define problems… the step before babbling.
Adverbs and Creativity?
Strategically Efficaciously Productively Babbling
I have recently come across a technique for higher-quality babble, at least for creative purposes. It is as simple as employing an adverb of manner to modify a verb. This is a minor variation on a technique used by mime artists to create a character: you take a situation or process like “make breakfast” and perform it with an attitude: happy, hungover, lovelorn.
It is surprisingly easy to come up with scenarios, and even stories with arcs—goals, conflict, and comedic pay-offs, complete with a character who has distinct mannerisms—just by cycling through adverbs. Compare these three adverbs: grumpily, overzealously, nervously.
These scenarios appear to “write themselves” for the purposes of short pantomime bits. This is the exact type of technique I have spent years searching for.
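The cycling itself is trivially mechanizable. A toy sketch; the adverb and task lists here are purely my own placeholders:

```python
# Cross adverbs of manner with mundane tasks to generate pantomime
# scenario prompts; the combinations, not the lists, do the creative work.
import itertools
import random

ADVERBS = ["grumpily", "overzealously", "nervously", "lovelornly", "furtively"]
TASKS = ["make breakfast", "pack a suitcase", "wait for a bus", "wrap a gift"]

prompts = [f"{task}, {adverb}" for task, adverb in itertools.product(TASKS, ADVERBS)]
for prompt in random.sample(prompts, 5):
    print(prompt)
```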
To do: does this technique of better babbling through adverbs of manner apply to non-creative applications? If not, then develop a methodology or at least heuristics for identifying the right problem and noticing a “wrong problem”.
Update (October 2024): it is interesting looking back on this 8 months later, as I think I have just hit upon a means of “babbling better”. I intend to revise and go into detail about this means after a period of actually trying it out. It’s certainly not original; it vaguely resembles the Amazon practice of writing memos and speculative press releases for a new proposal, and it uses your ‘internal simulator’.
In brief, the way I employ this new method: I take the first knee-jerk ‘babble’ or solution to the problem that I come up with. Then I try to write a speculative narrative in which this solution or action delivers a satisfactory or worthwhile result, being very methodical about the causation. This is not, I stress, a prediction or prognostication.
What I find is that writing a speculative narrative, and making it as convincing as possible to myself, forces me to explicate my framework and mental model around the problem: my hunches, suspicions, assumptions, beliefs, fears, hopes, observations, knowledge, and reasoning, much of which I may not be consciously aware of.
With the framework explicated, I can now go about babbling. But it will be much more targeted and optimized based on my expectations, knowledge, and the framework in general.
Some (not yet confirmed) secondary bonuses of this method:
- it fights analysis paralysis: instead of babbling for breadth, it forces thinking about causation and consequences
- it is inherently optimistic, as you’re forcing yourself to write a structured argument for why this could or would work
- having explicated your framework, you may be able to verify specific hunches or assumptions that you hitherto weren’t aware were influencing your thinking
One caveat: why a satisfactory narrative, why not a best-case scenario? I think a best-case scenario will assume a lot of coincidence and serendipity, and, as a means for reflection on and explication of your mental model or framework of the problem, is less informative. For that reason, causative words and phrases like “because”, “owing to”, “knowing that… it follows that…”, and “for this reason” should be abundant.
I will update after more real world employment.
To think about:
Shannon information and cataloguing ‘rushes’ from a documentary. This is not about the actual amount of entropy in any given frame of uncompressed video, but rather the entropy of all the metadata from all the footage.
Eisensteinian film theory was an attempt to marry Marxist dialectic with film editing. The “highest” type of film cut was the “intellectual montage”; the bone-to-nuclear-satellite cut in 2001: A Space Odyssey is perhaps the most iconic example in film history. Eisenstein himself used the more on-the-nose approach of showing crowds of protesters being mown down by Tsarist troops interspliced with footage of animals being slaughtered in an abattoir.
The dialectic of cuts—the juxtaposition between image A and image B—is a critical film-language technique; think of the Kuleshov experiment, where the same shot of an actor appearing to look at either soup or a corpse lying in state changes the emotion the audience infers in the actor.
Documentary rushes of similar thematic content—e.g. “Shot 1 - mid shot, children playing” and “Shot 2 - mid shot, different children playing”—have lower entropy than a dissimilar pair like “Shot 1 - mid shot, children playing” and “Shot 87 - close-up, old man smiling”. We want to avoid homogeneous sets.
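One way to make “entropy of the metadata” concrete, sketched under my own assumptions (the tag vocabulary and shot logs are invented for illustration): treat each shot’s logged tags as draws from a distribution and compute the Shannon entropy of a candidate bin of rushes; a homogeneous bin scores low, a varied one scores high.

```python
# Score a set of rushes by the Shannon entropy of their logged metadata
# tags; higher entropy means more heterogeneous material to cut against.
from collections import Counter
from math import log2

def tag_entropy(shots: list[list[str]]) -> float:
    counts = Counter(tag for shot in shots for tag in shot)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

homogeneous = [["mid-shot", "children", "playing"],
               ["mid-shot", "children", "playing"]]
varied = [["mid-shot", "children", "playing"],
          ["close-up", "old-man", "smiling"]]

print(tag_entropy(homogeneous))  # 3 distinct tags, ~1.58 bits
print(tag_entropy(varied))       # 6 distinct tags, ~2.58 bits
```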
The problem for a film editor, especially an observational documentary editor or someone working with archive material (think of the films of Brett Morgen and Asif Kapadia), is that every time you create a sequence you have to watch all of the material again, hoping to find the dialectic or invent a narrative that combines at least two shots.
Binary Search algorithms are also relevant here.
CLIP and visual semantic networks can automate part of the search if the editor has something specific in mind. But I want to cultivate serendipity: unforeseen juxtapositions.
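A hedged sketch of what cultivating serendipity might look like computationally, assuming shot embeddings have already been produced by a CLIP-style image encoder (the random array below is only a stand-in for them): instead of querying for what you already have in mind, surface the most distant pairs of shots and let the editor judge whether the collision sparks anything.

```python
# Given CLIP-style embeddings for each shot, rank shot pairs by cosine
# similarity and surface the least similar pair as a juxtaposition candidate.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 512))  # stand-in for real CLIP shot embeddings

normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = normed @ normed.T          # cosine similarity matrix
np.fill_diagonal(similarity, np.inf)    # exclude self-pairs

i, j = np.unravel_index(np.argmin(similarity), similarity.shape)
print(f"Most 'serendipitous' candidate pair: shot {i} vs shot {j}")
```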