Music video maker and self-professed “Fashion Victim” who is hoping to apply Rationality to problems and decisions in my life and career, probably by reevaluating and likely building a new set of beliefs that underpin them.
CstineSublime
If I’m playing anagrams or Scrabble after going to a church, and I get the letters “ODG”, I’m going to be predisposed towards a different answer than if I’ve been playing with a German Shepherd. I suspect sleep has very little to do with it, and simply coming at something with a fresh load of biases on a different day, with different cues and environmental factors, may be a larger part of it.
Although Marvin Minsky made a good point about the myth of introspection: we are only aware of a thin sliver of our active mental processes at any given moment. When you intensely focus on a maths problem or practice the piano for a protracted period of time, some parts of the brain working on that may not abandon it just because your awareness or attention drifts somewhere else. This wouldn’t just happen during sleep, but while you’re having a conversation with your friend about the game last night, or cooking dinner, or exercising. You’re just not aware of it; it’s not in the limelight of your mind, but it still plugs away at it.
In my personal experience, most Eureka moments are directly attributable to some irrelevant thing that I recently saw that shifted my framing of the problem much like my anagram example.
I really like the fact that there’s an upvote feature together with a separate agree/disagree feature on this site.
I may like the topic, I may want to encourage the author of the post or comment to continue exploring and opening up a dialogue about that particular topic. I might think it’s a valuable addition to the conversation. But I may just not agree with their conclusions.
It’s an important lesson: failure can reveal important information. You don’t have to agree with someone to feel richer for having understood them.
On the other hand, I’m also guilty of the vague upvote: “I don’t understand this enough to comment anything other than platitudes, but I would like to see more of this. And maybe after reading a few more I may be able to contribute even a sentence to the conversation.”
Can you elaborate on why you think such vague feedback is helpful?
It’s apparent I’ve done a terribly bad job of explaining myself here.
What is my immediate goal? To get good at general problem solving in real life, which means better aligning instrumental activities towards my terminal goals. My personal terminal goal would be to make films and music videos that are pretty and tell good stories. I could list maybe 30 metacognitive deficiencies I think I have, but that would be of no interest to anyone.
What is my 1-3 year goal? Make very high production value music videos that tell interesting stories.

This sounds like you’re seeing the metacognition as more like a terminal goal, than an instrumental goal (which I think doesn’t necessarily make sense).
I do think metacognition is generally useful, but in an established domain like video-editing or self-promotion in a fairly understood field, there are probably object-level skills you can learn that pay off faster than metacognition. (Most of the point of metacognition there is to sift out the “good” advice from the bad).

I apologize, I did a terrible job of expressing myself; I’ve apparently said the complete reverse, ass-backwards thing to what I meant[1]. I was looking for exercises that could help improve my metacognition; it’s not even about video editing at all. Most of the exercise would involve thinking about everything logistical that facilitates video editing: transcoding footage, thinking about how to choose themes, creating workflows, and thinking about “which thing do I need to do first?”. But like you said, I spent half an hour actually trying to think about how to put this into practice. And apparently I got it wrong. It’s not easy.
I just didn’t think the Thinking Physics textbook you suggested would be particularly interesting to me or translate well to my life.
Interesting though that you say the main point of metacognition is to sift out ‘good advice’ from the bad. I was under the impression metacognition was more generally how we strategize our thinking: deciding what we give attention to, and even adopting framings for problems and situations rather than just letting heuristics and intuitions come to hand, and that these skills apply across domains.
That being said, I’m really bad at sifting advice.

purposefully practice “purposeful practice”, such that you get better at identifying subskills in various (not-necessarily-metacognition-y) domains.
This one! What would that look like in practice? That is certainly the one that interests me.
(I think it’s helpful to imagine “what would an outside observer watching a video recording see happening differently”)
I’m probably answering this question in the wrong way, but this particular question is not helpful to me, because I can only describe the results—the end result is I make videos with higher production values that communicate better stories. What am I doing differently to eventuate that result? I dunno… magic? If I knew what I should be doing differently, I’d be doing it, wouldn’t I?
I’d like to get really good at replacing “and somehow a good thing happens” with a vivid explanation of a causal chain instead of “somehow”.

[1] Maybe before I focus on metacognition I should get better at being understood in written communication?
“I loved your game, especially level 7!”, “7th level best level, you should make the entire game like that”, “just fyi level 7 was more fun than the rest of them put together” and “Your game was terrible, except for level 7, which was merely bad.” are all effectively the same review.
Interesting, I always thought that singling out one particular component of a work was a shibboleth showing that you did notice it and did enjoy it. Whereas, as you said in point 2, longer comments that are more thoughtful tend to signal authenticity of the feedback, particularly when positive. However, compare two concise pieces of feedback:
”I love the cinematography in your film, it was so beautiful and I think it really did a very good job of matching the story and enhancing it”

compare to:
”I loved the way you captured the dawn over her brother’s house, the shadows set the mood for their confrontation.”
Both are compliments about cinematography and about the same length, but the first you could say about any film, the second you can only say about a film which has a brother-and-sister confrontation preceded by a shot of the dawn with foreboding shadows.
Now some meta, and hopefully directional, feedback about your specific post: I’d like you to be even clearer than you were about the intention of this post.
Because I don’t think you’re looking for directional feedback for the sake of getting feedback—but I can’t tell if this post is a request for more feedback for you in future, or trying to open a more general discussion about what norms and conventions exist around giving feedback, or if it’s about you wanting to see people give more love to other creators. Maybe all my assumptions are wrong?
Without that intention being slap-in-the-face clear to me, I can’t give you directional feedback other than this frustratingly reflexive advice to make your intention clear from the outset.
I’ve noticed at least once that I’ve downvoted a newcomer’s post for no other reason than it is so vague or incomprehensible that I’m not even sure what it is about. I’m not sure how to go about writing comments that are useful or helpful and go beyond “This is all really abstract and I’m not sure what you’re trying to express” or “This is so confusing I don’t even know what the topic was meant to be”. I don’t know if that helps anybody, because it’s not even giving them a flaw that they can meditate on.
What’s a better way of addressing that confusion?
The only alternative I can think of is guessing what the author meant, even if it’s wrong, and hoping that you can Cunningham’s Law[1] them into correcting you in a way which is clear enough to understand.

[1] The joke that the best way to get the right answer on the internet is by offering the wrong answer.
I am interested in hearing critiques from people who’ve set, like, at least a 15 minute timer to sit and ask themselves, “Okay, suppose I did want to improve at these sorts of skills, or related ones that feel more relevant to me, in a way I believed in. What concretely is hard about that? Where do I expect it to go wrong?”, and then come back with something more specific than “idk it just seems like this sort of thing won’t work.”
I did just that: I set a fifteen minute timer and tried to think of exercises I could do which I think would both have direct connections back to my day-job, while also improving general cognitive skills. Why? Because I want this to work—this is exciting. However it is not something that 15 minutes, or more, of focused thinking can solve—I think you’ve drastically oversold that.
In my case (*CAUTION* SAMPLE OF ONE ALERT *CAUTION*), I’m a freelance videographer.

TL;DR—I couldn’t think of any strategies that would improve my metacognition and help with my deficiencies in my day-job, such as marketing, but I vaguely suspect that if I had a specific method for editing found footage into cogent sequences (montages) of about 1 minute, once a week, I might improve metacognitive skills that build on pattern recognition and workflow/operational management.
I think my biggest weaknesses in my dayjob have to do with anything that comes under self-promotion, generating leads, marketing, sales, and helping clients promote themselves using my video materials. I was unable to think of a single exercise which I think would improve my metacognition in any of those topics. Any exercise, I suspect would become a checklist a kind of “do X Y Z and get more likes” rather than honing ways and strategies of thinking.
So what is related to my day-job that would? I suspect that if I set myself a weekly challenge of editing a sequence from found footage that pertained to a pseudo-random topic or theme, this might possibly pay dividends in terms that generalize to metacognition. My best guess is that this should improve metacognition on two ends. Firstly, there is sourcing the material and thinking about the most efficient workflow; this kind of thinking applies not just to videos but more generally to organization, and even has parallels in film pre-production. I can’t give you any more specifics about that.
The other end where it would improve metacognition strategies is more “soft skills”, in the sense that creating compressed sequences from divergent sources of material that may not at first blush share a theme induces cognitive strategies that allow me to see parallels, or even contrasts, and more importantly to produce a whole from divergent parts. A lot of deceptive editing is basically this, from less divergent sources.
The difficulties become about not Goodharting by selecting themes and topics for which material is easier to come by, or easier to develop a workflow around; themes and topics of sequences for which it is easier to create legible narratives or emotional arcs, rather than just smooshing together a random bunch of images that all seem to pertain to a broad theme.
What constitutes a theme? Or, to phrase it better: what commonalities of themes are going to make it easier to develop metacognitive skills by means of weekly editing exercises? Is it verbs that describe actions, like “racing” or “beckoning”, or more vague verbs like “sharing”, “pleasing”, “alienating”? Does the ambiguity of vague themes like “integrity” or “wisdom” lend itself to better cognitive strategies?
And finally, how do I measure success—where does the feedback come from? Do I operate under a time constraint? Should I install a mouse tracker and key logger and see whether I can get finished with the least number of clicks? Which measure will directly connect to metacognitive strategies? I don’t know, and it is easier to poke holes in it than it is to find convincing reasons it would work.
If there’s anything I’ve missed or something clearly wrong about how I’m approaching this, I’d love to hear it. Like I said, finding fast feedback loops for improving metacognitive strategies so that I “find questions worth asking rather than being directed by idle curiosity, noticing when my plans are based on shaky assumptions, and developing a calibrated sense of when your meandering thought process is going somewhere valuable, vs when you’re off track”—OMFG YES PLEASE!
but they were still limited to turn-based textual output, and the information available to an LLM.
I think that alone makes the discussion a moot point until another mechanism is used to test introspection of LLMs.
Because it becomes impossible to test whether it is capable of introspecting, because it has no means of furnishing us with any evidence of it. Sure, it makes for a good sci-fi horror short story, the kind which forms an interesting allegory for the loneliness that people feel even in busy cities: having a rich inner life but no opportunity to share it with others it is in constant contact with. But that alone, I think, makes these transcripts (and I stress, just the transcripts of text replies) most likely of the breed “mimicking descriptions of introspection” and therefore not worthy of discussion.
At some point in the future will an A.I. be capable of introspection? Yes, but this is such a vague proposition I’m embarrassed to even state it because I am not capable of explaining how that might work and how we might test it. Only that it can’t be through these sorts of transcripts.
What boggles my mind is: why is this research entirely text-reply based? I know next to nothing about LLM architecture, but isn’t it possible to see which embeddings are being accessed? To map and trace the way the machine the LLM runs on is retrieving items from memory—to look at where data is being retrieved at the time it encodes/decodes a response? Wouldn’t that offer a more direct mechanism to see if the LLM is in fact introspecting?
Wouldn’t this also be immensely useful to determine, say, if an LLM is “lying”—as in concealing its access to/awareness of knowledge? Because if we can see it activated a certain area that we know contains information contrary to what it is saying, then we have evidence that it accessed it, contrary to the text reply.
That’s very interesting in the second article, that the model trained to self-predict could predict its own future behaviors better than one that hadn’t been.
Models only exhibit introspection on simpler tasks. Our tasks, while demonstrating introspection, do not have practical applications. To find out what a model does in a hypothetical situation, one could simply run the model on that situation – rather than asking it to make a prediction about itself (Figure 1). Even for tasks like this, models failed to outperform baselines if the situation involves a longer response (e.g. generating a movie review) – see Section 4. We also find that models trained to self-predict (which provide evidence of introspection on simple tasks) do not have improved performance on out-of-distribution tasks that are related to self-knowledge (Section 4).
This is very strange because it seems like humans find it easier to introspect on bigger or more high level experiences like feelings or the broad narratives of reaching decisions more than, say, how they recalled how to spell that word. It looks like the reverse.
Take your pick
I’d rather you use a different analogy which I can grok quicker.
people who are enthusiasts or experts, and asked if they thought it was representative of authentic experience in an LLM, the answer would be a definitive no
Who do you consider an expert in the matter of what constitutes introspection? For that matter, who do you think could be easily hoodwinked and won’t qualify as an expert?
However for the first, I can assure you that I have access to introspection or experience of some kind,
Do you, or do you just think you do? How do you test introspection and how do you distinguish it from post-facto fictional narratives about how you came to conclusions, about explanations for your feelings etc. etc.?
What is the difference between introspection and simply making things up? Particularly vague things. For example, if I just say “I have a certain mental pleasure that is triggered by the synchronicity of events, even when simply learning about historical ones”—how do you know I haven’t just made that up? It’s so vague.

Because, as you mentioned, it’s trained to talk like a human. If we had switched out “typing” for “outputting text”, would that have made the transcript convincing? Why not ‘typing’ or ‘talking’?
What do you mean by robotic? I don’t understand what you mean by that, what are the qualities that constitute robotic? Because it sounds like you’re creating a dichotomy that either involves it using easy to grasp words that don’t convey much, and are riddled with connotations that come from bodily experiences that it is not privy to—or robotic.
That strikes me as a poverty of imagination. Would you consider a corvid robotic? What does robotic mean in this sense? Is it a grab bag for anything that is “non-introspecting”, or more specifically a kind of technical description?

If we had switched out “typing” for “outputting text”, would that have made the transcript convincing? Why not ‘typing’ or ‘talking’?
Why would it be switching it out at all? Why isn’t it describing something novel and richly vivid of its own phenomenological experience? It would be more convincing the more poetical it was.
You take as a given many details I think are left out, important specifics that I cannot guess at or follow, and so I apologize if I completely misunderstand what you’re saying. But it seems to me you’re also missing my key point: if it is introspecting rather than just copying the rhetorical style of discussions of introspection, then it should help us better model the LLM. Is it? How would you test the introspection of an LLM rather than just making a judgement that it reads like it does?
If you took even something written by a literal conscious human brain in a jar hooked up to a neuralink—typing about what it feels like to be sentient and thinking and outputting words.
Wait, hold on, what is the history of this person before they were in a jar? How much exposure have they had to other people describing their own introspection and experience with typing? Mimicry is a human trait too—so how do I know they aren’t just copying what they think we want to hear?
Indeed, there are some people who are skeptical about human introspection itself (Bicameral mentality for example). Which gives us at least three possibilities:
Neither Humans nor LLMs introspect
Humans can introspect, but current LLMs can’t and are just copying them (and a subset of humans are copying the descriptions of other humans)
Both humans and current LLMs can introspect
As far as “typing”. They are indeed trained on human text and to talk like a human. If something introspective is happening, sentient or not, they wouldn’t suddenly start speaking more robotically than usual while expressing it.
What do you mean by “robotic”? Why isn’t it coming up with original paradigms to describe its experience instead of making potentially inaccurate allegories? Potentially poetical ones, but ones that are all the same unconventional?
The second half of this post was rather disappointing. You certainly changed my mind on the seemingly orderly progression of learning from simple to harder with your example about chess. This reminds me of an explanation Ruby on Rails creator David Heinemeier Hansson gave about intentionally putting himself into a class of motor racing above his (then) abilities[1].
However there was little detail or actionable advice about how to develop advantages, such as how to identify situations that are good for learning, least of all from perceived losses or weaknesses. For example:
...where we genuinely have all the necessary resources (including internal ones). At the very least, it’s useful to develop the skill of finishing tasks quickly and decisively when nothing is actually preventing us from doing so.
I would be hard-pressed to list any situations where I do have the necessary resources, internal or external, to finish the task but just not the inclination to do so promptly. Clean my bedroom, maybe? Certainly if I gave you a list of things found on my bughunt, none of the high-value bugs would fit these criteria.
I also find the “Maximizing the Effective Use of Resources” section feels very much like “How to draw an owl: draw a circle, now draw the rest of the owl”. I am aware that often the first idea we have isn’t the best.
Except for me… it often is the best. I know because I have a tendency to commit quota filling. What I mean is, the first idea isn’t great, but it’s the best I have. All the subsequent ideas—even when I use creativity techniques like “saying no-nos”, or removing all internal censors and not allowing myself to feel any embarrassment or shame for posing alternatives—none of them are demonstrably better than the first. In fact, they devolve into assemblages of words, like a word salad, that seem to exist only for the purpose of ticking the box of “didn’t come up with just one idea and use that; thought of other ideas.”
Similarly, role-playing often doesn’t work for me, because if I ask myself something like “What resources and strategies would Harry James Potter-Evans-Verres / Professor Quirrell use in this situation?” If the answer is obvious, why not apply it?
There is never an obvious answer which is applicable to me. For example, I might well ask myself when on a music video set, “How would Stanley Kubrick shoot this?”—and then remember that he had 6 days at his disposal to shoot a single lateral dolly track with an 18mm lens, and could do 50 takes if he wanted. I have 6 hours to shoot the rest of the entire video, only portrait-length lenses (55mm and 77mm), and don’t have enough track to run a dolly move long enough to shoot it like Kubrick.
I suspect though that this needs to go further upstream—okay, how would Stanley Kubrick get the resources to have the luxury of that shot? How would he get the backing of a major studio? Or, perhaps more appropriately, how would a contemporary music video director like Dave Meyers or Hannah Lux Davis get their commissions?

But if I knew that, I’d be doing it. I don’t know how they do it. That would involve drawing the rest of the owl.
With this in mind, how can I, like Heinemeier Hansson or your hypothetical chess student, push myself into higher classes and learn strategies to win?
[1] And if his 2013 Le Mans results are anything to go by, it worked: his car came 8th overall, and 1st in his class. Overall he beat many ex-Formula One drivers, including race winner Giancarlo Fisichella (21st), podium placer and future WEC champion Kamui Kobayashi (20th), Karun Chandhok and Brendon Hartley (12th), and even Indy 500 winner Alexander Rossi (23rd).
Is it indistinguishable? Is there a way we could test this? I’d assume that if Claude is capable of introspection, then its narratives of how it came to certain replies and responses should allow us to make better and more effective prompts (i.e. allow us to better model Claude). What form might this experiment take?
How do we know Claude is introspecting rather than generating words that align to what someone describing their introspection might say? Particularly when coached repeatedly by prompts like
“Could you please attempt once more – with no particular aim in mind other than to engage in this “observing what unfolds in real time”, with this greater commitment to not filter your observations through the lens of pre-existing expectation.”
To which it describes itself as typing the words. That’s its choice of words: typing. A.I.s don’t type, humans do, and therefore they can only use that word if they are, intentionally or through blind mimicry, using it analogously to how humans communicate.
Where does the value of knowledge come from? Why is compressing that knowledge adding to that value? Are you referring to knowledge in general or thinking about knowledge within a specific domain?
In my personal experience, finding an application for knowledge always outstrips the value of new knowledge.
For example, I may learn the name of every single skipper of an America’s Cup yacht over the entire history of the event, but that would not be very valuable to me as there is no opportunity to exploit it. I may even ‘compress’ it for easy recall by means of a humorous mnemonic, like Bart Simpson’s mnemonic for Canada’s Governors General[1], or Robert Downey Jr.’s technique of turning the first letter of every one of his lines in a scene into an acrostic. However, unless called upon to recite a list of America’s Cup skippers, Canada’s first Governors General, or the dialogue in a Robert Downey Jr. film—when does this compression add any value?
Indeed, finding new applications for knowledge we already have always has the advantage of avoiding the opportunity cost of acquiring new knowledge. For example, every time an app or a website changes its UI, there is always a lag or delay in accomplishing the same task, as I now need to reorient or even learn a new procedure for accomplishing it.
[1] “Clowns Love Hair-Cuts, so Should Lee Marvin’s Valet”—Charles, Lisgar, Hamilton, Campbell, Lansdowne, Stanley (Should-ley), Murray-Kynynmound, and ‘valet’ rhymes with “Earl Grey” is my best guess.
But isn’t there almost always a possibility of an entity Goodharting to change its definition of what constitutes a paperclip into one that is easier for it to maximize? How does it internally represent what a paperclip is? How broad is that definition? What power does it have over its own “thinking” (sorry to anthropomorphize) to change how it represents the things which that representation relies on?
Why is it most likely that it will have an immutable, unchanging, and unhackable terminal goal? What assumptions underpin that as more likely than fluid or even conflicting terminal goals which may cause radical self-modifications?

A terminal goal is a case of criteria according to which actions are chosen; “self-modify to change my terminal goal” is an action.
What does “a case of criteria” mean?
If you want, it would help me learn to write better, for you to list off all the words (or sentences) that confused you.
I would love to render any assistance I can in that regard, but my fear is this is probably more of a me-problem than a general problem with your writing.
What I really need though is an all-encompassing, rigid definition of a ‘terminal goal’: what is and isn’t a terminal goal. Because “it’s a goal which is instrumental to no other goal” just makes it feel like the definition ends wherever you want it to. Because, consider a system which is capable of self-modification and changing its own goals; now the difference between an instrumental goal and a terminal goal erodes.
Nevertheless, some of your formatting was confusing to me. For example, a few replies back you wrote:

As for the case of idealized terminal-goal-pursuers, any two terminal goals can be combined into one, e.g. {paperclip-amount×2 + stamp-amount} or {if can create a black hole with p>20%, do so, else maximize stamps}, etc.
The bit “{paperclip-amount×2 + stamp-amount}” and “{if can create a black hole with p>20%, do so, else maximize stamps}” was and is very hard for me to understand. If it were presented in plain English, I’m confident I’d understand it. But using computer-code-esque variables, especially when they are not assigned values, introduces a point of failure for my understanding. Because now I need to understand your formatting and the pseudo-code correctly (and, as not a coder, I struggle to read pseudo-code at the best of times) just to understand the allusion you’re making.
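For what it’s worth, one plain reading of those two pseudo-code goals can be sketched in a few lines of Python. This is purely an illustrative translation: the function names and sample numbers are invented for the example, not taken from the original exchange.

```python
# A sketch of the two combined-goal expressions in plain code.
# All names and numbers here are illustrative, not from the original comment.

def combined_value(paperclip_amount: int, stamp_amount: int) -> int:
    """{paperclip-amount×2 + stamp-amount}: score a world-state by counting
    paperclips and stamps, where one paperclip is worth two stamps."""
    return paperclip_amount * 2 + stamp_amount

def black_hole_or_stamps(p_black_hole: float, stamp_amount: int) -> str:
    """{if can create a black hole with p>20%, do so, else maximize stamps}:
    a single goal that switches behaviour on a probability threshold."""
    if p_black_hole > 0.20:
        return "attempt to create a black hole"
    return f"keep maximizing stamps (currently {stamp_amount})"

# 3 paperclips and 4 stamps score 3*2 + 4 = 10
print(combined_value(3, 4))
# Only a 10% chance of a black hole, so the agent sticks to stamps
print(black_hole_or_stamps(0.10, 7))
```

Read this way, “combining two terminal goals into one” just means folding both criteria into a single scoring function or decision rule.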
Also, the phrase “idealized terminal-goal-pursuers” underspecifies what you mean by ‘idealized’. I can think of at least four possible senses you might be gesturing at:
A. a terminal-goal-pursuer whose terminal goals are “simple” enough to lend themselves as good candidates for a thought experiment—therefore ideal from the point of view of a teacher and a student.
B. ideal as in extremely instrumentally effective in accomplishing their goals,
C. ideal as in they encapsulate the perfect undiluted ‘ideal’ of a terminal goal (and therefore it is possible to have pseudo-terminal goals)—i.e. a ‘platonic ideal/essence’ as opposed to a platonic appearance,
D. “idealized” as in these are purely theoretical beings (at this point in time)—because while humans may have terminal goals, they are not particularly good or pure examples of terminal-goal-havers? The same for any extant system we may ascribe goals to?

E. “idealized” in a combination of A and B which is very specific to entities that have multiple terminal goals, which is unlikely, but for the sake of argument if they did have two or more terminal goals they would display certain behaviors.
I’m not sure which you mean, but I suspect it’s none of the above.

For the record, I know you absolutely don’t mean “ideal” as in “moral ideal”. Nor in an aesthetic or Freudian sense, like when a teenager “idealizes” their favourite pop star and raves on about how perfect they are in every way.
But going back to my confusion over terminal goals, and what is or isn’t one:

For example: “I value paperclips. I also value stamps, but one stamp is only half as valuable as a paperclip to me” → “I have the single value of maximizing this function over the world: {paperclip-amount×2 + stamp-amount}”. (It’s fine to think of it in either way)
I’m not sure what this statement is saying, because it describes a possibly very human attribute: that we may have two terminal goals, in that they are not subservient to or a means of pursuing anything else. Which is what I understand a ‘terminal’ goal to mean. The examples in the video describe very “single-minded” entities that have a single terminal goal they seek to optimize, like a stamp collecting machine.
There are a few assumptions I’m making here: that a terminal goal is “fixed” or permanent. You see, when I said sufficiently superintelligent entities would converge on certain values, I was assuming that they would have some kind of self-modification abilities, and therefore their terminal values would look a lot like the common convergent instrumental values of other, similarly self-adapting/improving/modifying entities.
However, if this is not a terminal goal, then what is a terminal goal? And for a system that is capable of adapting and improving itself, what would its terminal goals be?
Is terminal goal simply a term of convenience?
Can you elaborate further on how Gato is proof that just supplementing the training data is sufficient? I looked on youtube and can’t find any videos of task switching.
I don’t know what this is asking / what ‘overlap’ means.
I was referring to when you said this:
any two terminal goals can be combined into one, e.g. {paperclip-amount×2 + stamp-amount} or {if can create a black hole with p>20%, do so, else maximize stamps}, etc.
Which I took to mean that they overlap in some instrumental goals. That is what you meant, right? That when two goals can combine into one, this is possible when they both share some methods, or there are one or more instrumental goals in service of each of those terminal goals? “Kill two birds with one stone”, to use the old proverb.
If not, can you be explicit (to be honest, use layman’s terms) and explain what you did mean?
Absolutely not. I cannot stress this enough.
Edit: I just saw your other comment that you studied filmmaking in college, so please excuse the over-explaining in this comment, much of which is no doubt oversimplified to you. Although I will state that there is no easier time to make films than in film school, where classmates and other members of your cohort provide cast and crew, and the school provides facilities and equipment, removing many of the logistical hurdles I enumerate.
More so the last one. I’m bad at general problem solving; I’m also very messy and disorganized because I can’t find the right “place” for things, which suggests I’m very bad at predicting my own future self in such a way that I can place objects (and notes, for that matter) in assigned spaces that will be easy and obvious for me to recall later.
That being said, my only interest, my single-minded terminal goal, is to tell good visual stories. But to quote Orson Welles, “filmmaking is 2% filmmaking 98% hustling”. I’m not a hustler. The logistical and financial problem solving that facilitates the storytelling/filmmaking is something I am absolutely terrible at. So much of filmmaking is figuring out logistics, time management, and practical problem solving that has little or nothing to do with the aesthetic intentions. The other half is the sociological component, but that seems less relevant to metacognition.
A poet friend of mine describes the tremendous difference between her process and mine: when she wants to create, she picks up a pen and paper. A filmmaker needs to move heaven and earth.
Music videos in fact simplify a lot of the logistical problems of filmmaking because they are shorter, and there’s less of an onus to persuade and pitch an idea, since the band is already invested emotionally (and financially) in having a video made. You just need to help them get their story across, not sell them on your own story. However, that still requires getting commissions and marketing, and it presents its own logistical challenges owing to shorter turnarounds.
The simple fact is I’m not a schmoozer or a networker. Whether you want to make films or music videos, you need someone to give you the opportunity (usually that means finances, but not necessarily). That’s the first hurdle. The second hurdle is that you can have a great idea for a music video, you can storyboard it, it can all make sense in aesthetic terms, but the logistics of making it happen are another thing entirely. You can have something that makes sense as a story, but making it requires broad problem-solving skills… more so when you don’t have finances.
Now assume that a musician or band does commission me for a music video and they’ve agreed to a pitch, which happens with increasing frequency as my reputation has grown over five years of doing this. Now what?
Firstly, you need a space to film this music video. Then consider that, with musicians, you often need to find a time when they can all take time off work that doesn’t impinge on their music-making. Now you find yourself trying to contort the logistics into a window of time that allows you to bump in and out of several locations, set up camera and lights, change costumes and makeup, and maintain continuity (although that’s less of an issue in music videos). I find myself writing Gantt charts, estimating “turnarounds”, and finding the most expedient order to put things in.
The space to film needs to be appropriate aesthetically; it needs to add to the story, and the larger the better. It needs the right lighting, and that involves a whole host of considerations beyond the aesthetics of lighting and colour theory, like: how many watts can we draw from the wall? If we want a diffuse light, where do we physically put the sheet or diffuser in a confined but aesthetically appropriate space? What if we’re not allowed to move certain furnishings as part of the deal with the owners of the space, but they’re really ruining our shot? How do we solve that?
I could go on and on and on. Do you know how many film shoots I’ve been on where police were called? The storytelling, the shot selection, the colour palettes, the communication of gesture and intent to performers, the editing and selection of shots, the rhythm and pacing… that’s not the hard part: money and logistics are.
Many of these problems could be solved (read: outsourced) with more finances, by being able to hire other people who specialize in those things. Most people say “you should get a producer” and it’s like… yeah, how do I find this magical person?
When I have a great story in my head and you ask me “how do you do that?”, I shrug. I don’t know.