My muses saddled me with this idea for doing subtitles in a different way. I don’t know if it’s ever been tried. I think it might end up being extremely good for language learning.
In short:
Fine Mapping Subtitles are subtitles where words (or parts of words) animate in some way (for example, moving, glowing, or becoming underlined) right as words that share their meaning are spoken in the voiceover.
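To make that concrete, here’s a sketch of the data a renderer would need. Nothing here is an existing format or implementation; every name and structure is made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """One spoken word tied to the subtitle text that shares its meaning."""
    vo_start: float  # when the spoken word begins, in seconds
    vo_end: float    # when it ends
    # Character ranges in the subtitle text to animate. There can be several
    # (one spoken word, multiple written words), and a range can cover only
    # part of a written word.
    sub_spans: list[tuple[int, int]]

@dataclass
class Cue:
    """One subtitle line plus its fine mapping."""
    start: float  # when the line appears
    end: float    # when it disappears
    text: str     # the rendered subtitle line
    links: list[Link]

def spans_to_animate(cue: Cue, t: float) -> list[tuple[int, int]]:
    """Character spans that should be pulsing at playback time t."""
    return [span
            for link in cue.links
            if link.vo_start <= t < link.vo_end
            for span in link.sub_spans]
```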
For many, many reasons I can’t be the one to implement or test this. I’m wondering if anyone could dismiss it as impractical and relieve me of my burden, or, failing that, reach out to some fansubbing communities, get some fine mapping subtitles rendered, and see how they feel.
I’ve seen this done in children’s shows. There’s a song along with subtitles, and an object moves to each written word as it is spoken.
I considered the term “bouncing ball subtitles”, yeah, but there are a couple of reasons that animation wouldn’t really work here.

Sometimes a word in the voiceover language will share meaning with multiple words in the subtitle language (in which case the ball would have to split into multiple balls), or with only part of a word (in which case it might not be clear that the ball is indicating just part of the word, or which part). It’s also just visually cluttered relative to other options. The example after this comment shows both cases.

I don’t think the research in that area would map either. Children watching those shows are learning to read the subtitle language after already learning the voiced language, whereas adults watching subtitled video already know the subtitle language extremely well and are, if anything, learning the voiced one.
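Here’s what those cases look like in the sketch from my first comment. Everything in it is made up for illustration (the timings, and the alignment itself); it uses an English voiceover with a German subtitle because German compounds and separable verbs produce both awkward cases at once.

```python
# A hypothetical cue. English VO: "The fire truck arrived."
#                     German sub: "Das Feuerwehrauto kam an."
cue = Cue(
    start=12.0, end=15.5,
    text="Das Feuerwehrauto kam an.",
    links=[
        # "The" animates "Das"
        Link(vo_start=12.2, vo_end=12.4, sub_spans=[(0, 3)]),
        # "fire" animates only the "Feuerwehr" part of "Feuerwehrauto"
        Link(vo_start=12.4, vo_end=12.7, sub_spans=[(4, 13)]),
        # "truck" animates only the trailing "auto" part of the same word
        Link(vo_start=12.7, vo_end=13.1, sub_spans=[(13, 17)]),
        # "arrived" animates two separated words at once: "kam" and "an"
        Link(vo_start=13.2, vo_end=13.7, sub_spans=[(18, 21), (22, 24)]),
    ],
)

spans_to_animate(cue, 13.4)  # -> [(18, 21), (22, 24)]
```

A single ball can’t point at the last three links cleanly; independent per-span animation can.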
It would probably work better when the speech is slow, so you have more time to notice which currently-pronounced word corresponds to which highlighted word, word part, or set of words.

Also, the subtitles would have to be a very literal translation, which I suspect is usually not the case. (At least, if I were making subtitles, I would sacrifice exactness in favor of shortness, because people need to be able to read the text in real time, and shorter is better.)
It doesn’t, like, break when a non-literal translation is used. When the translation doesn’t map directly, that’s communicated to the viewer quite clearly: certain words in the VO produce no pulses, and certain words in the subtitle never pulse at all (see the snippet below).

So you don’t have to do a literal translation at all. It does sort of impose a mild pressure towards doing more literal translations, since the demographic for fine mapping kinda wants them, but you don’t have to give it to them all of the time. The most important thing is making sure they understand what’s being communicated.
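Concretely, in the made-up representation from earlier, a non-literal stretch is just a gap in the mapping rather than an error:

```python
# Hypothetical: the VO contains a filler word with no counterpart in the
# subtitle. Give it no spans and nothing pulses while it's spoken.
cue.links.append(Link(vo_start=14.0, vo_end=14.2, sub_spans=[]))
spans_to_animate(cue, 14.1)  # -> []

# Likewise, a subtitle word that no link ever touches simply never pulses.
# Both kinds of gap are visible to the viewer, which is the point.
```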
Thanks for the explanation, and I agree now that the two are too different to infer much.