Unless you’ve been living under a rock, I’m sure you’ve heard about or used ChatGPT in recent months. Since its public release 5 months ago, the technology seems to have advanced by leaps and bounds, and endless media coverage laments the potential loss of jobs and entire fields, the end of work entirely, or the complete destruction of humanity (hello Skynet). Well, the purpose of this blog post is much narrower and doesn’t touch on the philosophical or moral conundrums of our looming AI overlords. In this post we’ll look at how you can currently leverage AI, or more accurately AI-like tools, in the video production field. I did ask ChatGPT to write a blog post like this for me, and while the result was somewhat useful, it lacked specific detail and my trademark humor. But I did use AI to create all the images!

The future of video production?

1. Automatic Transcription and Translation

This is the easiest and currently most effective use of AI in video production. Not even 5-6 years ago, we relied on a few human transcribers to transcribe all of our video interviews. They were quite good, but scheduling and turnaround time were sometimes issues, and the cost was a little high. Then along came Rev, and we switched to their on-demand transcription service, which cost a fraction as much and turned projects around much faster. Rev also introduced an AI transcription option a few years ago at a greatly reduced cost. But the big game changer in this field was when Adobe built AI transcription into Premiere Pro. I won’t go into detail on how to use it, but in short, it will transcribe any sequence (in a variety of languages) and create a very accurate transcript that syncs perfectly to your sequence, so you can instantly click on any word and the playhead will go directly to that point in the video. This has three huge advantages over any other transcription option.

First, speed: it takes mere minutes to transcribe a sequence. Longer sequences (30+ minutes) may take a bit longer, but it’s miles ahead of waiting 4-12 hours for human transcription.

Second, syncing with the video is a huge time saver. Previously we’d need to toggle back and forth between a transcription document and Premiere in order to edit, and occasionally scrub through the video a little to get to the right spot (human-created transcription isn’t 100% perfect with time-code sync). Now everything is done within Premiere, and you can easily read the transcript directly inside the app and quickly start editing. This doesn’t work as well if you have multiple interviews you need to cut together, because you’d likely want to craft those into a script first. But that’s an obvious next step that I’m sure Adobe will integrate at some point.

Third, having multiple languages is a huge time saver.
Previously, we’d need to search for foreign-language transcribers who didn’t cost an arm and a leg ($300-500+ for an hour of audio), and we were often disappointed with the results, especially the time-code sync. Now we can quickly get time-code-accurate transcripts in 15 languages (and growing), use them immediately, or get them translated into English much more cheaply and quickly. Hopefully Adobe will add an auto-translate feature soon as well, but there’s a host of existing AI services that already do that (Google it, or ask ChatGPT 😉).
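To make the time-code sync idea concrete, here’s a minimal sketch of what a time-coded transcript is under the hood: a list of segments with start/end times that can be rendered into a standard SRT subtitle file. The segment structure here is hypothetical (invented for illustration), but it mirrors what AI transcription tools typically return.

```python
# A minimal sketch: time-coded transcript segments rendered as an SRT
# subtitle file. The (start, end, text) structure is a hypothetical
# example, not Premiere's actual data format. Times are in seconds.

def to_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """Render (start, end, text) segments as an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)

segments = [
    (0.0, 2.4, "Thanks for having me."),
    (2.4, 6.1, "We started filming interviews years ago."),
]
print(segments_to_srt(segments))
```

Because every word carries a time-code, jumping the playhead to a clicked word is just a lookup, which is exactly why editing from the transcript is so fast.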

2. Video Editing & Processing

Sensei is Adobe’s AI platform, integrated across its Creative Cloud apps. You can check out the full list of features (with more coming every update), but here are a few key ones:

Color Match – this still has a good amount of room for improvement, but it’s quite useful as a starting point. We film lots of interviews with two or more camera angles, and as hard as we try to match cameras in production, there’s always some color correction required to match them in post. Color Match uses AI to get two shots closely matched. Most of the time I’d say it gets 80% of the way there, but that remaining 20% can make a huge difference – it’s the last little bit of color-wheel or HSL tweaking that really gets two shots from different cameras matching correctly. But hey, 80% sure can save a ton of time when you need to match lots of shots.
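Adobe hasn’t published how Color Match works, but the basic idea behind this kind of tool can be sketched with a classic statistical color transfer: nudge one shot’s per-channel mean and spread toward a reference shot’s. This is not Adobe’s algorithm, just an illustration of why an automated pass gets you most of the way there.

```python
# Not Adobe's Color Match (that's proprietary) -- a classic statistical
# color transfer that shows the underlying idea: shift the B-cam's
# per-channel mean and standard deviation toward the A-cam reference.
# Frames are float RGB arrays with values in [0, 1].
import numpy as np

def match_color(frame, reference):
    """Shift frame's per-channel mean/std toward the reference's."""
    out = frame.astype(np.float64).copy()
    for c in range(3):
        src, ref = out[..., c], reference[..., c]
        std = src.std() or 1.0  # guard against a flat (zero-std) channel
        out[..., c] = (src - src.mean()) / std * ref.std() + ref.mean()
    return np.clip(out, 0.0, 1.0)

# A dark "B-cam" frame nudged toward a brighter "A-cam" reference:
rng = np.random.default_rng(0)
b_cam = rng.uniform(0.1, 0.4, size=(4, 4, 3))
a_cam = rng.uniform(0.5, 0.9, size=(4, 4, 3))
matched = match_color(b_cam, a_cam)
```

A global shift like this is exactly the “80%”: it can’t fix skin tones or mixed lighting within a frame, which is where the manual color-wheel and HSL work comes in.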

Morph Cut – we haven’t used this feature too often, but it can help in a pinch when you need to cut a single-camera interview and have nothing to cover the edit with. If the person isn’t moving too much between the cuts, Morph Cut works pretty well to hide the edit. I find it also helps to keep the Morph Cut only a few frames long (3-4). However, many times the Morph Cut is obvious, and in some cases a straight cut actually works better – or even just punching in 20% for a fake close-up angle is less obtrusive. Hopefully this is one that will improve with time.
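Morph Cut synthesizes in-between frames (Adobe describes it as using face tracking and frame interpolation); as a crude stand-in, here’s the simplest possible version of the idea, a short linear cross-dissolve, which also illustrates why keeping the transition to a few frames helps hide it.

```python
# A crude stand-in for Morph Cut's interpolated frames: a linear
# cross-dissolve over a handful of in-between frames. Real Morph Cut
# is far smarter, but the "keep it short" intuition is the same --
# fewer blended frames means less time for the viewer to notice.
import numpy as np

def cross_dissolve(frame_a, frame_b, n_frames=4):
    """Generate n_frames blended frames between frame_a and frame_b."""
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)  # 0 < t < 1, excludes both endpoints
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Tiny stand-in "frames": a black frame dissolving into a white one.
a = np.zeros((2, 2, 3))
b = np.ones((2, 2, 3))
tween = cross_dissolve(a, b, n_frames=4)
```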

Content-Aware Fill – we’ve only used this feature in Photoshop (with mixed success), but After Effects offers it for video. It seems quite involved and tricky to use, but it’s definitely worth exploring if you need to remove something from your footage – logos, products, etc.
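After Effects’ implementation is far more sophisticated (it analyzes surrounding frames), but the core idea of replacing masked pixels from their surroundings can be sketched as simple diffusion inpainting on a single grayscale frame:

```python
# A toy illustration of the inpainting idea behind content-aware fill:
# seed the masked "hole" with the image's average tone, then repeatedly
# average each masked pixel with its 4 neighbors until it blends in.
# Grayscale 2D arrays only; real tools also borrow texture and motion.
import numpy as np

def diffuse_fill(image, mask, iterations=200):
    """Fill masked (True) pixels by iterative neighbor averaging."""
    out = image.copy()
    out[mask] = out[~mask].mean()  # seed the hole with the average tone
    for _ in range(iterations):
        up    = np.roll(out,  1, axis=0)
        down  = np.roll(out, -1, axis=0)
        left  = np.roll(out,  1, axis=1)
        right = np.roll(out, -1, axis=1)
        out[mask] = (up + down + left + right)[mask] / 4.0
    return out

image = np.tile(np.linspace(0.0, 1.0, 5), (5, 1))  # horizontal gradient
mask = np.zeros_like(image, dtype=bool)
mask[2, 2] = True  # "remove" the center pixel
filled = diffuse_fill(image, mask)
```

Diffusion alone produces a smooth smudge, which is roughly why simple removals work well and textured backgrounds (crowds, foliage) are where these tools struggle.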

3. Features to Come

Adobe has unveiled an exciting glimpse into its AI-powered future with Firefly. Promising a suite of generative AI models – from generating custom vectors and textures to full-on color correction – Firefly has the potential to make creating video content much quicker and more efficient. All of these features look great in theory, but we’ll wait and see how they pan out. Premiere has been plagued with bugs and issues over the years, so introducing a new suite of AI-enabled features seems ripe for more failure points. That said, the recent updates to Adobe’s apps, alongside its existing AI tools, have made our job a little easier in recent months. Fields like motion design and animation seem poised to be most affected by Firefly, for better or worse, as AI makes it much easier for users to quickly conjure original designs, graphics and animations without much effort. The key is to find ways to leverage AI to improve your workflow, expand your creativity and ultimately create better content.

Your next videographer?
