Picking up from here are 10 more tips to help you plan for a successful production. Create a plan and work it. Being a successful filmmaker – that is, making a living at it – is more than just producing a single film. Such projects almost never go beyond the festival circuit, even if you do think it is the “great American film”. An indie producer may work on a project for about four years, from the time they start planning and raising the funds – through production and post – until real distribution starts. Therefore, the better approach is to start small and work your way up.
Start with a manageable project or film with a modest budget and then get it done on time and on budget. If that's a success, then start the next one – a bit bigger and more ambitious. If it works, rinse and repeat. If you can make that work, then you can call yourself a filmmaker. I've written on this subject before, but in a nutshell, an indie film that doesn't involve union talent or big special effects will likely cost close to one million dollars, all in.
You can certainly get by on less. I’ve cut films that were produced for under $150,000 and one even under $50,000, but that means calling in a lot of favors and having many folks working for free or on deferment.
You can pull that off one time, but it's not a way to build a business, because you can't go back to those same resources and ask to do it a second time. Learn how to raise the money to do it right and proceed from there. Budget for contingencies. Intelligent budgeting means leaving a bit in reserve for the end. A number of films that I've cut had to do reshoots or spend extra days to shoot more inserts, establishing shots, etc. Plan for this to happen and make sure you've protected these items in the budget.
You'll need them. Buy or rent. Some producers see their film projects as a way to buy gear. That may or may not make sense. If you need a camera and can otherwise make money with it, then buy it. Or if you can buy it, use it, and then resell it to come out ahead – by all means follow that path.
But if gear ownership is not your thing and if you have no other production plans for the gear after that one project, then it will most likely be a better deal to work out rentals. After all, you’re still going to need a lot of extras to round out the package. Shooting ratios.
In the early 90s I worked on post for five half-hour and hourlong episodic TV series that were shot on 35mm film. Back then, shooting ratios were pretty tight.
A half-hour episode is about 20-22 minutes of content, excluding commercials, bumpers, open, and credits. An hourlong episode is about 44-46 minutes of program content. Depending on the production, these were shot in three to five days and exposed between 36,000 and 50,000 feet of negative. Therefore, a typical day meant 50-60 minutes of transferred “dailies” to edit from – or no more than five hours of source footage, depending on the series. This would put them close to the ideal mark (on average) of approximately a 10:1 shooting ratio.
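If you want to sanity-check that math on your own project, the arithmetic is simple: 35mm 4-perf film runs 90 feet per minute at 24fps, so exposed footage converts directly into source minutes. A quick sketch, my own illustration using the figures above:

```python
FEET_PER_MIN = 90  # 35mm 4-perf at 24fps: 16 frames per foot -> 90 ft/min

def shooting_ratio(feet_exposed, program_minutes):
    """Return source minutes shot per finished minute of program."""
    source_minutes = feet_exposed / FEET_PER_MIN
    return source_minutes / program_minutes

# An hourlong episode (~44 min of content) shot on 36,000-50,000 ft of negative:
print(round(shooting_ratio(36_000, 44), 1))  # 9.1  -> roughly 9:1
print(round(shooting_ratio(50_000, 44), 1))  # 12.6 -> roughly 13:1
```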
Today, digital cameras make life easier and with the propensity to shoot two or more cameras on a regular basis, this means the same projects today might have conservatively generated more than 10 hours of source footage for each episode. This impacts post tremendously – especially if deadline is a factor. As a new producer, you should strive to control these ratios and stay within the goal of a 10:1 ratio (or lower). Block and rehearse. The more a scene is buttoned down, the fewer takes you’ll need, which leads to a tighter shooting ratio. This means rehearse a scene and make sure the camera work is properly blocked.
Don't wing it! Once everything is ready, shoot it. Odds are you'll get it in two to three takes instead of the five or more that might otherwise be required. Control the actors. Unless there's a valid reason to let your actors improvise, make sure the acting is consistent. That is, lines are read in the same order each take, props are handled at the same points, and marks are hit consistently, take after take.
If you stray from that discipline, the editorial time becomes longer. If allowed to engage in too much freewheeling improvisation, actors may inadvertently paint you into a corner. To avoid that outcome, control it from the start. Visual effects planning. Most films don’t require special effects, but there are often “invisible” fixes that can be created through visual effects. For example, combining elements of two takes or adding items to a set.
A recent romantic drama I post-supervised used 76 effects shots of one type or another. If this is something that helps the project, make sure to plan for it from the outset. Adobe After Effects is the ubiquitous tool that makes such effects affordable. The results are great and there are plenty of talented designers who can assist you within almost any budget range. Single camera vs. multiple cameras.
Some producers like the idea of shooting interviews (especially two-shots) in 4K (for a 1080 finish) and then slicing out the frame they want. I contend that 4K often presents focus issues, due to the larger sensors used in these cameras. In addition, the optics of slicing a region out of a 4K image are different from using another camera or zooming in to reframe the shot. As a result, the look that you get isn't "quite right".
Naturally, it also adds one more component that the editor has to deal with – reframing each and every shot. Conversely, when shooting a locked-off interview with one person on-camera, using two cameras makes for an ideal edit. One camera might be placed face-on towards the speaker and the other at a side angle. This makes cutting between the camera angles visually more exciting and makes it easier to edit without visible jump cuts. In dramatic productions, many new directors want to emulate the "big boys" and shoot with two or more cameras for every scene. Unfortunately, this isn't always productive, because the lighting is compromised, one camera is often in an awkward position with poor framing, or, even worse, the main camera blocks the secondary camera.
At best, you might get 25% usability out of this second camera. A better plan is to shoot in a traditional single-camera style. Move the camera around for different angles.
Tweak the lighting to optimize the look and run the scene again for that view. The script is too long. An indie film script is generally around 100 pages with 95-120 scenes. The film gets shot in 20-30 days and takes about 10-15 weeks to edit. If your script is inordinately long and takes many more days to shoot, then it will also take many more days to edit. The result will usually be a cut that is too long.
The acceptable “standard” for most films is 90-100 minutes. If you clock in at three hours, then obviously a lot of slashing has to occur.
You can lose 10-15% (maybe) through trimming the fat, but a reduction of 25-40% (or more) means you are cutting meat and bone. Scenes have to be lost, the story has to be rearranged, or even more drastic solutions found. A careful read of the script, conceived of as the finished film, can head off issues before production ever starts. Losing a scene before you shoot it saves time and money on a large scale. So analyze your script carefully.
In these next two entries, I'd like to tackle 21 tips that will make your productions go more smoothly, finish on time, and not become a disaster during the post production phase. Although I've framed the discussion around indie features, the same tips apply to commercials, music videos, corporate presentations, and videos for the web. Watch the whites. Modern digital cameras handle white elements within a shot much better than in the past, but hitting a white shirt with a lot of light complicates your life when it comes to grading and directing the eye of the viewer. This is largely an issue of art direction and wardrobe.
The best way to handle this is simply to replace whites with off-whites, bone, or beige colors. One sitcom, recognized for the artful looks its crew got out of video cameras, is said to have had the white shirts washed in coffee to darken them a bit.
The whiteness was brought back once the cameras were set up. The objective in all of this is to get the overall brightness into a range that is more controllable during color correction and to avoid clipping. Expose to the right. When you look at a signal on a histogram, the brightest part is on the righthand side of the scale.
By pushing your camera's exposure towards a brighter, slightly over-exposed image ("to the right"), you'll end up with a better looking image after grading (color correction). That's because when you have to brighten an image by bringing up highlights or midtones, you are accentuating the sensor noise from the camera. If the image is already brighter and the correction is to lower the levels, then you end up with a cleaner final image. Since most modern digital cameras use some sort of log or hyper gamma encoding to record a flatter signal, which preserves latitude, opening up the exposure usually won't run the risk of clipping the highlights. In the end, a look that stretches the shadows and mids to expose more detail to the eye gives you a more pleasing and informative image than one that places emphasis on the highlight portion.
Green vs. blue screens. Productions almost ubiquitously use green paint, but that's wrong. Each paint color has a different luminance value. Green is brighter and should be reserved for a composite where the talent should appear to be outside. Blue works best when the composited image is inside.
Paint matters. I've even had producers go so far as to rig up a silk with a blue lighting wash and expect me to key it! When you light the subject, move them as far away from the wall as possible to avoid contamination of the color onto their hair and wardrobe. This also means: don't have your talent stand on a green or blue floor when you don't intend to see the floor or frame them head to toe.
Rim lighting. Images stand out best when your talent has some rim lighting to separate them from the background. Even in a dark environment, seek to create a lighting scheme that achieves this rimming effect around their head and shoulders. Tonal art direction.
The various “blockbuster” looks are popular – particularly the “orange and teal” look. This style pushes skin tones warm for a slight orange appearance, while many darker background elements pick up green/blue/teal/cyan casts. Although this can be accentuated in grading, it starts with proper art direction in the set design and costuming. Whatever tonal characteristic you want to achieve, start by looking at the art direction and controlling this from step one.
Gamma profiles. Digital cameras have nearly all adopted some method of recording an image with a flat gamma profile that is intended to preserve latitude until final grading. This doesn't mean you have to use this mode. If you have control over your exposure and lighting, there's nothing wrong with recording Rec. 709 and nailing the final look in-camera. I highly recommend this for "talking head" interviews, especially ones shot on green or blue-screen. Microphone direction/placement.
Every budding recording engineer working in music and film production learns that proper mic placement is critical to good sound. Pay attention to where mics are positioned, relative to where the person is when they speak. For example, if you have two people in an interview situation wearing lavaliere mics on their lapels, the proper placement would be on each person's inner lapel – the side closer to the other person. That's because each person will turn towards the other to address them as they speak and thus talk over that shoulder. Having the mic on this side means they are speaking into the mic. If it were on their outer lapel, they would be speaking away from the mic and thus the audio would tend to sound hollow.
For the same reasons, when you use a boom or fish pole overhead mic, the operator needs to point the mic in the direction of the person talking. They will need to shift the mic’s direction as the conversation moves from one person to the next to follow the sound. Multiple microphones/iso mics. When recording dialogue for a group of actors, it’s best to record their audio with individual microphones (lavs or overhead booms) and to record each mic on an isolated track. Cameras typically feature on-board recording of two to four audio channels, so if you have more mics than that, use an external multi-channel recorder.
When external recording is used, be sure to still record a composite track to your camera for reference. Microphone types. There are plenty of styles and types of microphones, but the important factors are size, tonal quality, range, and the axis of pick-up. Make sure you select the appropriate mic for the task. For example, if you are recording an actor with a deep bass voice using a lavaliere, you'd be best served by a type that captures the full spectrum, rather than one that favors only the low end. Syncing double-system sound. There are plenty of ways to sync sound to picture in double-system sound situations. Synchronizing by matched timecode is the most ideal, but even there, issues can arise.
Make sure that the camera's and sound recorder's timecode generators don't drift during the day – or use a single, common, external timecode generator for both. It's generally best to also include a clapboard and, when possible, record reference audio to the camera. If you plan to sync by audio waveforms (PluralEyes, FCP X, Premiere Pro CC), then make sure the reference signal on the camera is of sufficient quality to make synchronization possible. Record wild lines on set. When location audio is difficult to understand, ADR (automatic dialogue replacement, aka "looping") is required.
This happens because the location recording was not of high quality due to outside factors, like special effects, background noise, etc. Not all actors are good at ADR and it’s not uncommon to watch a scene with ADR dialogue and have it jump out at you as the viewer. Since ADR requires extra recording time with the actor, this drives up cost on small films. One workaround in some of these situations is for the production team to recapture the lines separately – immediately after the scene was shot – if the schedule permits. These lines would be recorded wild and may or may not be in sync. The intent is to get the right sonic environment and emotion while you are still there on site.
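As a side note on the waveform-sync method mentioned above: tools like PluralEyes line up double-system sound by cross-correlating the camera's scratch audio against the recorder's track, which is why that camera reference signal needs to be usable. A rough sketch of the core idea, my own illustration assuming numpy and scipy 1.6+, with both tracks as mono arrays at the same sample rate:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def find_offset(camera_audio, recorder_audio, sample_rate):
    """Estimate the offset (in seconds) between a camera's scratch track
    and the external recorder's track via cross-correlation."""
    corr = correlate(camera_audio, recorder_audio, mode="full")
    lags = correlation_lags(len(camera_audio), len(recorder_audio), mode="full")
    best_lag = lags[np.argmax(corr)]  # sample lag with the strongest match
    return best_lag / sample_rate     # verify the sign against a clap before trusting it
```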
Recently, I explained a workflow combining color grading tools with LUTs to create custom looks. In this post, I'm going to follow a similar process for FCP X users. (Note: This post was written before the release of FCP X 10.2. However, the fundamental items I discuss herein haven't changed with the update. The main differences are that the Color Board has become a standard color correction effect and that all effects filters now have built-in masking.) The strategy is to use LUTs to define the overall look and then color correct individual clips for consistency.
A creative LUT should only be considered as spice, not as the main course. You can't rely solely on the creative LUT for your shot. There is no "easy" button when grading shots on a timeline. In this example, I'm using one of the SpeedLooks LUTs from LookLabs.
They offer a variety of styles from clean to stylized. To use any third-party LUT with FCP X, you have to use a plug-in that reads and applies LUTs as an effects filter, such as LUT Utility. Any .cube-formatted LUT copied into its folder (located in the Motion Templates folder) will show up as a pulldown option when LUT Utility is applied to a clip in FCP X. The SpeedLooks LUTs are based on either log or Rec 709 color space. If you have log footage that has already been corrected to Rec 709, then you could simply use one of the Rec 709 versions.
However, if you want to get the most out of their looks, then it’s best to shoot log and use a log-based LUT. Since log values vary among camera manufacturers, LookLabs designed their LUTs around a universal log value used within their LUT curves. To properly use one of their looks requires two stages of LUTs. The first stage is a camera patch, which shifts the video (by camera type) into LookLabs’ intermediate log space.
They even include a patch for generic Rec 709 video. Once the first LUT has been applied, you may add the second LUT for the desired look. In our grading strategy, the grading filters and/or tools are sandwiched between the first LUT (camera patch) and the second LUT (creative look). For this example, I’m using ARRI Alexa footage that was encoded with a log-C gamma profile. FCP X has built-in LUT processing to convert these clips into Rec 709 color space. Disable that in the inspector for all clips. Assuming you have installed the LUTs into the correct template folder, apply LUT Utility to the first clip.
From the pulldown menu, select a camera patch LUT appropriate for the camera (in this case, Alexa Log-C). Now copy and paste attributes for just this filter to all clips on the timeline (assuming all clips use the same camera and gamma profile).
Add your preferred color correction effect to the clip. It will be stacked after the LUT Utility filter. I'm using the Color Grade filter.
I like it because the controls are fast and I've grown fond of using exposure/contrast/temperature/tint controls in this type of grading. I could just as easily use one of the color wheel-style correction filters or even the built-in Color Board. If the camera clips are reasonably consistent, the creative LUT you select is going to define the tonality of shadows and highlights, so there's no reason to get carried away with big color balance changes in this grade.
Note: At this stage, you can copy-and-paste the Color Grade filter to all other clips or wait until later, when you've actually started to make adjustments. If all shots are different, you might as well copy-and-paste now to have the filter in place with default starting values. If it's a situation where you want to match the same cameras cutting back and forth – like A and B cameras in an interview – then you might opt to grade the first few clips and then copy-and-paste for the rest. Next, it's time to apply the creative LUT. Since you want to apply a single LUT across all clips, apply a blank, adjustment-layer title effect as a connected clip. The length of the adjustment layer should span the length of your timeline. To this title clip, add LUT Utility and select the desired SpeedLooks LUT (or any other you've added) from the pulldown menu.
In this example, I used one of their Clean Kodak looks. I generally apply a slight vignette to most of my graded clips. This is used to subtly darken the edge of the frame. FCP X won’t let you do this using a shape mask within the Color Board setting of a blank title, like the adjustment layer. (Note: This was corrected in 10.2. It is now possible to add a mask and color correction adjustment within an adjustment layer.) You will need to add a specific Vignette effect as another connected title.
I'm using the RT Vignette in this example. Adjust the vignette's size, shape, and darkening to taste. The RT Vignette also lets you blur the edges and mix in an overall sepia toning as added features. I wouldn't use these as part of a standard vignette effect, but in some cases they might be appropriate. You've arrived.
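One technical footnote before moving on: a .cube LUT is nothing exotic, just a text file holding a 3D grid of output RGB values that the plug-in looks up for each pixel. A bare-bones sketch of the concept, my own illustration with hypothetical file names, using nearest-neighbor lookup (real plug-ins like LUT Utility interpolate between grid points):

```python
import numpy as np

def load_cube(path):
    """Parse a .cube file into an N x N x N x 3 array of output RGB values."""
    size, rows = 0, []
    for line in open(path):
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue
        if parts[0] == "LUT_3D_SIZE":
            size = int(parts[1])
        elif len(parts) == 3:
            try:
                rows.append([float(v) for v in parts])
            except ValueError:
                pass  # skip non-numeric metadata lines (TITLE, etc.)
    return np.asarray(rows).reshape(size, size, size, 3)  # red varies fastest

def apply_lut(rgb, lut):
    """Nearest-neighbor lookup for an array of 0-1 floats with shape (..., 3)."""
    n = lut.shape[0]
    idx = np.clip(np.round(rgb * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 2], idx[..., 1], idx[..., 0]]  # indexed blue, green, red

# The two-stage chain described above: camera patch first, then the creative look.
frame = np.random.rand(1080, 1920, 3)              # stand-in for a video frame
patch = load_cube("alexa_logc_camera_patch.cube")  # hypothetical file names
look = load_cube("clean_kodak_look.cube")
graded = apply_lut(apply_lut(frame, patch), look)
```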
How you handle color correction depends on your temperament and level of expertise. Some editors want to stay within the NLE, so that editorial adjustments are easily made after grading. Others prefer the roundtrip to a powerful external application. When Adobe added the Direct Link conduit between Premiere Pro and SpeedGrade, they gave Premiere Pro editors the best of both worlds. Displays. SpeedGrade is a standalone grading application that was initially designed around an SDI feed from the GPU to a second monitor for your external video. After the Adobe acquisition, Mercury Transmit was eventually added, so you can run SpeedGrade with one display, two computer displays, or a computer display plus a broadcast monitor. With a single display, the video viewer is integrated into the interface.
At home, I use two computer displays, so by enabling a dual display layout, I get the SpeedGrade interface on one screen and the full-screen video viewer on the other. To do this you have to correctly offset the pixel dimensions and position for the secondary display in order to see it. Otherwise the image is hidden behind the interface. Using Mercury Transmit, the viewer image is sent to an external monitor, but you’ll need an appropriate capture/monitoring card or device. AJA products seem to work fine.
Some Blackmagic devices work and others don't. When this works, you will lose the viewer from the interface, so it's best to have the external display close – as in, next to your interface monitor. Timeline. When you use Direct Link, you are actually sending the Premiere Pro timeline to SpeedGrade. This means that edits and timeline video layers are determined by Premiere Pro, and those editing functions are disabled in SpeedGrade. It IS the Premiere Pro timeline.
This means certain formats that might not be natively supported by a standalone SpeedGrade project will be supported via the Direct Link path – as long as Premiere Pro natively supports them. There is a symbiotic relationship between Premiere Pro and SpeedGrade. For example, I worked on a music video that was edited natively using RED camera media. The editor had done a lot of reframing from the native 4K media in the 1080 timeline. All of this geometry was correctly interpreted by SpeedGrade. When I compared the same sequence in Resolve (using an XML roundtrip), the geometry was all wrong. SpeedGrade doesn’t give you access to the camera raw settings for the.r3d media, but Premiere Pro does.
So in this case, I adjusted the camera raw values by using the source settings control in Premiere Pro, which then carried those adjustments over to SpeedGrade. Since the Premiere Pro timeline is the SpeedGrade timeline when you use Direct Link, you can add elements into the sequence from Premiere, in order to make them available in SpeedGrade. Let’s say you want to add a common edge vignette across all the clips of your sequence. Simply add an adjustment layer to a top track while in Premiere. This appears in your SpeedGrade timeline, enabling you to add a mask and correction within the adjustment layer clip.
In addition, any video effects filters that you’ve applied in Premiere will show up in SpeedGrade. You don’t have access to the controls, but you will see the results interactively as you make color correction adjustments. All SpeedGrade color correction values are applied to the clip as a single Lumetri effect when you send the timeline back to Premiere Pro.
All grading layers are collapsed into a single composite effect per clip, which appears in the clip's effect stack (in Premiere Pro) along with all other filters. In this way, you can easily trim edit points without regard to the color correction. Traditional roundtrips render new media with baked-in color correction values, so you can only work within the boundaries of the handles added to the file upon rendering.
Not so with Direct Link, since color correction is like any other effect applied to the original media. Any editorial changes you've made in Premiere Pro are reflected in SpeedGrade should you go back for tweaks, as long as you continue to use Direct Link. 12-way and more. Most editors are familiar with 3-way color correctors that have level and balance controls for shadows, midrange, and highlights. SpeedGrade's interface features a 3-way (lift/gamma/gain) control for four ranges of correction: overall, shadows, midrange, and highlights. Each tab also adds control of contrast, pivot, color temperature, magenta (tint), and saturation. Since the shadow, midrange, and highlight ranges overlap, you also have sliders that adjust the overlap thresholds between shadows and midrange and between midrange and highlights.
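Under the hood, every one of these controls reduces to simple per-channel math. Here is the textbook lift/gamma/gain transfer function as a minimal sketch, my own illustration, not Adobe's actual implementation:

```python
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """Classic 3-way math on normalized 0-1 values: lift raises the blacks,
    gain scales the whites, and gamma bends the midrange."""
    x = np.clip(x * gain + lift, 0.0, 1.0)
    return x ** (1.0 / gamma)

# Lift the shadows slightly and brighten the mids on a test ramp:
print(lift_gamma_gain(np.linspace(0, 1, 5), lift=0.05, gamma=1.2))
```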
Color correction is layer based – similar to Photoshop or After Effects. SpeedGrade features primary ("P"), secondary ("S"), and filter layers (the "+" symbol). When you add layers, they are stacked from bottom to top, and each layer includes an opacity control.
As such, layers work much the same as rooms in Apple Color or nodes in DaVinci Resolve. You can create a multi-layered adjustment by using a series of stacked primary layers. Shape masks, like that for a vignette, should be applied to a primary layer. The mask may be normal or inverted so that the correction is applied either to the inside or the outside of the mask.
Secondaries should be reserved for HSL keys. For instance, highlighting the skin tones of a face to adjust its color separately from the rest of the image.
The filter layer ("+") is where you'll find a number of useful tools, including Photoshop-style creative effect filters, LUTs, and curves. Working with grades. Color correction can be applied to a clip as either a master clip correction or just a clip correction (or both). When you grade using the default clip tab, that color correction is only applied to that single clip. If you grade in the master clip tab, then any correction you apply will also be applied to every other instance of that same media file elsewhere on the timeline. Theoretically, in a multicam edit – made up of four cameras with a single media file per camera – you could grade the entire timeline by simply color correcting the first clip for each of the four cameras as a master clip correction. All other clips would automatically inherit the same settings.
Of course, that almost never works out quite as perfectly, therefore, you can grade a clip using both the master clip and the regular clip tabs. Use the master for a general setting and still use the regular clip tab to tweak each shot as needed. Grades can be saved and recalled as Lumetri Looks, but typically these aren’t as useful in actual grading as standard copy-and-paste functions – a recent addition to SpeedGrade CC. Simply highlight one or more layers of a graded clip and press copy (cmd+c on a Mac).
Then paste (cmd+v on a Mac) those to the target clip. These will be pasted in a stack on top of the default, blank primary correction that’s there on every clip.
You can choose to use, ignore, or delete this extra primary layer. SpeedGrade features a cool trick to facilitate shot matching. The timeline playhead can be broken out into multiple playheads, which enables you to compare two or more shots in real-time in the viewer. This quick comparison lets you adjust each shot to get a closer match in context with the surrounding shots. A grading workflow. Everyone has their own approach to grading, and these days there's a lot of focus on camera and creative LUTs. My suggestions for prepping a Premiere Pro CC sequence for SpeedGrade CC go something like this. Once you are largely done with the editing, collapse all multicam clips and flatten the timeline as much as possible down to the bottom video layer.
Add one or two video tracks with adjustment layers, depending on what you want to do in the grade. These should be above the last video layer. All graphics – like lower thirds – should be on tracks above the adjustment layer tracks. This is assuming that you don’t want to include these in the color correction.
Now duplicate the sequence and delete the tracks with the graphics from the dupe. Send the dupe to SpeedGrade CC via Direct Link. In SpeedGrade, ignore the first primary layer and add a filter layer ("+") above it. Select a camera patch LUT. For example, an ARRI Log-C-to-Rec-709 LUT for Log-C gamma-encoded Alexa footage. Repeat this for every clip from the same camera type. If you intend to use a creative LUT, like one of the SpeedLooks, you'll need one of their camera patches.
This shifts the camera video into a unified gamma profile optimized for their creative LUTs. If all of the footage used in the timeline came from the same camera and used the same gamma profile, then, in the case of SpeedLooks, you could apply the creative LUT to one of the adjustment layer clips. This will apply that LUT to everything in the sequence.
Once you’ve applied input and output LUTs you can grade each clip as you’d like, using primary and secondary layers. Use filter layers for curves. Any order and any number of layers per clip is fine. Using this methodology all grading is happening between the camera patch LUT and the creative LUT added to the adjustment layer track. Finally, if you want a soft edge vignette on all clips, apply an edge mask to the default primary layer of the topmost adjustment layer clip. Adjust the size, shape, and softness of the mask. Darken the outside of the mask area.
(Note that not every camera uses logarithmic gamma encoding, nor do you want to use LUTs on every project. These are the "icing on the cake", NOT the "meat and potatoes" of grading.)
Editing native camera media has become the norm. While this might work within a closed loop, like a self-contained Avid, Adobe, or Apple workflow, it breaks down when you have to move your project across multiple applications. It's common for an editor to send files to a Pro Tools studio for the final mix and to a colorist running Resolve, Baselight, etc. for the final grade. In doing so, you have to ensure that editorial decisions aren't incorrectly translated in the process, because the NLE might handle a native camera format differently than the mixer's or colorist's tool. To keep the process solid, I've developed some disciplines in how I like to handle media. The applications I mention are for Mac OS, but most of these companies offer Windows versions, too.
If not, you can easily find equivalents. Copying media. The first step is to get the media from the camera cards to a reliable hard drive. It's preferable to have at least two copies (made on location) and to make the copies using software that verifies the back-up.
This is a process often done on location by the lowly "data wrangler" under less-than-ideal conditions. A number of applications let you do this task; my current favorite uses a dirt-simple interface permitting one source and two target locations, with the sole purpose of safely transferring media and no other frills.
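If you're ever caught without a dedicated offload tool, the essential behavior – copy to two targets and verify each copy against a source checksum – is easy enough to sketch. My own rough illustration (the volume paths are hypothetical):

```python
import hashlib
import shutil
from pathlib import Path

def checksum(path):
    """MD5 a file in 1 MB chunks so large camera files don't exhaust RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def offload(card, targets):
    """Copy every file from the card to each target and verify the copies."""
    for src in Path(card).rglob("*"):
        if not src.is_file():
            continue
        src_sum = checksum(src)
        for target in targets:
            dst = Path(target) / src.relative_to(card)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if checksum(dst) != src_sum:
                raise IOError(f"Verification failed: {dst}")

offload("/Volumes/CARD01", ["/Volumes/RAID_A", "/Volumes/SHUTTLE_B"])
```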
Processing media on location. With the practice of shooting footage with a flat-looking log gamma profile, many productions like to also see the final, adjusted look on location. This often involves some on-site color grading to create either a temporary look or even the final look. Usually this task falls to a DIT (digital imaging technician). Several applications are available, including Red Giant's BulletProof. Some newer ones, specifically designed for field use, include Catalyst Browse and Catalyst Prepare from Sony Creative Software.
Catalyst Browse is free and designed for all Sony cameras, whereas Catalyst Prepare is a paid application that covers Sony cameras as well as other brands, including Canon and GoPro. Depending on the application, these tools may be used to add color correction, organize the media, transcode file formats, and even prepare simple rough assemblies of selected footage. All of these tools add a lot of power, but frankly, I'd prefer that the production company leave these tasks up to the editorial team and allow more time in post. In my testing, most of the aforementioned apps work as advertised; however, BulletProof continues to have issues with the proper handling of timecode. Transcoding media. I'm not a big believer in always using native media for the edit, unless you are in a fast-turnaround situation. To get the maximum advantage for interchanging files between applications, it is ideal to end up in one of several common media formats, if that isn't how the original footage was recorded.
You also want every file to have unique and consistent metadata, including file names, reel IDs, and timecode. The easiest common media format is QuickTime, using the .mov wrapper and encoded with either the Apple ProRes, Panasonic AVC-Intra, Sony XDCAM, or Avid DNxHD codecs. These are generally readable in most applications running on Mac or PC. My preference is to first convert all files into QuickTime using one of these codecs, if they originated as something else. That's because the file is relatively malleable at that point and doesn't require a rigid external folder structure. Applications like BulletProof and Catalyst can transcode camera files into another format.
Of course, there are dedicated batch encoders as well. My personal choice for transcoding camera media is either EditReady or MPEG Streamclip (free).
Both feature easy-to-use batch processing interfaces, but EditReady adds the ability to apply LUTs, change file names, and export to multiple targets. It also reads formats that MPEG Streamclip doesn't, such as C300 files (Canon XF codec wrapped as .mxf).
If you want to generate a clean master copy preserving the log gamma profile, as well as a second, lower-resolution editorial file with a LUT applied, then EditReady is the right application.
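For what it's worth, that two-output scenario can also be approximated with the free ffmpeg command-line tool: one pass for a clean ProRes HQ master that preserves the log gamma, and one for a smaller editorial file with a viewing LUT baked in. A hedged sketch (file and LUT names are hypothetical; the prores_ks encoder and lut3d filter are standard ffmpeg features):

```python
import subprocess

src = "A001_C002_log.mxf"  # hypothetical camera original

# Pass 1: clean ProRes HQ master, log gamma untouched, audio copied as-is
subprocess.run(["ffmpeg", "-i", src,
                "-c:v", "prores_ks", "-profile:v", "3",   # 3 = ProRes HQ
                "-c:a", "copy", "A001_C002_master.mov"], check=True)

# Pass 2: lower-resolution editorial copy with a viewing LUT applied
subprocess.run(["ffmpeg", "-i", src,
                "-vf", "lut3d=alexa_to_rec709.cube,scale=1920:1080",
                "-c:v", "prores_ks", "-profile:v", "0",   # 0 = ProRes Proxy
                "-c:a", "copy", "A001_C002_edit.mov"], check=True)
```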
Altering your media. I will go to extra lengths to make sure that files have proper names, timecode, and source/tape/reel ID metadata. Most professional video cameras will correctly embed that information. Others, like the Canon 5D Mark III, might encode a non-standard timecode format, allow duplicated file names, and not add reel IDs. Once the media has been transcoded, I will use two applications to adjust the file metadata. For timecode, I rely on QtChange. This application lets you alter QuickTime files in a number of ways, but I primarily use it to strip off unnecessary audio tracks and bad timecode. Then I use it to embed proper reel IDs and timecode. Because it does this by altering header information, processing a lot of files happens quickly. The second tool in this mix is a batch renaming utility.
I use it frequently for adding, deleting, or changing all or part of the file name for a batch of files. For instance, I might append a production job number to the front of a set of Canon 5D files. The point of all this is that you can easily locate the exact same point within any file, using any application, even years apart. Speed is a special condition. Most NLEs handle files with mixed frame rates within the same project and sequences, but often such timelines do not correctly translate from one piece of software to the next. Edit lists are interchanged using EDL, XML, FCPXML, and AAF formats, and each company has its own variation of the format it uses. Some formats, like FCPXML, require third-party utilities to translate the list, adding another variable.
Round-tripping, such as going from NLE “A” (for offline) to Color Correction System “B” (for grading) and then to NLE “C” (for finishing), often involves several translations. Apart from effects, speed differences in native camera files can be a huge problem. A common mixed frame rate situation in the edit is combining 23.98fps and 29.97fps footage. If both of these were intended to run in real-time, then it’s usually OK. However, if the footage was recorded with the intent to overcrank for slomo (59.94 or 29.97 native for a timebase of 23.98) then you start to run into issues. As long as the camera properly flags the file, so that every application plays it at the proper timebase (slowed), then things are fine.
This isn't true of DSLRs, where you might shoot 720p/59.94 for use as slomo in a 1080p/29.97 or 23.98 sequence. With these files, my recommendation is to alter the speed of the file first, before using it inside the NLE. One way to do this is with Apple Cinema Tools (part of the defunct Final Cut Studio package, but it can still be found). You can batch-conform a set of 59.94fps files to play natively at 23.98fps in very short order. This should be done BEFORE adding any timecode with QtChange. Remember that any audio will have its sample rate shifted, which I've found to be a problem with FCP X. Therefore, when you do this, also strip off the audio tracks using QtChange. They play slow anyway and are useless in most cases where you want overcranked, slow-motion footage.
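The conform math itself is worth sanity-checking before you batch anything. A quick sketch, my own:

```python
def conform(frames, shot_fps=59.94, timebase=23.976):
    """Conforming re-stamps the same frames at a slower timebase."""
    slowdown = shot_fps / timebase      # 59.94 -> 23.976 plays 2.5x slower
    duration = frames / timebase        # new running time in seconds
    return slowdown, duration

factor, seconds = conform(600)          # a 10-second burst shot at 59.94fps
print(factor, round(seconds, 1))        # 2.5, 25.0 -> 25 seconds of slomo
```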
Audio in your NLE. The last point to understand is that not all NLEs deal with audio tracks in the same fashion. Often camera files are recorded with multiple mono audio sources, such as a boom and a lav mic on channels 1 and 2. These may be interpreted either as stereo or as dual mono, depending on the NLE. Premiere Pro CC in particular sees these as stereo when imported.
If you edit them to the timeline as a single stereo track, you will not be able to correct this in the sequence afterwards by panning. Therefore, it's important to first set up your camera files with a dual-mono channel assignment before making the first edit. This same issue crops up when round-tripping files through Resolve. It may not properly handle audio, depending on how it interprets these files, so be careful. These steps add a bit more time at the front end of any given edit, but are guaranteed to give you a better editing experience on complex projects. The results will be easier interchange between applications and more reliable relinking. Finally, when you revisit a project a year or more down the road, everything should pop back up, right where you left it.
While advanced audio editing and mixing is still best done in a DAW and by a professional who uses those tools everyday, it’s long been the case that most local TV commercials and a lot of corporate videos are mixed by the editor within the NLE. Although most modern NLEs have very strong audio tools, I find that Adobe Premiere Pro CC is one of the better NLEs when it comes to basic audio mixing. There is a wide range of built-in plug-ins and it accepts most third party VST and AU (Mac) filters. Audio can be mixed at both the clip and the track level using faders, rubber-banding in the timeline or by writing automation mix passes with the track mixer. The following are some simple tips for getting good mixes for TV using Premiere Pro CC. Repair – If you have problem audio tracks, don’t forget that you can send your audio clip to Audition.
When you select a clip to edit in Audition, a copy of the file is extracted and sent to Audition. This extracted copy replaces the original clip on the Premiere timeline so the original stays untouched.
Audition is good for surgery, such as removing background noise. There are both waveform and spectral views, where it's possible to isolate and "heal" noise elements visible in the spectral view. I recently used this to reduce the noise from a lawn mower heard in the background of an on-location interview. Third-party filters – In addition to the built-in tools, Premiere Pro supports any compliant audio filters on your system. By scanning the system, Premiere Pro (as well as Audition) can access plug-ins that you might have installed as part of other applications.
Several good filter sets are available from Focusrite, Waves and iZotope. When it comes to audio mixing for simple projects, I’m a fan of the Vocal Rider and One Knob plug-ins from Waves. Vocal Rider is best with voice-overs by automatically “riding” the level between a minimum and maximum setting.
It works a bit like a human operator in evening out volume variations and is not as blunt a tool as a compressor. The One Knob filters are a series of comprehensive filters for EQ or reverb, each controlled by a single adjustment knob. For example, you can use the "brighter" filter to adjust a multi-band, parametric-style EQ that boosts the treble. Mixing formula – This is my standard formula for mixing TV spots in Premiere Pro. My intention is to end up with voices that sit well against a music track without the music volume being too low. A handy Premiere tool is the vocal enhancer. It's a simple filter with an adjustment dial that balances the setting for male or female voices as well as for music.
Dial in the setting by ear to the point that the voice “cuts” through the mix without sounding overly processed. For music, I’ll typically apply an EQ filter to the track and bring down the broader mid-range by -2dB. Across the master bus (or a submix bus for each stem) I’ll apply a dynamic compressor/limiter. This is just used to “soft clip” the bus volume at -10dB. Overall, I’ll adjust clip and track volumes to run under this range, so as not to be harshly compressed or clipped.
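If the dB figures feel abstract, this is all the arithmetic amounts to: dB converts to linear gain, and the bus limiter keeps peaks under the ceiling. A toy sketch of the idea, my own illustration, not the actual Premiere plug-in:

```python
import numpy as np

def db_to_gain(db):
    return 10 ** (db / 20.0)            # -2 dB -> ~0.794, -10 dB -> ~0.316

def soft_clip(samples, ceiling_db=-10.0):
    """Crude soft limiter: squeeze peaks smoothly under the ceiling."""
    ceiling = db_to_gain(ceiling_db)
    return ceiling * np.tanh(samples / ceiling)

print(soft_clip(np.array([0.1, 0.3, 0.9])))  # loud peaks stay under ~0.316
```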
CALM – Most audio delivered for US broadcast has to be compliant with the loudness specs of the CALM Act. There are similar European standards.
Adobe aids us here by including the TC Electronic Radar metering plug-in. If you use this, place it on the master bus and make sure audio is routed first through a submix bus. I'll place a compressor/limiter on the submix bus. This way, all volume adjustments and limiting happen upstream of the meter.
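If you want to double-check a bounced mix outside the NLE, the same ITU-R BS.1770 measurement that meters like Radar perform can be run in a few lines of Python, assuming the third-party soundfile and pyloudnorm packages:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")   # hypothetical bounced mix
meter = pyln.Meter(rate)                # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)
print(f"{loudness:.1f} LKFS")           # CALM/ATSC A/85 target: -24 LKFS (+/-2)
```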
Adjustment layers aren't just for grading – they are also a great way to build custom transition effects. This works in FCP X, Premiere Pro CC, and Media Composer. The first two actually have adjustment layer effects, though in FCP X it's based on a blank title generator. In Media Composer, you can add edits into empty video tracks and apply effects to any section of a blank track, which effectively makes this process the same as using an adjustment layer. The Media Composer approach has been described nicely elsewhere, which got me thinking about this technique more broadly. Generally, it works the same in all three of these NLEs. The examples and description here are based on Premiere Pro CC, but don't let that stop you from trying it out on your particular software of choice. To start, create a new adjustment layer and add a marker to the middle of it.
This helps to center the layer over the cut between two shots. Place the adjustment layer effect over a cut between shots, making sure that the marker lines up with the edit point. If the transition is to be a one-second effect, then trim the front and back of the adjustment layer so that one-half second is before the marker and one-half second is after the marker. Depending on the effect, you may or may not also want a short dissolve between the two shots on the base video track. For example, an effect that flashes the screen full frame at the midpoint will work with a cut. A blur effect will work best in conjunction with a dissolve, otherwise you’ll see the cut inside the blur.
The beauty of this technique is that you can apply numerous filters to an adjustment layer and get a unique combination of effects that isn’t otherwise available. For example, a blur+glow+flare transition. At this point, it’s important to realize that not all effects plug-ins work the same way and you will have varying results.
Boris filters tend not to work when you stack them in the same adjustment layer and start to change keyframes. In Avid's architecture, the BCC filters have a specific pipeline and you have to define which filter is the first and which is the last effect. I didn't find any such controls in the Premiere version. A similar thing happened with the Red Giant Universe filters. On the other hand, most of the native Premiere Pro filters operated correctly in this fashion. The basic principle is that you want the filters to start and end at a neutral value, so that the transition starts and ends without a visible effect.
The midpoint of the transition (over the cut) should be at full value of whatever it is you are trying to achieve. If it’s a lens flare, then the middle of the transition should be the midpoint of the lens flare’s travel and also its brightest moment. If you are using a glow, then the intensity is at its maximum in the middle. Typically this means three keyframe points – beginning, middle and end.
The values you adjust will differ with the plug-in. It could be opacity, strength, intensity or anything else. Sometimes you will adjust multiple parameters at these three points. This will be true of a lens flare that travels across the screen during the transition. The point is that you will have to experiment a bit to get the right feel. The benefit is that once you’ve done this, the adjustment layer clip – complete with filters and keyframes – can be copied-and-pasted to other sections of the timeline for a consistent effect.
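That three-keyframe shape is easy to formalize. Here's a small sketch that computes the neutral-full-neutral envelope for any parameter, given a transition length and frame rate (my own illustration; the NLE keyframes this for you, the shape is the point):

```python
def transition_envelope(duration_s=1.0, fps=23.976, peak=100.0):
    """Neutral -> full -> neutral keyframes for a cut-centered transition."""
    frames = round(duration_s * fps)
    mid = frames // 2                   # the cut sits under the middle keyframe
    keyframes = [(0, 0.0), (mid, peak), (frames, 0.0)]
    # per-frame values, ramping linearly between the three keyframes
    values = [peak * (1 - abs(f - mid) / mid) for f in range(frames + 1)]
    return keyframes, values

keys, values = transition_envelope()    # e.g. drive a blur or glow "strength"
```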
Here are some examples of custom transition effects in Premiere Pro CC, using this adjustment layer technique. The first is a combination of a Basic 3D horizontal spin and Ripples. The trick is to get the B-side image to not be horizontally flipped, since it's the backside of the rotating image. To do this, I added an extra Transform filter with a middle keyframe that reverses the scale width to -100. The next transition combines a Directional Blur with a Chromatic Glow and requires a dissolve on the base video track. Another is a lens flare transition where the flare travels and changes intensity.
The brightest part is the midpoint over the shot change. This could work as a cut or dissolve, since the flare’s brightness “wipes” the screen. In addition, I have the flare center traveling from the upper left to the lower right of the frame. Here, I’ve applied the BCC Pencil Sketch filter, bringing it in and out during the length of the transition, with a dissolve on the base layer. This gives us a momentary cartoon look as part of the shot transition.
Custom UI filters like Magic Bullet Looks also work. This effect combines Looks using the “blockbuster” preset with a Glow Highlights filter. First set the appearance in Looks and then use the strength slider for your three keyframes. This transition is based on the Dust & Scratches filter in Premiere Pro. I’m not sure why it produced this blotchy artistic look other than the large radius value.
Quite possibly this is a function of its behavior in an adjustment layer. Nevertheless, it’s a cool, impressionistic style.
This transition takes advantage of the BCC Water Color filter. Like my Pencil Sketch example, the image briefly turns into a watercolor during the length of the transition. Like the previous two BCC examples, another takes a similar approach using the Universe ToonIt Paint filter. The final transition combines several of the built-in Premiere Pro effects, including Transform and Radial Blur.