Why ProRes?

Why you should use Apple’s very nice mezzanine codec — or not

Iain Anderson

August 10, 2022

Introduction

I remember seeing QuickTime 1.0 when I was a teenage nerd with a new Mac LC. That first version was simple enough — tiny postage-stamp videos of the birth of a child and the launch of an Apollo rocket could be copied and pasted to create something new. In the era of VHS-to-VHS dubs, this was revelatory. Fast forward a few years to DV, the iMac DV, then HDV, and the last twenty or so years have seen an explosion of codecs, cameras, formats and ever-increasing resolutions. So where does ProRes fit into the story, and what’s its place in today’s video world? With the help of Steve Bayes, Product Manager of ProRes at its launch and for a decade afterwards, let’s dig into it.

History of ProRes and other codecs

Making basic DV and HDV edits was not too stressful for the Macs of the early 2000s, but if you wanted to create fancy effects or get into color correction, you might start to struggle. But DV wasn’t the only game in town, and if you were dealing with some other capture format, you might have to rely on a capture card and its proprietary codecs. Each NLE also rolled its own codecs, and depending on how much storage you had, you made a trade-off between quality and file size. For example, on Avid Media Composer, you might choose AVR 12 for low quality, or AVR 77 for high.

Apple’s initial “mezzanine codec” solution for Final Cut Pro was the Apple Intermediate Codec, a performant, high-quality codec you could use as a neutral baseline. But it wasn’t quite scalable enough. From Steve Bayes:

“It was 8-bit 4:2:0 which was OK for transcoding HDV, but not good enough for transcoding other higher quality formats. I think frame size was limited to the 3 HD formats for HDV.”

If you’re not across the finer points of the technical details, 8-bit means that each channel has 256 (2^8) potential different values. 4:2:0 means that there’s one chroma pixel for every four luminance pixels, so a UHD image in 4:2:0 has the same amount of color information as a 1080p JPEG. Better than 4:2:0 is 4:2:2, where there’s one chroma pixel for every two luminance pixels, and 4:4:4, with full 1:1 chroma resolution. For best quality, a format that supports at least 10-bit (1024 values for each channel) and 4:2:2 is needed, but many codecs are limited to 8-bit 4:2:0, even today.
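The arithmetic above is easy to check for yourself. Here’s a minimal Python sketch (the helper names are mine, just for illustration) that computes the values per channel for a given bit depth and the chroma sample counts for the common subsampling schemes:

```python
# Sketch of the bit-depth and chroma-subsampling arithmetic described above.

def levels_per_channel(bits: int) -> int:
    """Number of distinct values each channel can store."""
    return 2 ** bits

def chroma_samples(width: int, height: int, subsampling: str) -> int:
    """Chroma samples per plane for common subsampling schemes."""
    if subsampling == "4:4:4":  # full 1:1 chroma resolution
        return width * height
    if subsampling == "4:2:2":  # half horizontal chroma resolution
        return (width // 2) * height
    if subsampling == "4:2:0":  # half horizontal and half vertical
        return (width // 2) * (height // 2)
    raise ValueError(subsampling)

print(levels_per_channel(8))   # 256
print(levels_per_channel(10))  # 1024

# A UHD frame in 4:2:0 carries the same chroma resolution as 1080p at full chroma:
print(chroma_samples(3840, 2160, "4:2:0"))  # 2073600
print(chroma_samples(1920, 1080, "4:4:4"))  # 2073600
```

That last pair of numbers is exactly the “UHD in 4:2:0 has 1080p’s worth of color” point made above.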

A clear description of just how much color information you’ll always throw away (from Apple’s ProRes white paper)

With the future more firmly in mind, ProRes was created in 2007 as a transcoding codec that balanced post-production needs and image quality, and it later grew into an acquisition and delivery codec. For a look at ProRes in 2009, check out Gary Adcock’s article, and for the modern, official details, check Apple’s ProRes white paper. It was meant to be easier to work with than the heavily compressed formats of the time (like HDV and XDCAM), and also to provide a solid, high-quality file-based standard for delivery. ProRes comes in many flavors, from ProRes Proxy all the way through to ProRes 4444XQ (apparently added at ARRI’s request), and resolution support has grown over time to encompass the 8K future.

But let’s take a step back to consider what’s actually stored in a video codec.

Video codecs — intra or inter?

Audio and video codecs are designed with specific goals in mind and can be optimized for speed or even to avoid patents. Uncompressed data doesn’t compromise on quality, but for video, if you don’t compress, you’ll usually create files that are too large to handle. Intelligently throwing away some of that data with lossy compression saves a lot of space, but poor compression can result in visible compression artifacts, or video that is difficult for computers to decode quickly. Some codecs are also difficult for computers to encode data into, which makes them time-consuming to use as a delivery format. (Hello, DCP!)

Broadly speaking, there are two main ways to encode video data: intra-frame and inter-frame.

  • Intra-frame is like a series of still images, each compressed independently with no reference to the other frames. Compression described as “ALL-I” is intra-frame compression, meaning that each frame is an “I-frame”. Intra-frame is usually easier for computers to work with, but takes far more storage space than more heavily compressed media. ProRes and “ALL-I” flavors of H.264 are intra-frame.
  • Inter-frame means that most frames are described in relation to a preceding frame (P-frame) or even to a frame coming up in the future (B-frame). This is far more efficient in terms of storage space because static objects don’t need to be fully described in every frame, but working with these files can sometimes be much harder. To access a single frame, several nearby frames may need to be retrieved and interpreted. A codec described as “Long-GOP”, for “long group of pictures”, is an inter-frame codec, and this includes most H.264 and HEVC footage.
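The random-access cost of those two approaches can be sketched with a deliberately simplified model: to show frame n in a Long-GOP stream, you decode everything back to the most recent I-frame, while an intra-frame stream needs only the frame itself. (Real decoders are subtler — B-frames can also reference a future frame — but the shape of the trade-off is the same.)

```python
# Simplified model of random-access cost: how many frames must be decoded
# before frame n can be displayed.

def frames_to_decode(frame_types: str, n: int) -> int:
    # Walk back to the most recent I-frame; everything from there
    # through frame n must be decoded first.
    start = n
    while frame_types[start] != "I":
        start -= 1
    return n - start + 1

all_i    = "IIIIIIIIIIII"  # intra-frame: every frame stands alone
long_gop = "IPBBPBBPBBPI"  # inter-frame: one I-frame per group of pictures

print(frames_to_decode(all_i, 7))     # 1
print(frames_to_decode(long_gop, 7))  # 8
```

Scrubbing a timeline does this constantly, which is why Long-GOP footage historically felt sluggish without hardware decoding.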
The ALL-I intra-frame stream on top is simpler than the Long-GOP stream’s I, B and P frames below

For many years and to this day, you’ll find advice online to “avoid Long-GOP codecs”. However, much like the advice to never store your media on your internal drive, this advice is dated, and not always correct. Modern Macs have long enabled decoding of H.264 and HEVC files in hardware, so it’s usually easy to work with modern standards-based compressed files. While you’ll still find the occasional Long-GOP file that’s tricky to work with, not every “ALL-I” file is easy to work with either — blame the manufacturers, the NLEs, or both. When the GH5 came out, its ALL-I H.264 files were both huge and difficult to work with, though thankfully, that has changed.

So, which way to go during capture and post-production?

Choosing a codec

As ever, your choices during production and in post-production are going to come down to personal preferences. Are you working alone, using a single camera, and managing the files yourself? Do whatever you like! Depending on the camera you’re using and the amount of color grading you want to apply, either inter- or intra-frame codecs could work well. Many people have done tests to try to spot the differences, but in a head-to-head pixel-peeping showdown, with the same amount of color data stored in each codec, differences can be hard to see, even after color correction.

Even pixel peeping at ~5.8K originals, I can’t tell the difference between ProRes and HEVC from a GH6

In a larger production, using a variety of cameras, the benefits of standardization become easier to see. Not every camera that makes H.264 or HEVC files does it the same way, and some files are easier to work with than others. At the higher end, rarer formats like Canon RAW Light and REDCODE are optimized for picture quality, and because they’re not designed for easy editing, these formats can still bring even a modern Mac to its knees.

So, if you want to be sure to avoid problems with any tricky old GoPro files, drone files or custom camera formats, you can either demand that all cameras deliver ProRes or convert their files to ProRes before editing. ProRes 422 is probably the best choice for most cameras, but you can dial it down to ProRes LT for less demanding jobs, up to HQ for more demanding ones, or (if you can generate the information to justify it) 4444XQ for the top level of quality. Any way you go, you’ll need more hard drive space, which is the cost of a more reliable workflow. Here’s Steve’s completely impartial advice:

”I would recommend shooting ProRes for any production that uses multiple camera manufacturers on set or multiple manufacturers are being used during the entire length of the production for consistent material from freelancers, etc. Modern computers can handle camera original codecs better these days so that is no longer the main reason to use ProRes. Since ProRes is a known quantity, you will have better consistency across all aspects of using the footage. You will just have more color data to work with in terms of image color grading, image matching, VFX, etc. compared to codecs that are 4:2:0 or highly compressed to take less space on an SD card.

If you’re delivering PR then absolutely shoot PR. Exports will be much faster if there is no transcoding.

I would recommend it every time you have a choice :-)”

That last point is worth looking into a little further — why are exports faster with ProRes?

Special advantages of ProRes

First, the computer does indeed have less work to do with ProRes when compared to H.264 or HEVC, but more importantly, the intra-frame nature of the codec means that it’s much easier to parallelize the export process. ProRes files are easier to segment and can be split up across a multi-threaded CPU or even multiple Macs. The hardware acceleration on the latest M1 Pro, Max and Ultra chips also means that even laptops can cope with very high resolutions. Plus: if you’ve shot ProRes and have been rendering to ProRes along the way, your final export can be as fast as a file copy.
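That segmentation advantage is easy to picture: because every ProRes frame stands alone, an export can be carved at any frame boundary into near-equal chunks, one per worker. A hypothetical sketch (the helper name is mine, not any NLE’s API):

```python
# Why intra-frame codecs parallelize well: any frame is a valid split point,
# so an export can be divided evenly across however many workers you have.

def split_for_workers(total_frames: int, workers: int) -> list[range]:
    """Split a frame range into near-equal contiguous chunks."""
    base, extra = divmod(total_frames, workers)
    chunks, start = [], 0
    for i in range(workers):
        size = base + (1 if i < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

# An intra-frame timeline of 1000 frames across 4 workers:
print(split_for_workers(1000, 4))
# [range(0, 250), range(250, 500), range(500, 750), range(750, 1000)]

# With a Long-GOP source, each chunk would instead have to begin on an
# I-frame, so split points are constrained and chunks come out uneven.
```

Each chunk can then be encoded independently on a separate thread or machine and the pieces simply concatenated.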

ProRes is also designed to preserve quality through multiple generations of compression, to maintain more image data compared to competing mezzanine codecs like DNxHD, and to scale to more powerful devices. That means each new Mac can handle not just higher resolutions, but more streams in multicam workflows.

Premiere, FCP and Avid MC all like ProRes

Still, probably the biggest advantage of ProRes is the long-term stability of the format. Because it’s been around for 15 years, and Apple’s not going anywhere, it’s a safe choice for deliverables for any level of production. Even better: all the information needed to decode it is publicly available. This means you can safely archive ProRes and play it back without restrictions in the future, because any manufacturer or developer can include the ability to decode ProRes in their products.

Today, you can create official ProRes files on Macs and PCs in all the major NLEs, and be sure the process is going to work predictably because it’s officially licensed. A few words from Steve about the licensing process:

“The manufacturer has to submit samples for quality assurance. Apple has to sign off that they are doing the encoding correctly. There is a form to submit, but otherwise that’s it.”

ProRes — so why wouldn’t you use it?

It’s all about workflow. For a large production, moving everything to a standard like ProRes can simplify your workflow, and it’s an easy choice. But if you’re a one- or two-person operation and you know your computer can handle the compressed codecs your cameras create, ProRes might not be worth it. The added space requirements of ProRes, and the extra time needed to copy those larger files, can create issues rather than solve them.

A personal example. Years ago, my Blackmagic Cinema Camera could record ProRes to its on-board SSD. The footage looked great, and it was easy to work with, but the data rate for 1080p footage was around 130Mbps, pretty big for the time. Sure, I went through a few more hard drives, but while recording 1080p, it was manageable. But for my next camera, I went to 4K H.264 at 100Mbps, had no problems editing, and saved space on every file — ProRes wasn’t an option in my price bracket at the time.

This is roughly half the data rate of the 5.7K ProRes option but way higher than compressed alternatives

Modern compressed formats do let me record in 10-bit and 4:2:2 if I want to, but the file size difference moving to ALL-I recording is still significant. On my GH6 with the newest v2.0 firmware, let’s compare the options for C4K, 4096×2160. If I want to record 4:2:2 10-bit at 25p, I can use:

    • 150Mbps for Long-GOP H.264
    • 400Mbps for ALL-I H.264
    • 541Mbps for ProRes 422
    • 811Mbps for ProRes HQ

For lower-end jobs, I could also lower my standards and use:

    • 72Mbps for UHD 4:2:0 10-bit HEVC
    • 100Mbps for UHD 4:2:0 8-bit H.264
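Those data rates translate directly into storage: an hour of footage at a given megabit rate works out to Mbps × 3600 ÷ 8 ÷ 1000 gigabytes. A quick Python sketch of the options listed above:

```python
# Rough storage math for the data rates listed above:
# megabits per second -> gigabytes per hour of footage (decimal GB).

def gb_per_hour(mbps: float) -> float:
    seconds = 3600
    return mbps * seconds / 8 / 1000  # bits -> bytes, then MB -> GB

for label, rate in [
    ("Long-GOP H.264", 150),
    ("ALL-I H.264",    400),
    ("ProRes 422",     541),
    ("ProRes HQ",      811),
]:
    print(f"{label:15s} {gb_per_hour(rate):6.1f} GB/hour")
```

At roughly 243 GB per hour for ProRes 422 versus 67 GB for Long-GOP, the storage trade-off discussed below is not subtle.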
A graph showing just how high data rates can get with higher quality formats

In the field, the high data rates of ProRes require the use of expensive CF Express Type B cards rather than dirt-cheap slower SD cards — a price premium of around 500% for 128GB cards. And of course, those same high data rates mean you’ll consume that expensive storage much more quickly. A future direct-to-SSD recording firmware update would save me from buying too many CF Express cards, but I’d then have to buy more batteries if the USB-C slot is taken up with an SSD instead of a large external battery. It’s all about trade-offs.

Lastly, backup and archiving costs will go up if you shoot ProRes and want to keep your camera originals. Today, hard drives are faster and cheaper than they once were, but not so cheap that I’d want to spend 3-4 times as much on storage as I already do.

At the end of the day, ProRes is a great format to standardize on, but the ease of editing and long-term stability come at a cost in media and long-term storage that’s a problem for smaller productions. If you want to move up to ProRes, make sure it’s solving a problem and not creating one.

If you do move to an external ProRes-capable recorder like an Atomos Ninja, you can also log a shoot live, in real-time

If you’ve decided to capture ProRes, and your camera can output HDMI, you can always go for an Atomos or Blackmagic recorder. But as storage devices have gotten faster and larger, capturing ProRes on-device has become more common. Today, my GH6 can record ProRes to CF Express cards and even my iPhone can record ProRes, though, as discussed, the very high data rates make storage space a concern. Still, if you didn’t or couldn’t capture in ProRes, you can convert your footage by optimizing in FCP, or transcoding with your NLE of choice. Exporting is similarly simple, and even if you eventually deliver to a different format, a ProRes export is a great idea if you need to pass a high-quality master to someone else.

Conclusion

What’s next for ProRes? Here’s Steve:

“From my completely unofficial point of view, I think it is clear that ProRes will continue to be supported by more new cameras and the mirrorless models will get better recording capabilities (less overheating, larger frame sizes) and SD cards will get faster and larger capabilities. Until now the different manufacturers saw their own branded codec was a differentiator, but no one buys a Canon camera for Canon RAW Light and the manufacturers eventually saw that. Atomos bridges that gap at the moment with PR RAW and better monitoring, etc. but there are many, many other devices for recording just straight up PR.

It will continue to be required as a delivery format for broadcast and streaming.

Shooting with the iPhone will mean more people than ever will use it (even if the wireless transfer is currently very slow). As people use it and see the quality difference they will also see a clear benefit to upgrading to M1 and M2 Macs. The benefit of buying more storage on the iPhone and the internal HD of their Macs will also become evident.”

Hardware acceleration of the three major video codecs (H.264, HEVC and ProRes) has several consequences. First, your Mac can handle more than it once could, and compressed footage is not the problem it used to be. But supporting ProRes means that there’s no limit at the higher end of production either — if you shoot ProRes, even a laptop can play back original camera masters.

While you may not need ProRes on every job, it’s a great fallback for the next time you need to work with complex formats, or simply other people. Predictability and reliability have value, and for many professionals, it’s worth the extra storage requirements. Test it out, see if it works for you, and remember that it’s good to have a safe standard to fall back on. Happy editing!
