> A crash mid video write out can corrupt a lengthy render. With image sequences you only lose the current frame.

You wouldn't put FFv1 in MP4, the only common container fragile enough for that kind of corruption (the index is written at the end, so a crash mid-write can leave the whole file unreadable). FFv1 normally goes in Matroska, which stays readable up to the point of the crash.
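For what it's worth, getting FFV1 into Matroska is a one-liner with ffmpeg (a sketch; the input filename is a placeholder):

```shell
# Losslessly transcode into FFV1 version 3 inside Matroska.
# input.mov is a placeholder. -level 3 selects FFV1 version 3,
# which supports multithreaded encoding and per-slice error resilience.
ffmpeg -i input.mov -c:v ffv1 -level 3 -c:a copy output.mkv
```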

Apple has an interest in discouraging codecs it collects no fees from, and Apple doesn't have a lossless codec of its own. So it doesn't offer hardware acceleration for lossless compressed video.

The idea is that when you're working as part of a team and get handed a CG render, you can avoid shipping a huge .tar or .zip full of TIFFs that the recipient then has to decompress, or a ProRes file that loses quality, particularly in a linear colorspace like ACEScg.
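As a sketch of the tradeoff (the frame pattern and frame rate here are assumptions): ffmpeg can wrap a numbered sequence directly instead of archiving it, but note that FFV1 only stores integer pixel formats, so float EXR data would get quantized on the way in — one reason the sequence itself tends to stay the master.

```shell
# Wrap a numbered EXR sequence into FFV1/Matroska in one pass.
# frame_%04d.exr and 24 fps are placeholders for illustration.
# Caveat: FFV1 has no float pixel formats, so half/full-float EXR
# frames are converted to an integer format by ffmpeg before encoding.
ffmpeg -framerate 24 -i frame_%04d.exr -c:v ffv1 -level 3 render.mkv
```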

I’m curious what kind of teams you’re working in that you’re handing compressed archives of image sequences? And using tiff vs EXR (unless you mean purely after compositing)?

Another reason to use image sequences is that it's easy to re-render just a portion of the sequence. Granted, this can be done with video too, but with higher overhead.
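With a frame-per-file workflow, a fixed shot only needs the affected range re-rendered in place (the renderer, filenames, and frame numbers below are made up for illustration):

```shell
# Re-render only frames 100-150 of a hypothetical Blender scene in
# background mode; the new files overwrite just that range on disk,
# and every other numbered frame is left untouched.
blender -b shot.blend -o //frames/frame_#### -s 100 -e 150 -a
```

With a single video file, the same fix means decoding, splicing, and re-muxing the whole clip.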

But even then, why does GPU encoding change the fact that you'd send it to another NLE? I just feel like there are a lot of jumps in the thought process here.