• MHLoppy@fedia.io · 12 hours ago

    For most applications I assume you'd want hardware encoding to actually use AV1, and I imagine it will be the same for AV2 etc. We might have been a bit spoilt by H.264 being literally >20 years old at this point (and even VP9 and H.265 being >10 years old).

    • Alphane Moon@lemmy.world (OP) · 5 hours ago

      It is my understanding that for archival/“close to transparent” encodes you want to use the CPU. With H.264/H.265, hardware (GPU) encoders give me noticeably worse quality than x264/x265 running in software. For a video that I may use for the next ~10 years (I started ripping to Xvid back in the early 2000s), I would rather focus on quality.
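
      A minimal sketch of the comparison being described, assuming ffmpeg is installed with libx265 and NVENC support; the filename and quality settings are illustrative, not the poster's actual workflow:

      ```python
      import subprocess

      SOURCE = "input.mkv"  # hypothetical source file

      # Software (CPU) encode with x265: slow, but usually the best
      # quality per bit -- the typical choice for archival encodes.
      subprocess.run([
          "ffmpeg", "-i", SOURCE,
          "-c:v", "libx265", "-crf", "18", "-preset", "slow",
          "-c:a", "copy", "cpu_x265.mkv",
      ], check=True)

      # Hardware (GPU) encode via NVENC: far faster, but generally
      # visibly worse at comparable settings -- a trade-off that
      # matters for a file you plan to keep for ~10 years.
      subprocess.run([
          "ffmpeg", "-i", SOURCE,
          "-c:v", "hevc_nvenc", "-preset", "p7", "-cq", "18",
          "-c:a", "copy", "gpu_nvenc.mkv",
      ], check=True)
      ```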