Are accelerators the cure for the video power problem or just an excuse to peddle GPUs?

Analysis There’s no denying that video streaming is incredibly popular – now accounting for over 80% of all internet traffic, by some estimates.

Video is also an incredibly resource-intensive medium. As the Financial Times reported, a munitions factory in Norway found out the hard way. The factory, which produces ammunition for the war in Ukraine, had planned to expand, only to find there wasn't enough power to be had. Why? Because a new TikTok datacenter was sucking up every spare watt it could find.

And while TikTok may be on uncertain ground as it faces the prospect of an outright ban in the US, it's not even the biggest video contributor. That title goes to Google (YouTube) and Netflix, according to the latest Sandvine report.

So what can be done? Well, if you ask Nvidia CEO Jensen Huang, the answer is accelerated computing, preferably using its GPUs to encode and decode video on the fly, and its data processing units (DPUs) to accelerate the movement of data as it rushes through the intertubes to your phone or TV.

“User-generated video is driving significant growth and consuming massive amounts of energy,” Huang said at the company’s GTC event in March. “We should speed up all video processing and reclaim that power.”

Accelerated transcoding gear

The idea of using GPUs or other dedicated accelerators to transcode video is not new. Nvidia's small P4 and T4 GPUs have been a popular choice for video streaming applications for years. Last month, Nvidia unveiled the L4.

The company claims that a node packing eight L4s can transcode over 1,000 720p streams at 30 fps when using the P1 preset. If those numbers seem a little optimistic, it's because P1 is the fastest – and lowest-quality – preset available, so Nvidia is juicing the figures a bit.
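That headline claim is easy to sanity-check with some quick arithmetic – note the 1,000-stream figure is Nvidia's, and the per-card breakdown below is just derived from it:

```python
# Sanity-check of Nvidia's claim: 1,000+ 720p30 streams across a node
# with eight L4 cards, using the fast/low-quality P1 preset.
streams_total = 1000   # Nvidia's claimed node throughput (the "over" part dropped)
gpus = 8

per_gpu = streams_total / gpus
print(f"~{per_gpu:.0f} streams per L4 at 720p30, P1 preset")
```

In other words, roughly 125 simultaneous streams per card – at the quality floor.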

But Nvidia isn’t the only company that sees an opportunity to meet the growing demand for video streaming and sell its wares in the process. Earlier this month, AMD’s Xilinx division unveiled its new Alveo media accelerator card, the MA35D.

AMD claims the card can transcode up to 32 AV1 1080p streams at 60fps while consuming just 35W under load.

Even Intel, a company that only recently entered the GPU market, has skin in the game, and actually beat AMD and Nvidia to market with an AV1-capable chip. Intel's Flex-series GPUs – codenamed Arctic Sound – were announced at Hot Chips last August, and the cards were designed with video in mind.

All of these parts, whether from Nvidia, AMD, or Intel, feature single-slot designs and sub-75W TDPs (most of them anyway), allowing eight or more of them to be combined into a single system. And that’s not the only thing these cards have in common. All are marketed for live video, cloud gaming and AI image processing – think Twitch or live sports streaming.

No silver bullet for the big streamers

Compared to conventional software transcoding on CPUs, these cards may look like the future, but it's not that simple. That's because most streaming services, like Netflix, don't have to deal with large volumes of latency-sensitive live video. Instead, they can spend hours converting a video master into multiple copies – one for each format, resolution, and bitrate they plan to stream at.

The streaming service can then distribute those files to content delivery networks (CDNs) and stream them from there. And since software transcoding, while slower, tends to produce higher-quality video at a given bitrate, and storage is relatively cheap compared to a fleet of GPU nodes, the economics will likely continue to favor this approach for conventional on-demand video.

Software encoders will remain relevant despite gains in accelerated computing, Sean Gardner, head of video strategy at AMD, told The Register.

“Software encoders continue to evolve… For some applications – some use cases – [these cards] will be good enough to eventually be considered for some of these files, not realtime [applications]. But for Netflix? I do not think so.”

AV1 to the rescue

Despite what Jensen would have you believe – remember he wants to sell you GPUs – accelerated computing is not video’s panacea.

That said, accelerated computing isn’t the only way to improve the efficiency of video platforms. The bandwidth consumed to send the video to your phone or TV is another important factor.

“Typically, it’s between 6 and 9 percent of their revenue that they spend on bandwidth,” he said of AMD’s streaming video customers. “One of the biggest power consumers is actually the communication side, the network that delivers these, these Ethernet packets to the user.”

This is where the AV1 codec from the Alliance for Open Media comes in. AV1 is a relatively new codec that many people are excited about, partly because it’s royalty-free, but also because it’s exceptionally space-efficient.

Various tests have shown AV1 to be 20-40 percent more efficient than popular web streaming codecs, including H.265.
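At platform scale, those percentages translate into eye-watering absolute numbers. A back-of-the-envelope illustration – the monthly traffic figure here is a made-up assumption, not any real platform's number:

```python
# Rough illustration of AV1's bandwidth savings at scale.
# The egress figure is an invented assumption for the sake of arithmetic.
monthly_egress_pb = 100.0        # assume 100 PB/month currently served
av1_savings = (0.20, 0.40)       # the 20-40% efficiency range cited above

saved = [monthly_egress_pb * s for s in av1_savings]
print(f"AV1 could trim between {saved[0]:.0f} and {saved[1]:.0f} PB/month")
```

With bandwidth eating 6-9 percent of streamers' revenue, per Gardner, even the low end of that range is real money.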

For a platform like YouTube or Netflix, the bandwidth savings would be huge, which is probably one of the reasons why almost all major streaming services are members of the Alliance for Open Media.

However, despite having been around since 2018, AV1 is still in its infancy. Only the latest generation of GPUs from AMD, Nvidia or Intel support full AV1 encoding/decoding. Meanwhile, Apple has yet to add support for the codec on its devices. But given AV1’s strong industry support, this particular issue is really more of a growing pain than anything else. ®

