BARCELONA, Spain — The quality of the Netflix videos you watch on your smartphone is about to get a whole lot better.
The American video-streaming-and-rental company is going to make use of artificial intelligence (AI) to improve how it encodes its videos on a scene-by-scene basis for mobile, it announced this week — cutting down the amount of data required to stream video, and letting users on slow connections view better-quality video.
Working with the University of Southern California, Netflix conducted a study that showed test subjects video shots encoded at different qualities and asked them to judge which looked better.
It then used these results to train a neural network on what good-quality footage looks like. The network then goes through Netflix’s videos scene by scene to make them use as little data as possible without sacrificing visual quality.
This is possible because not all video requires the same amount of data to look good. In a press briefing in Barcelona, Spain, on Wednesday, Netflix vice president of product innovation Todd Yellin used two examples: “Daredevil” and “Bojack Horseman.”
“Daredevil” is a live-action drama full of special effects and complex scenes, while “Bojack Horseman” is a cartoon. Clearly, it should take less space to save a video of Bojack Horseman than Daredevil, because there’s less detail in any given scene that needs to be captured.
And because the AI encoding is done shot by shot, shows of varying visual complexity can be optimised so they don’t take up unnecessary bandwidth while still having sufficient detail when it really matters.
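The idea of matching bitrate to visual complexity can be sketched in a few lines. This is a purely illustrative toy, not Netflix's actual algorithm: the complexity score, the bitrate floor and ceiling, and the linear scaling are all assumptions made up for the example.

```python
# Hypothetical sketch of per-shot bitrate allocation.
# complexity: 0.0 = flat cartoon frames, 1.0 = dense live-action effects.
# The scores and kbps values below are illustrative, not Netflix's.

def bitrate_for_shot(complexity: float,
                     floor_kbps: int = 100,
                     ceil_kbps: int = 600) -> int:
    """Scale the bitrate linearly between a floor and a ceiling."""
    complexity = max(0.0, min(1.0, complexity))  # clamp to [0, 1]
    return round(floor_kbps + complexity * (ceil_kbps - floor_kbps))

# e.g. a cartoon scene, a fight scene, and a dialogue scene
shots = [0.15, 0.9, 0.4]
print([bitrate_for_shot(c) for c in shots])  # → [175, 550, 300]
```

The point of the sketch is simply that simple shots get a low bitrate and complex shots a high one, instead of every shot paying for the worst case.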
Yellin showed a demonstration: two versions of the trailer for the upcoming Marvel action series “Iron Fist.” One was encoded traditionally, while the other had been given the AI treatment. The visual quality was roughly the same (the traditional one was perhaps a tiny bit better, but it was hard to tell), but the traditional version was a 555kbps stream while the AI-powered one was 227kbps, less than half the size.
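To put the demo numbers in perspective, the saving works out to roughly 59% less data for comparable picture quality:

```python
# Bitrates quoted in Yellin's demo of the "Iron Fist" trailer.
traditional_kbps = 555
ai_kbps = 227

saving = 1 - ai_kbps / traditional_kbps
print(f"{saving:.0%} less data")  # → 59% less data
```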
The feature isn’t available today, but will be rolling out in the next couple of months (between two and five months, Yellin said). It will be utilised for mobile video at first, though there are apparently plans to bring it to desktop and smart TV viewing as well, and the viewer doesn’t need to do anything — it works automatically in the background.