Thought leadership

Computer vision and broadcasting: A media match made in heaven

February 16, 2023
March 1, 2022
4 min read
Introduction

Even before the Covid-19 pandemic, the broadcasting sector was undergoing massive disruption. The launch of streaming services such as Netflix, Amazon Prime, Apple TV+ and Disney+ transformed the sector and audience behavior. In Europe, public broadcasters also entered the fray, including the BBC, whose iPlayer platform predates many commercial services.

During pandemic lockdowns, competition intensified as platforms competed for the attention of viewers spending most of their time indoors. Blockbuster films, originally slated for theatrical release, were premiered on streaming platforms as studios and platforms negotiated their way through uncertain times.

As well as exclusive content, platforms have introduced smart features to boost viewer engagement and loyalty. Examples include X-Ray, an Amazon Prime feature that overlays information about the current program.

Hit the pause button and you’re served up cast filmographies and bios, soundtrack details and even character backstories for viewers struggling to keep up. It’s a smart move for Amazon because all of X-Ray’s data comes from IMDb, which Amazon also owns.

Although Amazon keeps its algorithms secret, most commentators assume that X-Ray content is determined by computer vision software, including facial recognition. This is supplemented by a team of human curators who refine the software’s output and make their own content choices.

Personalization on the rise

The use of AI is also on the rise as platforms serve up bespoke promotional content to viewers. For instance, platforms often cut multiple versions of trailers and serve different menu tiles according to the demographic of the viewer.

Take the latest superhero blockbuster release. If you’re a fan of action movies, the algorithm will serve you a trailer containing chase scenes, martial arts, and anything else that sets the pulse racing. If you’re more into comedies and drama, the clip is more likely to include dialogue and humorous scenes that match your taste.

But what if you could cut dozens of trailers based on popular genres or even emotions? That’s where the latest computer vision software is beginning to find a foothold.

The technology, including software from Mobius Labs, can scan video at ten times standard playback speed, and can recognise genre (action, horror, romance), emotions (excitement, fear, affection) and changes in scene and camera angles. This makes it possible to timestamp the content with a rich array of tags that make scenes and sequences instantly discoverable.
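As a rough illustration of what such timestamped tagging can enable (a hypothetical sketch, not Mobius Labs’ actual API or data format), scene-level tags can be stored as time spans with labels and confidences, making “find every action scene” a simple query:

```python
from dataclasses import dataclass

@dataclass
class SceneTag:
    start_s: float      # scene start, in seconds
    end_s: float        # scene end, in seconds
    label: str          # e.g. "action", "fear", "close-up"
    confidence: float   # model confidence, 0..1

def find_scenes(tags, label, min_confidence=0.8):
    """Return (start, end) spans that carry a given label above a confidence threshold."""
    return [(t.start_s, t.end_s) for t in tags
            if t.label == label and t.confidence >= min_confidence]

# Illustrative tags for a short clip
tags = [
    SceneTag(0.0, 12.5, "action", 0.93),
    SceneTag(12.5, 40.0, "dialogue", 0.88),
    SceneTag(40.0, 55.0, "action", 0.71),
]
print(find_scenes(tags, "action"))  # → [(0.0, 12.5)]
```

Lowering `min_confidence` trades precision for recall, which is exactly the kind of dial an editor or trailer-cutting tool would expose.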

The software is also able to recognise the context of an action or expression. For example, a James Bond villain laughing means something completely different from the laughter in a rom-com. Once computer vision gets to grips with all these nuances, it could even edit and organise scenes to tell a story.

We’re a long way from replacing long nights in the editing suite, but the technology could certainly help filmmakers assemble a rough cut or source a replacement scene from many hours of footage.

Product placement and brand promotion

Other commercial opportunities include product placement and brand visibility in a film or TV series. Until recently, product placement budgets were set by a relatively simple formula: the size of the audience, time on screen and perhaps the added value of a superstar drinking a certain brand of soft drink.

But what if you could measure the value of a t-shirt logo glimpsed for only a few seconds? Mobius Labs is building detection software that can pick out the tiniest glimpse of a brand, determine the context and report the findings back to the platform.
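To make the reporting side concrete (again, a hypothetical sketch with made-up brand names, not the actual Mobius Labs pipeline), raw per-glimpse detections can be aggregated into a per-brand summary of total screen time and prominence:

```python
from collections import defaultdict

def brand_report(detections):
    """Aggregate raw detections into per-brand totals.

    Each detection is (brand, duration_s, fraction_of_frame):
    how long the logo was visible and how much of the frame it filled.
    """
    totals = defaultdict(lambda: {"seconds": 0.0, "max_area": 0.0})
    for brand, seconds, area in detections:
        totals[brand]["seconds"] += seconds
        totals[brand]["max_area"] = max(totals[brand]["max_area"], area)
    return dict(totals)

# Illustrative detections from a few minutes of footage
detections = [
    ("AcmeCola", 2.0, 0.04),   # t-shirt logo, brief glimpse
    ("AcmeCola", 1.5, 0.10),   # close-up on a can
    ("ZetaWear", 0.5, 0.01),
]
report = brand_report(detections)
print(report["AcmeCola"])  # {'seconds': 3.5, 'max_area': 0.1}
```

A real valuation model would weight these numbers by context (who is holding the product, in what kind of scene), which is precisely where the contextual recognition described above comes in.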

This is also highly valuable in scenarios such as gaming or live streaming. What if a family-friendly corporation discovers that its logo is on a t-shirt worn by a competitor playing a violent video game? With real-time detection, the brand can quickly advise the participant to change into something more neutral at the next break.

Computer vision can also help widen the global appeal of films or series by flagging content that is inappropriate for a specific territory. Mobius Labs software is already being used by some clients to help remove nudity, violence or consumption of narcotics. By reaching a truly global audience, streaming platforms can increase the return on investment on programming that potentially costs millions of dollars per episode.

Advertising, audience engagement, and programming ROI are only a few of the possibilities. Indeed, once you understand the goals and challenges facing the broadcasting sector, the potential for computer vision is boundless. As the market becomes increasingly saturated, the technology will be at the heart of every platform’s efforts to retain and grow its audience in the coming decade.



Written by Peter Springett
Illustrated by Xana Ramos

Mobius Labs GmbH is receiving additional funding from the ProFIT program of the Investment Bank of Berlin. The goal of the ProFIT project “Superhuman Vision 2.0 for every application - no code, customizable, on-premise AI solutions” is to revolutionize the work with technical images. This project is co-financed by the European Fund for Regional Development (EFRE).

In the ProFIT project, we are exploring models that can recognize various objects and keywords in images and can also detect and segment these objects into specific pixel locations. Furthermore, we are investigating the application of ML algorithms on edge devices, including satellites, mobile phones, and tablets. Additionally, we will explore the combination of multiple modalities, such as audio embedded in videos and the language extracted from that audio.