
Enhancing EditShare’s FLOW Media Asset Management with Automated AI

Customer Stories

February 6, 2024
Introduction
“The EditShare team studied the market and found that Mobius Labs offered the right vision with an incredible technology stack that meant partnering with them was the best solution to meet our customer’s needs.”
Stephen Tallamy

CTO, EditShare

EditShare empowers the media and entertainment industry to craft and share their stories through smart technology, offering a number of Emmy award-winning solutions for collaborative storage and media management workflows.

Founded in 2004, EditShare is headquartered in Watertown, Massachusetts, with additional offices in the UK and Australia.

The Challenge

EditShare customers were looking for a solution to the endless task of manually adding metadata to each of their assets within the EditShare ecosystem.

They wanted a way for the data to be available for everyone in a variety of roles including Producers, Media Assistants and Editors. They knew there must be a faster and more efficient way to do this.


Solution

Integrating the Mobius Labs SDK through its APIs allowed Mobius Labs to offer EditShare customers a wide choice of solutions to this problem, including the capabilities below (a short illustrative sketch follows the list):

Automated Video Keyword Identification: analyzes video content and describes it using a catalog of thousands of tags.

Shot Detection: highlights relevant shots and automatically creates segments.

Face Detection: saves users from manually reviewing and tagging content by adding tags drawn from a database of thousands of celebrities and public figures.
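
To make these capabilities concrete, here is a minimal sketch of how an automation script might call such an SDK on a single clip. The module `mobius_sdk` and every class, method and parameter name in it are illustrative assumptions for this article, not the actual Mobius Labs interface.

```python
# Hypothetical sketch only: module, class and method names are assumptions,
# not the actual Mobius Labs SDK interface.
from mobius_sdk import VideoTagger, ShotDetector, FaceRecognizer  # assumed names

VIDEO_PATH = "clips/interview_0042.mov"

# Automated Video Keyword Identification: describe the content with tags.
keywords = VideoTagger(model="general-tags").predict(VIDEO_PATH, top_k=25)

# Shot Detection: split the clip into segments at shot boundaries.
shots = ShotDetector().detect(VIDEO_PATH)  # e.g. [(0.0, 4.2), (4.2, 9.8), ...]

# Face Detection: match faces against a database of public figures.
faces = FaceRecognizer(database="public-figures").identify(VIDEO_PATH)

# Collect everything into a single metadata record for the asset.
metadata = {
    "keywords": [tag for tag, score in keywords if score > 0.5],
    "shots": shots,
    "people": sorted({face.name for face in faces}),
}
```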

Fully integrated into EditShare’s FLOW MAM, the solution uses FLOW automation to scan and process media, sending matching metadata back to the FLOW central database.

Search, using the FLOW panel integrations in Adobe Premiere or DaVinci Resolve, then becomes a very simple task for any FLOW user, whether a Producer, Media Manager or an Editor/Colorist.
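
A minimal sketch of that round trip, assuming a generic REST interface, might look like the following. The base URL, endpoint paths, payload fields and authentication shown are assumptions for illustration, not EditShare’s actual FLOW API.

```python
# Hypothetical sketch of pushing AI-generated metadata back into a MAM and
# searching it. Endpoint paths, payload fields and auth are assumptions,
# not EditShare's actual FLOW API.
import requests

FLOW_BASE = "https://flow.example.com/api/v1"   # assumed base URL
HEADERS = {"Authorization": "Bearer <token>"}   # assumed auth scheme

def push_metadata(asset_id: str, metadata: dict) -> None:
    """Attach AI-generated keywords, shots and people to an asset record."""
    resp = requests.put(
        f"{FLOW_BASE}/assets/{asset_id}/metadata",
        json=metadata,
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()

def search_assets(query: str) -> list[dict]:
    """Search assets by keyword, e.g. mood, weather or time of day."""
    resp = requests.get(
        f"{FLOW_BASE}/assets/search",
        params={"q": query},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: tag an asset, then find all sunset footage in the archive.
push_metadata("asset-0042", {"keywords": ["sunset", "beach"], "people": []})
print(search_assets("sunset"))
```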

Results

“The partnership with Mobius Labs and the powerful built-in algorithms mean our customers can immediately benefit from the technology as soon as it’s switched on. Now they can spend less time tagging high-throughput content and more time being creative using FLOW AI.

“By adding Video Tagging, FLOW AI now not only saves time but also allows for further creativity by framing search requests by mood, weather or by time of day. This can help an editor by matching requirements more accurately, whether in a Newsroom, working with unscripted content, in Sports Production, Compliance Editing or in a Post House.

“Mobius AI makes our FLOW toolset much more powerful than before.”

Stephen Tallamy
CTO, EditShare

