Turn Back Time with Visual DNA

May 19, 2023

As anyone familiar with modern-day Media Asset Management (MAM) or Digital Asset Management (DAM) systems will tell you, your image and video archives are only as valuable as your ability to search and retrieve specific content in a cost-effective and timely way that is tailored and appropriate for your business.

If you have a lot of valuable visual content, whether in an archive or a live stream, you need to extract metadata to index, organize and search that content effectively. “Tagging”, “labelling” and “annotating” are terms that are generally used interchangeably; all refer to the act of adding metadata to unstructured collections of images or video at the point of ingest.

The ingest process is both time-consuming and highly critical from an indexing perspective. Choosing appropriate tags and applying them consistently are two extremely important considerations if content is to maintain its future value, whether you are in the business of monetizing visual assets or managing your own visual assets as part of your daily workflow. In short, getting the indexing wrong at the outset can carry a very big cost.

The dream of being able to re-index your content days, weeks or even years after the original ingest with a set of entirely new, more current tags seemed pretty far-fetched, even in the not-too-distant past. But now, thanks to AI metadata and Visual DNA, this is set to become the reality for an ever-increasing number of visual archives.

Traditionally, if you wanted to add new tags to your taxonomy once you had ingested your original content, the standard and only option was to re-ingest all of that content, assigning new, updated tags as you went along.

With media libraries typically running to hundreds of thousands or even millions of hours, this approach made cloud vendors happier and richer, but the time and money involved effectively created a massive barrier to extracting the full value of your content.

Today, innovative AI metadata solutions and Visual DNA mean that this is a thing of the past.

So how is this possible?

Well, it can be a difficult concept to grasp but the “secret” is to capture the essence of the video during the initial ingest process. Here at Mobius Labs we call this the “Visual DNA” of the content.

When Visual DNA is combined with advanced AI, it is possible to effectively turn the clock back and re-index the ingested content in seconds based on any future tags that you choose to create.

So we ingest once, decoding frames and extracting the Visual DNA, the essence of the video, which will:

  1. Enable rapid search
  2. Allow the addition of new concepts (e.g. new tags) without the costly need to re-ingest
  3. Add people to the face database who were not even “known” or popular at the time of the original ingest, and then find them in seconds across your whole archive, without having to re-scan it
  4. Make module updates easy
  5. Allow new modules to be run on already-indexed videos
  6. Save money and countless hours of time
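The ideas above boil down to embedding-based indexing: run the expensive decode-and-encode step once, store the resulting vectors, and match any future tag against those stored vectors. Here is a minimal sketch in Python/NumPy of that general pattern, assuming a CLIP-style joint embedding space; the function names, dimensions and threshold are all hypothetical illustrations, not Mobius Labs’ actual implementation or API.

```python
import numpy as np

# Stand-in for "Visual DNA": per-frame embedding vectors stored at ingest.
# In a real system these would come from a vision encoder run over the
# decoded frames; here we use random vectors purely for illustration.

rng = np.random.default_rng(0)

def normalize(v):
    """Scale vectors to unit length so dot products equal cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Ingest once: 3 clips, 10 frames each, 512-dimensional embeddings.
archive = {f"clip_{i}": normalize(rng.normal(size=(10, 512))) for i in range(3)}

def index_new_tag(tag_embedding, archive, threshold=0.25):
    """Re-index the whole archive against a brand-new tag in one pass over
    the stored embeddings -- no decoding or re-ingest of the video files."""
    tag = normalize(np.asarray(tag_embedding, dtype=float))
    hits = {}
    for clip_id, frames in archive.items():
        scores = frames @ tag          # cosine similarity per frame
        best = float(scores.max())
        if best >= threshold:
            hits[clip_id] = best
    return hits
```

Because the costly step (decoding frames and computing embeddings) happens exactly once, adding a tag later is a cheap vector comparison over the stored Visual DNA rather than a second pass over the media files themselves.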

This is true future-proofing in action, allowing you to turn back time and re-index with an entirely new set of tags at any future point you prefer, without having to go back through the costly and time-consuming ingest process again from scratch. So, no more sleepless nights agonising over whether you got your taxonomy right at the outset!

To discover other ways in which you can future-proof your media library, why not download our latest guide “5 Essential Steps to Future-Proof your Visual Assets with AI Metadata”.

Mobius Labs GmbH is receiving additional funding from the ProFIT program of the Investment Bank of Berlin. The goal of the ProFIT project “Superhuman Vision 2.0 for every application – no code, customizable, on-premise AI solutions” is to revolutionize the work with technical images. This project is co-financed by the European Fund for Regional Development (EFRE).

In the ProFIT project, we are exploring models that can recognize various objects and keywords in images and can also detect and segment these objects into specific pixel locations. Furthermore, we are investigating the application of ML algorithms on edge devices, including satellites, mobile phones, and tablets. Additionally, we will explore the combination of multiple modalities, such as audio embedded in videos and the language extracted from that audio.