

Thought leadership

Unlocking the True Potential of Content with Visual AI

February 16, 2023
September 7, 2022


I am always fascinated by why we look at pictures. I humbly put forward the proposition that to communicate visually is deeply human.

For example, among the earliest stories we can trace about our ancestors are the cave paintings found in various locations around the world, such as Spain, France, Indonesia and Borneo. The oldest known example is dated as early as 60,000 BC and was made by a Neanderthal. In fact, visual communication might have predated language, and some historians are now suggesting that language itself derived from visual communication.

Caption: Chauvet Cave Paintings (circa 30,000 BC), with the corresponding metadata output of the Mobius tagging system.

Both our visual and linguistic faculties have evolved in leaps and bounds, and together they are the superpower that enabled us to become the dominant species on this planet. We now live in a land of big data: it is estimated that the internet holds around 18 zettabytes of data (that is, 18,000,000,000,000,000,000,000 bytes), and the majority of it is visual content, either images or video. To put this in perspective, that is a thousand times more bytes than there are grains of sand on Earth.

That said, the act of storytelling remains at the core of visual communication. For example, a clever and humorous take on how to make a Harry Potter-style effect with a stolen longboard gathered more than 2.2 billion views on TikTok; a video of a baby shark dance, though an acute earworm, has gathered more than 11 billion views on YouTube; and a conventional press-style image of Lionel Messi announcing his departure from Barcelona gathered 21 billion views on Instagram.

The Challenge and the Opportunity

The digital asset management community faces the gargantuan task of storing, organising, retrieving and distributing these assets effectively. Businesses have invested a lot of money in acquiring content, but if that content is not discoverable it is practically useless. There are businesses whose catalogues amount to only a few thousand assets, yet who can make it look like millions by bringing out the story, aesthetics and diversity within the content. Conversely, there are businesses with millions of assets in their catalogue that, to the larger public, look like a paltry handful, because search and presentation are broken.

To make optimal use of content, our best solution today is metadata, i.e. data about data. It is the heart of any DAM system: the modern-day compass and map that helps us discover content in the wild.
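To make the idea concrete, here is a minimal sketch of what an enriched asset record and a naive tag-based search might look like. The field names and values are illustrative assumptions, not the schema of any particular DAM system:

```python
# Illustrative asset records; field names are hypothetical, not a real DAM schema.
catalog = [
    {
        "asset_id": "img-001",
        "filename": "yoga_sunrise.jpg",
        "tags": ["woman", "yoga", "beach", "happiness", "calm", "mindfulness"],
        "created": "2022-06-14",
    },
    {
        "asset_id": "img-002",
        "filename": "city_traffic.jpg",
        "tags": ["street", "cars", "rush hour", "stress"],
        "created": "2022-06-15",
    },
]

def search(catalog, query_tags):
    """Return the ids of assets whose tags contain every query tag."""
    wanted = {t.lower() for t in query_tags}
    return [
        asset["asset_id"]
        for asset in catalog
        if wanted.issubset({t.lower() for t in asset["tags"]})
    ]

print(search(catalog, ["calm", "yoga"]))  # → ['img-001']
```

Without the `tags` field populated, neither asset is findable by any query other than its filename, which is exactly how content gets buried.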

However, populating metadata is no easy task. First of all, it is severely time-consuming, tedious and hard to scale with growing content. In fact, lack of metadata is one of the principal causes of an asset getting buried forever in large databases. Yet the practical realities of time and budget often cause businesses to leave metadata unpopulated.

Recent advances in AI have started to solve the metadata problem cost-effectively. For the first 60 years of computing history, it was thought that machines could not handle matters of perception (i.e. the ability to see, talk or understand language). The work of the past five years or so has completely changed the situation. Modern computer vision systems are able to mimic visual perception to a large extent: using automated AI systems such as the one we developed, we can extract metadata of very high granularity (including conceptual ideas) from both images and videos with high accuracy.

Use Cases

One of the main advantages of modern AI, such as the solutions provided by Mobius Labs, is that it enables users to know their content deeply. One of the questions people often ask me is where to start integrating AI into their systems. My answer is always the same: start from the business objective.

From a top-level perspective, there are two core areas where visual AI can support your business: it can drive up revenue significantly, and it can reduce operating costs.

Driving Up Revenue

Visual AI can increase your revenue by powering core applications such as search and recommendation.

One of our clients, ANP, the leading Dutch press agency, was adding 50,000 images to their database every day, including up to 2,000 from its own team of photographers, on top of an archive of over 100 million images. A full case study of this can be found here. Given the highly time-critical nature of the press business, their photo team needed a way to scale up their operation and keep up with these images in as close to real time as possible. We worked with them on a solution that indexes their content with highly relevant tags capturing not merely the objects in a photograph, but also its emotional content. For example, a photograph of a woman practising yoga was enriched with abstract tags such as "happiness", "calm" and "mindfulness".

Face recognition was also a priority for ANP. To evaluate this feature, ANP sent Mobius Labs and a second organisation images to train their machine learning models. Once this step was completed, the ANP team picked another 300 images for classification. Mobius Labs achieved a 94% success rate that astonished and impressed the agency in equal measure. Patrick Rasenberg, Product Manager Photo at ANP, says, “Mobius met our two main objectives: highest success rate, including emotional recognition, while demonstrating a clear understanding of our industry. Together they contributed to our decision to select their computer vision technology.”

"The speed and accuracy of the Mobius SDK is excellent. It will ensure that we squeeze as much revenue as possible from our archive and new additions to the collection,” says Patrick.

Saving Costs

Getting metadata in order is manual, tedious and expensive, since it needs an internal and external workforce to support it. At the same time, it is not the most creative or efficient use of one's time. This is where automation comes into the picture. The Mobius Labs solution is often able to reduce operational costs by over 80% through automated tagging and face identification, with creatives doing basic quality assurance on a subset of the results and the learnings being used to retrain the AI system.
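The "quality assurance on a subset" workflow above can be sketched as simple confidence-based routing: high-confidence predictions are accepted automatically, while uncertain ones go to a human review queue. The threshold, field names and scores below are illustrative assumptions, not Mobius Labs' actual pipeline:

```python
# Route auto-generated tags by model confidence so creatives only review
# the uncertain subset. All values here are toy examples.
REVIEW_THRESHOLD = 0.80  # tags below this confidence go to human QA

predictions = [
    {"asset_id": "img-101", "tag": "mindfulness", "confidence": 0.95},
    {"asset_id": "img-102", "tag": "stress", "confidence": 0.55},
    {"asset_id": "img-103", "tag": "happiness", "confidence": 0.88},
]

def split_for_review(predictions, threshold=REVIEW_THRESHOLD):
    """Partition predictions into auto-accepted and needs-human-review lists."""
    auto_accept, needs_review = [], []
    for p in predictions:
        if p["confidence"] >= threshold:
            auto_accept.append(p)
        else:
            needs_review.append(p)
    return auto_accept, needs_review

accepted, review = split_for_review(predictions)
# Corrections gathered in the review queue can later be fed back as
# training data to retrain the system, closing the loop.
```

The cost saving comes from the ratio: if only a small fraction of predictions falls below the threshold, only that fraction needs human attention.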

What is really fascinating is that recent advances in AI have also provided tools for the creative community. For example, ensuring brand compliance is a necessity, but it is really time-consuming across large databases. Mobius Labs' aesthetics and custom-training module allows creatives to specify brand guidelines via a few example visuals and use AI to home in on the content that fits these guidelines.
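One common way such "learn from a few examples" systems work is to embed every visual as a vector and rank catalogue assets by similarity to the example visuals. The sketch below assumes this embedding-plus-cosine-similarity approach with toy three-dimensional vectors; a real system would use high-dimensional embeddings from a trained vision model, and this is not necessarily how the Mobius Labs module is implemented:

```python
# Rank catalogue assets by cosine similarity to the mean embedding of a few
# brand-guideline example visuals. Embeddings here are toy numbers.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def mean_vector(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

brand_examples = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]  # on-brand example visuals
catalogue = {
    "img-201": [0.85, 0.15, 0.05],  # close to the brand examples
    "img-202": [0.05, 0.10, 0.95],  # off-brand
}

query = mean_vector(brand_examples)
ranked = sorted(catalogue, key=lambda k: cosine(catalogue[k], query), reverse=True)
print(ranked)  # → ['img-201', 'img-202']
```

The appeal of this pattern is that creatives never write rules: a handful of example visuals defines the guideline, and the ranking does the rest.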

The Future is Together

The Digital Asset Management industry is going through a renaissance, moving at full throttle from mere data storage to a complete suite of tools that serve the end-to-end lifecycle of content: from the moment an asset is created, to reaching a distribution channel, to channelling the learnings from the field back to optimise the system.

AI, from our perspective, is possibly the most exciting piece to have come along, one that can accelerate and spark new innovation. Its core advantage is that it handles content at scale (a few million distinct assets is not a problem for AI), giving us superhuman capabilities. Furthermore, the modern breed is trainable and usable by people without coding skills: an ordinary user can impart their knowledge to these systems and put them into workflows. It is fertile ground for novel recipes that mix human ingenuity and machine capabilities.

One of the ways we enable this at Mobius Labs is by partnering with some of the leading DAM vendors. We have announced partnerships with Bynder, Cloudinary and EditShare. We are also integrated into Canto, OrangeLogic and Extensis to support clients who use these services.

Quoting Brad Kofeod, Senior Vice President of Global Alliances and Channels of Bynder: “Mobius Labs delivers a solution that offers deep content insights alongside simple management and powerful performance. We are particularly impressed with the data points and results we have seen from Mobius Labs.”

Let us Mingle

DAM and AI have gone from nice-to-have to must-have solutions in the last 5 to 10 years. They sit at an intersection where both user experience and business value are uplifted manifold. We'll be attending DAM New York on September 15th and 16th, and we are looking forward to the new ideas and projects that can be spawned there. It is a unique chance to put our customers at the forefront and deliver value unparalleled by what was offered before.

If you are also attending, please visit us at stand #20 or schedule a meeting with us following this link.

Written by
Appu Shaji

Mobius Labs GmbH is receiving additional funding from the ProFIT program of the Investment Bank of Berlin. The goal of the ProFIT project “Superhuman Vision 2.0 for every application: no-code, customizable, on-premise AI solutions” is to revolutionize work with technical images. This project is co-financed by the European Fund for Regional Development (EFRE).

In the ProFIT project, we are exploring models that can recognize various objects and keywords in images and can also detect and segment these objects into specific pixel locations. Furthermore, we are investigating the application of ML algorithms on edge devices, including satellites, mobile phones, and tablets. Additionally, we will explore the combination of multiple modalities, such as audio embedded in videos and the language extracted from that audio.