Partnership with Bynder

February 16, 2023
December 6, 2021
We are thrilled to announce our official partnership with Bynder, a leading digital asset management (DAM) platform. Bynder offers seamless integrations with some of the biggest names in the software industry. With the addition of Mobius Labs to their roster of integrations, Bynder customers can analyse, classify and deploy visual media with groundbreaking quality and speed.

The partnership comes at a critical moment as demand for AI and computer vision, especially in marketing and media organisations, is increasing dramatically due to the sheer volume of digital assets managed by these businesses. Brad Kofoed, the Senior Vice President of Global Alliances and Channels at Bynder commented, “We are pleased to partner with Mobius Labs and to welcome them to the Bynder Marketplace offering an AI and computer vision-led solution for our clients.” Talking about our technology, Kofoed added: “Mobius Labs delivers a solution that offers deep content insights alongside simple management and powerful performance. We are particularly impressed with the data points and results we have seen from Mobius Labs.”

Our partnership with Bynder also reflects major advances in computer vision over the last couple of years. This includes the innovative ‘edge’ solutions, where AI runs on local apps or devices instead of third-party servers and data centers. As a leader in this space, our technology enables Bynder customers to deploy Superhuman Vision™ as an SDK on-premise, thereby protecting content and maintaining complete data privacy.

Bynder partners with top agencies, digital consultants, DAM experts, and technology leaders who can take advantage of Superhuman Vision™'s image and video keywording, similarity search, aesthetic ranking and facial recognition. While many features are available out of the box, organisations can also train the software to recognise the content and context of their archives. This ‘no-code’ interface enables non-technical employees to fine-tune the technology without internal IT specialists or external resources.

“We are extremely proud of this new partnership with Bynder and to be part of their established marketplace is a testament to our teams’ dedication and hard work over the last few years,” Appu Shaji, our CEO and Chief Scientist commented. “Bynder is recognised for working with global brands in the DAM sector and we look forward to bringing our solution suite to these companies and working with the Bynder team to expand our offering even further,” he added.

We look forward to working with Bynder and ensuring a successful partnership with them.

Interested in becoming a partner? Check out our Partners page or contact Idan Nelkenbaum, our Partnerships Team Lead.

Written by
Peter Springett
Edited by
Trisha Mandal
Mobius Labs GmbH is receiving additional funding through the ProFIT program of the Investment Bank of Berlin. The goal of the ProFIT project “Superhuman Vision 2.0 for every application: no-code, customizable, on-premise AI solutions” is to revolutionize the work with technical images. This project is co-financed by the European Fund for Regional Development (EFRE).

In the ProFIT project, we are exploring models that can recognize various objects and keywords in images and can also detect and segment these objects down to specific pixel locations. We are also investigating the deployment of machine learning algorithms on edge devices, including satellites, mobile phones, and tablets. Additionally, we will explore combining multiple modalities, such as audio embedded in videos and the language extracted from that audio.