Contact a Mobian

We’d love to hear from you!
Drop us a message and we’ll get back to you soon.

Wanna be a Mobian?

Join the team behind the world's most advanced computer vision solutions.

Our Mission

To create next-generation computer vision technology that allows everyone to make the most of ever-growing visual media.

Open positions

At this time, we don't have any job vacancies. Be sure to check back later.

Benefits & Perks

Remote-first

Cafe, living room couch, beach house patio. Our team works from wherever they find it convenient (time zones allowing!).

Flexible benefits

Lifestyle, health & wellness, education, travel: choose the benefits that mean the most to you with a monthly perks budget from Heyday.

The best gear

Laptop of your choice and everything else that you need to work efficiently.

Personalized avatars

Customized, personal avatars to cement your identity as a true 'Mobian', designed to accompany your internal and external communications.

Good times (with masks)

Global off-site events where we get the entire company together for team building, socializing and having fun in exciting places.

Time to de-stress

30 annual vacation days and flexible working hours help you maintain that much-needed work-life balance.

Do you have questions?
Drop us a message.

Mobius Labs GmbH receives additional funding from the ProFIT program of the Investment Bank of Berlin. The goal of the ProFIT project "Superhuman Vision 2.0 for every application - no code, customizable, on-premise AI solutions" is to revolutionize the work with technical images. This project is co-financed by the European Fund for Regional Development (EFRE).

In the ProFIT project, we are exploring models that can recognize various objects and keywords in images and can also detect and segment these objects down to specific pixel locations. We are also investigating how to run ML algorithms on edge devices, including satellites, mobile phones, and tablets. Additionally, we will explore combining multiple modalities, such as audio embedded in videos and the language extracted from that audio.