
4 Ways Superhuman Computer Vision Is Transforming Journalism and Broadcasting

May 19, 2023 · 4 min read
Introduction

Superhuman Vision is next-generation computer vision technology by Mobius Labs that empowers journalists by taking over many time-consuming and mundane activities, leaving them free to focus on the human side of reporting.

In recent years, computer vision solutions have helped break some of the biggest international news stories. Elsewhere, this technology combats the spread of fake news and bolsters the narrative skills of photo-editors.


Here are just some of the areas where computer vision supports these activities, as well as the skills that journalists need to flourish in this new environment.

1. Research: the demand for data literacy

Computer vision has played a role in breaking some of the biggest international stories of recent years. One example is the ‘Panama Papers’ exposé, in which machine learning helped an international team of researchers identify loan agreements among more than 13 million records leaked to the press.

Journalists were able to ‘follow the money’, exposing the practices of offshore tax havens and the businesses taking advantage of tax loopholes. Machine learning also played a valuable role in the Implant Files investigation, where it sifted through reports sent to the U.S. Food and Drug Administration and helped uncover patient deaths potentially caused by faulty medical devices.

For now, most machine learning is supervised by software engineers. But in the future, ‘domain experts’, including journalists, will play a greater role. If machine learning can help doctors extract valuable information from X-rays and CT scans in order to triage patients, then it should be able to assist journalists further. As the opportunities for big-data investigations increase, reporters may need to acquire the skills to direct machine-learning research.
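To make that concrete, here is a minimal sketch, on invented data, of the kind of supervised document classification used in leak investigations: a reporter hand-labels a small sample, trains a model, and lets it score the rest of the archive for likely loan agreements. The actual Panama Papers tooling is not public, so everything below is illustrative.

```python
# Minimal sketch: flag likely loan agreements in a document leak.
# All documents, labels and file contents are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny hand-labelled training sample (1 = looks like a loan agreement).
documents = [
    "This loan agreement is made between the lender and the borrower.",
    "Minutes of the annual shareholder meeting held in Panama City.",
    "The borrower agrees to repay the principal together with interest.",
    "Invoice for consulting services rendered in the fiscal year 2014.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(documents, labels)

# Score unseen records from the leak and surface candidates for human review.
leak = ["Agreement whereby the lender advances funds to the borrower."]
for doc, prob in zip(leak, model.predict_proba(leak)[:, 1]):
    print(f"loan-agreement probability {prob:.2f}: {doc[:50]}...")
```

In practice the labelled sample would be far larger, and every machine-flagged hit would still be verified by a human, just as the investigative teams did.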

2. Categorizing content: how computer vision is transforming photo agencies

Digital photography has transformed the business models of photo agencies. Gone are the days of thousand-dollar invoices for the exclusive rights to a photograph. Instead, agencies are moving to a high-volume, low-margin, royalty-free model.

For this to be viable, many agencies are adopting computer vision technology powered by deep learning, which can tag images with their contents, including objects, people and emotions. This makes it easier for broadcast media to find, and then purchase, images to illustrate their own content.
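As a rough illustration, the sketch below approximates this kind of content tagging with an open-source zero-shot model (CLIP, via the Hugging Face transformers pipeline). The tag list and image path are placeholder assumptions; commercial systems such as Superhuman Vision use their own, non-public models.

```python
# Illustrative zero-shot image tagging with an open-source CLIP model.
# "newsroom_photo.jpg" and the candidate tags are placeholders.
from transformers import pipeline

tagger = pipeline("zero-shot-image-classification",
                  model="openai/clip-vit-base-patch32")

candidate_tags = ["press conference", "protest", "football match",
                  "portrait", "crowd", "celebration"]

results = tagger("newsroom_photo.jpg", candidate_labels=candidate_tags)
for r in results:
    if r["score"] > 0.15:  # keep only reasonably confident tags
        print(f'{r["label"]}: {r["score"]:.2f}')
```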

Such technology includes facial recognition, essential when classifying archives that contain millions of photographs gathered over decades. The most advanced systems, like Superhuman Vision, also include an ‘aesthetic ranking’ feature, which enables photographers to filter their best images based on composition, depth of field, position of the subject, contrast and other criteria.

Computer vision enables these specialists to focus on their core strengths. Photo-editors have more time to examine the narrative possibilities of new pictures. Photographers, who may take thousands of images at a single event, no longer have to sift through every single frame; instead, the software recommends a much smaller set of images to work from.
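The filter-and-recommend workflow itself is simple to sketch. The snippet below ranks a folder of shots by two crude stand-ins for aesthetic quality, edge sharpness and contrast; a real aesthetic-ranking model is trained on human preferences, and the folder name here is an assumption.

```python
# Crude stand-in for learned aesthetic ranking: score each photo by
# sharpness (variance of an edge filter) plus contrast, keep the top 20.
from pathlib import Path

import numpy as np
from PIL import Image, ImageFilter

def score(path: Path) -> float:
    img = Image.open(path).convert("L")  # grayscale
    gray = np.asarray(img, dtype=np.float32)
    edges = np.asarray(img.filter(ImageFilter.FIND_EDGES), dtype=np.float32)
    return float(edges.var() + gray.std())  # sharpness + contrast

shots = sorted(Path("event_photos").glob("*.jpg"), key=score, reverse=True)
print("suggested picks:", [p.name for p in shots[:20]])
```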

3. Writing copy: the advance of automation

Artificial intelligence can write news articles, just not very complicated ones. In most cases, such systems rely on what is known as robo-journalism: the automated writing of stories based on structured data. This works well for deadline-driven stories based on sports results, financial news, weather and elections, to name but four.

The Radar News Service, in the UK, is a good example of this approach at scale. Launched in 2018, its five reporters filed 250,000 articles in the first 18 months of service. The journalists use specialist natural language generation (NLG) technology to draft articles based on data sets released by the UK government.

Using their investigative skills, the journalists identify data sets from which they can derive a story and then build a template into which the data and standard phrases can be assembled. Stories are then published to subscribers, especially local news outlets, which may publish the original content or use it as the basis for their own reporting.
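The underlying pattern is easy to show. Below is a minimal, invented example of template-based generation in which structured match data is merged into a reporter-written template with one conditional phrase; Radar's actual tooling is proprietary, so this only illustrates the approach.

```python
# Minimal robo-journalism sketch: structured data + reporter-written template.
# The match result below is invented for illustration.
result = {"home": "Leeds United", "away": "Hull City",
          "home_goals": 3, "away_goals": 1, "venue": "Elland Road"}

def verdict(home_goals: int, away_goals: int) -> str:
    """Pick the standard phrase a reporter would write into the template."""
    if home_goals == away_goals:
        return "drew with"
    return "beat" if home_goals > away_goals else "lost to"

story = (f"{result['home']} {verdict(result['home_goals'], result['away_goals'])} "
         f"{result['away']} {result['home_goals']}-{result['away_goals']} "
         f"at {result['venue']} on Saturday.")
print(story)  # Leeds United beat Hull City 3-1 at Elland Road on Saturday.
```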

For journalists to flourish in this environment, being able to work with automation templates is essential. Some simple programming knowledge is also required. It’s another good example of AI empowering news reporters, helping them to be more effective at their jobs.

4. Facts versus fiction: building trust in the age of social media and deep fakes

Technology has impacted fact checking in several ways. It supports the collaboration of hundreds of journalists and fact checkers across different offices. In the case of the Implant Files story (see section 1 above), fact-checking involved a team of 11 people manually sifting the results to make sure that every case flagged by the algorithm was correctly identified.

Machine learning has a role to play in other fact-checking scenarios, either supporting human specialists or validating information itself. On the one hand, it can help separate out sentences containing claims, making it easier for fact checkers to focus on statements that require validation. On the other, it can verify claims autonomously, checking them against databases of information in real time.
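The first of those tasks, claim spotting, can be sketched with a general-purpose zero-shot classifier that separates checkable statements from opinion so fact checkers can triage. The labels and example sentences below are illustrative assumptions, not a production setup.

```python
# Hedged sketch of claim spotting with a zero-shot NLI classifier.
from transformers import pipeline

detector = pipeline("zero-shot-classification",
                    model="facebook/bart-large-mnli")

sentences = [
    "Unemployment fell by 2.3 percent last quarter.",   # checkable claim
    "I think the minister handled the crisis poorly.",  # opinion
]
for sentence in sentences:
    out = detector(sentence,
                   candidate_labels=["verifiable factual claim", "opinion"])
    if out["labels"][0] == "verifiable factual claim":
        print("send to fact checkers:", sentence)
```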

Machine learning also plays a valuable part in the technological arms race against fake news and its propagators. As the volume of misinformation increases, it will play an ever-greater role in detecting such content and preventing it from contaminating articles authored by journalists and robo-journalists.

Welcome to a new era of journalism

Newsrooms today need to keep delivering quality content on the go. Superhuman Vision offers features like Few-shot Learning, facial expression analysis and conceptual image tagging, which return better search results and help press agencies sort their huge visual archives in a matter of seconds. Furthermore, it improves business efficiency by taking on time-consuming research and fact-checking activities.

This cutting-edge technology is not meant to replace journalists, but to empower them. At a time when free speech and honest broadcasting are under threat, it frees up reporters to focus on what they do best: nurturing sources, coordinating research, and assembling articles using professional expertise and creative skills. Welcome to a new era of journalism, more human and more relevant than ever. Are you ready?



Written by Peter Springett | Edited by Christian Konigs