
Understanding deep-sea images with artificial intelligence

GEOMAR research team develops new workflow for image analysis

Date:
September 10, 2018
Source:
Helmholtz Centre for Ocean Research Kiel (GEOMAR)
Summary:
More and more data and images are generated during ocean research. In order to be able to evaluate the image data scientifically, automated procedures are necessary. Researchers have now developed a standardized workflow for sustainable marine image analysis for the first time.

The evaluation of very large amounts of data is becoming increasingly relevant in ocean research. Diving robots and autonomous underwater vehicles, which carry out measurements independently in the deep sea, can now record large quantities of high-resolution images. To evaluate these images scientifically in a sustainable manner, a number of prerequisites have to be met in data acquisition, curation and data management. "Over the past three years, we have developed a standardized workflow that makes it possible to scientifically evaluate large amounts of image data systematically and sustainably," explains Dr. Timm Schoening from the "Deep Sea Monitoring" working group headed by Prof. Dr. Jens Greinert at GEOMAR. The background to this was the JPI Oceans project "MiningImpact": the autonomous underwater vehicle ABYSS was equipped with a new digital camera system to study the ecosystem around manganese nodules in the Pacific Ocean. With the data collected in this way, the workflow was designed and tested for the first time. The results have now been published in the international journal Scientific Data.

The procedure is divided into three steps: data acquisition, data curation and data management, each of which involves defined intermediate steps. For example, it is important to specify how the camera is to be set up, which data are to be captured, and which lighting is useful in order to answer a specific scientific question. In particular, the metadata of the diving robot must also be recorded. "For data processing, it is essential to link the camera's image data with the diving robot's metadata," says Schoening. The AUV ABYSS, for example, automatically recorded its position, the depth of the dive and the properties of the surrounding water. "All this information has to be linked to the respective image because it provides important information for the subsequent evaluation," says Schoening. An enormous task: ABYSS collected over 500,000 images of the seafloor in around 30 dives. Various programs, developed by the team especially for this purpose, brought the data together; unusable image material, such as blurred images, was removed at this stage.
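The article does not show how this linking and filtering is done in code, but the idea can be illustrated with a minimal Python sketch: match each image to the navigation record closest in time and discard frames whose sharpness falls below a threshold. All field names, file layouts and threshold values below are assumptions for illustration, not the GEOMAR tools.

```python
import bisect
from dataclasses import dataclass

import numpy as np


@dataclass
class NavRecord:
    """One entry of the AUV's navigation log (fields are assumptions)."""
    timestamp: float   # seconds since the start of the dive
    latitude: float
    longitude: float
    depth_m: float


def nearest_nav(nav_log, t):
    """Return the navigation record closest in time to an image timestamp.
    Assumes nav_log is sorted by timestamp."""
    times = [r.timestamp for r in nav_log]
    i = bisect.bisect_left(times, t)
    candidates = nav_log[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda r: abs(r.timestamp - t))


def sharpness(gray):
    """Crude blur metric: variance of a discrete Laplacian of the image.
    Low values suggest motion blur; the cut-off used below is invented."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    return float(lap.var())


def curate(images, nav_log, blur_threshold=50.0):
    """Attach the nearest nav record to every image and drop blurred frames.

    `images` is an iterable of (timestamp, grayscale ndarray) pairs."""
    curated = []
    for timestamp, gray in images:
        if sharpness(gray.astype(float)) < blur_threshold:
            continue   # unusable image material, e.g. motion blur
        curated.append({
            "time": timestamp,
            "nav": nearest_nav(nav_log, timestamp),
            "image": gray,
        })
    return curated
```

The published workflow applies the same idea at terabyte scale and with the actual AUV metadata formats; the record layout and blur threshold here are placeholders only.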

All these processes are now automated. "Until now, however, a large number of time-consuming steps had been necessary," says Schoening. "Now the method can be transferred to any project, even with other AUVs or camera systems." The material processed in this way was then made permanently available to the general public.

Finally, artificial intelligence in the form of the specially developed algorithm "CoMoNoD" was used for the evaluation at GEOMAR. It automatically records whether manganese nodules are present in a photo, and if so, their size and position. The individual images could then be combined, for example, into larger maps of the seafloor. The next use of the workflow and the newly developed programs is already planned: on the next expedition to the manganese nodule fields in spring next year, the image material will be evaluated directly on board. "That is why we will take some particularly powerful computers with us on board," says Timm Schoening.
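The CoMoNoD algorithm itself is specified in the Scientific Data paper; purely as an illustration of the kind of per-image output described here (presence, size and position of nodules), the following is a deliberately simplified Python sketch based on thresholding and connected-component labeling. It is not the GEOMAR implementation, and the threshold values are invented.

```python
import numpy as np
from scipy import ndimage


def detect_nodules(gray, dark_threshold=0.35, min_area_px=20):
    """Toy nodule detector (illustrative only, not CoMoNoD).

    Assumes nodules appear as dark, compact blobs on brighter sediment
    in a grayscale image scaled to 0..1. Returns (row, col, area_px)
    for every sufficiently large blob."""
    mask = gray < dark_threshold            # dark pixels = candidate nodules
    labels, n_blobs = ndimage.label(mask)   # connected components
    detections = []
    for i in range(1, n_blobs + 1):
        blob = labels == i
        area = int(blob.sum())
        if area < min_area_px:
            continue                        # too small, likely noise
        row, col = ndimage.center_of_mass(blob)
        detections.append((row, col, area))
    return detections


# Hypothetical usage on a synthetic frame with one dark "nodule":
frame = np.ones((100, 100))
frame[40:50, 60:72] = 0.1
print(detect_nodules(frame))                # roughly [(44.5, 65.5, 120)]
```

Per-image positions of this kind, combined with the georeferencing from the AUV metadata, are what allow individual detections to be merged into larger maps of the seafloor.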


Story Source:

Materials provided by Helmholtz Centre for Ocean Research Kiel (GEOMAR). Note: Content may be edited for style and length.


Journal Reference:

  1. Timm Schoening, Kevin Köser, Jens Greinert. An acquisition, curation and management workflow for sustainable, terabyte-scale marine image analysis. Scientific Data, 2018; 5: 180181 DOI: 10.1038/sdata.2018.181

