New imaging and analysis approaches for marine species detection and classification
Photos and videos taken underwater give us a visual impression of ocean processes that would otherwise remain hidden from our eyes. Marine imaging is used to explore and monitor the marine environment and creates comprehensible material for captivating ocean narratives. Imaging can address questions about the ocean across many spatial and temporal scales, from in-situ microscopy to satellite remote sensing, and from slow-motion capture to long-term observatories.
Marine imaging, as a method, covers a range of technical aspects, which makes it a versatile technology employed in many research, industry and public applications. These aspects include sensors, cameras, platforms and illumination, as well as human interaction (e.g. annotation), automated information extraction (e.g. machine learning), and Findable, Accessible, Interoperable and Reusable (FAIR) image publication. This range is expanding as general technology advancements such as hyperspectral imaging and deep learning make their way into ocean applications. The wider availability of underwater platforms equipped with camera systems creates ever-increasing volumes of big data. The availability of commercial low-cost camera systems for deep-sea exploration is not only increasing survey coverage, but is also proving critical for extending this capacity to areas of the world that were previously unable to survey deep-sea ecosystems.
During the iAtlantic project, selected aspects across the marine imaging data workflow were conceptualised, implemented, deployed and operated. This document provides a summary of these developments, which occurred partly during the global Covid-19 pandemic. It includes details on imaging technology, image data, image metadata, and image processing methods. Four main aspects are described: a) underwater hyperspectral imaging, trialled and successfully demonstrated by IFREMER; b) a low-cost camera system designed and built by IMAR and successfully deployed in Portuguese waters; c) making marine image data FAIR for robust science; and d) efficient machine learning applications, the latter two led by GEOMAR.
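To illustrate the idea behind aspect c), a FAIR image record pairs each image with structured, machine-readable metadata under a persistent identifier. The sketch below is a minimal, hypothetical example: the field names and values are illustrative assumptions, not the project's published metadata schema.

```python
import json

# Hypothetical minimal metadata record for one marine image; all field
# names and values are illustrative, not the project's actual schema.
image_record = {
    "image-uuid": "00000000-0000-0000-0000-000000000000",  # persistent ID (Findable)
    "image-datetime": "2021-07-15T10:32:00Z",              # acquisition time, UTC
    "image-latitude": 38.5,                                # decimal degrees
    "image-longitude": -28.4,
    "image-depth": 620.0,                                  # metres below sea surface
    "image-license": "CC-BY-4.0",                          # reuse terms (Reusable)
}

# Serialising to a standard format such as JSON keeps the record
# machine-readable and exchangeable between systems (Interoperable).
record_json = json.dumps(image_record, indent=2)
print(record_json)
```

Publishing such records alongside the images in an open repository is what makes the data set accessible and reusable beyond the original survey.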
Imaging research described in this technical report contributes to iAtlantic objectives in several ways. The efforts in making marine image data FAIR and in developing a FAIR machine learning infrastructure help to align and standardise ocean observing across geographical regions, marine science domains, research institutes and marine sectors. Similarly, the publication of the low-cost camera system and its widespread adoption can lead to the creation of standardised data sets across the Atlantic Ocean. This camera system has been successfully deployed to map deep and open-ocean ecosystems at local and regional scales in the Azorean Exclusive Economic Zone (EEZ). Adoption of the open-hardware concept by others could similarly support the creation of ecosystem data at a global scale. These data have already been used in other aspects of iAtlantic research to assess the stability, vulnerability, and tipping points of ecosystems, e.g., in habitat mapping applications.
However, the most prominent contribution of the imaging work is towards the objective of building and enhancing human and technological capacities. This is represented by the innovative approaches linking underwater hyperspectral imaging with 3D reconstructions, the low-cost camera hardware development, the operationalisation of seagoing high-performance computers, and the standardisation efforts and software development for the FAIR marine image and FAIR machine learning environments. These technologies have been presented and promoted, and are potential elements for commercialisation, supporting a sustainable blue economy.
The technology readiness levels (TRLs) of the three originally proposed aspects of marine imaging (low-cost cameras, hyperspectral imaging, machine learning) were advanced as planned since the start of the project. The newly developed concept of FAIR marine images added a fourth marine imaging aspect, for which the state of the art before iAtlantic was TRL 2 and which was advanced through the project's efforts to TRL 7. The Azor drift-cam was advanced through the project from TRL 6 to TRL 7. For machine learning, selected methods were implemented operationally at TRL 8 (from TRL 7). With the additionally developed FAIR machine learning infrastructure, a generalisation of efficient machine learning towards TRL 9 has been proposed and needs to be operationalised through future projects. The technology for hyperspectral imaging was already at TRL 8 through the use of a proven off-the-shelf system; integration into operational, mission-proven equipment at IFREMER is currently progressing towards TRL 9.
The developments described in detail below have led to ideas for future innovation and requirements for their implementation that are described in the outlook section at the end of this document.
File | Pages | Size | Access
---|---|---|---
Publisher's official version | 92 | 5 MB |