For marine research, AI has transformative potential, improving both the efficiency and accuracy of ecosystem monitoring.





Applying AI to Marine Ecosystem Monitoring
Artificial intelligence has emerged as a transformative tool across scientific fields, particularly in marine ecosystem monitoring. Evaluating fish populations and other marine organisms is crucial for understanding biodiversity, abundance, and biomass, which are vital for promoting sustainable fishing practices and protecting marine habitats. Traditionally, these tasks were carried out manually, requiring the observation of thousands of images to identify and count fish specimens - a process that was both labour-intensive and time-consuming. Comparisons between AI and human methodologies have demonstrated that computer vision methods significantly improve the accuracy and efficiency of these assessments.
Here, we present our experience using AI techniques for fish detection and classification to evaluate an artificial reef constructed as part of the SLAGREEF project. Using the YOLO (You Only Look Once) object detection model, we optimised image analysis to assess the effectiveness of AI in automatically identifying marine species. AI accelerates the process, enabling more efficient monitoring of environmental impacts and ecosystem health.
This technological advancement streamlines the assessment of fish populations, ultimately supporting the development of sustainable environmental management practices.
AI in Practice
We used YOLOv8 to detect fish in images obtained from the OBSEA underwater observatory and other monitoring platforms. Based on convolutional neural networks (CNNs), this model divides each image into a grid, where each cell predicts multiple bounding boxes and class probabilities, achieving fast and accurate detection. We trained five YOLOv8 architectures (Nano, Small, Medium, Large, and Extra Large) on a dataset that included images from the SLAGREEF project and the MINKA platform.
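As a minimal illustration of the grid-based prediction scheme (not actual YOLOv8 code), the sketch below maps a detected box centre to the grid cell responsible for predicting it; the grid size, image dimensions, and box coordinates are illustrative assumptions:

```python
# Illustrative sketch of YOLO-style grid assignment (not actual YOLOv8 code):
# the grid cell containing a box's centre is "responsible" for predicting it.

def responsible_cell(box, image_w, image_h, grid=8):
    """Return the (col, row) grid cell containing the box centre.

    box is (x_min, y_min, x_max, y_max) in pixels; grid is the number of
    cells per side (an illustrative value, not YOLOv8's real stride).
    """
    cx = (box[0] + box[2]) / 2  # box centre, x
    cy = (box[1] + box[3]) / 2  # box centre, y
    col = min(int(cx / image_w * grid), grid - 1)
    row = min(int(cy / image_h * grid), grid - 1)
    return col, row

# A hypothetical fish bounding box in a 640x640 frame.
print(responsible_cell((100, 300, 200, 420), 640, 640))  # -> (1, 4)
```

In the real model, each cell additionally predicts box offsets, an objectness score, and per-class probabilities; this sketch only shows the spatial assignment.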
We fine-tuned the models by testing different training settings: the learning rate (how quickly the model updates during training), data augmentation techniques (transformations of the training images, such as rotation or resizing), and the input image size. Key metrics for evaluating the models include Intersection over Union (IoU), Precision, Recall, and mean Average Precision (mAP):
- IoU measures the accuracy of predicted bounding boxes against the ground truth.
- Precision measures how many of the model's positive predictions are correct, while Recall measures how many of the actual instances the model finds.
- mAP provides an overall evaluation of precision across IoU thresholds (e.g., mAP@0.5 or mAP@0.95).
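To make these metrics concrete, here is a minimal computation of IoU, precision, and recall; the boxes and detection counts are made-up illustrative values, not results from our dataset:

```python
# Intersection over Union for axis-aligned boxes (x_min, y_min, x_max, y_max).
def iou(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

# Two hypothetical fish boxes: prediction vs. ground truth.
print(round(iou((0, 0, 100, 100), (50, 0, 150, 100)), 3))  # -> 0.333

# Precision and recall from hypothetical detection counts at IoU >= 0.5.
tp, fp, fn = 8, 2, 4
precision = tp / (tp + fp)  # fraction of predictions that are correct: 0.8
recall = tp / (tp + fn)     # fraction of real fish that were found: ~0.667
```

mAP then averages precision over recall levels and classes at a given IoU threshold (or over a range of thresholds), which is why it is reported as mAP@0.5, mAP@0.95, and similar.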
The following figure illustrates this process.

Schematic representation of the combinations used in each training.
Once we achieved the first satisfactory results (e.g., with mAP between 0.7 and 0.9), we used the model to automatically label new images, augmenting the quantity and quality of the training dataset. This resulted in a semi-automatic labelling tool that leverages the model's precision to expand the dataset efficiently. Figure 2 shows the steps carried out.

Steps followed to label the images semi-automatically.
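The core of such a semi-automatic labelling loop is a confidence filter: high-confidence detections become new training labels, while uncertain ones go to manual review. A hedged sketch follows; the detection format, threshold value, and function name are illustrative assumptions, not our actual pipeline code:

```python
# Sketch of the confidence filter behind semi-automatic labelling:
# confident model predictions become new training labels, the rest
# are routed to manual review. Format and threshold are illustrative.

def split_pseudo_labels(detections, conf_threshold=0.8):
    """detections: list of (class_name, confidence, box) tuples."""
    auto_labels, needs_review = [], []
    for det in detections:
        (auto_labels if det[1] >= conf_threshold else needs_review).append(det)
    return auto_labels, needs_review

preds = [("fish", 0.93, (10, 20, 80, 60)),    # kept as a training label
         ("fish", 0.55, (200, 40, 260, 90))]  # sent for manual review
auto, review = split_pseudo_labels(preds)
print(len(auto), len(review))  # -> 1 1
```

Choosing the threshold is a trade-off: a higher value keeps the auto-labels cleaner, while a lower one grows the dataset faster at the cost of more label noise.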
We combined two use cases from the SLAGREEF and iMagine projects.
The SLAGREEF project, conducted at the OBSEA observatory, implemented artificial biotopes 3D-printed from recycled materials. The OBSEA camera captures images of these biotope structures at regular intervals, which we then analyse using the YOLOv8 model fine-tuned in the iMagine project to detect and classify species.
We used a twofold approach:
- Scientific Image Analysis: configuring YOLOv8 for high detection accuracy, we analysed 50,000 images in 3 hours, a significant improvement over the manual method, which required one year to analyse 30,000 photos.
- Real-time Monitoring: we configured the system to perform real-time video inference (25 frames per second), allowing live streaming with predictions. Although this setup offers lower accuracy, it provides a quick and useful tool for real-time visualisation.
Based on our experience, we can confirm that the use of YOLOv8 in marine image analysis has proven extremely effective, with substantial accuracy in species identification, enabling reliable monitoring of the ecological impact of the artificial reef. Also, the comparison between manually and AI-labelled images highlights the efficiency of AI, which requires only seconds to process each image instead of minutes.
The following example shows the same image analysed manually and by AI. The manual analysis took 150 seconds, while the AI analysis took only 5 seconds.

Image analysed manually.

Image analysed with AI.
AI Impact on Marine Research
Our results suggest that AI-based detection models can replace manual analysis, providing speed and consistent accuracy. The semi-automatic labelling feature also improves dataset quality, creating a continuous cycle of enhanced model performance.
Implementing the YOLOv8 model has significantly reduced analysis time without sacrificing accuracy, optimising research and environmental management efforts.
The effectiveness of semi-automatic labelling also highlights the potential of AI to continuously improve its own outcomes as new images are incorporated. These approaches broadly affect sustainable practices and biodiversity management in marine ecosystems.
The fish detection and classification activities presented above are part of the use case “Ecosystem Monitoring at EMSO Sites by Video Imagery” of the iMagine project.
This work received financial support from the Spanish State Plan for Scientific and Technical Research and Innovation 2021, under the SLAGREEF project (3D slag concrete manufacturing solutions for marine biotopes, TED2021-129760B-I00) and the iMagine project by the European Commission (HORIZON-INFRA-2021-SERV-01, grant agreement 101058625). This work utilised the EGI infrastructure with dedicated support from EGI-CESGA-STACK.