
Toward Robust LED Detection and Identification in a Streaming Pipeline: from Static Calibration to Free-Moving Camera Videos

CASALI, CRISTIAN
2024/2025

Abstract

From traditional Christmas trees to shopping-center exteriors, public parks, private gardens, and gaming setups, digital LED strings are used in numerous scenarios. Illumination is often merely a secondary objective: these devices are primarily employed for decoration and even for advertising. For this reason, displaying a simple alternating pattern is not enough; they are often expected to show complex arrangements such as company logos or advertisements. To achieve this, each individual LED in a string must be independently controllable, and its spatial position must be known so it can be coordinated with the other elements. Mapping and calibrating LED strings of unknown shape is a complex task that involves several computer vision techniques, such as color balancing, stabilization, and point tracking. Furthermore, a static video input is not always sufficient: multiple LED strings may be employed, necessitating synchronized calibration, and when the camera's field of view is inadequate, video must be captured dynamically, requiring novel techniques to track LEDs across frames and to detect new ones entering the scene. Building upon an existing algorithm capable of mapping LEDs from a static input video, this thesis extends its functionality to free-moving camera scenes and to a streaming pipeline. The work implements novel strategies for frame-by-frame processing, addressing the limitations of the previous implementation, which was suitable only for fixed-camera setups. Specifically, the thesis develops color preprocessing and stabilization techniques, alongside a color classification pipeline, to detect and identify LEDs based on their chromatic properties. The research subsequently investigates and implements various approaches for reliable LED tracking across consecutive frames.
This investigation encompasses dense point-tracking deep learning models, state-of-the-art 3D reconstruction networks, and object detection frameworks, identifying the most appropriate methodology for the application.
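As a concrete illustration of the general idea of detecting and identifying LEDs by their chromatic properties, the sketch below thresholds a frame for bright pixels, groups them into connected blobs, and labels each blob with its dominant color channel. This is a minimal, hypothetical example (the function name, thresholds, and dominant-channel heuristic are assumptions for illustration), not the pipeline developed in the thesis:

```python
import numpy as np

def detect_leds(frame, min_brightness=200):
    """Return (cx, cy, color) for each bright blob in an RGB frame.

    Illustrative sketch: bright pixels are grouped into 4-connected
    blobs, and each blob is labeled with its dominant RGB channel.
    """
    bright = frame.max(axis=2) >= min_brightness
    visited = np.zeros_like(bright)
    h, w = bright.shape
    leds = []
    for y in range(h):
        for x in range(w):
            if not bright[y, x] or visited[y, x]:
                continue
            # Flood-fill one blob of adjacent bright pixels.
            stack, pixels = [(y, x)], []
            visited[y, x] = True
            while stack:
                py, px = stack.pop()
                pixels.append((py, px))
                for ny, nx in ((py + 1, px), (py - 1, px),
                               (py, px + 1), (py, px - 1)):
                    if 0 <= ny < h and 0 <= nx < w and bright[ny, nx] \
                            and not visited[ny, nx]:
                        visited[ny, nx] = True
                        stack.append((ny, nx))
            # Centroid of the blob and its mean color.
            ys, xs = zip(*pixels)
            cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
            mean_rgb = frame[list(ys), list(xs)].mean(axis=0)
            leds.append((cx, cy, "RGB"[int(np.argmax(mean_rgb))]))
    return leds

# Usage: a synthetic 20x20 frame with a single green 3x3 "LED".
frame = np.zeros((20, 20, 3), dtype=np.uint8)
frame[5:8, 10:13, 1] = 255
print(detect_leds(frame))  # one centroid at (11.0, 6.0), labeled 'G'
```

A real pipeline would additionally have to cope with sensor saturation (which washes a LED's core toward white), motion blur, and ambient light, which is why the thesis pairs detection with color preprocessing and stabilization.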
2024
computer vision
deep learning
3d reconstruction
point tracking
feature matching
Files in this item:
Casali.Cristian.pdf — 241.97 MB, Adobe PDF (restricted access)
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14251/4230