Use this identifier to cite or link to this item: http://repositorio.ufla.br/jspui/handle/1/59313
Title: Detecção e estimação de distância de marcos visuais por um veículo autônomo a partir de segmentação de imagens com aprendizado profundo e processos Gaussianos
Alternative title(s): Detection and distance estimation of visual landmarks by an autonomous vehicle using image segmentation with deep learning and Gaussian processes
Authors: Barbosa, Bruno Henrique Groenner
Lima, Danilo Alves de
Ferreira, Danton Diego
Vitor, Giovani Bernardes
Keywords: Autonomous vehicles
Computer vision
Deep learning
Image detection
Image segmentation
Gaussian process regression
Distance estimation
Machine learning
Issue date: 3-Sep-2024
Publisher: Universidade Federal de Lavras
Citation: BERNARDES, Danilo Serenini. Detecção e estimação de distância de marcos visuais por um veículo autônomo a partir de segmentação de imagens com aprendizado profundo e processos Gaussianos. 2024. 66 p. Dissertation (Master's in Systems Engineering and Automation) – Universidade Federal de Lavras, Lavras, 2024.
Abstract: In the constantly evolving landscape of technologies for implementing autonomous vehicles, precise vehicle localization remains a significant challenge. This work proposes an algorithm that applies distance prediction techniques to estimate the distance between a camera-equipped vehicle and landmarks in the environment to which it is exposed. Computer vision techniques were employed for landmark detection, followed by object segmentation to enhance the algorithm's perception of the environment. The prediction algorithm was developed in Python using a real dataset of approximately 8,000 samples collected in the field at the University of Waterloo with an instrumented autonomous vehicle. The YOLOv8 network (object detection and segmentation models), the DETR (DEtection TRansformer) network, and the SAM (Segment Anything Model) network were evaluated to provide the input data, which were then related to distance estimates through a GPR (Gaussian Process Regression) model. The YOLOv8 segmentation model proved superior on the object detection task, with an average recall of 0.76 and an mAP@0.5 of 0.891, highlighting the benefit of using segmentation masks for object detection as well. The analysis showed that combining the YOLOv8 segmentation and SAM networks improves environment perception, reaching a Dice coefficient of 71.039%, and significantly reduces the distance prediction error, achieving an MAE (Mean Absolute Error) of 0.65 meters. However, this combination increased processing time, which remains a challenge for real-time application on the hardware used. The results indicate that incorporating segmentation features into the input data substantially improves the performance of the GPR model for distance prediction, highlighting the potential of computer vision techniques to improve localization and decision-making in autonomous vehicles.
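The sketch below illustrates only the final regression step described in the abstract: Gaussian Process Regression mapping segmentation-derived features to landmark distance. It is a minimal example assuming scikit-learn, with synthetic data and hypothetical features (bounding-box size, mask area, centroid position); it is not the dissertation's actual implementation, dataset, or feature set.

# Minimal sketch (not the author's implementation): GPR on hypothetical
# segmentation-derived features to estimate landmark distance.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical per-landmark features: bounding-box height/width (pixels),
# mask area (pixels), and normalized row of the mask centroid.
n = 1000
bbox_h = rng.uniform(20, 300, n)
features = np.column_stack([
    bbox_h,
    bbox_h * rng.uniform(0.3, 0.6, n),        # bounding-box width
    bbox_h ** 2 * rng.uniform(0.2, 0.5, n),   # mask area
    rng.uniform(0.3, 0.9, n),                 # centroid row (normalized)
])
# Synthetic ground-truth distance: roughly inverse to apparent size, plus noise.
distance = 2000.0 / bbox_h + rng.normal(0, 0.3, n)

X_train, X_test, y_train, y_test = train_test_split(
    features, distance, test_size=0.2, random_state=0
)

# Standardize features, then fit a GPR with an anisotropic RBF kernel plus a
# white-noise term; kernel hyperparameters are tuned by maximizing the log
# marginal likelihood inside GaussianProcessRegressor.
scaler = StandardScaler().fit(X_train)
kernel = 1.0 * RBF(length_scale=np.ones(features.shape[1])) + WhiteKernel(1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(scaler.transform(X_train), y_train)

# Predict mean distance and per-sample predictive standard deviation.
pred, std = gpr.predict(scaler.transform(X_test), return_std=True)
print(f"MAE: {mean_absolute_error(y_test, pred):.2f} m")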
URI: http://repositorio.ufla.br/jspui/handle/1/59313
Appears in collections: Engenharia de Sistemas e Automação (Dissertações)



This item is licensed under a Creative Commons License.