Persistent identifier to cite or link to this item: http://hdl.handle.net/10662/20314
Title: A new deep convolutional neural network for fast hyperspectral image classification
Authors: Paoletti Ávila, Mercedes Eugenia
Haut Hurtado, Juan Mario
Plaza Miguel, Javier
Plaza, Antonio
Keywords: Hyperspectral imaging; Deep learning; Convolutional neural networks; Classification; Graphics processing units
Publication date: 2018
Publisher: Elsevier
Abstract: Artificial neural networks (ANNs) have been widely used for the analysis of remotely sensed imagery. In particular, convolutional neural networks (CNNs) are gaining more and more attention in this field. CNNs have proved to be very effective in areas such as image recognition and classification, especially for the classification of large sets of two-dimensional images. However, their application to multispectral and hyperspectral images faces some challenges, especially related to the processing of the high-dimensional information contained in multidimensional data cubes, which results in a significant increase in computation time. In this paper, we present a new CNN architecture for the classification of hyperspectral images. The proposed CNN is a 3-D network that uses both spectral and spatial information. It also implements a border mirroring strategy to effectively process border areas in the image, and has been efficiently implemented using graphics processing units (GPUs). Our experimental results indicate that the proposed network performs accurately and efficiently, reducing computation time and increasing classification accuracy on hyperspectral images when compared to other traditional ANN techniques.
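The border mirroring strategy mentioned in the abstract addresses a practical problem: pixels at the image border lack the full spatial neighborhood that a 3-D CNN patch requires. A minimal sketch of the idea (not the paper's actual implementation; function names and the toy cube are illustrative assumptions) using NumPy's reflect padding:

```python
import numpy as np

def mirror_pad_cube(cube, margin):
    """Reflect-pad the two spatial dimensions of a hyperspectral cube
    (rows, cols, bands) so border pixels get full neighborhoods.
    The spectral axis is left unpadded."""
    return np.pad(cube, ((margin, margin), (margin, margin), (0, 0)),
                  mode="reflect")

def extract_patch(padded, row, col, patch_size):
    """Extract a patch_size x patch_size x bands block centered on the
    ORIGINAL pixel (row, col). patch_size is assumed odd; indices shift
    by the padding margin, so the slice starts at (row, col) directly."""
    m = patch_size // 2
    return padded[row:row + 2 * m + 1, col:col + 2 * m + 1, :]

# Toy cube: 10 x 10 pixels, 5 spectral bands (illustrative only).
cube = np.arange(10 * 10 * 5, dtype=np.float32).reshape(10, 10, 5)
padded = mirror_pad_cube(cube, margin=2)

# Even the top-left corner pixel now yields a full 5x5 spatial patch.
patch = extract_patch(padded, 0, 0, patch_size=5)
print(padded.shape)  # (14, 14, 5)
print(patch.shape)   # (5, 5, 5)
```

Each such patch, with its full spectral depth, would then be fed to the 3-D network as one training or classification sample, so border pixels can be classified without discarding them or zero-padding with artificial values.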
URI: http://hdl.handle.net/10662/20314
ISSN: 0924-2716
DOI: 10.1016/j.isprsjprs.2017.11.021
Collection: DTCYC - Articles

Files
File: j_isprsjprs_2017_11_021_preprint.pdf — 6.78 MB — Adobe PDF


This item is licensed under a Creative Commons License.