Persistent identifier to cite or link this item: http://hdl.handle.net/10662/20332
Title: Multiple attention-guided capsule networks for hyperspectral image classification
Authors: Paoletti Ávila, Mercedes Eugenia
Moreno Álvarez, Sergio
Haut Hurtado, Juan Mario
Keywords: Capsule network (CapsNet); Convolutional neural networks (CNNs); Feature; Attention; HSI
Publication date: 2021
Publisher: IEEE
Abstract: The profound impact of deep learning, and particularly of convolutional neural networks (CNNs), on automatic image processing has been decisive for the progress and evolution of remote sensing (RS) hyperspectral imaging (HSI) processing. Indeed, CNNs have established themselves as the current state of the art, reaching unparalleled results in HSI classification. However, most CNNs were designed for RGB images, and their direct application to HSI data analysis could lead to nonoptimal solutions. Moreover, CNNs perform classification based on the identification of specific features, neglecting the spatial relationships between different features (i.e., their arrangement) due to pooling techniques. The capsule network (CapsNet) architecture attempts to overcome this drawback by nesting several neural layers within a capsule, connected by dynamic routing, both to identify not only the presence of a feature but also its instantiation parameters, and to learn the relationships between different features. Although this mechanism improves the data representations, enhancing the classification of HSI data, it still acts as a black box, without control over the most relevant features for classification purposes. Indeed, important features could be discarded. In this paper, a new multiple attention-guided CapsNet is proposed to improve feature processing for RS HSI classification, both to improve computational efficiency (in terms of parameters) and to increase accuracy. Hence, the most representative visual parts of the images are identified using a detailed feature extractor coupled with attention mechanisms. Extensive experimental results have been obtained on five real datasets, demonstrating the great potential of the proposed method compared with other state-of-the-art classifiers.
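
The abstract names two mechanisms: attention modules that re-weight extracted features, and capsule layers whose vector outputs are combined by routing-by-agreement. Below is a minimal NumPy sketch of those two ideas for illustration only; it is not the authors' implementation, and all shapes, function names, and hyperparameters (16 feature maps, 8-dimensional lower capsules, 9 class capsules, 3 routing iterations) are assumptions chosen purely for readability.

# Minimal sketch: attention-weighted features feeding a capsule layer with dynamic routing.
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule nonlinearity: preserves vector orientation, maps length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def channel_attention(features):
    """Squeeze-and-excitation-style gate emphasising informative channels.
    features: (channels, height, width) block of extracted HSI features."""
    weights = features.mean(axis=(1, 2))             # global average pool per channel
    weights = 1.0 / (1.0 + np.exp(-weights))         # sigmoid gate (no learned MLP in this toy)
    return features * weights[:, None, None]

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement between lower capsules i and upper capsules j.
    u_hat: (n_lower, n_upper, dim_upper) predicted votes."""
    b = np.zeros(u_hat.shape[:2])                    # routing logits b_ij
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[:, :, None] * u_hat).sum(axis=0)               # weighted sum of votes per upper capsule
        v = squash(s)                                          # upper capsule outputs
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)           # agreement update
    return v

# Toy forward pass: attention-weighted features -> capsule votes -> routed class capsules.
rng = np.random.default_rng(0)
feats = channel_attention(rng.normal(size=(16, 5, 5)))         # 16 feature maps over a 5x5 patch
lower = feats.reshape(-1, 8)                                   # 50 lower capsules of dimension 8
W = rng.normal(scale=0.1, size=(lower.shape[0], 9, 8, 16))     # votes toward 9 class capsules of dim 16
u_hat = np.einsum('id,ijdk->ijk', lower, W)
class_caps = dynamic_routing(u_hat)
print(class_caps.shape)                                        # (9, 16): vector length acts as class score

The attention gate here is a deliberately simplified stand-in for the paper's attention mechanisms; in practice the gating weights would be produced by learned layers rather than a parameter-free sigmoid over pooled activations.
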
URI: http://hdl.handle.net/10662/20332
ISSN: 0196-2892
DOI: 10.1109/TGRS.2021.3135506
Collection: DIEEA - Articles
DTCYC - Articles

Files
File: TGRS_2021_3135506.pdf | Size: 6.34 MB | Format: Adobe PDF


This item is licensed under a Creative Commons License.