Please use this identifier to cite or link to this item:
http://hdl.handle.net/10662/20332
Title: | Multiple attention-guided capsule networks for hyperspectral image classification |
Authors: | Paoletti Ávila, Mercedes Eugenia; Moreno Álvarez, Sergio; Haut Hurtado, Juan Mario |
Keywords: | Capsule network (CapsNet);Convolutional neural networks (CNNs);Feature;Attention;HSI |
Issue Date: | 2021 |
Publisher: | IEEE |
Abstract: | The profound impact of deep learning, and particularly of convolutional neural networks (CNNs), on automatic image processing has been decisive for the progress and evolution of remote sensing (RS) hyperspectral imaging (HSI) processing. Indeed, CNNs have established themselves as the current state of the art, reaching unparalleled results in HSI classification. However, most CNNs were designed for RGB images, and their direct application to HSI data analysis could lead to nonoptimal solutions. Moreover, CNNs perform classification based on the identification of specific features, neglecting the spatial relationships between different features (i.e., their arrangement) due to pooling techniques. The capsule network (CapsNet) architecture attempts to overcome this drawback by nesting several neural layers within a capsule, connected by dynamic routing, in order not only to identify the presence of a feature and its instantiation parameters, but also to learn the relationships between different features. Although this mechanism improves the data representations, enhancing the classification of HSI data, it still acts as a black box, without control over the most relevant features for classification purposes; indeed, important features could be discarded. In this paper, a new multiple attention-guided CapsNet is proposed to improve feature processing for RS HSI classification, both to improve computational efficiency (in terms of parameters) and to increase accuracy. Hence, the most representative visual parts of the images are identified using a detailed feature extractor coupled with attention mechanisms. Extensive experimental results have been obtained on five real datasets, demonstrating the great potential of the proposed method compared to other state-of-the-art classifiers. |
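The two ingredients the abstract combines — attention mechanisms that reweight extracted features, and capsule layers whose vector outputs are normalized by a "squash" nonlinearity — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names, shapes, and the squeeze-and-excitation-style channel attention are illustrative assumptions:

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """Rescale each spectral channel of a (C, H, W) feature map.

    w1, w2 are illustrative bottleneck weights (shapes (r, C) and (C, r)).
    """
    # Squeeze: global average pool over the spatial dimensions -> (C,)
    z = feats.mean(axis=(1, 2))
    # Excitation: ReLU bottleneck followed by a sigmoid gate in [0, 1]
    h = np.maximum(0.0, w1 @ z)
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # Reweight every channel by its attention score
    return feats * a[:, None, None]

def squash(v, eps=1e-8):
    """CapsNet nonlinearity: keep each capsule vector's direction,
    shrink its length into [0, 1) so it can act as a probability."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return (norm**2 / (1.0 + norm**2)) * v / (norm + eps)

# Toy usage: 8 channels of 5x5 features, capsules of dimension 16
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
gated = channel_attention(feats, w1, w2)   # same shape, channels rescaled
caps = squash(rng.standard_normal((10, 16)))  # all norms strictly below 1
```

In the paper's setting the attention block would sit between the convolutional feature extractor and the primary capsules, so that only the reweighted (most representative) features are routed onward.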
URI: | http://hdl.handle.net/10662/20332 |
ISSN: | 0196-2892 |
DOI: | 10.1109/TGRS.2021.3135506 |
Appears in Collections: | DIEEA - Artículos; DTCYC - Artículos |
Files in This Item:
File | Description | Size | Format
---|---|---|---
TGRS_2021_3135506.pdf | | 6.34 MB | Adobe PDF
This item is licensed under a Creative Commons License