Please use this identifier to cite or link to this item:
http://hdl.handle.net/10662/20329
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Haut Hurtado, Juan Mario | - |
dc.contributor.author | Paoletti Ávila, Mercedes Eugenia | - |
dc.contributor.author | Plaza Miguel, Javier | - |
dc.contributor.author | Plaza, Antonio | - |
dc.contributor.author | Li, Jun | - |
dc.date.accessioned | 2024-02-07T12:44:26Z | - |
dc.date.available | 2024-02-07T12:44:26Z | - |
dc.date.issued | 2019 | - |
dc.identifier.issn | 0196-2892 | - |
dc.identifier.uri | http://hdl.handle.net/10662/20329 | - |
dc.description.abstract | Deep neural networks (DNNs), including convolutional (CNNs) and residual (ResNets) models, are able to learn abstract representations from the input data by considering a deep hierarchy of layers that perform advanced feature extraction. The combination of these models with visual attention techniques can assist with the identification of the most representative parts of the data from a visual standpoint, obtained through a more detailed filtering of the features extracted by the operational layers of the network. This is of significant interest for analyzing remotely sensed hyperspectral images (HSIs), characterized by their very high spectral dimensionality. However, few efforts have been made in the literature to adapt visual attention methods to remotely sensed HSI data analysis. In this paper, we introduce a new visual attention-driven technique for HSI classification. Specifically, we incorporate attention mechanisms into a ResNet in order to better characterize the spectral-spatial information contained in the data. Our newly proposed method calculates a mask that is applied to the features obtained by the network in order to identify the most desirable ones for classification purposes. Our experiments, conducted using four widely used HSI datasets, reveal that the proposed deep attention model provides competitive advantages in terms of classification accuracy when compared to other state-of-the-art methods. | en_US |
dc.description.sponsorship | This paper was supported by the Ministerio de Educación (Resolutions of December 26, 2014 and November 19, 2015, of the Secretaría de Estado de Educación, Formación Profesional y Universidades, announcing grants for university teacher training under the Training and Mobility subprograms of the Programa Estatal de Promoción del Talento y su Empleabilidad, within the framework of the Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016). This work has also been supported by the Junta de Extremadura (Decree 14/2018 of February 6, establishing the regulatory framework for grants for research and technological development, dissemination, and knowledge-transfer activities by the Research Groups of Extremadura, Ref. GR18060) and the European Union's Horizon 2020 research and innovation programme under grant agreement No. 734541 (EOXPOSURE). This work was supported in part by the National Natural Science Foundation of China under Grant 61771496, in part by the Guangdong Provincial Natural Science Foundation under Grant 2016A030313254, and in part by the National Key Research and Development Program of China under Grant 2017YFB0502900. (Corresponding author: Jun Li.) | en_US |
dc.format.extent | 17 p. | es_ES |
dc.format.mimetype | application/pdf | en_US |
dc.language.iso | eng | es_ES |
dc.publisher | IEEE | - |
dc.rights | Attribution 4.0 International | - |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | - |
dc.subject | Hyperspectral image classification | es_ES |
dc.subject | Visual attention | es_ES |
dc.subject | Feature extraction | es_ES |
dc.subject | Deep learning | es_ES |
dc.subject | Residual neural network | es_ES |
dc.subject | Hyperspectral image classification | en_US |
dc.subject | Visual attention | en_US |
dc.subject | Feature extraction | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Residual neural networks | en_US |
dc.title | Visual attention-driven hyperspectral image classification | es_ES |
dc.type | article | es_ES |
dc.description.version | peerReviewed | es_ES |
europeana.type | TEXT | en_US |
dc.rights.accessRights | openAccess | es_ES |
dc.subject.unesco | 2490 Neurosciences | - |
dc.subject.unesco | 3304 Computer Technology | - |
europeana.dataProvider | Universidad de Extremadura. Spain | es_ES |
dc.identifier.bibliographicCitation | J. M. Haut, M. E. Paoletti, J. Plaza, A. Plaza and J. Li, "Visual Attention-Driven Hyperspectral Image Classification," in IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 10, pp. 8065-8080, Oct. 2019, doi: 10.1109/TGRS.2019.2918080 | - |
dc.type.version | publishedVersion | - |
dc.contributor.affiliation | Universidad de Extremadura. Departamento de Tecnología de los Computadores y de las Comunicaciones | es_ES |
dc.contributor.affiliation | Sun Yat-sen University. China | - |
dc.relation.publisherversion | https://ieeexplore.ieee.org/document/8736024 | - |
dc.identifier.doi | 10.1109/TGRS.2019.2918080 | - |
dc.identifier.publicationtitle | IEEE Transactions on Geoscience and Remote Sensing | es_ES |
dc.identifier.publicationissue | 10 | - |
dc.identifier.publicationfirstpage | 8065 | es_ES |
dc.identifier.publicationlastpage | 8080 | es_ES |
dc.identifier.publicationvolume | 57 | es_ES |
dc.identifier.e-issn | 1558-0644 | - |
dc.identifier.orcid | 0000-0001-6701-961X | es_ES |
dc.identifier.orcid | 0000-0003-1030-3729 | - |
dc.identifier.orcid | 0000-0002-2384-9141 | - |
dc.identifier.orcid | 0000-0002-9613-1659 | - |
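The abstract above describes a method that computes an attention mask and applies it to the features extracted by the network's layers in order to emphasize the most useful ones for classification. As a rough illustration only (the function, parameter names, and shapes below are invented for this sketch and do not reproduce the paper's actual ResNet-based architecture), the core masking step can be written as:

```python
import numpy as np


def sigmoid(x):
    """Numerically standard logistic function, squashing scores into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))


def apply_attention_mask(features, w, b):
    """Re-weight features by a learned attention mask.

    features: (n_pixels, n_features) activations from some network layer.
    w, b: parameters of a hypothetical linear scoring layer (assumed here;
          the paper's actual mask computation may differ).
    """
    scores = features @ w + b   # one attention score per feature
    mask = sigmoid(scores)      # mask values lie in (0, 1)
    return features * mask      # elementwise re-weighting of the features


# Toy usage with random data standing in for real HSI feature maps.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 8))
b = np.zeros(8)
out = apply_attention_mask(feats, w, b)
```

Because the mask is bounded in (0, 1), the re-weighted features never grow in magnitude; features the mask scores highly are passed through nearly unchanged, while the rest are attenuated.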
Appears in Collections: | DTCYC - Artículos |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
TGRS_2019_2918080.pdf | | 2.36 MB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License