Persistent identifier to cite or link this item: http://hdl.handle.net/10662/20329
Full metadata record
DC Field | Value | Language
dc.contributor.author | Haut Hurtado, Juan Mario | -
dc.contributor.author | Paoletti Ávila, Mercedes Eugenia | -
dc.contributor.author | Plaza Miguel, Javier | -
dc.contributor.author | Plaza, Antonio | -
dc.contributor.author | Li, Jun | -
dc.date.accessioned | 2024-02-07T12:44:26Z | -
dc.date.available | 2024-02-07T12:44:26Z | -
dc.date.issued | 2019 | -
dc.identifier.issn | 0196-2892 | -
dc.identifier.uri | http://hdl.handle.net/10662/20329 | -
dc.description.abstract | Deep neural networks (DNNs), including convolutional (CNNs) and residual (ResNets) models, are able to learn abstract representations from the input data by considering a deep hierarchy of layers that performs advanced feature extraction. The combination of these models with visual attention techniques can assist with the identification of the most representative parts of the data from a visual standpoint, obtained through a more detailed filtering of the features extracted by the operational layers of the network. This is of significant interest for analyzing remotely sensed hyperspectral images (HSIs), characterized by their very high spectral dimensionality. However, few efforts have been made in the literature to adapt visual attention methods to remotely sensed HSI data analysis. In this paper, we introduce a new visual attention-driven technique for HSI classification. Specifically, we incorporate attention mechanisms into a ResNet in order to better characterize the spectral-spatial information contained in the data. Our newly proposed method calculates a mask that is applied to the features obtained by the network in order to identify the most desirable ones for classification purposes. Our experiments, conducted using four widely used HSI datasets, reveal that the proposed deep attention model provides competitive advantages in terms of classification accuracy when compared to other state-of-the-art methods. | en_US
dc.description.sponsorship | This paper was supported by the Ministerio de Educación (Resolución de 26 de diciembre de 2014 y de 19 de noviembre de 2015, de la Secretaría de Estado de Educación, Formación Profesional y Universidades, por la que se convocan ayudas para la formación de profesorado universitario, de los subprogramas de Formación y de Movilidad incluidos en el Programa Estatal de Promoción del Talento y su Empleabilidad, en el marco del Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016). This work was also supported by the Junta de Extremadura (Decreto 14/2018, de 6 de febrero, por el que se establecen las bases reguladoras de las ayudas para la realización de actividades de investigación y desarrollo tecnológico, de divulgación y de transferencia de conocimiento por los Grupos de Investigación de Extremadura, Ref. GR18060) and by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 734541 (EOXPOSURE). This work was supported in part by the National Natural Science Foundation of China under Grant 61771496, in part by the Guangdong Provincial Natural Science Foundation under Grant 2016A030313254, and in part by the National Key Research and Development Program of China under Grant 2017YFB0502900. (Corresponding author: Jun Li.) | en_US
dc.format.extent | 17 p. | es_ES
dc.format.mimetype | application/pdf | en_US
dc.language.iso | eng | es_ES
dc.publisher | IEEE | -
dc.rights | Atribución 4.0 Internacional | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | Clasificación de imagen hiperespectral | es_ES
dc.subject | Atención visual | es_ES
dc.subject | Extracción de características | es_ES
dc.subject | Aprendizaje profundo | es_ES
dc.subject | Red neuronal residual | es_ES
dc.subject | Hyperspectral image classification | en_US
dc.subject | Visual attention | en_US
dc.subject | Feature extraction | en_US
dc.subject | Deep learning | en_US
dc.subject | Residual neural networks | en_US
dc.title | Visual attention-driven hyperspectral image classification | es_ES
dc.type | article | es_ES
dc.description.version | peerReviewed | es_ES
europeana.type | TEXT | en_US
dc.rights.accessRights | openAccess | es_ES
dc.subject.unesco | 2490 Neurociencias | -
dc.subject.unesco | 3304 Tecnología de los Ordenadores | -
europeana.dataProvider | Universidad de Extremadura. España | es_ES
dc.identifier.bibliographicCitation | J. M. Haut, M. E. Paoletti, J. Plaza, A. Plaza and J. Li, "Visual Attention-Driven Hyperspectral Image Classification," in IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 10, pp. 8065-8080, Oct. 2019, doi: 10.1109/TGRS.2019.2918080 | -
dc.type.version | publishedVersion | -
dc.contributor.affiliation | Universidad de Extremadura. Departamento de Tecnología de los Computadores y de las Comunicaciones | es_ES
dc.contributor.affiliation | Sun Yat-sen University. China | -
dc.relation.publisherversion | https://ieeexplore.ieee.org/document/8736024 | -
dc.identifier.doi | 10.1109/TGRS.2019.2918080 | -
dc.identifier.publicationtitle | IEEE Transactions on Geoscience and Remote Sensing | es_ES
dc.identifier.publicationissue | 10 | -
dc.identifier.publicationfirstpage | 8065 | es_ES
dc.identifier.publicationlastpage | 8080 | es_ES
dc.identifier.publicationvolume | 57 | es_ES
dc.identifier.e-issn | 1558-0644 | -
dc.identifier.orcid | 0000-0001-6701-961X | es_ES
dc.identifier.orcid | 0000-0003-1030-3729 | -
dc.identifier.orcid | 0000-0002-2384-9141 | -
dc.identifier.orcid | 0000-0002-9613-1659 | -
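The abstract above describes attaching an attention mechanism to a ResNet so that a learned mask reweights the extracted features before classification. A minimal, hypothetical PyTorch sketch of that general idea follows; the module name, layer sizes, and the 1x1-convolution mask branch are illustrative assumptions and do not reproduce the authors' published architecture.

import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    # Residual block whose output features are reweighted by a learned mask
    # with values in [0, 1]. This is a sketch of the general idea only, not
    # the exact attention mechanism proposed in the paper.
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        # Hypothetical mask branch: a 1x1 convolution followed by a sigmoid
        # produces a per-pixel, per-channel weight between 0 and 1.
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out * self.mask(out)   # emphasize the most informative features
        return self.relu(out + x)    # residual (skip) connection

# Example: a batch of 8 spatial patches with 64 feature maps of size 11x11,
# e.g. obtained after reducing the spectral bands of an HSI cube.
features = torch.randn(8, 64, 11, 11)
block = AttentionResidualBlock(64)
print(block(features).shape)  # torch.Size([8, 64, 11, 11])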
Collection: DTCYC - Artículos

Files
File | Description | Size | Format
TGRS_2019_2918080.pdf | - | 2,36 MB | Adobe PDF


This item is licensed under a Creative Commons License.