Deep metric attention learning for skin lesion classification in dermoscopy images


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2022-01-04

AUTHORS

Xiaoyu He, Yong Wang, Shuang Zhao, Chunli Yao

ABSTRACT

Currently, convolutional neural networks (CNNs) have made remarkable achievements in skin lesion classification because of their end-to-end feature representation abilities. However, precise skin lesion classification is still challenging because of the following three issues: (1) insufficient training samples, (2) inter-class similarities and intra-class variations, and (3) lack of the ability to focus on discriminative skin lesion parts. To address these issues, we propose a deep metric attention learning CNN (DeMAL-CNN) for skin lesion classification. In DeMAL-CNN, a triplet-based network (TPN) is first designed based on deep metric learning, which consists of three weight-shared embedding extraction networks. TPN adopts a triplet of samples as input and uses the triplet loss to optimize the embeddings, which can not only increase the number of training samples, but also learn the embeddings robust to inter-class similarities and intra-class variations. In addition, a mixed attention mechanism considering both the spatial-wise and channel-wise attention information is designed and integrated into the construction of each embedding extraction network, which can further strengthen the skin lesion localization ability of DeMAL-CNN. After extracting the embeddings, three weight-shared classification layers are used to generate the final predictions. In the training procedure, we combine the triplet loss with the classification loss as a hybrid loss to train DeMAL-CNN. We compare DeMAL-CNN with the baseline method, attention methods, advanced challenge methods, and state-of-the-art skin lesion classification methods on the ISIC 2016 and ISIC 2017 datasets, and test its generalization ability on the PH2 dataset. The results demonstrate its effectiveness.
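The hybrid training objective described in the abstract (a triplet loss on the embeddings combined with a classification loss) can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the Euclidean distance, the margin of 0.2, and the equal weighting term `alpha` are all assumptions.

```python
# Illustrative sketch of a triplet loss plus classification loss, as described
# in the abstract. Distance metric, margin, and weighting are assumed values.
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor toward the positive sample, push it from the negative."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class."""
    return -math.log(probs[label])

def hybrid_loss(anchor, positive, negative, probs, label, alpha=1.0):
    """Combine the metric-learning and classification objectives."""
    return triplet_loss(anchor, positive, negative) + alpha * cross_entropy(probs, label)

# An easy triplet: the negative is already far from the anchor, so the triplet
# term vanishes and only the classification term contributes.
loss = hybrid_loss([0.0, 0.0], [0.1, 0.0], [1.0, 1.0], probs=[0.9, 0.1], label=0)
```

The triplet term is zero whenever the negative sample is at least `margin` farther from the anchor than the positive sample, so gradient pressure only acts on hard or semi-hard triplets.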

PAGES

1487-1504

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s40747-021-00587-4

DOI

http://dx.doi.org/10.1007/s40747-021-00587-4

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1144402327


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication via opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "School of Automation, Central South University, 410083, Changsha, China", 
          "id": "http://www.grid.ac/institutes/grid.216417.7", 
          "name": [
            "School of Automation, Central South University, 410083, Changsha, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "He", 
        "givenName": "Xiaoyu", 
        "id": "sg:person.016160445571.16", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016160445571.16"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "School of Automation, Central South University, 410083, Changsha, China", 
          "id": "http://www.grid.ac/institutes/grid.216417.7", 
          "name": [
            "School of Automation, Central South University, 410083, Changsha, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Wang", 
        "givenName": "Yong", 
        "id": "sg:person.010062307074.32", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010062307074.32"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Department of Dermatology, Xiangya Hospital, Central South University, 410008, Changsha, China", 
          "id": "http://www.grid.ac/institutes/grid.452223.0", 
          "name": [
            "Department of Dermatology, Xiangya Hospital, Central South University, 410008, Changsha, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Zhao", 
        "givenName": "Shuang", 
        "id": "sg:person.0671022567.52", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0671022567.52"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Department of Dermatology, The Second Hospital, Jilin University, 130041, Changchun, China", 
          "id": "http://www.grid.ac/institutes/grid.64924.3d", 
          "name": [
            "Department of Dermatology, The Second Hospital, Jilin University, 130041, Changchun, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Yao", 
        "givenName": "Chunli", 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/s40747-021-00296-y", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1135470366", 
          "https://doi.org/10.1007/s40747-021-00296-y"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s13312-011-0055-4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1043189010", 
          "https://doi.org/10.1007/s13312-011-0055-4"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1038/nature21056", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1074217286", 
          "https://doi.org/10.1038/nature21056"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s10618-014-0356-z", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1053032316", 
          "https://doi.org/10.1007/s10618-014-0356-z"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s40747-021-00274-4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1135006769", 
          "https://doi.org/10.1007/s40747-021-00274-4"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-24261-3_7", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1026974855", 
          "https://doi.org/10.1007/978-3-319-24261-3_7"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2022-01-04", 
    "datePublishedReg": "2022-01-04", 
    "description": "Currently, convolutional neural networks (CNNs) have made remarkable achievements in skin lesion classification because of their end-to-end feature representation abilities. However, precise skin lesion classification is still challenging because of the following three issues: (1) insufficient training samples, (2) inter-class similarities and intra-class variations, and (3) lack of the ability to focus on discriminative skin lesion parts. To address these issues, we propose a deep metric attention learning CNN (DeMAL-CNN) for skin lesion classification. In DeMAL-CNN, a triplet-based network (TPN) is first designed based on deep metric learning, which consists of three weight-shared embedding extraction networks. TPN adopts a triplet of samples as input and uses the triplet loss to optimize the embeddings, which can not only increase the number of training samples, but also learn the embeddings robust to inter-class similarities and intra-class variations. In addition, a mixed attention mechanism considering both the spatial-wise and channel-wise attention information is designed and integrated into the construction of each embedding extraction network, which can further strengthen the skin lesion localization ability of DeMAL-CNN. After extracting the embeddings, three weight-shared classification layers are used to generate the final predictions. In the training procedure, we combine the triplet loss with the classification loss as a hybrid loss to train DeMAL-CNN. We compare DeMAL-CNN with the baseline method, attention methods, advanced challenge methods, and state-of-the-art skin lesion classification methods on the ISIC 2016 and ISIC 2017 datasets, and test its generalization ability on the PH2 dataset. The results demonstrate its effectiveness.", 
    "genre": "article", 
    "id": "sg:pub.10.1007/s40747-021-00587-4", 
    "isAccessibleForFree": true, 
    "isFundedItemOf": [
      {
        "id": "sg:grant.8867473", 
        "type": "MonetaryGrant"
      }
    ], 
    "isPartOf": [
      {
        "id": "sg:journal.1136144", 
        "issn": [
          "2199-4536", 
          "2198-6053"
        ], 
        "name": "Complex & Intelligent Systems", 
        "publisher": "Springer Nature", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "2", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "8"
      }
    ], 
    "keywords": [
      "convolutional neural network", 
      "skin lesion classification", 
      "lesion classification", 
      "inter-class similarity", 
      "intra-class variations", 
      "extraction network", 
      "triplet loss", 
      "feature representation ability", 
      "insufficient training samples", 
      "training samples", 
      "deep metric learning", 
      "triplets of samples", 
      "mixed attention mechanism", 
      "ISIC 2017 dataset", 
      "neural network", 
      "network", 
      "representation ability", 
      "lesion part", 
      "metric learning", 
      "attention mechanism", 
      "attention information", 
      "classification layer", 
      "final prediction", 
      "training procedure", 
      "classification loss", 
      "hybrid loss", 
      "baseline methods", 
      "attention method", 
      "classification method", 
      "ISIC 2016", 
      "generalization ability", 
      "PH2 dataset", 
      "dermoscopy images", 
      "classification", 
      "embedding", 
      "dataset", 
      "remarkable achievements", 
      "issues", 
      "learning", 
      "information", 
      "localization ability", 
      "method", 
      "images", 
      "ability", 
      "similarity", 
      "attention", 
      "input", 
      "prediction", 
      "effectiveness", 
      "achievement", 
      "end", 
      "lack", 
      "part", 
      "number", 
      "construction", 
      "layer", 
      "state", 
      "results", 
      "samples", 
      "variation", 
      "triplet", 
      "loss", 
      "addition", 
      "mechanism", 
      "procedure", 
      "challenge method"
    ], 
    "name": "Deep metric attention learning for skin lesion classification in dermoscopy images", 
    "pagination": "1487-1504", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1144402327"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s40747-021-00587-4"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s40747-021-00587-4", 
      "https://app.dimensions.ai/details/publication/pub.1144402327"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2022-09-02T16:07", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20220902/entities/gbq_results/article/article_925.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://doi.org/10.1007/s40747-021-00587-4"
  }
]
 

Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s40747-021-00587-4'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s40747-021-00587-4'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s40747-021-00587-4'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s40747-021-00587-4'
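Once the record is fetched (for example with the first curl command above), the JSON-LD can be read with Python's standard json module. The snippet below parses a trimmed copy of the record rather than making a live request; the field names match the JSON-LD shown above.

```python
import json

# A trimmed copy of the JSON-LD record above: the live payload is a list
# containing one object with these same field names.
raw = """[
  {
    "name": "Deep metric attention learning for skin lesion classification in dermoscopy images",
    "datePublished": "2022-01-04",
    "author": [{"familyName": "He", "givenName": "Xiaoyu"}],
    "productId": [
      {"name": "dimensions_id", "value": ["pub.1144402327"]},
      {"name": "doi", "value": ["10.1007/s40747-021-00587-4"]}
    ]
  }
]"""

record = json.loads(raw)[0]
# Pick the DOI out of the productId list.
doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")
print(record["name"])
print("DOI:", doi)
```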


 

As RDF, the metadata above comprises 175 triples, 21 predicates, 96 URIs, 82 literals, and 6 blank nodes.
 



