Semantic Contrastive Embedding for Generalized Zero-Shot Learning


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2022-08-18

AUTHORS

Zongyan Han, Zhenyong Fu, Shuo Chen, Jian Yang

ABSTRACT

Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes when only the labeled examples from seen classes are provided. Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes to mitigate the data-imbalance problem in GZSL. However, the original visual feature space is suboptimal for GZSL recognition since it lacks semantic information, which is vital for recognizing the unseen classes. To tackle this issue, we propose to integrate the feature generation model with an embedding model. Our GZSL framework maps both the real and the synthetic samples produced by the generation model into an embedding space, where we perform the final GZSL classification. Specifically, we propose a semantic contrastive embedding (SCE) for our GZSL framework. Our SCE consists of attribute-level contrastive embedding and class-level contrastive embedding. They aim to obtain the transferable and discriminative information, respectively, in the embedding space. We evaluate our GZSL method with semantic contrastive embedding, named SCE-GZSL, on four benchmark datasets. The results show that our SCE-GZSL method can achieve the state-of-the-art or the second-best on these datasets.
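
For intuition, the attribute-level and class-level objectives described in the abstract are both instances of contrastive learning. Below is a minimal InfoNCE-style sketch in PyTorch; the function and tensor names are illustrative and this is not the authors' exact formulation, only the generic form of loss the abstract refers to.

import torch
import torch.nn.functional as F

def info_nce(anchors, positives, negatives, temperature=0.1):
    """Generic InfoNCE-style contrastive loss.

    anchors:   (N, d) embedded samples
    positives: (N, d) one positive per anchor, e.g. the class or
               attribute embedding of the anchor's label
    negatives: (N, K, d) K negatives per anchor, e.g. embeddings
               of other classes
    """
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    # Cosine similarities scaled by the temperature.
    pos = (anchors * positives).sum(-1, keepdim=True) / temperature     # (N, 1)
    neg = torch.einsum("nd,nkd->nk", anchors, negatives) / temperature  # (N, K)
    logits = torch.cat([pos, neg], dim=1)                               # (N, 1+K)
    # The positive sits at index 0 of every row.
    targets = torch.zeros(len(anchors), dtype=torch.long)
    return F.cross_entropy(logits, targets)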

PAGES

2606-2622

References to SciGraph publications

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s11263-022-01656-y

DOI

http://dx.doi.org/10.1007/s11263-022-01656-y

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1150326040



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0806", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information Systems", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China", 
          "id": "http://www.grid.ac/institutes/grid.410579.e", 
          "name": [
            "PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Han", 
        "givenName": "Zongyan", 
        "id": "sg:person.011644357407.88", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011644357407.88"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China", 
          "id": "http://www.grid.ac/institutes/grid.410579.e", 
          "name": [
            "PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Fu", 
        "givenName": "Zhenyong", 
        "id": "sg:person.013305637071.19", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013305637071.19"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "RIKEN Center for Advanced Intelligence Project, Tokyo, Japan", 
          "id": "http://www.grid.ac/institutes/grid.509456.b", 
          "name": [
            "PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China", 
            "RIKEN Center for Advanced Intelligence Project, Tokyo, Japan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Chen", 
        "givenName": "Shuo", 
        "id": "sg:person.013736477527.30", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013736477527.30"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China", 
          "id": "http://www.grid.ac/institutes/grid.410579.e", 
          "name": [
            "PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Yang", 
        "givenName": "Jian", 
        "id": "sg:person.0706373631.37", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0706373631.37"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/978-3-030-58542-6_29", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1132670006", 
          "https://doi.org/10.1007/978-3-030-58542-6_29"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-030-58523-5_27", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1133092316", 
          "https://doi.org/10.1007/978-3-030-58523-5_27"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-030-58517-4_36", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1131567373", 
          "https://doi.org/10.1007/978-3-030-58517-4_36"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-030-58586-0_34", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1132980397", 
          "https://doi.org/10.1007/978-3-030-58586-0_34"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-46454-1_44", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1036620713", 
          "https://doi.org/10.1007/978-3-319-46454-1_44"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-10605-2_38", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1044946925", 
          "https://doi.org/10.1007/978-3-319-10605-2_38"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-10602-1_48", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1045321436", 
          "https://doi.org/10.1007/978-3-319-10602-1_48"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-030-58577-8_5", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1131127232", 
          "https://doi.org/10.1007/978-3-030-58577-8_5"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-46475-6_4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1016409222", 
          "https://doi.org/10.1007/978-3-319-46475-6_4"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-030-01231-1_2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1107454539", 
          "https://doi.org/10.1007/978-3-030-01231-1_2"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2022-08-18", 
    "datePublishedReg": "2022-08-18", 
    "description": "Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes when only the labeled examples from seen classes are provided. Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes to mitigate the data-imbalance problem in GZSL. However, the original visual feature space is suboptimal for GZSL recognition since it lacks semantic information, which is vital for recognizing the unseen classes. To tackle this issue, we propose to integrate the feature generation model with an embedding model. Our GZSL framework maps both the real and the synthetic samples produced by the generation model into an embedding space, where we perform the final GZSL classification. Specifically, we propose a semantic contrastive embedding (SCE) for our GZSL framework. Our SCE consists of attribute-level contrastive embedding and class-level contrastive embedding. They aim to obtain the transferable and discriminative information, respectively, in the embedding space. We evaluate our GZSL method with semantic contrastive embedding, named SCE-GZSL, on four benchmark datasets. The results show that our SCE-GZSL method can achieve the state-of-the-art or the second-best on these datasets.", 
    "genre": "article", 
    "id": "sg:pub.10.1007/s11263-022-01656-y", 
    "isAccessibleForFree": false, 
    "isFundedItemOf": [
      {
        "id": "sg:grant.8945102", 
        "type": "MonetaryGrant"
      }, 
      {
        "id": "sg:grant.8381949", 
        "type": "MonetaryGrant"
      }
    ], 
    "isPartOf": [
      {
        "id": "sg:journal.1032807", 
        "issn": [
          "0920-5691", 
          "1573-1405"
        ], 
        "name": "International Journal of Computer Vision", 
        "publisher": "Springer Nature", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "11", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "130"
      }
    ], 
    "keywords": [
      "zero-shot learning", 
      "unseen classes", 
      "visual feature space", 
      "data imbalance problem", 
      "feature generation model", 
      "feature generation method", 
      "embedding space", 
      "generative model", 
      "space", 
      "embedding", 
      "semantic information", 
      "generation model", 
      "GZSL methods", 
      "benchmark datasets", 
      "visual features", 
      "discriminative information", 
      "class", 
      "feature space", 
      "generation method", 
      "Generalized", 
      "model", 
      "dataset", 
      "learning", 
      "problem", 
      "information", 
      "GZSL", 
      "objects", 
      "framework", 
      "classification", 
      "recognition", 
      "method", 
      "maps", 
      "state", 
      "art", 
      "features", 
      "issues", 
      "synthetic samples", 
      "results", 
      "example", 
      "framework map", 
      "samples"
    ], 
    "name": "Semantic Contrastive Embedding for Generalized Zero-Shot Learning", 
    "pagination": "2606-2622", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1150326040"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s11263-022-01656-y"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s11263-022-01656-y", 
      "https://app.dimensions.ai/details/publication/pub.1150326040"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2022-12-01T06:44", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_959.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://doi.org/10.1007/s11263-022-01656-y"
  }
]
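
As a quick illustration, the record above can be consumed with nothing more than Python's standard json module. The sketch below assumes the record has been saved locally; the filename is hypothetical.

import json

# Load the SciGraph JSON-LD record shown above (saved locally as
# "sce_gzsl.jsonld"; the filename is just an example).
with open("sce_gzsl.jsonld") as f:
    records = json.load(f)

article = records[0]
print(article["name"])                        # paper title
print(", ".join(f'{a["givenName"]} {a["familyName"]}'
                for a in article["author"]))  # author list
print(article["isPartOf"][0]["name"])         # journal name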
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01656-y'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01656-y'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01656-y'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01656-y'
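
A minimal Python equivalent of the four curl calls above, assuming the third-party requests library is installed; it simply varies the Accept header to negotiate each serialization:

import requests

URL = "https://scigraph.springernature.com/pub.10.1007/s11263-022-01656-y"

# HTTP content negotiation: the serialization is selected purely
# via the Accept header, exactly as in the curl examples.
for accept in ("application/ld+json", "application/n-triples",
               "text/turtle", "application/rdf+xml"):
    resp = requests.get(URL, headers={"Accept": accept}, timeout=30)
    resp.raise_for_status()
    print(accept, "->", len(resp.text), "bytes")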


 

This table displays all metadata directly associated with this object as RDF triples.

167 TRIPLES      21 PREDICATES      75 URIs      57 LITERALS      6 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s11263-022-01656-y schema:about anzsrc-for:08
2 anzsrc-for:0806
3 schema:author N492de992d3594c5f9c623fc63616dd28
4 schema:citation sg:pub.10.1007/978-3-030-01231-1_2
5 sg:pub.10.1007/978-3-030-58517-4_36
6 sg:pub.10.1007/978-3-030-58523-5_27
7 sg:pub.10.1007/978-3-030-58542-6_29
8 sg:pub.10.1007/978-3-030-58577-8_5
9 sg:pub.10.1007/978-3-030-58586-0_34
10 sg:pub.10.1007/978-3-319-10602-1_48
11 sg:pub.10.1007/978-3-319-10605-2_38
12 sg:pub.10.1007/978-3-319-46454-1_44
13 sg:pub.10.1007/978-3-319-46475-6_4
14 schema:datePublished 2022-08-18
15 schema:datePublishedReg 2022-08-18
16 schema:description Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes when only the labeled examples from seen classes are provided. Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes to mitigate the data-imbalance problem in GZSL. However, the original visual feature space is suboptimal for GZSL recognition since it lacks semantic information, which is vital for recognizing the unseen classes. To tackle this issue, we propose to integrate the feature generation model with an embedding model. Our GZSL framework maps both the real and the synthetic samples produced by the generation model into an embedding space, where we perform the final GZSL classification. Specifically, we propose a semantic contrastive embedding (SCE) for our GZSL framework. Our SCE consists of attribute-level contrastive embedding and class-level contrastive embedding. They aim to obtain the transferable and discriminative information, respectively, in the embedding space. We evaluate our GZSL method with semantic contrastive embedding, named SCE-GZSL, on four benchmark datasets. The results show that our SCE-GZSL method can achieve the state-of-the-art or the second-best on these datasets.
17 schema:genre article
18 schema:isAccessibleForFree false
19 schema:isPartOf Na51f3c7728ab4d7e9f317f14333a63ac
20 Nd5be5f7bba4442a8ab57b13c37a8f5fc
21 sg:journal.1032807
22 schema:keywords GZSL
23 GZSL methods
24 Generalized
25 art
26 benchmark datasets
27 class
28 classification
29 data imbalance problem
30 dataset
31 discriminative information
32 embedding
33 embedding space
34 example
35 feature generation method
36 feature generation model
37 feature space
38 features
39 framework
40 framework map
41 generation method
42 generation model
43 generative model
44 information
45 issues
46 learning
47 maps
48 method
49 model
50 objects
51 problem
52 recognition
53 results
54 samples
55 semantic information
56 space
57 state
58 synthetic samples
59 unseen classes
60 visual feature space
61 visual features
62 zero-shot learning
63 schema:name Semantic Contrastive Embedding for Generalized Zero-Shot Learning
64 schema:pagination 2606-2622
65 schema:productId N1f309076d211452681b2c3af72ebe692
66 N9eb12c7ffb97441bbb08cf4417aa80f8
67 schema:sameAs https://app.dimensions.ai/details/publication/pub.1150326040
68 https://doi.org/10.1007/s11263-022-01656-y
69 schema:sdDatePublished 2022-12-01T06:44
70 schema:sdLicense https://scigraph.springernature.com/explorer/license/
71 schema:sdPublisher Nf09f31c4634a448faef4cd30274b3e23
72 schema:url https://doi.org/10.1007/s11263-022-01656-y
73 sgo:license sg:explorer/license/
74 sgo:sdDataset articles
75 rdf:type schema:ScholarlyArticle
76 N03e96538d2db4283b599a2fd04d3d713 rdf:first sg:person.013736477527.30
77 rdf:rest N0f2fbbdc14e940138d5dd0e26435b789
78 N0ac1137a8ff044e6918fec06b669473b rdf:first sg:person.013305637071.19
79 rdf:rest N03e96538d2db4283b599a2fd04d3d713
80 N0f2fbbdc14e940138d5dd0e26435b789 rdf:first sg:person.0706373631.37
81 rdf:rest rdf:nil
82 N1f309076d211452681b2c3af72ebe692 schema:name doi
83 schema:value 10.1007/s11263-022-01656-y
84 rdf:type schema:PropertyValue
85 N492de992d3594c5f9c623fc63616dd28 rdf:first sg:person.011644357407.88
86 rdf:rest N0ac1137a8ff044e6918fec06b669473b
87 N9eb12c7ffb97441bbb08cf4417aa80f8 schema:name dimensions_id
88 schema:value pub.1150326040
89 rdf:type schema:PropertyValue
90 Na51f3c7728ab4d7e9f317f14333a63ac schema:issueNumber 11
91 rdf:type schema:PublicationIssue
92 Nd5be5f7bba4442a8ab57b13c37a8f5fc schema:volumeNumber 130
93 rdf:type schema:PublicationVolume
94 Nf09f31c4634a448faef4cd30274b3e23 schema:name Springer Nature - SN SciGraph project
95 rdf:type schema:Organization
96 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
97 schema:name Information and Computing Sciences
98 rdf:type schema:DefinedTerm
99 anzsrc-for:0806 schema:inDefinedTermSet anzsrc-for:
100 schema:name Information Systems
101 rdf:type schema:DefinedTerm
102 sg:grant.8381949 http://pending.schema.org/fundedItem sg:pub.10.1007/s11263-022-01656-y
103 rdf:type schema:MonetaryGrant
104 sg:grant.8945102 http://pending.schema.org/fundedItem sg:pub.10.1007/s11263-022-01656-y
105 rdf:type schema:MonetaryGrant
106 sg:journal.1032807 schema:issn 0920-5691
107 1573-1405
108 schema:name International Journal of Computer Vision
109 schema:publisher Springer Nature
110 rdf:type schema:Periodical
111 sg:person.011644357407.88 schema:affiliation grid-institutes:grid.410579.e
112 schema:familyName Han
113 schema:givenName Zongyan
114 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011644357407.88
115 rdf:type schema:Person
116 sg:person.013305637071.19 schema:affiliation grid-institutes:grid.410579.e
117 schema:familyName Fu
118 schema:givenName Zhenyong
119 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013305637071.19
120 rdf:type schema:Person
121 sg:person.013736477527.30 schema:affiliation grid-institutes:grid.509456.b
122 schema:familyName Chen
123 schema:givenName Shuo
124 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013736477527.30
125 rdf:type schema:Person
126 sg:person.0706373631.37 schema:affiliation grid-institutes:grid.410579.e
127 schema:familyName Yang
128 schema:givenName Jian
129 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0706373631.37
130 rdf:type schema:Person
131 sg:pub.10.1007/978-3-030-01231-1_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107454539
132 https://doi.org/10.1007/978-3-030-01231-1_2
133 rdf:type schema:CreativeWork
134 sg:pub.10.1007/978-3-030-58517-4_36 schema:sameAs https://app.dimensions.ai/details/publication/pub.1131567373
135 https://doi.org/10.1007/978-3-030-58517-4_36
136 rdf:type schema:CreativeWork
137 sg:pub.10.1007/978-3-030-58523-5_27 schema:sameAs https://app.dimensions.ai/details/publication/pub.1133092316
138 https://doi.org/10.1007/978-3-030-58523-5_27
139 rdf:type schema:CreativeWork
140 sg:pub.10.1007/978-3-030-58542-6_29 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132670006
141 https://doi.org/10.1007/978-3-030-58542-6_29
142 rdf:type schema:CreativeWork
143 sg:pub.10.1007/978-3-030-58577-8_5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1131127232
144 https://doi.org/10.1007/978-3-030-58577-8_5
145 rdf:type schema:CreativeWork
146 sg:pub.10.1007/978-3-030-58586-0_34 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132980397
147 https://doi.org/10.1007/978-3-030-58586-0_34
148 rdf:type schema:CreativeWork
149 sg:pub.10.1007/978-3-319-10602-1_48 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045321436
150 https://doi.org/10.1007/978-3-319-10602-1_48
151 rdf:type schema:CreativeWork
152 sg:pub.10.1007/978-3-319-10605-2_38 schema:sameAs https://app.dimensions.ai/details/publication/pub.1044946925
153 https://doi.org/10.1007/978-3-319-10605-2_38
154 rdf:type schema:CreativeWork
155 sg:pub.10.1007/978-3-319-46454-1_44 schema:sameAs https://app.dimensions.ai/details/publication/pub.1036620713
156 https://doi.org/10.1007/978-3-319-46454-1_44
157 rdf:type schema:CreativeWork
158 sg:pub.10.1007/978-3-319-46475-6_4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016409222
159 https://doi.org/10.1007/978-3-319-46475-6_4
160 rdf:type schema:CreativeWork
161 grid-institutes:grid.410579.e schema:alternateName PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
162 schema:name PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
163 rdf:type schema:Organization
164 grid-institutes:grid.509456.b schema:alternateName RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
165 schema:name PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
166 RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
167 rdf:type schema:Organization
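
The counts in the table header above (167 triples, 21 predicates, and so on) can be checked programmatically. A short sketch assuming the rdflib package and a locally saved Turtle serialization, with an illustrative filename:

import rdflib

g = rdflib.Graph()
# Parse the Turtle serialization fetched earlier.
g.parse("sce_gzsl.ttl", format="turtle")

print(len(g), "triples")                                 # expect 167
print(len({p for _, p, _ in g}), "distinct predicates")  # expect 21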
 



