Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2014

AUTHORS

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

ABSTRACT

Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101. The power of SPP-net is more significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method computes convolutional features 30-170× faster than the recent leading method R-CNN (and 24-64× faster overall), while achieving better or comparable accuracy on Pascal VOC 2007.
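
The fixed-length pooling described in the abstract can be illustrated with a short sketch. The NumPy code below is not the authors' released implementation; it is a minimal illustration, with the pyramid levels (1, 2, 4) and the floor/ceil bin boundaries chosen here only for demonstration, of how max-pooling a convolutional feature map over a fixed set of spatial bins yields a vector whose length does not depend on the input size.

import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    # feature_map: array of shape (C, H, W); returns a vector of length
    # C * sum(n*n for n in levels), independent of H and W.
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        for i in range(n):
            # bin boundaries along the height axis (floor start, ceil end)
            h0, h1 = int(np.floor(i * h / n)), int(np.ceil((i + 1) * h / n))
            for j in range(n):
                # bin boundaries along the width axis
                w0, w1 = int(np.floor(j * w / n)), int(np.ceil((j + 1) * w / n))
                # max response per channel within this bin
                pooled.append(feature_map[:, h0:h1, w0:w1].max(axis=(1, 2)))
    return np.concatenate(pooled)

# Feature maps of different spatial sizes produce outputs of the same length.
fa = np.random.rand(256, 13, 13)
fb = np.random.rand(256, 10, 7)
assert spatial_pyramid_pool(fa).shape == spatial_pyramid_pool(fb).shape == (256 * 21,)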

PAGES

346-361

References to SciGraph publications

  • 2014. Multi-scale Orderless Pooling of Deep Convolutional Activation Features in COMPUTER VISION – ECCV 2014
  • 2008. Kernel Codebooks for Scene Categorization in COMPUTER VISION – ECCV 2008
  • 2010. Improving the Fisher Kernel for Large-Scale Image Classification in COMPUTER VISION – ECCV 2010
Book

    TITLE

    Computer Vision – ECCV 2014

    ISBN

    978-3-319-10577-2
    978-3-319-10578-9

    Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/978-3-319-10578-9_23

    DOI

    http://dx.doi.org/10.1007/978-3-319-10578-9_23

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1030406568


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "name": [
                "Microsoft Research, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "He", 
            "givenName": "Kaiming", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Xi'an Jiaotong University", 
              "id": "https://www.grid.ac/institutes/grid.43169.39", 
              "name": [
                "Xi\u2019an Jiaotong University, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Zhang", 
            "givenName": "Xiangyu", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "name": [
                "University of Science and Technology, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ren", 
            "givenName": "Shaoqing", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "name": [
                "Microsoft Research, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Sun", 
            "givenName": "Jian", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "https://doi.org/10.1016/j.cviu.2005.09.012", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1004784969"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1162/neco.1989.1.4.541", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1008345178"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10584-0_26", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1032984348", 
              "https://doi.org/10.1007/978-3-319-10584-0_26"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-15561-1_11", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045344996", 
              "https://doi.org/10.1007/978-3-642-15561-1_11"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-88690-7_52", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1048787563", 
              "https://doi.org/10.1007/978-3-540-88690-7_52"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2014.220", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1052782426"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2014.212", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093810850"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2013.10", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093883984"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2005.177", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093997066"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2014.222", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094012327"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2006.68", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094512911"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2014.81", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094727707"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2003.1238663", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094978467"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2009.5206757", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095180230"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2010.5540018", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095506116"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2005.239", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095611654"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2009.5206848", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095689025"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.5244/c.25.76", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1099341617"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2014", 
        "datePublishedReg": "2014-01-01", 
        "description": "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g.\u00a0224\u00d7224) input image. This requirement is \u201cartificial\u201d and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, \u201cspatial pyramid pooling\u201d, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101. The power of SPP-net is more significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method computes convolutional features 30-170\u00d7 faster than the recent leading method R-CNN (and 24-64\u00d7 faster overall), while achieving better or comparable accuracy on Pascal VOC 2007.", 
        "editor": [
          {
            "familyName": "Fleet", 
            "givenName": "David", 
            "type": "Person"
          }, 
          {
            "familyName": "Pajdla", 
            "givenName": "Tomas", 
            "type": "Person"
          }, 
          {
            "familyName": "Schiele", 
            "givenName": "Bernt", 
            "type": "Person"
          }, 
          {
            "familyName": "Tuytelaars", 
            "givenName": "Tinne", 
            "type": "Person"
          }
        ], 
        "genre": "chapter", 
        "id": "sg:pub.10.1007/978-3-319-10578-9_23", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": true, 
        "isPartOf": {
          "isbn": [
            "978-3-319-10577-2", 
            "978-3-319-10578-9"
          ], 
          "name": "Computer Vision \u2013 ECCV 2014", 
          "type": "Book"
        }, 
        "name": "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition", 
        "pagination": "346-361", 
        "productId": [
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/978-3-319-10578-9_23"
            ]
          }, 
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "90da7e6dfbf6e95b050c1e78167db4bdfa484ebf23fd9f26609cc1bb2360ee52"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1030406568"
            ]
          }
        ], 
        "publisher": {
          "location": "Cham", 
          "name": "Springer International Publishing", 
          "type": "Organisation"
        }, 
        "sameAs": [
          "https://doi.org/10.1007/978-3-319-10578-9_23", 
          "https://app.dimensions.ai/details/publication/pub.1030406568"
        ], 
        "sdDataset": "chapters", 
        "sdDatePublished": "2019-04-15T17:14", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8678_00000262.jsonl", 
        "type": "Chapter", 
        "url": "http://link.springer.com/10.1007/978-3-319-10578-9_23"
      }
    ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML (license: https://scigraph.springernature.com/explorer/license/).

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-10578-9_23'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-10578-9_23'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-10578-9_23'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-10578-9_23'
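
    The same content negotiation can be scripted. The following is a minimal Python sketch, assuming the third-party requests library is installed and that the response keeps the one-element JSON-LD array shape shown above; it mirrors the Accept: application/ld+json curl command.

    import requests

    RECORD_URL = "https://scigraph.springernature.com/pub.10.1007/978-3-319-10578-9_23"

    # Request the JSON-LD serialization via content negotiation,
    # exactly as the curl example above does.
    response = requests.get(RECORD_URL,
                            headers={"Accept": "application/ld+json"},
                            timeout=30)
    response.raise_for_status()

    record = response.json()[0]  # payload is a one-element JSON-LD array (see above)
    print(record["name"])        # chapter title
    print([author["familyName"] for author in record["author"]])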


     

    This table displays all metadata directly associated with this object as RDF triples.

    160 TRIPLES      23 PREDICATES      45 URIs      20 LITERALS      8 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/978-3-319-10578-9_23 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N02d0378af2f7430f97fc984b56c75ddf
    4 schema:citation sg:pub.10.1007/978-3-319-10584-0_26
    5 sg:pub.10.1007/978-3-540-88690-7_52
    6 sg:pub.10.1007/978-3-642-15561-1_11
    7 https://doi.org/10.1016/j.cviu.2005.09.012
    8 https://doi.org/10.1109/cvpr.2005.177
    9 https://doi.org/10.1109/cvpr.2006.68
    10 https://doi.org/10.1109/cvpr.2009.5206757
    11 https://doi.org/10.1109/cvpr.2009.5206848
    12 https://doi.org/10.1109/cvpr.2010.5540018
    13 https://doi.org/10.1109/cvpr.2014.212
    14 https://doi.org/10.1109/cvpr.2014.220
    15 https://doi.org/10.1109/cvpr.2014.222
    16 https://doi.org/10.1109/cvpr.2014.81
    17 https://doi.org/10.1109/iccv.2003.1238663
    18 https://doi.org/10.1109/iccv.2005.239
    19 https://doi.org/10.1109/iccv.2013.10
    20 https://doi.org/10.1162/neco.1989.1.4.541
    21 https://doi.org/10.5244/c.25.76
    22 schema:datePublished 2014
    23 schema:datePublishedReg 2014-01-01
    24 schema:description Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101. The power of SPP-net is more significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method computes convolutional features 30-170× faster than the recent leading method R-CNN (and 24-64× faster overall), while achieving better or comparable accuracy on Pascal VOC 2007.
    25 schema:editor N67a585940c0f40aba5b0422828d00f11
    26 schema:genre chapter
    27 schema:inLanguage en
    28 schema:isAccessibleForFree true
    29 schema:isPartOf Neb068b49114c469cb5a471af6fa1be60
    30 schema:name Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
    31 schema:pagination 346-361
    32 schema:productId N0227ca05cbed47c1a3e4bb421d1af2c1
    33 N1b7461c4e4d94ceab9603e96deb9c576
    34 N317dbdd60b854e7fb3c1a44d9f1189f6
    35 schema:publisher N6f48dc9611af4c55b500e1b397034294
    36 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030406568
    37 https://doi.org/10.1007/978-3-319-10578-9_23
    38 schema:sdDatePublished 2019-04-15T17:14
    39 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    40 schema:sdPublisher N6958c8a9613f4cdaab80d52a6c7e7f0f
    41 schema:url http://link.springer.com/10.1007/978-3-319-10578-9_23
    42 sgo:license sg:explorer/license/
    43 sgo:sdDataset chapters
    44 rdf:type schema:Chapter
    45 N0227ca05cbed47c1a3e4bb421d1af2c1 schema:name readcube_id
    46 schema:value 90da7e6dfbf6e95b050c1e78167db4bdfa484ebf23fd9f26609cc1bb2360ee52
    47 rdf:type schema:PropertyValue
    48 N02d0378af2f7430f97fc984b56c75ddf rdf:first N90919eb8190747b3abc0f6dd759a1d7e
    49 rdf:rest N453803e7fd0d43e09ae2b9b225118782
    50 N0a2a9e11376b49709045864771f3ba0f rdf:first Ne07a2fbc26a643b68cad07c00fd90cef
    51 rdf:rest N9717d33cb1d44fd6975c617a1f47a516
    52 N1b7461c4e4d94ceab9603e96deb9c576 schema:name doi
    53 schema:value 10.1007/978-3-319-10578-9_23
    54 rdf:type schema:PropertyValue
    55 N275cd86a8083474eacfd7a871d904ed8 schema:name Microsoft Research, China
    56 rdf:type schema:Organization
    57 N2fdd5ec3e989475980fcdb4a1c0a301e schema:affiliation N275cd86a8083474eacfd7a871d904ed8
    58 schema:familyName Sun
    59 schema:givenName Jian
    60 rdf:type schema:Person
    61 N317dbdd60b854e7fb3c1a44d9f1189f6 schema:name dimensions_id
    62 schema:value pub.1030406568
    63 rdf:type schema:PropertyValue
    64 N3e298a2bf1934132835319c31a1c1c1b schema:familyName Tuytelaars
    65 schema:givenName Tinne
    66 rdf:type schema:Person
    67 N410864f5438140178c7aabacb2074194 rdf:first Na2c260bb4e5741d0957f9b3eff75acdc
    68 rdf:rest N0a2a9e11376b49709045864771f3ba0f
    69 N453803e7fd0d43e09ae2b9b225118782 rdf:first Need49d9e63d64c93839128eb667325ff
    70 rdf:rest N634bcc0dda50403c97e31b7f5695c63b
    71 N634bcc0dda50403c97e31b7f5695c63b rdf:first N67790c5fbe2f49d3b9127bd2b5873c8a
    72 rdf:rest Nae95e31eca28449ea15795aa2996bdc1
    73 N67790c5fbe2f49d3b9127bd2b5873c8a schema:affiliation N8788b956c1254eea909b772987decc83
    74 schema:familyName Ren
    75 schema:givenName Shaoqing
    76 rdf:type schema:Person
    77 N67a585940c0f40aba5b0422828d00f11 rdf:first Nd972692503944438aafa0f967dfe0f34
    78 rdf:rest N410864f5438140178c7aabacb2074194
    79 N6958c8a9613f4cdaab80d52a6c7e7f0f schema:name Springer Nature - SN SciGraph project
    80 rdf:type schema:Organization
    81 N6f48dc9611af4c55b500e1b397034294 schema:location Cham
    82 schema:name Springer International Publishing
    83 rdf:type schema:Organisation
    84 N8788b956c1254eea909b772987decc83 schema:name University of Science and Technology, China
    85 rdf:type schema:Organization
    86 N90919eb8190747b3abc0f6dd759a1d7e schema:affiliation Nae4cffe937634ede802da546766ba4cc
    87 schema:familyName He
    88 schema:givenName Kaiming
    89 rdf:type schema:Person
    90 N9717d33cb1d44fd6975c617a1f47a516 rdf:first N3e298a2bf1934132835319c31a1c1c1b
    91 rdf:rest rdf:nil
    92 Na2c260bb4e5741d0957f9b3eff75acdc schema:familyName Pajdla
    93 schema:givenName Tomas
    94 rdf:type schema:Person
    95 Nae4cffe937634ede802da546766ba4cc schema:name Microsoft Research, China
    96 rdf:type schema:Organization
    97 Nae95e31eca28449ea15795aa2996bdc1 rdf:first N2fdd5ec3e989475980fcdb4a1c0a301e
    98 rdf:rest rdf:nil
    99 Nd972692503944438aafa0f967dfe0f34 schema:familyName Fleet
    100 schema:givenName David
    101 rdf:type schema:Person
    102 Ne07a2fbc26a643b68cad07c00fd90cef schema:familyName Schiele
    103 schema:givenName Bernt
    104 rdf:type schema:Person
    105 Neb068b49114c469cb5a471af6fa1be60 schema:isbn 978-3-319-10577-2
    106 978-3-319-10578-9
    107 schema:name Computer Vision – ECCV 2014
    108 rdf:type schema:Book
    109 Need49d9e63d64c93839128eb667325ff schema:affiliation https://www.grid.ac/institutes/grid.43169.39
    110 schema:familyName Zhang
    111 schema:givenName Xiangyu
    112 rdf:type schema:Person
    113 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    114 schema:name Information and Computing Sciences
    115 rdf:type schema:DefinedTerm
    116 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    117 schema:name Artificial Intelligence and Image Processing
    118 rdf:type schema:DefinedTerm
    119 sg:pub.10.1007/978-3-319-10584-0_26 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032984348
    120 https://doi.org/10.1007/978-3-319-10584-0_26
    121 rdf:type schema:CreativeWork
    122 sg:pub.10.1007/978-3-540-88690-7_52 schema:sameAs https://app.dimensions.ai/details/publication/pub.1048787563
    123 https://doi.org/10.1007/978-3-540-88690-7_52
    124 rdf:type schema:CreativeWork
    125 sg:pub.10.1007/978-3-642-15561-1_11 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045344996
    126 https://doi.org/10.1007/978-3-642-15561-1_11
    127 rdf:type schema:CreativeWork
    128 https://doi.org/10.1016/j.cviu.2005.09.012 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004784969
    129 rdf:type schema:CreativeWork
    130 https://doi.org/10.1109/cvpr.2005.177 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093997066
    131 rdf:type schema:CreativeWork
    132 https://doi.org/10.1109/cvpr.2006.68 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094512911
    133 rdf:type schema:CreativeWork
    134 https://doi.org/10.1109/cvpr.2009.5206757 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095180230
    135 rdf:type schema:CreativeWork
    136 https://doi.org/10.1109/cvpr.2009.5206848 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095689025
    137 rdf:type schema:CreativeWork
    138 https://doi.org/10.1109/cvpr.2010.5540018 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095506116
    139 rdf:type schema:CreativeWork
    140 https://doi.org/10.1109/cvpr.2014.212 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093810850
    141 rdf:type schema:CreativeWork
    142 https://doi.org/10.1109/cvpr.2014.220 schema:sameAs https://app.dimensions.ai/details/publication/pub.1052782426
    143 rdf:type schema:CreativeWork
    144 https://doi.org/10.1109/cvpr.2014.222 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094012327
    145 rdf:type schema:CreativeWork
    146 https://doi.org/10.1109/cvpr.2014.81 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094727707
    147 rdf:type schema:CreativeWork
    148 https://doi.org/10.1109/iccv.2003.1238663 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094978467
    149 rdf:type schema:CreativeWork
    150 https://doi.org/10.1109/iccv.2005.239 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095611654
    151 rdf:type schema:CreativeWork
    152 https://doi.org/10.1109/iccv.2013.10 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093883984
    153 rdf:type schema:CreativeWork
    154 https://doi.org/10.1162/neco.1989.1.4.541 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008345178
    155 rdf:type schema:CreativeWork
    156 https://doi.org/10.5244/c.25.76 schema:sameAs https://app.dimensions.ai/details/publication/pub.1099341617
    157 rdf:type schema:CreativeWork
    158 https://www.grid.ac/institutes/grid.43169.39 schema:alternateName Xi'an Jiaotong University
    159 schema:name Xi’an Jiaotong University, China
    160 rdf:type schema:Organization
     



