One-Shot Object Affordance Detection in the Wild


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2022-08-08

AUTHORS

Wei Zhai, Hongchen Luo, Jing Zhang, Yang Cao, Dacheng Tao

ABSTRACT

Affordance detection refers to identifying the potential action possibilities of objects in an image, which is a crucial ability for robot perception and manipulation. To empower robots with this ability in unseen scenarios, we first study the challenging one-shot affordance detection problem in this paper, i.e., given a support image that depicts the action purpose, all objects in a scene with the common affordance should be detected. To this end, we devise a One-Shot Affordance Detection Network (OSAD-Net) that first estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images. Through collaboration learning, OSAD-Net can capture the common characteristics between objects having the same underlying affordance and learn a good adaptation capability for perceiving unseen affordances. Besides, we build a large-scale purpose-driven affordance dataset v2 (PADv2) by collecting and labeling 30k images from 39 affordance and 103 object categories. With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods and may also facilitate downstream vision tasks, such as scene understanding, action recognition, and robot manipulation. Specifically, we conducted comprehensive experiments on the PADv2 dataset, including 11 advanced models from several related research fields. Experimental results demonstrate the superiority of our model over previous representative ones in terms of both objective metrics and visual quality. The benchmark suite is available at https://github.com/lhc1224/OSAD_Net.
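The abstract outlines a two-stage protocol: estimate the action purpose from a single support image, then transfer that purpose to locate the common affordance across candidate images. The PyTorch-style sketch below only illustrates that protocol in schematic form; it is not the authors' OSAD-Net, and every module here (the placeholder backbone, the pooled purpose embedding, the 1x1 decoder) is a hypothetical stand-in.

    import torch
    import torch.nn as nn

    class OneShotAffordanceSketch(nn.Module):
        """Schematic only -- not the OSAD-Net architecture from the paper."""
        def __init__(self, feat_dim=256):
            super().__init__()
            # Placeholder feature extractor; the real model uses a full backbone.
            self.backbone = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
            # Pools support features into a single "action purpose" vector.
            self.purpose_pool = nn.AdaptiveAvgPool2d(1)
            # Predicts a per-pixel affordance map for each query image.
            self.decoder = nn.Conv2d(feat_dim, 1, kernel_size=1)

        def forward(self, support_img, query_imgs):
            purpose = self.purpose_pool(self.backbone(support_img))  # (1, C, 1, 1) purpose embedding
            query_feats = self.backbone(query_imgs)                  # (N, C, H, W) query features
            fused = query_feats * purpose                            # transfer the purpose to every query
            return torch.sigmoid(self.decoder(fused))                # (N, 1, H, W) affordance maps

    # e.g. masks = OneShotAffordanceSketch()(support, queries), with support of
    # shape (1, 3, H, W) and queries of shape (N, 3, H, W).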

PAGES

2472-2500

References to SciGraph publications

  • 2018-10-06. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation in COMPUTER VISION – ECCV 2018
  • 2016-09-17. Learning to Learn: Model Regression Networks for Easy Small Sample Learning in COMPUTER VISION – ECCV 2016
  • 2014. Predicting Actions from Static Scenes in COMPUTER VISION – ECCV 2014
  • 2008-01-01. Functional Object Class Detection Based on Learned Affordance Cues in COMPUTER VISION SYSTEMS
  • 2015-04-11. ImageNet Large Scale Visual Recognition Challenge in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
  • 2016-02-25. Attribute Based Affordance Detection from Human-Object Interaction Images in IMAGE AND VIDEO TECHNOLOGY – PSIVT 2015 WORKSHOPS
  • 2020-08-01. View Transfer on Human Skeleton Pose: Automatically Disentangle the View-Variant and View-Invariant Information for Pose Representation Learning in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2006-01-27. Markov logic networks in MACHINE LEARNING
  • 2014. Reasoning about Object Affordances in a Knowledge Base Representation in COMPUTER VISION – ECCV 2014
  • 2021-02-27. An Exploration of Embodied Visual Exploration in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2021-04-19. Polysemy Deciphering Network for Robust Human–Object Interaction Detection in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2020-11-07. Highly Efficient Salient Object Detection with 100K Parameters in COMPUTER VISION – ECCV 2020
  • 2019-07-04. Object affordance detection with relationship-aware network in NEURAL COMPUTING AND APPLICATIONS
  • 2014. Microsoft COCO: Common Objects in Context in COMPUTER VISION – ECCV 2014
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11263-022-01642-4

    DOI

    http://dx.doi.org/10.1007/s11263-022-01642-4

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1150071666


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "University of Science and Technology of China, Hefei, China", 
              "id": "http://www.grid.ac/institutes/grid.59053.3a", 
              "name": [
                "University of Science and Technology of China, Hefei, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Zhai", 
            "givenName": "Wei", 
            "id": "sg:person.016657642144.91", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016657642144.91"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Science and Technology of China, Hefei, China", 
              "id": "http://www.grid.ac/institutes/grid.59053.3a", 
              "name": [
                "University of Science and Technology of China, Hefei, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Luo", 
            "givenName": "Hongchen", 
            "id": "sg:person.010733071435.99", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010733071435.99"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "The University of Sydney, Sydney, Australia", 
              "id": "http://www.grid.ac/institutes/grid.1013.3", 
              "name": [
                "The University of Sydney, Sydney, Australia"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Zhang", 
            "givenName": "Jing", 
            "id": "sg:person.010767212714.33", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010767212714.33"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "University of Science and Technology of China, Hefei, China", 
                "Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Cao", 
            "givenName": "Yang", 
            "id": "sg:person.016566172511.31", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016566172511.31"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "JD Explore Academy, Beijing, China", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "The University of Sydney, Sydney, Australia", 
                "JD Explore Academy, Beijing, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Tao", 
            "givenName": "Dacheng", 
            "id": "sg:person.0677067025.10", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0677067025.10"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/s00521-019-04336-0", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1117764493", 
              "https://doi.org/10.1007/s00521-019-04336-0"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10994-006-5833-1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045259579", 
              "https://doi.org/10.1007/s10994-006-5833-1"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10602-1_48", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045321436", 
              "https://doi.org/10.1007/978-3-319-10602-1_48"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-021-01458-8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1137309169", 
              "https://doi.org/10.1007/s11263-021-01458-8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-021-01437-z", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1135804888", 
              "https://doi.org/10.1007/s11263-021-01437-z"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-015-0816-y", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1009767488", 
              "https://doi.org/10.1007/s11263-015-0816-y"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-79547-6_42", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1023413022", 
              "https://doi.org/10.1007/978-3-540-79547-6_42"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10605-2_27", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1018385677", 
              "https://doi.org/10.1007/978-3-319-10605-2_27"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-020-01354-7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1129825354", 
              "https://doi.org/10.1007/s11263-020-01354-7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01234-2_49", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454614", 
              "https://doi.org/10.1007/978-3-030-01234-2_49"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-30285-0_18", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1084703838", 
              "https://doi.org/10.1007/978-3-319-30285-0_18"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46466-4_37", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1009862037", 
              "https://doi.org/10.1007/978-3-319-46466-4_37"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-58539-6_42", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132395513", 
              "https://doi.org/10.1007/978-3-030-58539-6_42"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10602-1_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1024393198", 
              "https://doi.org/10.1007/978-3-319-10602-1_28"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2022-08-08", 
        "datePublishedReg": "2022-08-08", 
        "description": "Affordance detection refers to identifying the potential action possibilities of objects in an image, which is a crucial ability for robot perception and manipulation. To empower robots with this ability in unseen scenarios, we first study the challenging one-shot affordance detection problem in this paper, i.e., given a support image that depicts the action purpose, all objects in a scene with the common affordance should be detected. To this end, we devise a One-Shot Affordance Detection Network (OSAD-Net) that firstly estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images. Through collaboration learning, OSAD-Net can capture the common characteristics between objects having the same underlying affordance and learn a good adaptation capability for perceiving unseen affordances. Besides, we build a large-scale purpose-driven affordance dataset v2 (PADv2) by collecting and labeling 30k images from 39 affordance and 103 object categories. With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods and may also facilitate downstream vision tasks, such as scene understanding, action recognition, and robot manipulation. Specifically, we conducted comprehensive experiments on PADv2 dataset by including 11 advanced models from several related research fields. Experimental results demonstrate the superiority of our model over previous representative ones in terms of both objective metrics and visual quality. The benchmark suite is available at https://github.com/lhc1224/OSAD_Net.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11263-022-01642-4", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8944133", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1032807", 
            "issn": [
              "0920-5691", 
              "1573-1405"
            ], 
            "name": "International Journal of Computer Vision", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "10", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "130"
          }
        ], 
        "keywords": [
          "affordance detection", 
          "common affordances", 
          "action purposes", 
          "vision tasks", 
          "scene understanding", 
          "robot perception", 
          "support images", 
          "good adaptation capability", 
          "rich annotations", 
          "action recognition", 
          "robot manipulation", 
          "unseen scenarios", 
          "object categories", 
          "candidate images", 
          "comprehensive experiments", 
          "complex scenes", 
          "action possibilities", 
          "detection network", 
          "benchmark suite", 
          "crucial ability", 
          "visual quality", 
          "detection problem", 
          "adaptation capabilities", 
          "collaboration learning", 
          "objective metrics", 
          "test bed", 
          "affordances", 
          "detection method", 
          "images", 
          "experimental results", 
          "representative ones", 
          "research field", 
          "scene", 
          "objects", 
          "advanced models", 
          "robot", 
          "perception", 
          "annotation", 
          "dataset", 
          "task", 
          "network", 
          "ability", 
          "learning", 
          "detection", 
          "manipulation", 
          "metrics", 
          "common characteristics", 
          "scenarios", 
          "recognition", 
          "capability", 
          "superiority", 
          "suite", 
          "model", 
          "purpose", 
          "categories", 
          "understanding", 
          "quality", 
          "wild", 
          "experiments", 
          "method", 
          "V2", 
          "one", 
          "terms", 
          "end", 
          "field", 
          "problem", 
          "results", 
          "possibility", 
          "characteristics", 
          "paper", 
          "bed"
        ], 
        "name": "One-Shot Object Affordance Detection in the Wild", 
        "pagination": "2472-2500", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1150071666"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11263-022-01642-4"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11263-022-01642-4", 
          "https://app.dimensions.ai/details/publication/pub.1150071666"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-12-01T06:44", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_933.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11263-022-01642-4"
      }
    ]
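
    The record above can be consumed with any JSON tooling. A minimal sketch in Python, assuming the JSON-LD array has been saved to a local file (the filename below is hypothetical):

        import json

        # Hypothetical local copy of the JSON-LD record shown above.
        with open("pub.10.1007_s11263-022-01642-4.json", encoding="utf-8") as f:
            records = json.load(f)        # the record is a one-element JSON array

        article = records[0]
        print(article["name"])            # One-Shot Object Affordance Detection in the Wild
        print(article["datePublished"])   # 2022-08-08
        print(", ".join(a["familyName"] for a in article["author"]))
        print(len(article["citation"]), "cited SciGraph publications")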
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML (see license info).

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01642-4'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01642-4'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01642-4'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01642-4'
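
    The same content negotiation can be scripted. A minimal sketch using Python's requests package (assumed to be installed) that mirrors the four curl calls above:

        import requests

        URL = "https://scigraph.springernature.com/pub.10.1007/s11263-022-01642-4"
        FORMATS = {
            "JSON-LD": "application/ld+json",
            "N-Triples": "application/n-triples",
            "Turtle": "text/turtle",
            "RDF/XML": "application/rdf+xml",
        }

        for label, mime in FORMATS.items():
            # The Accept header selects the serialization, exactly as in the curl examples.
            resp = requests.get(URL, headers={"Accept": mime}, timeout=30)
            resp.raise_for_status()
            print(f"{label}: {len(resp.content)} bytes")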


     

    This table displays all metadata directly associated with this object as RDF triples.

    228 TRIPLES      21 PREDICATES      110 URIs      87 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11263-022-01642-4 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N3feecf865d7b411a8b67398b6ed70d79
    4 schema:citation sg:pub.10.1007/978-3-030-01234-2_49
    5 sg:pub.10.1007/978-3-030-58539-6_42
    6 sg:pub.10.1007/978-3-319-10602-1_28
    7 sg:pub.10.1007/978-3-319-10602-1_48
    8 sg:pub.10.1007/978-3-319-10605-2_27
    9 sg:pub.10.1007/978-3-319-24574-4_28
    10 sg:pub.10.1007/978-3-319-30285-0_18
    11 sg:pub.10.1007/978-3-319-46466-4_37
    12 sg:pub.10.1007/978-3-540-79547-6_42
    13 sg:pub.10.1007/s00521-019-04336-0
    14 sg:pub.10.1007/s10994-006-5833-1
    15 sg:pub.10.1007/s11263-015-0816-y
    16 sg:pub.10.1007/s11263-020-01354-7
    17 sg:pub.10.1007/s11263-021-01437-z
    18 sg:pub.10.1007/s11263-021-01458-8
    19 schema:datePublished 2022-08-08
    20 schema:datePublishedReg 2022-08-08
    21 schema:description Affordance detection refers to identifying the potential action possibilities of objects in an image, which is a crucial ability for robot perception and manipulation. To empower robots with this ability in unseen scenarios, we first study the challenging one-shot affordance detection problem in this paper, i.e., given a support image that depicts the action purpose, all objects in a scene with the common affordance should be detected. To this end, we devise a One-Shot Affordance Detection Network (OSAD-Net) that firstly estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images. Through collaboration learning, OSAD-Net can capture the common characteristics between objects having the same underlying affordance and learn a good adaptation capability for perceiving unseen affordances. Besides, we build a large-scale purpose-driven affordance dataset v2 (PADv2) by collecting and labeling 30k images from 39 affordance and 103 object categories. With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods and may also facilitate downstream vision tasks, such as scene understanding, action recognition, and robot manipulation. Specifically, we conducted comprehensive experiments on PADv2 dataset by including 11 advanced models from several related research fields. Experimental results demonstrate the superiority of our model over previous representative ones in terms of both objective metrics and visual quality. The benchmark suite is available at https://github.com/lhc1224/OSAD_Net.
    22 schema:genre article
    23 schema:isAccessibleForFree true
    24 schema:isPartOf N68a0571d85ac4777896eeecef7f82ab0
    25 Nd9ab59c7af8c4c72b63bc7070a61ba13
    26 sg:journal.1032807
    27 schema:keywords V2
    28 ability
    29 action possibilities
    30 action purposes
    31 action recognition
    32 adaptation capabilities
    33 advanced models
    34 affordance detection
    35 affordances
    36 annotation
    37 bed
    38 benchmark suite
    39 candidate images
    40 capability
    41 categories
    42 characteristics
    43 collaboration learning
    44 common affordances
    45 common characteristics
    46 complex scenes
    47 comprehensive experiments
    48 crucial ability
    49 dataset
    50 detection
    51 detection method
    52 detection network
    53 detection problem
    54 end
    55 experimental results
    56 experiments
    57 field
    58 good adaptation capability
    59 images
    60 learning
    61 manipulation
    62 method
    63 metrics
    64 model
    65 network
    66 object categories
    67 objective metrics
    68 objects
    69 one
    70 paper
    71 perception
    72 possibility
    73 problem
    74 purpose
    75 quality
    76 recognition
    77 representative ones
    78 research field
    79 results
    80 rich annotations
    81 robot
    82 robot manipulation
    83 robot perception
    84 scenarios
    85 scene
    86 scene understanding
    87 suite
    88 superiority
    89 support images
    90 task
    91 terms
    92 test bed
    93 understanding
    94 unseen scenarios
    95 vision tasks
    96 visual quality
    97 wild
    98 schema:name One-Shot Object Affordance Detection in the Wild
    99 schema:pagination 2472-2500
    100 schema:productId N415ead793a8b4692bed278e2e8d4ea62
    101 N445902cbde46449cb04af405647cf703
    102 schema:sameAs https://app.dimensions.ai/details/publication/pub.1150071666
    103 https://doi.org/10.1007/s11263-022-01642-4
    104 schema:sdDatePublished 2022-12-01T06:44
    105 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    106 schema:sdPublisher Nf640079b35b34ddca40ccc74f6c7ef34
    107 schema:url https://doi.org/10.1007/s11263-022-01642-4
    108 sgo:license sg:explorer/license/
    109 sgo:sdDataset articles
    110 rdf:type schema:ScholarlyArticle
    111 N302aced792254fb2805844522b22b236 rdf:first sg:person.010767212714.33
    112 rdf:rest N4e7bf5c7f727459fbe26e0f4d92b05d2
    113 N3feecf865d7b411a8b67398b6ed70d79 rdf:first sg:person.016657642144.91
    114 rdf:rest N6a8d6a5b0e88472e936fb672518a14c2
    115 N415ead793a8b4692bed278e2e8d4ea62 schema:name doi
    116 schema:value 10.1007/s11263-022-01642-4
    117 rdf:type schema:PropertyValue
    118 N445902cbde46449cb04af405647cf703 schema:name dimensions_id
    119 schema:value pub.1150071666
    120 rdf:type schema:PropertyValue
    121 N4e7bf5c7f727459fbe26e0f4d92b05d2 rdf:first sg:person.016566172511.31
    122 rdf:rest Nea9d285e97eb4f3392de47ae9d018730
    123 N68a0571d85ac4777896eeecef7f82ab0 schema:issueNumber 10
    124 rdf:type schema:PublicationIssue
    125 N6a8d6a5b0e88472e936fb672518a14c2 rdf:first sg:person.010733071435.99
    126 rdf:rest N302aced792254fb2805844522b22b236
    127 Nd9ab59c7af8c4c72b63bc7070a61ba13 schema:volumeNumber 130
    128 rdf:type schema:PublicationVolume
    129 Nea9d285e97eb4f3392de47ae9d018730 rdf:first sg:person.0677067025.10
    130 rdf:rest rdf:nil
    131 Nf640079b35b34ddca40ccc74f6c7ef34 schema:name Springer Nature - SN SciGraph project
    132 rdf:type schema:Organization
    133 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    134 schema:name Information and Computing Sciences
    135 rdf:type schema:DefinedTerm
    136 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    137 schema:name Artificial Intelligence and Image Processing
    138 rdf:type schema:DefinedTerm
    139 sg:grant.8944133 http://pending.schema.org/fundedItem sg:pub.10.1007/s11263-022-01642-4
    140 rdf:type schema:MonetaryGrant
    141 sg:journal.1032807 schema:issn 0920-5691
    142 1573-1405
    143 schema:name International Journal of Computer Vision
    144 schema:publisher Springer Nature
    145 rdf:type schema:Periodical
    146 sg:person.010733071435.99 schema:affiliation grid-institutes:grid.59053.3a
    147 schema:familyName Luo
    148 schema:givenName Hongchen
    149 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010733071435.99
    150 rdf:type schema:Person
    151 sg:person.010767212714.33 schema:affiliation grid-institutes:grid.1013.3
    152 schema:familyName Zhang
    153 schema:givenName Jing
    154 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010767212714.33
    155 rdf:type schema:Person
    156 sg:person.016566172511.31 schema:affiliation grid-institutes:None
    157 schema:familyName Cao
    158 schema:givenName Yang
    159 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016566172511.31
    160 rdf:type schema:Person
    161 sg:person.016657642144.91 schema:affiliation grid-institutes:grid.59053.3a
    162 schema:familyName Zhai
    163 schema:givenName Wei
    164 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016657642144.91
    165 rdf:type schema:Person
    166 sg:person.0677067025.10 schema:affiliation grid-institutes:None
    167 schema:familyName Tao
    168 schema:givenName Dacheng
    169 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0677067025.10
    170 rdf:type schema:Person
    171 sg:pub.10.1007/978-3-030-01234-2_49 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107454614
    172 https://doi.org/10.1007/978-3-030-01234-2_49
    173 rdf:type schema:CreativeWork
    174 sg:pub.10.1007/978-3-030-58539-6_42 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132395513
    175 https://doi.org/10.1007/978-3-030-58539-6_42
    176 rdf:type schema:CreativeWork
    177 sg:pub.10.1007/978-3-319-10602-1_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1024393198
    178 https://doi.org/10.1007/978-3-319-10602-1_28
    179 rdf:type schema:CreativeWork
    180 sg:pub.10.1007/978-3-319-10602-1_48 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045321436
    181 https://doi.org/10.1007/978-3-319-10602-1_48
    182 rdf:type schema:CreativeWork
    183 sg:pub.10.1007/978-3-319-10605-2_27 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018385677
    184 https://doi.org/10.1007/978-3-319-10605-2_27
    185 rdf:type schema:CreativeWork
    186 sg:pub.10.1007/978-3-319-24574-4_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017774818
    187 https://doi.org/10.1007/978-3-319-24574-4_28
    188 rdf:type schema:CreativeWork
    189 sg:pub.10.1007/978-3-319-30285-0_18 schema:sameAs https://app.dimensions.ai/details/publication/pub.1084703838
    190 https://doi.org/10.1007/978-3-319-30285-0_18
    191 rdf:type schema:CreativeWork
    192 sg:pub.10.1007/978-3-319-46466-4_37 schema:sameAs https://app.dimensions.ai/details/publication/pub.1009862037
    193 https://doi.org/10.1007/978-3-319-46466-4_37
    194 rdf:type schema:CreativeWork
    195 sg:pub.10.1007/978-3-540-79547-6_42 schema:sameAs https://app.dimensions.ai/details/publication/pub.1023413022
    196 https://doi.org/10.1007/978-3-540-79547-6_42
    197 rdf:type schema:CreativeWork
    198 sg:pub.10.1007/s00521-019-04336-0 schema:sameAs https://app.dimensions.ai/details/publication/pub.1117764493
    199 https://doi.org/10.1007/s00521-019-04336-0
    200 rdf:type schema:CreativeWork
    201 sg:pub.10.1007/s10994-006-5833-1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045259579
    202 https://doi.org/10.1007/s10994-006-5833-1
    203 rdf:type schema:CreativeWork
    204 sg:pub.10.1007/s11263-015-0816-y schema:sameAs https://app.dimensions.ai/details/publication/pub.1009767488
    205 https://doi.org/10.1007/s11263-015-0816-y
    206 rdf:type schema:CreativeWork
    207 sg:pub.10.1007/s11263-020-01354-7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1129825354
    208 https://doi.org/10.1007/s11263-020-01354-7
    209 rdf:type schema:CreativeWork
    210 sg:pub.10.1007/s11263-021-01437-z schema:sameAs https://app.dimensions.ai/details/publication/pub.1135804888
    211 https://doi.org/10.1007/s11263-021-01437-z
    212 rdf:type schema:CreativeWork
    213 sg:pub.10.1007/s11263-021-01458-8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1137309169
    214 https://doi.org/10.1007/s11263-021-01458-8
    215 rdf:type schema:CreativeWork
    216 grid-institutes:None schema:alternateName Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
    217 JD Explore Academy, Beijing, China
    218 schema:name Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
    219 JD Explore Academy, Beijing, China
    220 The University of Sydney, Sydney, Australia
    221 University of Science and Technology of China, Hefei, China
    222 rdf:type schema:Organization
    223 grid-institutes:grid.1013.3 schema:alternateName The University of Sydney, Sydney, Australia
    224 schema:name The University of Sydney, Sydney, Australia
    225 rdf:type schema:Organization
    226 grid-institutes:grid.59053.3a schema:alternateName University of Science and Technology of China, Hefei, China
    227 schema:name University of Science and Technology of China, Hefei, China
    228 rdf:type schema:Organization
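
    The triples listed above can also be loaded into an RDF library. A minimal sketch with rdflib and requests (both assumed to be installed), fetching the Turtle serialization via the content-negotiation endpoint shown earlier:

        import requests
        from rdflib import Graph

        URL = "https://scigraph.springernature.com/pub.10.1007/s11263-022-01642-4"
        resp = requests.get(URL, headers={"Accept": "text/turtle"}, timeout=30)
        resp.raise_for_status()

        # Parse the Turtle payload into an in-memory RDF graph.
        g = Graph()
        g.parse(data=resp.text, format="turtle")

        print(len(g), "triples parsed")   # expected to match the count reported above
        for s, p, o in list(g)[:5]:       # peek at a few subject/predicate/object triples
            print(s, p, o)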
     



