EAN: Event Adaptive Network for Enhanced Action Recognition


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2022-08-07

AUTHORS

Yuan Tian, Yichao Yan, Guangtao Zhai, Guodong Guo, Zhiyong Gao

ABSTRACT

Efficiently modeling spatial–temporal information in videos is crucial for action recognition. To achieve this goal, state-of-the-art methods typically employ the convolution operator and dense interaction modules such as non-local blocks. However, these methods cannot accurately fit the diverse events in videos. On the one hand, the adopted convolutions have fixed scales, and thus struggle with events of various scales. On the other hand, the dense interaction modeling paradigm achieves only sub-optimal performance, as action-irrelevant parts introduce additional noise into the final prediction. In this paper, we propose a unified action recognition framework that investigates the dynamic nature of video content through the following designs. First, when extracting local cues, we generate dynamic-scale spatial–temporal kernels to adaptively fit the diverse events. Second, to accurately aggregate these cues into a global video representation, we propose to mine the interactions only among a few selected foreground objects with a Transformer, which yields a sparse paradigm. We call the proposed framework the Event Adaptive Network because both key designs are adaptive to the input video content. To exploit the short-term motions within local segments, we further propose a novel and efficient Latent Motion Code module, which improves the performance of the framework. Extensive experiments on several large-scale video datasets, e.g., Something-Something V1 & V2, Kinetics, and Diving48, verify that our models achieve state-of-the-art or competitive performance at low FLOPs. Code is available at: https://github.com/tianyuan168326/EAN-Pytorch.
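
To make the sparse paradigm described above concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation; see the linked EAN-Pytorch repository for that). All class and parameter names are illustrative assumptions: a learned score ranks spatial–temporal tokens, only the top-k tokens (stand-ins for the selected foreground objects) interact through a Transformer encoder, and the result is pooled into a global video representation.

    # Hypothetical sketch of the "sparse paradigm": attend only among a few
    # high-scoring tokens instead of densely across all positions.
    import torch
    import torch.nn as nn

    class SparseInteractionHead(nn.Module):
        def __init__(self, dim=256, k=8, heads=4, layers=2):
            super().__init__()
            self.k = k
            self.score = nn.Linear(dim, 1)  # per-token "foreground" score
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

        def forward(self, tokens):                   # tokens: (B, N, dim)
            scores = self.score(tokens).squeeze(-1)  # (B, N)
            idx = scores.topk(self.k, dim=1).indices # (B, k) selected tokens
            idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
            selected = tokens.gather(1, idx)         # (B, k, dim)
            mixed = self.encoder(selected)           # attention among only k tokens
            return mixed.mean(dim=1)                 # (B, dim) global representation

    tokens = torch.randn(2, 196, 256)                # e.g. 14x14 tokens for 2 clips
    print(SparseInteractionHead()(tokens).shape)     # torch.Size([2, 256])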

PAGES

2453-2471

References to SciGraph publications

  • 2018-08-19. Second-order Temporal Pooling for Action Recognition in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2000-06. Visual Surveillance for Moving Vehicles in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2018-10-06. RESOUND: Towards Action Recognition Without Representation Bias in COMPUTER VISION – ECCV 2018
  • 2021-08-04. SportsCap: Monocular 3D Human Motion Capture and Fine-Grained Understanding in Challenging Sports Videos in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2021-08-18. A Coarse-to-Fine Framework for Resource Efficient Video Recognition in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2018-10-06. Temporal Relational Reasoning in Videos in COMPUTER VISION – ECCV 2018
  • 2007-01-01. A Duality Based Approach for Realtime TV-L1 Optical Flow in PATTERN RECOGNITION
  • 2016-09-17. Temporal Segment Networks: Towards Good Practices for Deep Action Recognition in COMPUTER VISION – ECCV 2016
  • 2020-11-13. Self-supervised Motion Representation via Scattering Local Motion Cues in COMPUTER VISION – ECCV 2020
  • 2018-10-07. Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification in COMPUTER VISION – ECCV 2018
  • 2018-10-09. ECO: Efficient Convolutional Network for Online Video Understanding in COMPUTER VISION – ECCV 2018
  • 2018-10-06. Videos as Space-Time Region Graphs in COMPUTER VISION – ECCV 2018
  • 2019-10-22. Semantic Image Networks for Human Action Recognition in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2020-10-10. MotionSqueeze: Neural Motion Feature Learning for Video Understanding in COMPUTER VISION – ECCV 2020
  • 2019-10-29. Deep Insights into Convolutional Networks for Video Recognition in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2016-07-19. Spatiotemporal Deformable Prototypes for Motion Anomaly Detection in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2018-12-01. Fast Abnormal Event Detection in INTERNATIONAL JOURNAL OF COMPUTER VISION
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11263-022-01661-1

    DOI

    http://dx.doi.org/10.1007/s11263-022-01661-1

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1150059588


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.16821.3c", 
              "name": [
                "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Tian", 
            "givenName": "Yuan", 
            "id": "sg:person.07512726071.18", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07512726071.18"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "AI Institute, Shanghai Jiao Tong University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.16821.3c", 
              "name": [
                "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China", 
                "AI Institute, Shanghai Jiao Tong University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Yan", 
            "givenName": "Yichao", 
            "id": "sg:person.012303042662.53", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012303042662.53"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.16821.3c", 
              "name": [
                "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Zhai", 
            "givenName": "Guangtao", 
            "id": "sg:person.014252421702.11", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014252421702.11"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.16821.3c", 
              "name": [
                "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Guo", 
            "givenName": "Guodong", 
            "id": "sg:person.012064621621.92", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012064621621.92"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.16821.3c", 
              "name": [
                "Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Gao", 
            "givenName": "Zhiyong", 
            "id": "sg:person.012355543705.54", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012355543705.54"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/s11263-021-01486-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1140185760", 
              "https://doi.org/10.1007/s11263-021-01486-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01228-1_25", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463260", 
              "https://doi.org/10.1007/978-3-030-01228-1_25"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-019-01248-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1121994193", 
              "https://doi.org/10.1007/s11263-019-01248-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-018-1111-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1106224945", 
              "https://doi.org/10.1007/s11263-018-1111-5"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01246-5_49", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454663", 
              "https://doi.org/10.1007/978-3-030-01246-5_49"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-016-0934-1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1004462818", 
              "https://doi.org/10.1007/s11263-016-0934-1"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-021-01508-1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1140499108", 
              "https://doi.org/10.1007/s11263-021-01508-1"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-018-1129-8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1110333695", 
              "https://doi.org/10.1007/s11263-018-1129-8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-019-01225-w", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1122169306", 
              "https://doi.org/10.1007/s11263-019-01225-w"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46484-8_2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1025750946", 
              "https://doi.org/10.1007/978-3-319-46484-8_2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01231-1_32", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454553", 
              "https://doi.org/10.1007/978-3-030-01231-1_32"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-58517-4_21", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1131567351", 
              "https://doi.org/10.1007/978-3-030-58517-4_21"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01267-0_19", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463400", 
              "https://doi.org/10.1007/978-3-030-01267-0_19"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-74936-3_22", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016508230", 
              "https://doi.org/10.1007/978-3-540-74936-3_22"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-58568-6_5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132581297", 
              "https://doi.org/10.1007/978-3-030-58568-6_5"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01216-8_43", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107502594", 
              "https://doi.org/10.1007/978-3-030-01216-8_43"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/a:1008155721192", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1051069481", 
              "https://doi.org/10.1023/a:1008155721192"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2022-08-07", 
        "datePublishedReg": "2022-08-07", 
        "description": "Efficiently modeling spatial\u2013temporal information in videos is crucial for action recognition. To achieve this goal, state-of-the-art methods typically employ the convolution operator and the dense interaction modules such as non-local blocks. However, these methods cannot accurately fit the diverse events in videos. On the one hand, the adopted convolutions are with fixed scales, thus struggling with events of various scales. On the other hand, the dense interaction modeling paradigm only achieves sub-optimal performance as action-irrelevant parts bring additional noises for the final prediction. In this paper, we propose a unified action recognition framework to investigate the dynamic nature of video content by introducing the following designs. First, when extracting local cues, we generate the spatial\u2013temporal kernels of dynamic-scale to adaptively fit the diverse events. Second, to accurately aggregate these cues into a global video representation, we propose to mine the interactions only among a few selected foreground objects by a Transformer, which yields a sparse paradigm. We call the proposed framework as Event Adaptive Network because both key designs are adaptive to the input video content. To exploit the short-term motions within local segments, we propose a novel and efficient Latent Motion Code module, further improving the performance of the framework. Extensive experiments on several large-scale video datasets, e.g., Something-to-Something V1 &V2, Kinetics, and Diving48, verify that our models achieve state-of-the-art or competitive performances at low FLOPs. Codes are available at: https://github.com/tianyuan168326/EAN-Pytorch.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11263-022-01661-1", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8943022", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1032807", 
            "issn": [
              "0920-5691", 
              "1573-1405"
            ], 
            "name": "International Journal of Computer Vision", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "10", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "130"
          }
        ], 
        "keywords": [
          "video content", 
          "action recognition", 
          "large-scale video datasets", 
          "global video representation", 
          "adaptive network", 
          "action recognition framework", 
          "non-local block", 
          "spatial-temporal information", 
          "short-term motion", 
          "sub-optimal performance", 
          "video dataset", 
          "video representation", 
          "recognition framework", 
          "foreground objects", 
          "Extensive experiments", 
          "code modules", 
          "lower FLOPs", 
          "art methods", 
          "final prediction", 
          "competitive performance", 
          "Something V1", 
          "interaction module", 
          "video", 
          "key design", 
          "dynamic nature", 
          "diverse events", 
          "network", 
          "framework", 
          "module", 
          "local segments", 
          "recognition", 
          "paradigm", 
          "convolution operators", 
          "additional noise", 
          "Diving48", 
          "performance", 
          "dense interactions", 
          "dataset", 
          "convolution", 
          "objects", 
          "code", 
          "kernel", 
          "design", 
          "flop", 
          "representation", 
          "information", 
          "operators", 
          "local cues", 
          "art", 
          "method", 
          "noise", 
          "goal", 
          "hand", 
          "block", 
          "prediction", 
          "cues", 
          "model", 
          "state", 
          "experiments", 
          "V2", 
          "motion", 
          "content", 
          "transformer", 
          "interaction", 
          "part", 
          "events", 
          "segments", 
          "scale", 
          "nature", 
          "V1", 
          "kinetics", 
          "paper"
        ], 
        "name": "EAN: Event Adaptive Network for Enhanced Action Recognition", 
        "pagination": "2453-2471", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1150059588"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11263-022-01661-1"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11263-022-01661-1", 
          "https://app.dimensions.ai/details/publication/pub.1150059588"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-12-01T06:45", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_950.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11263-022-01661-1"
      }
    ]
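
    Because JSON-LD is plain JSON, the record above can be inspected with nothing but the Python standard library. A minimal sketch, assuming the array shown above has been saved locally as record.json (a hypothetical filename):

    # Parse the SciGraph JSON-LD record and print a few fields.
    import json

    with open("record.json") as f:
        record = json.load(f)[0]  # the record is the array's single element

    print(record["name"])  # EAN: Event Adaptive Network for Enhanced Action Recognition
    print(record["url"])   # https://doi.org/10.1007/s11263-022-01661-1
    authors = [a["givenName"] + " " + a["familyName"] for a in record["author"]]
    print(", ".join(authors))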
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML (license: https://scigraph.springernature.com/explorer/license/).

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01661-1'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01661-1'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01661-1'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01661-1'
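
    The same content negotiation works from Python. A minimal sketch, assuming the third-party requests library is installed and that the endpoint returns the array-of-one JSON-LD shown above:

    # Mirrors the first curl command: fetch the record as JSON-LD.
    import requests

    url = "https://scigraph.springernature.com/pub.10.1007/s11263-022-01661-1"
    resp = requests.get(url, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()
    record = resp.json()[0]
    print(record["datePublished"], record["name"])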


     

    The full set of RDF triples directly associated with this record (229 triples, 21 predicates, 113 URIs, 88 literals, 6 blank nodes) restates the JSON-LD record above; use the download links or curl commands to retrieve it as N-Triples, Turtle, or RDF/XML.



