Exploring the Semi-Supervised Video Object Segmentation Problem from a Cyclic Perspective


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2022-08-06

AUTHORS

Yuxi Li, Ning Xu, Wenjie Yang, John See, Weiyao Lin

ABSTRACT

Modern video object segmentation (VOS) algorithms have achieved remarkably high performance in a sequential processing order, while most currently prevailing pipelines still show obvious inadequacies such as accumulative error, unknown robustness, or a lack of proper interpretation tools. In this paper, we place the semi-supervised video object segmentation problem into a cyclic workflow and find that the defects above can be collectively addressed via the inherent cyclic property of semi-supervised VOS systems. Firstly, a cyclic mechanism incorporated into the standard sequential flow can produce more consistent representations for pixel-wise correspondence. Relying on the accurate reference mask in the starting frame, we show that the error propagation problem can be mitigated. Next, a simple gradient correction module, which naturally extends the offline cyclic pipeline to an online manner, can highlight the high-frequency, detailed parts of the results to further improve segmentation quality while keeping the computation cost feasible. Meanwhile, such correction can protect the network from severe performance degradation caused by interference signals. Finally, we develop the cycle effective receptive field (cycle-ERF), based on the gradient correction process, to provide a new perspective for analyzing object-specific regions of interest. We conduct comprehensive comparisons and detailed analysis on the challenging DAVIS16, DAVIS17, and YouTube-VOS benchmarks, demonstrating that the cyclic mechanism helps enhance segmentation quality, improves the robustness of VOS systems, and further provides qualitative comparison and interpretation of how different VOS algorithms work. The code of this project can be found at https://github.com/lyxok1/STM-Training.
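
As a rough, purely illustrative sketch of the cyclic idea summarized in the abstract (not the authors' actual architecture), the Python snippet below propagates a reference mask forward through a clip, sends the final prediction back to the first frame, and scores the round trip against the trusted reference mask; the propagate function is a hypothetical placeholder for a learned mask-propagation model.

    import numpy as np

    def iou(a, b):
        # Intersection-over-union of two binary masks.
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union > 0 else 1.0

    def propagate(mask, frame_prev, frame_next):
        # Hypothetical stand-in for a learned mask-propagation model
        # (e.g. a matching network); it simply copies the mask so that
        # the script runs end-to-end.
        return mask

    def cycle_consistency(frames, reference_mask):
        # Forward pass: propagate the trusted reference mask frame by frame.
        masks = [reference_mask]
        for t in range(1, len(frames)):
            masks.append(propagate(masks[-1], frames[t - 1], frames[t]))
        # Cyclic (backward) pass: send the last prediction back to frame 0.
        back = masks[-1]
        for t in range(len(frames) - 1, 0, -1):
            back = propagate(back, frames[t], frames[t - 1])
        # A low score against the reference mask signals accumulated drift.
        return iou(back, reference_mask)

    frames = [np.zeros((8, 8)) for _ in range(5)]      # dummy clip
    reference_mask = np.zeros((8, 8), dtype=bool)
    reference_mask[2:5, 2:5] = True
    print(cycle_consistency(frames, reference_mask))   # 1.0 with the identity model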

PAGES

2408-2424

References to SciGraph publications

  • 2020-11-07. Fast Video Object Segmentation Using the Global Context Module in COMPUTER VISION – ECCV 2020
  • 2019-03-15. Lucid Data Dreaming for Video Object Segmentation in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2018-10-06. YouTube-VOS: Sequence-to-Sequence Video Object Segmentation in COMPUTER VISION – ECCV 2018
  • 2018-10-06. Recycle-GAN: Unsupervised Video Retargeting in COMPUTER VISION – ECCV 2018
  • 2015-04-11. ImageNet Large Scale Visual Recognition Challenge in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2019-05-25. PReMVOS: Proposal-Generation, Refinement and Merging for Video Object Segmentation in COMPUTER VISION – ACCV 2018
  • 2014-06-25. The Pascal Visual Object Classes Challenge: A Retrospective in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2020-11-17. Kernelized Memory Network for Video Object Segmentation in COMPUTER VISION – ECCV 2020
  • 2020-10-29. Chained-Tracker: Chaining Paired Attentive Regression Results for End-to-End Joint Multiple-Object Detection and Tracking in COMPUTER VISION – ECCV 2020
  • 2014. Microsoft COCO: Common Objects in Context in COMPUTER VISION – ECCV 2014
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11263-022-01655-z

    DOI

    http://dx.doi.org/10.1007/s11263-022-01655-z

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1150047961



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.16821.3c", 
              "name": [
                "Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Li", 
            "givenName": "Yuxi", 
            "id": "sg:person.011063116347.33", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011063116347.33"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Adobe Research, San Jose, USA", 
              "id": "http://www.grid.ac/institutes/grid.467212.4", 
              "name": [
                "Adobe Research, San Jose, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Xu", 
            "givenName": "Ning", 
            "id": "sg:person.014444706547.88", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014444706547.88"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Computer Science, Shanghai Jiao Tong University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.16821.3c", 
              "name": [
                "Department of Computer Science, Shanghai Jiao Tong University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Yang", 
            "givenName": "Wenjie", 
            "id": "sg:person.07463522422.84", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07463522422.84"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Heriot-Watt University Malaysia, Putrajaya, Malaysia", 
              "id": "http://www.grid.ac/institutes/grid.472615.3", 
              "name": [
                "Heriot-Watt University Malaysia, Putrajaya, Malaysia"
              ], 
              "type": "Organization"
            }, 
            "familyName": "See", 
            "givenName": "John", 
            "id": "sg:person.07737154705.41", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07737154705.41"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.16821.3c", 
              "name": [
                "Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Lin", 
            "givenName": "Weiyao", 
            "id": "sg:person.07700362000.20", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07700362000.20"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-030-58542-6_38", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132664083", 
              "https://doi.org/10.1007/978-3-030-58542-6_38"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-019-01164-6", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1112777911", 
              "https://doi.org/10.1007/s11263-019-01164-6"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10602-1_48", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045321436", 
              "https://doi.org/10.1007/978-3-319-10602-1_48"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-015-0816-y", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1009767488", 
              "https://doi.org/10.1007/s11263-015-0816-y"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-58607-2_43", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132408332", 
              "https://doi.org/10.1007/978-3-030-58607-2_43"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01228-1_36", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463272", 
              "https://doi.org/10.1007/978-3-030-01228-1_36"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01228-1_8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463289", 
              "https://doi.org/10.1007/978-3-030-01228-1_8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-58548-8_9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132130526", 
              "https://doi.org/10.1007/978-3-030-58548-8_9"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-014-0733-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017073734", 
              "https://doi.org/10.1007/s11263-014-0733-5"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-20870-7_35", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1115546776", 
              "https://doi.org/10.1007/978-3-030-20870-7_35"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2022-08-06", 
        "datePublishedReg": "2022-08-06", 
        "description": "Modern video object segmentation (VOS) algorithms have achieved remarkably high performance in a sequential processing order, while most of currently prevailing pipelines still show some obvious inadequacy like accumulative error, unknown robustness or lack of proper interpretation tools. In this paper, we place the semi-supervised video object segmentation problem into a cyclic workflow and find the defects above can be collectively addressed via the inherent cyclic property of semi-supervised VOS systems. Firstly, a cyclic mechanism incorporated to the standard sequential flow can produce more consistent representations for pixel-wise correspondance. Relying on the accurate reference mask in the starting frame, we show that the error propagation problem can be mitigated. Next, a simple gradient correction module, which naturally extends the offline cyclic pipeline to an online manner, can highlight the high-frequent and detailed part of results to further improve the segmentation quality while keeping feasible computation cost. Meanwhile such correction can protect the network from severe performance degration resulted from interference signals. Finally we develop cycle effective receptive field (cycle-ERF) based on gradient correction process to provide a new perspective into analyzing object-specific regions of interests. We conduct comprehensive comparison and detailed analysis on challenging benchmarks of DAVIS16, DAVIS17 and Youtube-VOS, demonstrating that the cyclic mechanism is helpful to enhance segmentation quality, improve the robustness of VOS systems, and further provide qualitative comparison and interpretation on how different VOS algorithms work. The code of this project can be found at https://github.com/lyxok1/STM-Training.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11263-022-01655-z", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8948152", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1032807", 
            "issn": [
              "0920-5691", 
              "1573-1405"
            ], 
            "name": "International Journal of Computer Vision", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "10", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "130"
          }
        ], 
        "keywords": [
          "accumulative error", 
          "cyclic properties", 
          "interference signals", 
          "propagation problems", 
          "high performance", 
          "correction module", 
          "correction process", 
          "comprehensive comparison", 
          "sequential flow", 
          "processing order", 
          "detailed parts", 
          "qualitative comparison", 
          "robustness", 
          "pipeline", 
          "error propagation problem", 
          "reference mask", 
          "computation cost", 
          "cyclic mechanism", 
          "detailed analysis", 
          "flow", 
          "online manner", 
          "system", 
          "cyclic workflow", 
          "consistent representation", 
          "algorithm", 
          "performance", 
          "properties", 
          "module", 
          "interpretation tools", 
          "problem", 
          "such corrections", 
          "signals", 
          "mask", 
          "error", 
          "cost", 
          "effective receptive field", 
          "comparison", 
          "degration", 
          "field", 
          "object segmentation algorithm", 
          "process", 
          "defects", 
          "order", 
          "quality", 
          "frame", 
          "challenging benchmarks", 
          "code", 
          "mechanism", 
          "segmentation quality", 
          "results", 
          "network", 
          "segmentation algorithm", 
          "benchmarks", 
          "correction", 
          "new perspective", 
          "tool", 
          "segmentation problem", 
          "analysis", 
          "project", 
          "part", 
          "region", 
          "workflow", 
          "interest", 
          "representation", 
          "manner", 
          "perspective", 
          "inadequacy", 
          "interpretation", 
          "correspondance", 
          "lack", 
          "obvious inadequacy", 
          "receptive fields", 
          "video object segmentation algorithm", 
          "object segmentation problem", 
          "paper", 
          "more consistent representation", 
          "YouTube-VOS", 
          "VOS algorithms"
        ], 
        "name": "Exploring the Semi-Supervised Video Object Segmentation Problem from a Cyclic Perspective", 
        "pagination": "2408-2424", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1150047961"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11263-022-01655-z"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11263-022-01655-z", 
          "https://app.dimensions.ai/details/publication/pub.1150047961"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-12-01T06:44", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_933.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11263-022-01655-z"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01655-z'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01655-z'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01655-z'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01655-z'
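
    For scripted access, the same content negotiation can be reproduced with Python's requests library. The snippet below is a minimal sketch of fetching the JSON-LD record shown above; the list-of-one payload structure matches the record displayed on this page.

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1007/s11263-022-01655-z"

    # Ask for the JSON-LD serialization via content negotiation, mirroring
    # the first curl example above.
    resp = requests.get(URL, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()

    record = resp.json()
    # The payload is a list containing a single record, as in the JSON-LD
    # shown above.
    article = record[0] if isinstance(record, list) else record
    print(article.get("name"))
    print(article.get("datePublished"))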


     

    This table displays all metadata directly associated with this object as RDF triples.

    213 TRIPLES      21 PREDICATES      112 URIs      94 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11263-022-01655-z schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N53c8cbfd49414a54b9d697a73ab13f96
    4 schema:citation sg:pub.10.1007/978-3-030-01228-1_36
    5 sg:pub.10.1007/978-3-030-01228-1_8
    6 sg:pub.10.1007/978-3-030-20870-7_35
    7 sg:pub.10.1007/978-3-030-58542-6_38
    8 sg:pub.10.1007/978-3-030-58548-8_9
    9 sg:pub.10.1007/978-3-030-58607-2_43
    10 sg:pub.10.1007/978-3-319-10602-1_48
    11 sg:pub.10.1007/s11263-014-0733-5
    12 sg:pub.10.1007/s11263-015-0816-y
    13 sg:pub.10.1007/s11263-019-01164-6
    14 schema:datePublished 2022-08-06
    15 schema:datePublishedReg 2022-08-06
    16 schema:description Modern video object segmentation (VOS) algorithms have achieved remarkably high performance in a sequential processing order, while most of currently prevailing pipelines still show some obvious inadequacy like accumulative error, unknown robustness or lack of proper interpretation tools. In this paper, we place the semi-supervised video object segmentation problem into a cyclic workflow and find the defects above can be collectively addressed via the inherent cyclic property of semi-supervised VOS systems. Firstly, a cyclic mechanism incorporated to the standard sequential flow can produce more consistent representations for pixel-wise correspondance. Relying on the accurate reference mask in the starting frame, we show that the error propagation problem can be mitigated. Next, a simple gradient correction module, which naturally extends the offline cyclic pipeline to an online manner, can highlight the high-frequent and detailed part of results to further improve the segmentation quality while keeping feasible computation cost. Meanwhile such correction can protect the network from severe performance degration resulted from interference signals. Finally we develop cycle effective receptive field (cycle-ERF) based on gradient correction process to provide a new perspective into analyzing object-specific regions of interests. We conduct comprehensive comparison and detailed analysis on challenging benchmarks of DAVIS16, DAVIS17 and Youtube-VOS, demonstrating that the cyclic mechanism is helpful to enhance segmentation quality, improve the robustness of VOS systems, and further provide qualitative comparison and interpretation on how different VOS algorithms work. The code of this project can be found at https://github.com/lyxok1/STM-Training.
    17 schema:genre article
    18 schema:isAccessibleForFree true
    19 schema:isPartOf N9886a8be816f49e8b97280d2463219fa
    20 Ned81d509044c49a5ad57652428b59050
    21 sg:journal.1032807
    22 schema:keywords VOS algorithms
    23 YouTube-VOS
    24 accumulative error
    25 algorithm
    26 analysis
    27 benchmarks
    28 challenging benchmarks
    29 code
    30 comparison
    31 comprehensive comparison
    32 computation cost
    33 consistent representation
    34 correction
    35 correction module
    36 correction process
    37 correspondance
    38 cost
    39 cyclic mechanism
    40 cyclic properties
    41 cyclic workflow
    42 defects
    43 degration
    44 detailed analysis
    45 detailed parts
    46 effective receptive field
    47 error
    48 error propagation problem
    49 field
    50 flow
    51 frame
    52 high performance
    53 inadequacy
    54 interest
    55 interference signals
    56 interpretation
    57 interpretation tools
    58 lack
    59 manner
    60 mask
    61 mechanism
    62 module
    63 more consistent representation
    64 network
    65 new perspective
    66 object segmentation algorithm
    67 object segmentation problem
    68 obvious inadequacy
    69 online manner
    70 order
    71 paper
    72 part
    73 performance
    74 perspective
    75 pipeline
    76 problem
    77 process
    78 processing order
    79 project
    80 propagation problems
    81 properties
    82 qualitative comparison
    83 quality
    84 receptive fields
    85 reference mask
    86 region
    87 representation
    88 results
    89 robustness
    90 segmentation algorithm
    91 segmentation problem
    92 segmentation quality
    93 sequential flow
    94 signals
    95 such corrections
    96 system
    97 tool
    98 video object segmentation algorithm
    99 workflow
    100 schema:name Exploring the Semi-Supervised Video Object Segmentation Problem from a Cyclic Perspective
    101 schema:pagination 2408-2424
    102 schema:productId N57cf7367beda42df83c530db0e323eed
    103 Ned98119ef54c45228ba5f12fe46787bd
    104 schema:sameAs https://app.dimensions.ai/details/publication/pub.1150047961
    105 https://doi.org/10.1007/s11263-022-01655-z
    106 schema:sdDatePublished 2022-12-01T06:44
    107 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    108 schema:sdPublisher Necd667a003c4454da8e246a347c578d0
    109 schema:url https://doi.org/10.1007/s11263-022-01655-z
    110 sgo:license sg:explorer/license/
    111 sgo:sdDataset articles
    112 rdf:type schema:ScholarlyArticle
    113 N22c6e3afbaeb424d95d38cbed9dcc744 rdf:first sg:person.07463522422.84
    114 rdf:rest N6cc25c95d17b4d3194ad88fcb39428c8
    115 N53c8cbfd49414a54b9d697a73ab13f96 rdf:first sg:person.011063116347.33
    116 rdf:rest N95c49a4f47fb47f3b4a9909a81948af1
    117 N57cf7367beda42df83c530db0e323eed schema:name doi
    118 schema:value 10.1007/s11263-022-01655-z
    119 rdf:type schema:PropertyValue
    120 N6cc25c95d17b4d3194ad88fcb39428c8 rdf:first sg:person.07737154705.41
    121 rdf:rest N70327f886c8c4bbaac4ad70d61017f76
    122 N70327f886c8c4bbaac4ad70d61017f76 rdf:first sg:person.07700362000.20
    123 rdf:rest rdf:nil
    124 N95c49a4f47fb47f3b4a9909a81948af1 rdf:first sg:person.014444706547.88
    125 rdf:rest N22c6e3afbaeb424d95d38cbed9dcc744
    126 N9886a8be816f49e8b97280d2463219fa schema:issueNumber 10
    127 rdf:type schema:PublicationIssue
    128 Necd667a003c4454da8e246a347c578d0 schema:name Springer Nature - SN SciGraph project
    129 rdf:type schema:Organization
    130 Ned81d509044c49a5ad57652428b59050 schema:volumeNumber 130
    131 rdf:type schema:PublicationVolume
    132 Ned98119ef54c45228ba5f12fe46787bd schema:name dimensions_id
    133 schema:value pub.1150047961
    134 rdf:type schema:PropertyValue
    135 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    136 schema:name Information and Computing Sciences
    137 rdf:type schema:DefinedTerm
    138 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    139 schema:name Artificial Intelligence and Image Processing
    140 rdf:type schema:DefinedTerm
    141 sg:grant.8948152 http://pending.schema.org/fundedItem sg:pub.10.1007/s11263-022-01655-z
    142 rdf:type schema:MonetaryGrant
    143 sg:journal.1032807 schema:issn 0920-5691
    144 1573-1405
    145 schema:name International Journal of Computer Vision
    146 schema:publisher Springer Nature
    147 rdf:type schema:Periodical
    148 sg:person.011063116347.33 schema:affiliation grid-institutes:grid.16821.3c
    149 schema:familyName Li
    150 schema:givenName Yuxi
    151 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011063116347.33
    152 rdf:type schema:Person
    153 sg:person.014444706547.88 schema:affiliation grid-institutes:grid.467212.4
    154 schema:familyName Xu
    155 schema:givenName Ning
    156 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014444706547.88
    157 rdf:type schema:Person
    158 sg:person.07463522422.84 schema:affiliation grid-institutes:grid.16821.3c
    159 schema:familyName Yang
    160 schema:givenName Wenjie
    161 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07463522422.84
    162 rdf:type schema:Person
    163 sg:person.07700362000.20 schema:affiliation grid-institutes:grid.16821.3c
    164 schema:familyName Lin
    165 schema:givenName Weiyao
    166 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07700362000.20
    167 rdf:type schema:Person
    168 sg:person.07737154705.41 schema:affiliation grid-institutes:grid.472615.3
    169 schema:familyName See
    170 schema:givenName John
    171 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07737154705.41
    172 rdf:type schema:Person
    173 sg:pub.10.1007/978-3-030-01228-1_36 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107463272
    174 https://doi.org/10.1007/978-3-030-01228-1_36
    175 rdf:type schema:CreativeWork
    176 sg:pub.10.1007/978-3-030-01228-1_8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107463289
    177 https://doi.org/10.1007/978-3-030-01228-1_8
    178 rdf:type schema:CreativeWork
    179 sg:pub.10.1007/978-3-030-20870-7_35 schema:sameAs https://app.dimensions.ai/details/publication/pub.1115546776
    180 https://doi.org/10.1007/978-3-030-20870-7_35
    181 rdf:type schema:CreativeWork
    182 sg:pub.10.1007/978-3-030-58542-6_38 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132664083
    183 https://doi.org/10.1007/978-3-030-58542-6_38
    184 rdf:type schema:CreativeWork
    185 sg:pub.10.1007/978-3-030-58548-8_9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132130526
    186 https://doi.org/10.1007/978-3-030-58548-8_9
    187 rdf:type schema:CreativeWork
    188 sg:pub.10.1007/978-3-030-58607-2_43 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132408332
    189 https://doi.org/10.1007/978-3-030-58607-2_43
    190 rdf:type schema:CreativeWork
    191 sg:pub.10.1007/978-3-319-10602-1_48 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045321436
    192 https://doi.org/10.1007/978-3-319-10602-1_48
    193 rdf:type schema:CreativeWork
    194 sg:pub.10.1007/s11263-014-0733-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017073734
    195 https://doi.org/10.1007/s11263-014-0733-5
    196 rdf:type schema:CreativeWork
    197 sg:pub.10.1007/s11263-015-0816-y schema:sameAs https://app.dimensions.ai/details/publication/pub.1009767488
    198 https://doi.org/10.1007/s11263-015-0816-y
    199 rdf:type schema:CreativeWork
    200 sg:pub.10.1007/s11263-019-01164-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112777911
    201 https://doi.org/10.1007/s11263-019-01164-6
    202 rdf:type schema:CreativeWork
    203 grid-institutes:grid.16821.3c schema:alternateName Department of Computer Science, Shanghai Jiao Tong University, Shanghai, China
    204 Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
    205 schema:name Department of Computer Science, Shanghai Jiao Tong University, Shanghai, China
    206 Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
    207 rdf:type schema:Organization
    208 grid-institutes:grid.467212.4 schema:alternateName Adobe Research, San Jose, USA
    209 schema:name Adobe Research, San Jose, USA
    210 rdf:type schema:Organization
    211 grid-institutes:grid.472615.3 schema:alternateName Heriot-Watt University Malaysia, Putrajaya, Malaysia
    212 schema:name Heriot-Watt University Malaysia, Putrajaya, Malaysia
    213 rdf:type schema:Organization
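
    To work with these triples programmatically rather than reading the table, one could fetch the N-Triples serialization (see the curl examples above) and load it into an RDF graph. The following is a sketch assuming the requests and rdflib packages are available.

    import requests
    from rdflib import Graph

    URL = "https://scigraph.springernature.com/pub.10.1007/s11263-022-01655-z"

    # Fetch the N-Triples serialization and load it into an rdflib graph.
    nt = requests.get(URL, headers={"Accept": "application/n-triples"}).text
    g = Graph()
    g.parse(data=nt, format="nt")

    print(len(g), "triples")  # should agree with the triple count reported above

    # Print a few literals of interest by matching on each predicate's local
    # name, which avoids hard-coding the exact namespace IRIs.
    wanted = {"name", "pagination", "datePublished"}
    for s, p, o in g:
        if str(p).rstrip("/").rsplit("/", 1)[-1] in wanted:
            print(p, "->", o)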
     



