Lucid Data Dreaming for Video Object Segmentation


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2019-03-15

AUTHORS

Anna Khoreva, Rodrigo Benenson, Eddy Ilg, Thomas Brox, Bernt Schiele

ABSTRACT

Convolutional networks reach top quality in pixel-level video object segmentation but require a large amount of training data (1k–100k) to deliver such results. We propose a new training strategy which achieves state-of-the-art results across three evaluation datasets while using 20×–1000× less annotated data than competing methods. Our approach is suitable for both single and multiple object segmentation. Instead of using large training sets hoping to generalize across domains, we generate in-domain training data using the provided annotation on the first frame of each video to synthesize—“lucid dream” (in a lucid dream the sleeper is aware that he or she is dreaming and is sometimes able to control the course of the dream)—plausible future video frames. In-domain per-video training data allows us to train high quality appearance- and motion-based models, as well as tune the post-processing stage. This approach allows to reach competitive results even when training from only a single annotated frame, without ImageNet pre-training. Our results indicate that using a larger training set is not automatically better, and that for the video object segmentation task a smaller training set that is closer to the target domain is more effective. This changes the mindset regarding how many training samples and general “objectness” knowledge are required for the video object segmentation task.

PAGES

1175-1197

References to SciGraph publications

  • 2016-11-03. Fully-Convolutional Siamese Networks for Object Tracking in COMPUTER VISION – ECCV 2016 WORKSHOPS
  • 2016-11-03. The Visual Object Tracking VOT2016 Challenge Results in COMPUTER VISION – ECCV 2016 WORKSHOPS
  • 2014. Supervoxel-Consistent Foreground Propagation in Video in COMPUTER VISION – ECCV 2014
  • 2016-09-17. Learning to Track at 100 FPS with Deep Regression Networks in COMPUTER VISION – ECCV 2016
  • 2016-09-17. Normalized Cut Meets MRF in COMPUTER VISION – ECCV 2016
  • 2016-11-24. Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness in COMPUTER VISION – ECCV 2016 WORKSHOPS
  • 2014-06-25. The Pascal Visual Object Classes Challenge: A Retrospective in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2015-03-20. The Visual Object Tracking VOT2014 Challenge Results in COMPUTER VISION - ECCV 2014 WORKSHOPS
  • 2016-09-17. Playing for Data: Ground Truth from Computer Games in COMPUTER VISION – ECCV 2016
  • 2012. Exploiting the Circulant Structure of Tracking-by-Detection with Kernels in COMPUTER VISION – ECCV 2012
  • 2019-05-25. PReMVOS: Proposal-Generation, Refinement and Merging for Video Object Segmentation in COMPUTER VISION – ACCV 2018
  • 2018-10-06. Pyramid Dilated Deeper ConvLSTM for Video Salient Object Detection in COMPUTER VISION – ECCV 2018

Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11263-019-01164-6

    DOI

    http://dx.doi.org/10.1007/s11263-019-01164-6

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1112777911


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Max Planck Institute for Informatics, Saarbr\u00fccken, Germany", 
              "id": "http://www.grid.ac/institutes/grid.419528.3", 
              "name": [
                "Max Planck Institute for Informatics, Saarbr\u00fccken, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Khoreva", 
            "givenName": "Anna", 
            "id": "sg:person.012166735257.23", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012166735257.23"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Google, Menlo Park, USA", 
              "id": "http://www.grid.ac/institutes/grid.420451.6", 
              "name": [
                "Google, Menlo Park, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Benenson", 
            "givenName": "Rodrigo", 
            "id": "sg:person.015610367365.26", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015610367365.26"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Freiburg, Freiburg, Germany", 
              "id": "http://www.grid.ac/institutes/grid.5963.9", 
              "name": [
                "University of Freiburg, Freiburg, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ilg", 
            "givenName": "Eddy", 
            "id": "sg:person.014016531047.11", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014016531047.11"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Freiburg, Freiburg, Germany", 
              "id": "http://www.grid.ac/institutes/grid.5963.9", 
              "name": [
                "University of Freiburg, Freiburg, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Brox", 
            "givenName": "Thomas", 
            "id": "sg:person.012443225372.65", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Max Planck Institute for Informatics, Saarbr\u00fccken, Germany", 
              "id": "http://www.grid.ac/institutes/grid.419528.3", 
              "name": [
                "Max Planck Institute for Informatics, Saarbr\u00fccken, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Schiele", 
            "givenName": "Bernt", 
            "id": "sg:person.01174260421.90", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01174260421.90"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-48881-3_54", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1028486103", 
              "https://doi.org/10.1007/978-3-319-48881-3_54"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46475-6_46", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1026224019", 
              "https://doi.org/10.1007/978-3-319-46475-6_46"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-20870-7_35", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1115546776", 
              "https://doi.org/10.1007/978-3-030-20870-7_35"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-33765-9_50", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1039884592", 
              "https://doi.org/10.1007/978-3-642-33765-9_50"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-16181-5_14", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1010495712", 
              "https://doi.org/10.1007/978-3-319-16181-5_14"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10593-2_43", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1051925929", 
              "https://doi.org/10.1007/978-3-319-10593-2_43"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46448-0_45", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1008624618", 
              "https://doi.org/10.1007/978-3-319-46448-0_45"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46475-6_7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1025415319", 
              "https://doi.org/10.1007/978-3-319-46475-6_7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-48881-3_56", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1043811270", 
              "https://doi.org/10.1007/978-3-319-48881-3_56"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01252-6_44", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454757", 
              "https://doi.org/10.1007/978-3-030-01252-6_44"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-49409-8_1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1043442270", 
              "https://doi.org/10.1007/978-3-319-49409-8_1"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-014-0733-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017073734", 
              "https://doi.org/10.1007/s11263-014-0733-5"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-03-15", 
        "datePublishedReg": "2019-03-15", 
        "description": "Convolutional networks reach top quality in pixel-level video object segmentation but require a large amount of training data (1k\u2013100k) to deliver such results. We propose a new training strategy which achieves state-of-the-art results across three evaluation datasets while using 20\u00d7\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym}\n\t\t\t\t\\usepackage{amsfonts}\n\t\t\t\t\\usepackage{amssymb}\n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$20\\,\\times $$\\end{document}\u20131000\u00d7\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym}\n\t\t\t\t\\usepackage{amsfonts}\n\t\t\t\t\\usepackage{amssymb}\n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$1000\\,\\times $$\\end{document} less annotated data than competing methods. Our approach is suitable for both single and multiple object segmentation. Instead of using large training sets hoping to generalize across domains, we generate in-domain training data using the provided annotation on the first frame of each video to synthesize\u2014\u201clucid dream\u201d (in a lucid dream the sleeper is aware that he or she is dreaming and is sometimes able to control the course of the dream)\u2014plausible future video frames. In-domain per-video training data allows us to train high quality appearance- and motion-based models, as well as tune the post-processing stage. This approach allows to reach competitive results even when training from only a single annotated frame, without ImageNet pre-training. Our results indicate that using a larger training set is not automatically better, and that for the video object segmentation task a smaller training set that is closer to the target domain is more effective. This changes the mindset regarding how many training samples and general \u201cobjectness\u201d knowledge are required for the video object segmentation task.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11263-019-01164-6", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1032807", 
            "issn": [
              "0920-5691", 
              "1573-1405"
            ], 
            "name": "International Journal of Computer Vision", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "9", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "127"
          }
        ], 
        "keywords": [
          "video object segmentation task", 
          "video object segmentation", 
          "object segmentation task", 
          "large training set", 
          "object segmentation", 
          "training data", 
          "segmentation task", 
          "training set", 
          "video training data", 
          "future video frames", 
          "domain training data", 
          "multiple object segmentation", 
          "small training set", 
          "new training strategy", 
          "convolutional network", 
          "video frames", 
          "post-processing stage", 
          "art results", 
          "target domain", 
          "competitive results", 
          "first frame", 
          "motion-based models", 
          "evaluation dataset", 
          "training samples", 
          "quality appearance", 
          "high-quality appearance", 
          "segmentation", 
          "training strategy", 
          "task", 
          "large amount", 
          "set", 
          "ImageNet", 
          "video", 
          "frame", 
          "dataset", 
          "annotation", 
          "network", 
          "domain", 
          "data", 
          "top quality", 
          "results", 
          "quality", 
          "knowledge", 
          "model", 
          "method", 
          "Such results", 
          "tune", 
          "strategies", 
          "amount", 
          "state", 
          "mindset", 
          "dreams", 
          "stage", 
          "appearance", 
          "course", 
          "lucid dreams", 
          "samples", 
          "dreaming", 
          "sleepers", 
          "approach"
        ], 
        "name": "Lucid Data Dreaming for Video Object Segmentation", 
        "pagination": "1175-1197", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1112777911"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11263-019-01164-6"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11263-019-01164-6", 
          "https://app.dimensions.ai/details/publication/pub.1112777911"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-10-01T06:45", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221001/entities/gbq_results/article/article_817.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11263-019-01164-6"
      }
    ]
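
    The JSON-LD record above is plain JSON, so its bibliographic fields can be pulled out with the Python standard library alone. The sketch below is only an illustration, assuming the record has been saved locally as record.json (a hypothetical filename); the keys it reads are the ones visible in the block above.

    import json

    # Load the SciGraph JSON-LD record (assumed saved as record.json from the block above).
    with open("record.json") as f:
        record = json.load(f)[0]  # the record is wrapped in a one-element array

    # Basic bibliographic fields, using the keys shown in the JSON-LD above.
    print("Title:  ", record["name"])
    print("Date:   ", record["datePublished"])
    print("Pages:  ", record["pagination"])
    print("Authors:", ", ".join(
        f'{a["givenName"]} {a["familyName"]}' for a in record["author"]
    ))

    # The DOI is stored under productId as a name/value pair.
    doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")
    print("DOI:    ", doi)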
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-019-01164-6'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-019-01164-6'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-019-01164-6'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-019-01164-6'
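
    For scripted access, the same content negotiation can be done from Python. This is only a sketch assuming the third-party requests library is installed; it uses the same endpoint URL and Accept headers as the curl commands above.

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1007/s11263-019-01164-6"

    # One Accept header per serialization, mirroring the curl examples above.
    FORMATS = {
        "json-ld": "application/ld+json",
        "n-triples": "application/n-triples",
        "turtle": "text/turtle",
        "rdf-xml": "application/rdf+xml",
    }

    for label, accept in FORMATS.items():
        resp = requests.get(URL, headers={"Accept": accept})
        resp.raise_for_status()
        print(f"{label}: {len(resp.text)} characters received")

    # For JSON-LD specifically, the response body parses directly as JSON.
    record = requests.get(URL, headers={"Accept": "application/ld+json"}).json()[0]
    print(record["name"])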


     

    This table displays all metadata directly associated with this object as RDF triples.

    199 TRIPLES      21 PREDICATES      96 URIs      76 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11263-019-01164-6 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Na3ffcfcc46994bd6aae400179061571a
    4 schema:citation sg:pub.10.1007/978-3-030-01252-6_44
    5 sg:pub.10.1007/978-3-030-20870-7_35
    6 sg:pub.10.1007/978-3-319-10593-2_43
    7 sg:pub.10.1007/978-3-319-16181-5_14
    8 sg:pub.10.1007/978-3-319-46448-0_45
    9 sg:pub.10.1007/978-3-319-46475-6_46
    10 sg:pub.10.1007/978-3-319-46475-6_7
    11 sg:pub.10.1007/978-3-319-48881-3_54
    12 sg:pub.10.1007/978-3-319-48881-3_56
    13 sg:pub.10.1007/978-3-319-49409-8_1
    14 sg:pub.10.1007/978-3-642-33765-9_50
    15 sg:pub.10.1007/s11263-014-0733-5
    16 schema:datePublished 2019-03-15
    17 schema:datePublishedReg 2019-03-15
    18 schema:description Convolutional networks reach top quality in pixel-level video object segmentation but require a large amount of training data (1k–100k) to deliver such results. We propose a new training strategy which achieves state-of-the-art results across three evaluation datasets while using 20×–1000× less annotated data than competing methods. Our approach is suitable for both single and multiple object segmentation. Instead of using large training sets hoping to generalize across domains, we generate in-domain training data using the provided annotation on the first frame of each video to synthesize—“lucid dream” (in a lucid dream the sleeper is aware that he or she is dreaming and is sometimes able to control the course of the dream)—plausible future video frames. In-domain per-video training data allows us to train high quality appearance- and motion-based models, as well as tune the post-processing stage. This approach allows to reach competitive results even when training from only a single annotated frame, without ImageNet pre-training. Our results indicate that using a larger training set is not automatically better, and that for the video object segmentation task a smaller training set that is closer to the target domain is more effective. This changes the mindset regarding how many training samples and general “objectness” knowledge are required for the video object segmentation task.
    19 schema:genre article
    20 schema:isAccessibleForFree true
    21 schema:isPartOf N4d99146b862e4344b4829762411e433c
    22 N9f36cd8cccb24855b244bb5916757c79
    23 sg:journal.1032807
    24 schema:keywords ImageNet
    25 Such results
    26 amount
    27 annotation
    28 appearance
    29 approach
    30 art results
    31 competitive results
    32 convolutional network
    33 course
    34 data
    35 dataset
    36 domain
    37 domain training data
    38 dreaming
    39 dreams
    40 evaluation dataset
    41 first frame
    42 frame
    43 future video frames
    44 high-quality appearance
    45 knowledge
    46 large amount
    47 large training set
    48 lucid dreams
    49 method
    50 mindset
    51 model
    52 motion-based models
    53 multiple object segmentation
    54 network
    55 new training strategy
    56 object segmentation
    57 object segmentation task
    58 post-processing stage
    59 quality
    60 quality appearance
    61 results
    62 samples
    63 segmentation
    64 segmentation task
    65 set
    66 sleepers
    67 small training set
    68 stage
    69 state
    70 strategies
    71 target domain
    72 task
    73 top quality
    74 training data
    75 training samples
    76 training set
    77 training strategy
    78 tune
    79 video
    80 video frames
    81 video object segmentation
    82 video object segmentation task
    83 video training data
    84 schema:name Lucid Data Dreaming for Video Object Segmentation
    85 schema:pagination 1175-1197
    86 schema:productId N458dc9b4fafb427a81c40a263879d343
    87 N65c838bf5e7a404da845505a7c7acc83
    88 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112777911
    89 https://doi.org/10.1007/s11263-019-01164-6
    90 schema:sdDatePublished 2022-10-01T06:45
    91 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    92 schema:sdPublisher N16277082a45f45f1b5d79578d20e439d
    93 schema:url https://doi.org/10.1007/s11263-019-01164-6
    94 sgo:license sg:explorer/license/
    95 sgo:sdDataset articles
    96 rdf:type schema:ScholarlyArticle
    97 N16277082a45f45f1b5d79578d20e439d schema:name Springer Nature - SN SciGraph project
    98 rdf:type schema:Organization
    99 N423d373b519c462a9c6f95bd2d5ee7c7 rdf:first sg:person.012443225372.65
    100 rdf:rest N8be5d808dcd04ab2b2581a515ddbe43a
    101 N458dc9b4fafb427a81c40a263879d343 schema:name dimensions_id
    102 schema:value pub.1112777911
    103 rdf:type schema:PropertyValue
    104 N4d99146b862e4344b4829762411e433c schema:issueNumber 9
    105 rdf:type schema:PublicationIssue
    106 N65c838bf5e7a404da845505a7c7acc83 schema:name doi
    107 schema:value 10.1007/s11263-019-01164-6
    108 rdf:type schema:PropertyValue
    109 N8be5d808dcd04ab2b2581a515ddbe43a rdf:first sg:person.01174260421.90
    110 rdf:rest rdf:nil
    111 N9f36cd8cccb24855b244bb5916757c79 schema:volumeNumber 127
    112 rdf:type schema:PublicationVolume
    113 Na3ffcfcc46994bd6aae400179061571a rdf:first sg:person.012166735257.23
    114 rdf:rest Nedbb6f6710aa42a885a732e14989aef6
    115 Nab12257dd99c461bb862a5e85bcde378 rdf:first sg:person.014016531047.11
    116 rdf:rest N423d373b519c462a9c6f95bd2d5ee7c7
    117 Nedbb6f6710aa42a885a732e14989aef6 rdf:first sg:person.015610367365.26
    118 rdf:rest Nab12257dd99c461bb862a5e85bcde378
    119 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    120 schema:name Information and Computing Sciences
    121 rdf:type schema:DefinedTerm
    122 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    123 schema:name Artificial Intelligence and Image Processing
    124 rdf:type schema:DefinedTerm
    125 sg:journal.1032807 schema:issn 0920-5691
    126 1573-1405
    127 schema:name International Journal of Computer Vision
    128 schema:publisher Springer Nature
    129 rdf:type schema:Periodical
    130 sg:person.01174260421.90 schema:affiliation grid-institutes:grid.419528.3
    131 schema:familyName Schiele
    132 schema:givenName Bernt
    133 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01174260421.90
    134 rdf:type schema:Person
    135 sg:person.012166735257.23 schema:affiliation grid-institutes:grid.419528.3
    136 schema:familyName Khoreva
    137 schema:givenName Anna
    138 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012166735257.23
    139 rdf:type schema:Person
    140 sg:person.012443225372.65 schema:affiliation grid-institutes:grid.5963.9
    141 schema:familyName Brox
    142 schema:givenName Thomas
    143 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65
    144 rdf:type schema:Person
    145 sg:person.014016531047.11 schema:affiliation grid-institutes:grid.5963.9
    146 schema:familyName Ilg
    147 schema:givenName Eddy
    148 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014016531047.11
    149 rdf:type schema:Person
    150 sg:person.015610367365.26 schema:affiliation grid-institutes:grid.420451.6
    151 schema:familyName Benenson
    152 schema:givenName Rodrigo
    153 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015610367365.26
    154 rdf:type schema:Person
    155 sg:pub.10.1007/978-3-030-01252-6_44 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107454757
    156 https://doi.org/10.1007/978-3-030-01252-6_44
    157 rdf:type schema:CreativeWork
    158 sg:pub.10.1007/978-3-030-20870-7_35 schema:sameAs https://app.dimensions.ai/details/publication/pub.1115546776
    159 https://doi.org/10.1007/978-3-030-20870-7_35
    160 rdf:type schema:CreativeWork
    161 sg:pub.10.1007/978-3-319-10593-2_43 schema:sameAs https://app.dimensions.ai/details/publication/pub.1051925929
    162 https://doi.org/10.1007/978-3-319-10593-2_43
    163 rdf:type schema:CreativeWork
    164 sg:pub.10.1007/978-3-319-16181-5_14 schema:sameAs https://app.dimensions.ai/details/publication/pub.1010495712
    165 https://doi.org/10.1007/978-3-319-16181-5_14
    166 rdf:type schema:CreativeWork
    167 sg:pub.10.1007/978-3-319-46448-0_45 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008624618
    168 https://doi.org/10.1007/978-3-319-46448-0_45
    169 rdf:type schema:CreativeWork
    170 sg:pub.10.1007/978-3-319-46475-6_46 schema:sameAs https://app.dimensions.ai/details/publication/pub.1026224019
    171 https://doi.org/10.1007/978-3-319-46475-6_46
    172 rdf:type schema:CreativeWork
    173 sg:pub.10.1007/978-3-319-46475-6_7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1025415319
    174 https://doi.org/10.1007/978-3-319-46475-6_7
    175 rdf:type schema:CreativeWork
    176 sg:pub.10.1007/978-3-319-48881-3_54 schema:sameAs https://app.dimensions.ai/details/publication/pub.1028486103
    177 https://doi.org/10.1007/978-3-319-48881-3_54
    178 rdf:type schema:CreativeWork
    179 sg:pub.10.1007/978-3-319-48881-3_56 schema:sameAs https://app.dimensions.ai/details/publication/pub.1043811270
    180 https://doi.org/10.1007/978-3-319-48881-3_56
    181 rdf:type schema:CreativeWork
    182 sg:pub.10.1007/978-3-319-49409-8_1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1043442270
    183 https://doi.org/10.1007/978-3-319-49409-8_1
    184 rdf:type schema:CreativeWork
    185 sg:pub.10.1007/978-3-642-33765-9_50 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039884592
    186 https://doi.org/10.1007/978-3-642-33765-9_50
    187 rdf:type schema:CreativeWork
    188 sg:pub.10.1007/s11263-014-0733-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017073734
    189 https://doi.org/10.1007/s11263-014-0733-5
    190 rdf:type schema:CreativeWork
    191 grid-institutes:grid.419528.3 schema:alternateName Max Planck Institute for Informatics, Saarbrücken, Germany
    192 schema:name Max Planck Institute for Informatics, Saarbrücken, Germany
    193 rdf:type schema:Organization
    194 grid-institutes:grid.420451.6 schema:alternateName Google, Menlo Park, USA
    195 schema:name Google, Menlo Park, USA
    196 rdf:type schema:Organization
    197 grid-institutes:grid.5963.9 schema:alternateName University of Freiburg, Freiburg, Germany
    198 schema:name University of Freiburg, Freiburg, Germany
    199 rdf:type schema:Organization
     



