What Makes Good Synthetic Training Data for Learning Disparity and Optical Flow Estimation?


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2018-04-02

AUTHORS

Nikolaus Mayer, Eddy Ilg, Philipp Fischer, Caner Hazirbas, Daniel Cremers, Alexey Dosovitskiy, Thomas Brox

ABSTRACT

The finding that very large networks can be trained efficiently and reliably has led to a paradigm shift in computer vision from engineered solutions to learning formulations. As a result, the research challenge shifts from devising algorithms to creating suitable and abundant training data for supervised learning. How to efficiently create such training data? The dominant data acquisition method in visual recognition is based on web data and manual annotation. Yet, for many computer vision problems, such as stereo or optical flow estimation, this approach is not feasible because humans cannot manually enter a pixel-accurate flow field. In this paper, we promote the use of synthetically generated data for the purpose of training deep networks on such tasks. We suggest multiple ways to generate such data and evaluate the influence of dataset properties on the performance and generalization properties of the resulting networks. We also demonstrate the benefit of learning schedules that use different types of data at selected stages of the training process.

PAGES

942-960

References to SciGraph publications

  • 2012. Lessons and Insights from Creating a Synthetic Optical Flow Benchmark in COMPUTER VISION – ECCV 2012. WORKSHOPS AND DEMONSTRATIONS
  • 2004. High Accuracy Optical Flow Estimation Based on a Theory for Warping in COMPUTER VISION - ECCV 2004
  • 2016-11-24. How Useful Is Photo-Realistic Rendering for Visual Learning? in COMPUTER VISION – ECCV 2016 WORKSHOPS
  • 2014-10-15. High-Resolution Stereo Datasets with Subpixel-Accurate Ground Truth in PATTERN RECOGNITION
  • 1994-02. Performance of optical flow techniques in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2016-09-17. Playing for Data: Ground Truth from Computer Games in COMPUTER VISION – ECCV 2016
  • 2012. Indoor Segmentation and Support Inference from RGBD Images in COMPUTER VISION – ECCV 2012
  • 2010-11-30. A Database and Evaluation Methodology for Optical Flow in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2012. A Naturalistic Open Source Movie for Optical Flow Evaluation in COMPUTER VISION – ECCV 2012
  • 2016-11-24. UnrealCV: Connecting Computer Vision to Unreal Engine in COMPUTER VISION – ECCV 2016 WORKSHOPS
  • 2014. Microsoft COCO: Common Objects in Context in COMPUTER VISION – ECCV 2014
IDENTIFIERS

    URI

    http://scigraph.springernature.com/pub.10.1007/s11263-018-1082-6

    DOI

    http://dx.doi.org/10.1007/s11263-018-1082-6

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1101888770



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "University of Freiburg, Freiburg im Breisgau, Germany", 
              "id": "http://www.grid.ac/institutes/grid.5963.9", 
              "name": [
                "University of Freiburg, Freiburg im Breisgau, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Mayer", 
            "givenName": "Nikolaus", 
            "id": "sg:person.011011653263.83", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011011653263.83"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Freiburg, Freiburg im Breisgau, Germany", 
              "id": "http://www.grid.ac/institutes/grid.5963.9", 
              "name": [
                "University of Freiburg, Freiburg im Breisgau, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ilg", 
            "givenName": "Eddy", 
            "id": "sg:person.014016531047.11", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014016531047.11"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Freiburg, Freiburg im Breisgau, Germany", 
              "id": "http://www.grid.ac/institutes/grid.5963.9", 
              "name": [
                "University of Freiburg, Freiburg im Breisgau, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Fischer", 
            "givenName": "Philipp", 
            "id": "sg:person.012106015125.15", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012106015125.15"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Technical University of Munich, Munich, Germany", 
              "id": "http://www.grid.ac/institutes/grid.6936.a", 
              "name": [
                "Technical University of Munich, Munich, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Hazirbas", 
            "givenName": "Caner", 
            "id": "sg:person.014671603627.71", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014671603627.71"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Technical University of Munich, Munich, Germany", 
              "id": "http://www.grid.ac/institutes/grid.6936.a", 
              "name": [
                "Technical University of Munich, Munich, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Cremers", 
            "givenName": "Daniel", 
            "id": "sg:person.010575005661.04", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010575005661.04"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Freiburg, Freiburg im Breisgau, Germany", 
              "id": "http://www.grid.ac/institutes/grid.5963.9", 
              "name": [
                "University of Freiburg, Freiburg im Breisgau, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Dosovitskiy", 
            "givenName": "Alexey", 
            "id": "sg:person.011726376703.15", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011726376703.15"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Freiburg, Freiburg im Breisgau, Germany", 
              "id": "http://www.grid.ac/institutes/grid.5963.9", 
              "name": [
                "University of Freiburg, Freiburg im Breisgau, Germany"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Brox", 
            "givenName": "Thomas", 
            "id": "sg:person.012443225372.65", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/s11263-010-0390-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1034215603", 
              "https://doi.org/10.1007/s11263-010-0390-2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-49409-8_18", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1090663472", 
              "https://doi.org/10.1007/978-3-319-49409-8_18"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-33868-7_17", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1023589976", 
              "https://doi.org/10.1007/978-3-642-33868-7_17"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-49409-8_75", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1086290585", 
              "https://doi.org/10.1007/978-3-319-49409-8_75"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-33715-4_54", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1053469442", 
              "https://doi.org/10.1007/978-3-642-33715-4_54"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-24673-2_3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045812409", 
              "https://doi.org/10.1007/978-3-540-24673-2_3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-11752-2_3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1046402464", 
              "https://doi.org/10.1007/978-3-319-11752-2_3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46475-6_7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1025415319", 
              "https://doi.org/10.1007/978-3-319-46475-6_7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10602-1_48", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1045321436", 
              "https://doi.org/10.1007/978-3-319-10602-1_48"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-33783-3_44", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1004909083", 
              "https://doi.org/10.1007/978-3-642-33783-3_44"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf01420984", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021499342", 
              "https://doi.org/10.1007/bf01420984"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2018-04-02", 
        "datePublishedReg": "2018-04-02", 
        "description": "The finding that very large networks can be trained efficiently and reliably has led to a paradigm shift in computer vision from engineered solutions to learning formulations. As a result, the research challenge shifts from devising algorithms to creating suitable and abundant training data for supervised learning. How to efficiently create such training data? The dominant data acquisition method in visual recognition is based on web data and manual annotation. Yet, for many computer vision problems, such as stereo or optical flow estimation, this approach is not feasible because humans cannot manually enter a pixel-accurate flow field. In this paper, we promote the use of synthetically generated data for the purpose of training deep networks on such tasks. We suggest multiple ways to generate such data and evaluate the influence of dataset properties on the performance and generalization properties of the resulting networks. We also demonstrate the benefit of learning schedules that use different types of data at selected stages of the training process.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11263-018-1082-6", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.5051078", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1032807", 
            "issn": [
              "0920-5691", 
              "1573-1405"
            ], 
            "name": "International Journal of Computer Vision", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "9", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "126"
          }
        ], 
        "keywords": [
          "training data", 
          "optical flow estimation", 
          "computer vision problems", 
          "synthetic training data", 
          "abundant training data", 
          "such training data", 
          "flow estimation", 
          "web data", 
          "computer vision", 
          "deep network", 
          "supervised learning", 
          "manual annotation", 
          "data acquisition methods", 
          "vision problems", 
          "dataset properties", 
          "generalization properties", 
          "training process", 
          "large networks", 
          "such tasks", 
          "visual recognition", 
          "learning disparity", 
          "acquisition method", 
          "network", 
          "such data", 
          "paradigm shift", 
          "algorithm", 
          "annotation", 
          "task", 
          "learning", 
          "vision", 
          "recognition", 
          "estimation", 
          "data", 
          "multiple ways", 
          "different types", 
          "performance", 
          "solution", 
          "way", 
          "schedule", 
          "method", 
          "benefits", 
          "process", 
          "field", 
          "use", 
          "purpose", 
          "results", 
          "humans", 
          "formulation", 
          "types", 
          "stage", 
          "properties", 
          "shift", 
          "flow field", 
          "disparities", 
          "influence", 
          "findings", 
          "paper", 
          "problem", 
          "approach"
        ], 
        "name": "What Makes Good Synthetic Training Data for Learning Disparity and Optical Flow Estimation?", 
        "pagination": "942-960", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1101888770"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11263-018-1082-6"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11263-018-1082-6", 
          "https://app.dimensions.ai/details/publication/pub.1101888770"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-11-24T21:03", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221124/entities/gbq_results/article/article_758.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11263-018-1082-6"
      }
    ]
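As a quick illustration of reading this record programmatically, the sketch below parses an abbreviated excerpt of the JSON-LD above with Python's standard `json` module. The field names (`name`, `datePublished`, `author`, `productId`) are taken directly from the record; the excerpt reproduces only a few fields for brevity, so it is not the complete record.

```python
import json

# Abbreviated excerpt of the SciGraph JSON-LD record shown above
# (most fields and authors omitted for brevity).
record_jsonld = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "What Makes Good Synthetic Training Data for Learning Disparity and Optical Flow Estimation?",
    "datePublished": "2018-04-02",
    "author": [
      {"familyName": "Mayer", "givenName": "Nikolaus", "type": "Person"},
      {"familyName": "Ilg", "givenName": "Eddy", "type": "Person"}
    ],
    "productId": [
      {"name": "doi", "type": "PropertyValue", "value": ["10.1007/s11263-018-1082-6"]}
    ]
  }
]
"""

# The record is a one-element JSON array; take the single object.
record = json.loads(record_jsonld)[0]

title = record["name"]
date = record["datePublished"]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]]
# The DOI sits inside the productId list, keyed by its "name" field.
doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")

print(title)
print(doi, date)
print(", ".join(authors))
```

The same access pattern applies to the full record: every top-level key in the JSON-LD above can be read as an ordinary Python dict entry once the document is parsed.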
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data that is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-018-1082-6'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-018-1082-6'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-018-1082-6'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-018-1082-6'
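    The content negotiation shown in the curl commands above can also be scripted. The minimal sketch below, using only Python's standard library, maps each format to the `Accept` media type listed above and builds the corresponding GET request. The helper name `build_request` is ours for illustration, not part of any SciGraph client library, and the actual fetch requires network access.

    ```python
    import urllib.request

    # Media types offered by the SciGraph endpoint, per the curl examples above.
    MEDIA_TYPES = {
        "json-ld": "application/ld+json",
        "n-triples": "application/n-triples",
        "turtle": "text/turtle",
        "rdf-xml": "application/rdf+xml",
    }

    RECORD_URL = "https://scigraph.springernature.com/pub.10.1007/s11263-018-1082-6"


    def build_request(fmt: str, url: str = RECORD_URL) -> urllib.request.Request:
        """Build a content-negotiated GET request for one of the RDF formats."""
        return urllib.request.Request(url, headers={"Accept": MEDIA_TYPES[fmt]})


    if __name__ == "__main__":
        # Performing the fetch itself needs network access:
        with urllib.request.urlopen(build_request("turtle")) as resp:
            print(resp.read(200).decode("utf-8", errors="replace"))
    ```

    Selecting the format via the `Accept` header rather than the URL keeps a single stable identifier for the record, which is the usual linked-data convention.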


     

    This table displays all metadata directly associated to this object as RDF triples.

    207 TRIPLES      21 PREDICATES      94 URIs      75 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11263-018-1082-6 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N968096842bc14361b4bafb49a588cf52
    4 schema:citation sg:pub.10.1007/978-3-319-10602-1_48
    5 sg:pub.10.1007/978-3-319-11752-2_3
    6 sg:pub.10.1007/978-3-319-46475-6_7
    7 sg:pub.10.1007/978-3-319-49409-8_18
    8 sg:pub.10.1007/978-3-319-49409-8_75
    9 sg:pub.10.1007/978-3-540-24673-2_3
    10 sg:pub.10.1007/978-3-642-33715-4_54
    11 sg:pub.10.1007/978-3-642-33783-3_44
    12 sg:pub.10.1007/978-3-642-33868-7_17
    13 sg:pub.10.1007/bf01420984
    14 sg:pub.10.1007/s11263-010-0390-2
    15 schema:datePublished 2018-04-02
    16 schema:datePublishedReg 2018-04-02
    17 schema:description The finding that very large networks can be trained efficiently and reliably has led to a paradigm shift in computer vision from engineered solutions to learning formulations. As a result, the research challenge shifts from devising algorithms to creating suitable and abundant training data for supervised learning. How to efficiently create such training data? The dominant data acquisition method in visual recognition is based on web data and manual annotation. Yet, for many computer vision problems, such as stereo or optical flow estimation, this approach is not feasible because humans cannot manually enter a pixel-accurate flow field. In this paper, we promote the use of synthetically generated data for the purpose of training deep networks on such tasks. We suggest multiple ways to generate such data and evaluate the influence of dataset properties on the performance and generalization properties of the resulting networks. We also demonstrate the benefit of learning schedules that use different types of data at selected stages of the training process.
    18 schema:genre article
    19 schema:isAccessibleForFree true
    20 schema:isPartOf N8a4a2e1dd33241d3b134dedf1cc63708
    21 Nd32704c78dab4876bb91f24cfe5d1517
    22 sg:journal.1032807
    23 schema:keywords abundant training data
    24 acquisition method
    25 algorithm
    26 annotation
    27 approach
    28 benefits
    29 computer vision
    30 computer vision problems
    31 data
    32 data acquisition methods
    33 dataset properties
    34 deep network
    35 different types
    36 disparities
    37 estimation
    38 field
    39 findings
    40 flow estimation
    41 flow field
    42 formulation
    43 generalization properties
    44 humans
    45 influence
    46 large networks
    47 learning
    48 learning disparity
    49 manual annotation
    50 method
    51 multiple ways
    52 network
    53 optical flow estimation
    54 paper
    55 paradigm shift
    56 performance
    57 problem
    58 process
    59 properties
    60 purpose
    61 recognition
    62 results
    63 schedule
    64 shift
    65 solution
    66 stage
    67 such data
    68 such tasks
    69 such training data
    70 supervised learning
    71 synthetic training data
    72 task
    73 training data
    74 training process
    75 types
    76 use
    77 vision
    78 vision problems
    79 visual recognition
    80 way
    81 web data
    82 schema:name What Makes Good Synthetic Training Data for Learning Disparity and Optical Flow Estimation?
    83 schema:pagination 942-960
    84 schema:productId N8aa63857a08245fa964c20d4fef769f0
    85 N93db5389381a4151ac2df6075c68e95d
    86 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101888770
    87 https://doi.org/10.1007/s11263-018-1082-6
    88 schema:sdDatePublished 2022-11-24T21:03
    89 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    90 schema:sdPublisher Nbb41916228e241e8aa381b2e96bef18b
    91 schema:url https://doi.org/10.1007/s11263-018-1082-6
    92 sgo:license sg:explorer/license/
    93 sgo:sdDataset articles
    94 rdf:type schema:ScholarlyArticle
    95 N1238b5523a2b4a37a047973416ce5355 rdf:first sg:person.014671603627.71
    96 rdf:rest N4a94bbee04ae4a45b21c1f75feedef4c
    97 N323dee03ac2244f5a996eb137f9c50f9 rdf:first sg:person.012106015125.15
    98 rdf:rest N1238b5523a2b4a37a047973416ce5355
    99 N4a94bbee04ae4a45b21c1f75feedef4c rdf:first sg:person.010575005661.04
    100 rdf:rest N9eae17b51a4348f3a209585775f988e7
    101 N8a4a2e1dd33241d3b134dedf1cc63708 schema:issueNumber 9
    102 rdf:type schema:PublicationIssue
    103 N8aa63857a08245fa964c20d4fef769f0 schema:name dimensions_id
    104 schema:value pub.1101888770
    105 rdf:type schema:PropertyValue
    106 N93db5389381a4151ac2df6075c68e95d schema:name doi
    107 schema:value 10.1007/s11263-018-1082-6
    108 rdf:type schema:PropertyValue
    109 N968096842bc14361b4bafb49a588cf52 rdf:first sg:person.011011653263.83
    110 rdf:rest Nb4c1c9860e6646caa272980cef2acac2
    111 N9eae17b51a4348f3a209585775f988e7 rdf:first sg:person.011726376703.15
    112 rdf:rest Ned8a0158d2dd43d2951084af06a3dc02
    113 Nb4c1c9860e6646caa272980cef2acac2 rdf:first sg:person.014016531047.11
    114 rdf:rest N323dee03ac2244f5a996eb137f9c50f9
    115 Nbb41916228e241e8aa381b2e96bef18b schema:name Springer Nature - SN SciGraph project
    116 rdf:type schema:Organization
    117 Nd32704c78dab4876bb91f24cfe5d1517 schema:volumeNumber 126
    118 rdf:type schema:PublicationVolume
    119 Ned8a0158d2dd43d2951084af06a3dc02 rdf:first sg:person.012443225372.65
    120 rdf:rest rdf:nil
    121 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    122 schema:name Information and Computing Sciences
    123 rdf:type schema:DefinedTerm
    124 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    125 schema:name Artificial Intelligence and Image Processing
    126 rdf:type schema:DefinedTerm
    127 sg:grant.5051078 http://pending.schema.org/fundedItem sg:pub.10.1007/s11263-018-1082-6
    128 rdf:type schema:MonetaryGrant
    129 sg:journal.1032807 schema:issn 0920-5691
    130 1573-1405
    131 schema:name International Journal of Computer Vision
    132 schema:publisher Springer Nature
    133 rdf:type schema:Periodical
    134 sg:person.010575005661.04 schema:affiliation grid-institutes:grid.6936.a
    135 schema:familyName Cremers
    136 schema:givenName Daniel
    137 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010575005661.04
    138 rdf:type schema:Person
    139 sg:person.011011653263.83 schema:affiliation grid-institutes:grid.5963.9
    140 schema:familyName Mayer
    141 schema:givenName Nikolaus
    142 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011011653263.83
    143 rdf:type schema:Person
    144 sg:person.011726376703.15 schema:affiliation grid-institutes:grid.5963.9
    145 schema:familyName Dosovitskiy
    146 schema:givenName Alexey
    147 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011726376703.15
    148 rdf:type schema:Person
    149 sg:person.012106015125.15 schema:affiliation grid-institutes:grid.5963.9
    150 schema:familyName Fischer
    151 schema:givenName Philipp
    152 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012106015125.15
    153 rdf:type schema:Person
    154 sg:person.012443225372.65 schema:affiliation grid-institutes:grid.5963.9
    155 schema:familyName Brox
    156 schema:givenName Thomas
    157 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65
    158 rdf:type schema:Person
    159 sg:person.014016531047.11 schema:affiliation grid-institutes:grid.5963.9
    160 schema:familyName Ilg
    161 schema:givenName Eddy
    162 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014016531047.11
    163 rdf:type schema:Person
    164 sg:person.014671603627.71 schema:affiliation grid-institutes:grid.6936.a
    165 schema:familyName Hazirbas
    166 schema:givenName Caner
    167 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014671603627.71
    168 rdf:type schema:Person
    169 sg:pub.10.1007/978-3-319-10602-1_48 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045321436
    170 https://doi.org/10.1007/978-3-319-10602-1_48
    171 rdf:type schema:CreativeWork
    172 sg:pub.10.1007/978-3-319-11752-2_3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046402464
    173 https://doi.org/10.1007/978-3-319-11752-2_3
    174 rdf:type schema:CreativeWork
    175 sg:pub.10.1007/978-3-319-46475-6_7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1025415319
    176 https://doi.org/10.1007/978-3-319-46475-6_7
    177 rdf:type schema:CreativeWork
    178 sg:pub.10.1007/978-3-319-49409-8_18 schema:sameAs https://app.dimensions.ai/details/publication/pub.1090663472
    179 https://doi.org/10.1007/978-3-319-49409-8_18
    180 rdf:type schema:CreativeWork
    181 sg:pub.10.1007/978-3-319-49409-8_75 schema:sameAs https://app.dimensions.ai/details/publication/pub.1086290585
    182 https://doi.org/10.1007/978-3-319-49409-8_75
    183 rdf:type schema:CreativeWork
    184 sg:pub.10.1007/978-3-540-24673-2_3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1045812409
    185 https://doi.org/10.1007/978-3-540-24673-2_3
    186 rdf:type schema:CreativeWork
    187 sg:pub.10.1007/978-3-642-33715-4_54 schema:sameAs https://app.dimensions.ai/details/publication/pub.1053469442
    188 https://doi.org/10.1007/978-3-642-33715-4_54
    189 rdf:type schema:CreativeWork
    190 sg:pub.10.1007/978-3-642-33783-3_44 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004909083
    191 https://doi.org/10.1007/978-3-642-33783-3_44
    192 rdf:type schema:CreativeWork
    193 sg:pub.10.1007/978-3-642-33868-7_17 schema:sameAs https://app.dimensions.ai/details/publication/pub.1023589976
    194 https://doi.org/10.1007/978-3-642-33868-7_17
    195 rdf:type schema:CreativeWork
    196 sg:pub.10.1007/bf01420984 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021499342
    197 https://doi.org/10.1007/bf01420984
    198 rdf:type schema:CreativeWork
    199 sg:pub.10.1007/s11263-010-0390-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034215603
    200 https://doi.org/10.1007/s11263-010-0390-2
    201 rdf:type schema:CreativeWork
    202 grid-institutes:grid.5963.9 schema:alternateName University of Freiburg, Freiburg im Breisgau, Germany
    203 schema:name University of Freiburg, Freiburg im Breisgau, Germany
    204 rdf:type schema:Organization
    205 grid-institutes:grid.6936.a schema:alternateName Technical University of Munich, Munich, Germany
    206 schema:name Technical University of Munich, Munich, Germany
    207 rdf:type schema:Organization
     



