Deep-plane sweep generative adversarial network for consistent multi-view depth estimation


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2021-11-16

AUTHORS

Dong Wook Shu, Wonbeom Jang, Heebin Yoo, Hong-Chang Shin, Junseok Kwon

ABSTRACT

Owing to their improved representation ability, recent deep learning-based methods can estimate scene depths accurately. However, these methods still have difficulty estimating consistent scene depths in real-world environments containing severe illumination changes, occlusions, and texture-less regions. To solve this problem, we propose a novel depth-estimation method for unstructured multi-view images. Specifically, we present a plane sweep generative adversarial network in which the proposed adversarial loss significantly improves depth-estimation accuracy under real-world settings, and the consistency loss makes the depth-estimation results insensitive to changes in viewpoint and in the number of input images. In addition, 3D convolution layers are inserted into the network to enrich the feature representation. Experimental results indicate that the proposed plane sweep generative adversarial network quantitatively and qualitatively outperforms state-of-the-art methods.
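
The abstract names the main ingredients of the method: a plane-sweep formulation, 3D convolution layers, an adversarial loss, and a consistency loss. Purely as an illustration of how such pieces typically fit together, the PyTorch-style sketch below composes a variance-based cost volume (in the spirit of the cited MVSNet paper), a small 3D-convolutional regularizer with soft-argmin depth regression, a patch discriminator on depth maps, and a consistency term between depths predicted from two different view subsets. Every module name, layer size, and loss weight here is a hypothetical stand-in and not the authors' exact architecture.

# Minimal sketch of a plane-sweep depth network with 3D-convolution
# regularization and adversarial/consistency losses. Assumes PyTorch.
# The variance-based cost volume, module sizes, and loss weights are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


def variance_cost_volume(warped_feats):
    """warped_feats: [B, V, C, D, H, W] per-view features already warped
    onto the reference camera at D fronto-parallel depth planes.
    Returns a cost volume [B, C, D, H, W] (feature variance across views)."""
    mean = warped_feats.mean(dim=1)
    mean_sq = (warped_feats ** 2).mean(dim=1)
    return mean_sq - mean ** 2


class CostRegularizer3D(nn.Module):
    """Small 3D CNN that turns the cost volume into a probability
    volume over the D depth hypotheses."""
    def __init__(self, in_ch=32, mid_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(mid_ch, 1, 3, padding=1),
        )

    def forward(self, cost):                     # [B, C, D, H, W]
        logits = self.net(cost).squeeze(1)       # [B, D, H, W]
        return F.softmax(logits, dim=1)


def soft_argmin_depth(prob, depth_values):
    """Expected depth under the probability volume.
    prob: [B, D, H, W]; depth_values: [D] depth of each plane."""
    return (prob * depth_values.view(1, -1, 1, 1)).sum(dim=1)  # [B, H, W]


class DepthDiscriminator(nn.Module):
    """Patch-style discriminator on (reference image, depth) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, image, depth):
        return self.net(torch.cat([image, depth.unsqueeze(1)], dim=1))


def generator_losses(pred_a, pred_b, gt, disc_logits, w_adv=0.01, w_cons=0.1):
    """pred_a / pred_b: depths predicted from two different subsets of the
    input views; the consistency term pushes the two estimates together."""
    l_depth = F.smooth_l1_loss(pred_a, gt)
    l_adv = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))   # try to fool the critic
    l_cons = F.l1_loss(pred_a, pred_b)
    return l_depth + w_adv * l_adv + w_cons * l_cons


if __name__ == "__main__":
    B, V, C, D, H, W = 1, 3, 32, 48, 64, 80
    feats = torch.randn(B, V, C, D, H, W)        # stand-in for warped features
    depth_values = torch.linspace(0.5, 10.0, D)  # hypothetical depth range

    cost = variance_cost_volume(feats)
    prob = CostRegularizer3D(in_ch=C)(cost)
    depth = soft_argmin_depth(prob, depth_values)
    print(depth.shape)                           # torch.Size([1, 64, 80])
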

PAGES

5

References to SciGraph publications

  • 2018-10-07. MVSNet: Depth Inference for Unstructured Multi-view Stereo in COMPUTER VISION – ECCV 2018
  • 2019-01-23. Generative Adversarial Networks for Unsupervised Monocular Depth Prediction in COMPUTER VISION – ECCV 2018 WORKSHOPS
  • 2014. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition in COMPUTER VISION – ECCV 2014
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7

    DOI

    http://dx.doi.org/10.1007/s00138-021-01258-7

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1142611724


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.254224.7", 
              "name": [
                "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Shu", 
            "givenName": "Dong Wook", 
            "id": "sg:person.07571233775.64", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07571233775.64"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.254224.7", 
              "name": [
                "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Jang", 
            "givenName": "Wonbeom", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.254224.7", 
              "name": [
                "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Yoo", 
            "givenName": "Heebin", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Electronics and Telecommunications Research Institute, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.36303.35", 
              "name": [
                "Electronics and Telecommunications Research Institute, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Shin", 
            "givenName": "Hong-Chang", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.254224.7", 
              "name": [
                "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Kwon", 
            "givenName": "Junseok", 
            "id": "sg:person.015767414061.20", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015767414061.20"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-030-11009-3_20", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1111703316", 
              "https://doi.org/10.1007/978-3-030-11009-3_20"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10578-9_23", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1030406568", 
              "https://doi.org/10.1007/978-3-319-10578-9_23"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01237-3_47", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463332", 
              "https://doi.org/10.1007/978-3-030-01237-3_47"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2021-11-16", 
        "datePublishedReg": "2021-11-16", 
        "description": "Owing to the improved representation ability, recent deep learning-based methods enable to estimate scene depths accurately. However, these methods still have difficulty in estimating consistent scene depths under real-world environments containing severe illumination changes, occlusions, and texture-less regions. To solve this problem, in this paper, we propose a novel depth-estimation method for unstructured multi-view images. Accordingly, we present a plane sweep generative adversarial network, where the proposed adversarial loss significantly improves the depth-estimation accuracy under real-world settings, and the consistency loss makes the depth-estimation results insensitive to the changes in viewpoints and the number of input images. In addition, 3D convolution layers are inserted into the network to enrich feature representation. Experimental results indicate that the proposed plane sweep generative adversarial network quantitatively and qualitatively outperforms state-of-the-art methods.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s00138-021-01258-7", 
        "inLanguage": "en", 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1045266", 
            "issn": [
              "0932-8092", 
              "1432-1769"
            ], 
            "name": "Machine Vision and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "33"
          }
        ], 
        "keywords": [
          "generative adversarial network", 
          "adversarial network", 
          "scene depth", 
          "recent deep learning-based methods", 
          "deep learning-based methods", 
          "multi-view depth estimation", 
          "depth estimation results", 
          "multi-view images", 
          "learning-based methods", 
          "texture-less regions", 
          "real-world environments", 
          "novel depth estimation method", 
          "depth estimation accuracy", 
          "depth estimation method", 
          "severe illumination changes", 
          "convolution layers", 
          "feature representation", 
          "input image", 
          "consistency loss", 
          "adversarial loss", 
          "illumination changes", 
          "art methods", 
          "depth estimation", 
          "representation ability", 
          "network", 
          "experimental results", 
          "real-world setting", 
          "images", 
          "representation", 
          "accuracy", 
          "method", 
          "environment", 
          "viewpoint", 
          "estimation", 
          "results", 
          "difficulties", 
          "number", 
          "occlusion", 
          "state", 
          "ability", 
          "setting", 
          "layer", 
          "addition", 
          "depth", 
          "loss", 
          "changes", 
          "region", 
          "paper", 
          "problem", 
          "improved representation ability", 
          "consistent scene depths", 
          "unstructured multi-view images", 
          "plane sweep generative adversarial network", 
          "sweep generative adversarial network", 
          "Deep-plane sweep generative adversarial network", 
          "consistent multi-view depth estimation"
        ], 
        "name": "Deep-plane sweep generative adversarial network for consistent multi-view depth estimation", 
        "pagination": "5", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1142611724"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s00138-021-01258-7"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s00138-021-01258-7", 
          "https://app.dimensions.ai/details/publication/pub.1142611724"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-01-01T18:58", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/article/article_900.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s00138-021-01258-7"
      }
    ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'
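
    For completeness, here is a minimal Python sketch of the same content negotiation shown in the curl examples above. It assumes the endpoint continues to serve this record as JSON-LD for the given Accept header, and it uses only field names visible in the JSON-LD block earlier on this page.

# Fetch this record as JSON-LD via HTTP content negotiation and print a
# few fields. Field names follow the JSON-LD shown above; the endpoint's
# behaviour is assumed to match the curl examples.
import json
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7"

req = urllib.request.Request(URL, headers={"Accept": "application/ld+json"})
with urllib.request.urlopen(req) as resp:
    records = json.loads(resp.read().decode("utf-8"))

record = records[0]                      # the payload is a one-element list
print(record["name"])                    # article title
print(record["datePublished"])           # 2021-11-16
print([a["familyName"] for a in record["author"]])
print([j["name"] for j in record["isPartOf"] if j.get("type") == "Periodical"])
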


     

    This table displays all metadata directly associated with this object as RDF triples.

    154 TRIPLES      22 PREDICATES      84 URIs      73 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s00138-021-01258-7 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N55be5f76acf843c3a5a5d66d86e15dff
    4 schema:citation sg:pub.10.1007/978-3-030-01237-3_47
    5 sg:pub.10.1007/978-3-030-11009-3_20
    6 sg:pub.10.1007/978-3-319-10578-9_23
    7 schema:datePublished 2021-11-16
    8 schema:datePublishedReg 2021-11-16
    9 schema:description Owing to the improved representation ability, recent deep learning-based methods enable to estimate scene depths accurately. However, these methods still have difficulty in estimating consistent scene depths under real-world environments containing severe illumination changes, occlusions, and texture-less regions. To solve this problem, in this paper, we propose a novel depth-estimation method for unstructured multi-view images. Accordingly, we present a plane sweep generative adversarial network, where the proposed adversarial loss significantly improves the depth-estimation accuracy under real-world settings, and the consistency loss makes the depth-estimation results insensitive to the changes in viewpoints and the number of input images. In addition, 3D convolution layers are inserted into the network to enrich feature representation. Experimental results indicate that the proposed plane sweep generative adversarial network quantitatively and qualitatively outperforms state-of-the-art methods.
    10 schema:genre article
    11 schema:inLanguage en
    12 schema:isAccessibleForFree false
    13 schema:isPartOf N40a6b22aea444c6c8da99193cbb66fbc
    14 Naacdbe504ad147ffa30b84fa885e09f4
    15 sg:journal.1045266
    16 schema:keywords Deep-plane sweep generative adversarial network
    17 ability
    18 accuracy
    19 addition
    20 adversarial loss
    21 adversarial network
    22 art methods
    23 changes
    24 consistency loss
    25 consistent multi-view depth estimation
    26 consistent scene depths
    27 convolution layers
    28 deep learning-based methods
    29 depth
    30 depth estimation
    31 depth estimation accuracy
    32 depth estimation method
    33 depth estimation results
    34 difficulties
    35 environment
    36 estimation
    37 experimental results
    38 feature representation
    39 generative adversarial network
    40 illumination changes
    41 images
    42 improved representation ability
    43 input image
    44 layer
    45 learning-based methods
    46 loss
    47 method
    48 multi-view depth estimation
    49 multi-view images
    50 network
    51 novel depth estimation method
    52 number
    53 occlusion
    54 paper
    55 plane sweep generative adversarial network
    56 problem
    57 real-world environments
    58 real-world setting
    59 recent deep learning-based methods
    60 region
    61 representation
    62 representation ability
    63 results
    64 scene depth
    65 setting
    66 severe illumination changes
    67 state
    68 sweep generative adversarial network
    69 texture-less regions
    70 unstructured multi-view images
    71 viewpoint
    72 schema:name Deep-plane sweep generative adversarial network for consistent multi-view depth estimation
    73 schema:pagination 5
    74 schema:productId Nb345a558838b42248727446910d81e58
    75 Nd37dc14c290846bf9c3676e69ab91bf1
    76 schema:sameAs https://app.dimensions.ai/details/publication/pub.1142611724
    77 https://doi.org/10.1007/s00138-021-01258-7
    78 schema:sdDatePublished 2022-01-01T18:58
    79 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    80 schema:sdPublisher N12479ef032c94035a142b78feabf5fb2
    81 schema:url https://doi.org/10.1007/s00138-021-01258-7
    82 sgo:license sg:explorer/license/
    83 sgo:sdDataset articles
    84 rdf:type schema:ScholarlyArticle
    85 N12479ef032c94035a142b78feabf5fb2 schema:name Springer Nature - SN SciGraph project
    86 rdf:type schema:Organization
    87 N40a6b22aea444c6c8da99193cbb66fbc schema:volumeNumber 33
    88 rdf:type schema:PublicationVolume
    89 N55be5f76acf843c3a5a5d66d86e15dff rdf:first sg:person.07571233775.64
    90 rdf:rest Neac166a6d5d04720a7ee696df5fe4aca
    91 N77369bf207984e6dba2182c74f4898d3 rdf:first Nf5ccb202ff794bf8bc0d86aef9de98eb
    92 rdf:rest Nefa72c1e75e64f45b4ded0624d420619
    93 N9f7c01e4ef72498dbee926defedfa345 schema:affiliation grid-institutes:grid.36303.35
    94 schema:familyName Shin
    95 schema:givenName Hong-Chang
    96 rdf:type schema:Person
    97 Naacdbe504ad147ffa30b84fa885e09f4 schema:issueNumber 1
    98 rdf:type schema:PublicationIssue
    99 Nb345a558838b42248727446910d81e58 schema:name doi
    100 schema:value 10.1007/s00138-021-01258-7
    101 rdf:type schema:PropertyValue
    102 Nb6c8a9874b8048858457a7857a6e2a50 schema:affiliation grid-institutes:grid.254224.7
    103 schema:familyName Jang
    104 schema:givenName Wonbeom
    105 rdf:type schema:Person
    106 Nd37dc14c290846bf9c3676e69ab91bf1 schema:name dimensions_id
    107 schema:value pub.1142611724
    108 rdf:type schema:PropertyValue
    109 Neac166a6d5d04720a7ee696df5fe4aca rdf:first Nb6c8a9874b8048858457a7857a6e2a50
    110 rdf:rest N77369bf207984e6dba2182c74f4898d3
    111 Nefa72c1e75e64f45b4ded0624d420619 rdf:first N9f7c01e4ef72498dbee926defedfa345
    112 rdf:rest Nfee39a94b2124391a7a3634f09932530
    113 Nf5ccb202ff794bf8bc0d86aef9de98eb schema:affiliation grid-institutes:grid.254224.7
    114 schema:familyName Yoo
    115 schema:givenName Heebin
    116 rdf:type schema:Person
    117 Nfee39a94b2124391a7a3634f09932530 rdf:first sg:person.015767414061.20
    118 rdf:rest rdf:nil
    119 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    120 schema:name Information and Computing Sciences
    121 rdf:type schema:DefinedTerm
    122 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    123 schema:name Artificial Intelligence and Image Processing
    124 rdf:type schema:DefinedTerm
    125 sg:journal.1045266 schema:issn 0932-8092
    126 1432-1769
    127 schema:name Machine Vision and Applications
    128 schema:publisher Springer Nature
    129 rdf:type schema:Periodical
    130 sg:person.015767414061.20 schema:affiliation grid-institutes:grid.254224.7
    131 schema:familyName Kwon
    132 schema:givenName Junseok
    133 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015767414061.20
    134 rdf:type schema:Person
    135 sg:person.07571233775.64 schema:affiliation grid-institutes:grid.254224.7
    136 schema:familyName Shu
    137 schema:givenName Dong Wook
    138 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07571233775.64
    139 rdf:type schema:Person
    140 sg:pub.10.1007/978-3-030-01237-3_47 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107463332
    141 https://doi.org/10.1007/978-3-030-01237-3_47
    142 rdf:type schema:CreativeWork
    143 sg:pub.10.1007/978-3-030-11009-3_20 schema:sameAs https://app.dimensions.ai/details/publication/pub.1111703316
    144 https://doi.org/10.1007/978-3-030-11009-3_20
    145 rdf:type schema:CreativeWork
    146 sg:pub.10.1007/978-3-319-10578-9_23 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030406568
    147 https://doi.org/10.1007/978-3-319-10578-9_23
    148 rdf:type schema:CreativeWork
    149 grid-institutes:grid.254224.7 schema:alternateName School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
    150 schema:name School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
    151 rdf:type schema:Organization
    152 grid-institutes:grid.36303.35 schema:alternateName Electronics and Telecommunications Research Institute, Seoul, Korea
    153 schema:name Electronics and Telecommunications Research Institute, Seoul, Korea
    154 rdf:type schema:Organization
     



