Deep-plane sweep generative adversarial network for consistent multi-view depth estimation


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2021-11-16

AUTHORS

Dong Wook Shu, Wonbeom Jang, Heebin Yoo, Hong-Chang Shin, Junseok Kwon

ABSTRACT

Owing to their improved representation ability, recent deep learning-based methods can estimate scene depths accurately. However, these methods still have difficulty estimating consistent scene depths in real-world environments containing severe illumination changes, occlusions, and texture-less regions. To solve this problem, in this paper, we propose a novel depth-estimation method for unstructured multi-view images. Accordingly, we present a plane sweep generative adversarial network, where the proposed adversarial loss significantly improves the depth-estimation accuracy under real-world settings, and the consistency loss makes the depth-estimation results insensitive to changes in viewpoints and the number of input images. In addition, 3D convolution layers are inserted into the network to enrich feature representation. Experimental results indicate that the proposed plane sweep generative adversarial network quantitatively and qualitatively outperforms state-of-the-art methods.

PAGES

5

References to SciGraph publications

  • 2018-10-07. MVSNet: Depth Inference for Unstructured Multi-view Stereo in COMPUTER VISION – ECCV 2018
  • 2019-01-23. Generative Adversarial Networks for Unsupervised Monocular Depth Prediction in COMPUTER VISION – ECCV 2018 WORKSHOPS
  • 2014. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition in COMPUTER VISION – ECCV 2014
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7

    DOI

    http://dx.doi.org/10.1007/s00138-021-01258-7

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1142611724



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.254224.7", 
              "name": [
                "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Shu", 
            "givenName": "Dong Wook", 
            "id": "sg:person.07571233775.64", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07571233775.64"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.254224.7", 
              "name": [
                "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Jang", 
            "givenName": "Wonbeom", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.254224.7", 
              "name": [
                "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Yoo", 
            "givenName": "Heebin", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Electronics and Telecommunications Research Institute, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.36303.35", 
              "name": [
                "Electronics and Telecommunications Research Institute, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Shin", 
            "givenName": "Hong-Chang", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea", 
              "id": "http://www.grid.ac/institutes/grid.254224.7", 
              "name": [
                "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Kwon", 
            "givenName": "Junseok", 
            "id": "sg:person.015767414061.20", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015767414061.20"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-030-01237-3_47", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463332", 
              "https://doi.org/10.1007/978-3-030-01237-3_47"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-11009-3_20", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1111703316", 
              "https://doi.org/10.1007/978-3-030-11009-3_20"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10578-9_23", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1030406568", 
              "https://doi.org/10.1007/978-3-319-10578-9_23"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2021-11-16", 
        "datePublishedReg": "2021-11-16", 
        "description": "Owing to the improved representation ability, recent deep learning-based methods enable to estimate scene depths accurately. However, these methods still have difficulty in estimating consistent scene depths under real-world environments containing severe illumination changes, occlusions, and texture-less regions. To solve this problem, in this paper, we propose a novel depth-estimation method for unstructured multi-view images. Accordingly, we present a plane sweep generative adversarial network, where the proposed adversarial loss significantly improves the depth-estimation accuracy under real-world settings, and the consistency loss makes the depth-estimation results insensitive to the changes in viewpoints and the number of input images. In addition, 3D convolution layers are inserted into the network to enrich feature representation. Experimental results indicate that the proposed plane sweep generative adversarial network quantitatively and qualitatively outperforms state-of-the-art methods.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s00138-021-01258-7", 
        "inLanguage": "en", 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1045266", 
            "issn": [
              "0932-8092", 
              "1432-1769"
            ], 
            "name": "Machine Vision and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "33"
          }
        ], 
        "keywords": [
          "generative adversarial network", 
          "adversarial network", 
          "scene depth", 
          "recent deep learning-based methods", 
          "deep learning-based methods", 
          "multi-view depth estimation", 
          "depth estimation results", 
          "multi-view images", 
          "learning-based methods", 
          "texture-less regions", 
          "real-world environments", 
          "novel depth estimation method", 
          "depth estimation accuracy", 
          "depth estimation method", 
          "severe illumination changes", 
          "convolution layers", 
          "feature representation", 
          "input image", 
          "consistency loss", 
          "adversarial loss", 
          "illumination changes", 
          "art methods", 
          "depth estimation", 
          "representation ability", 
          "network", 
          "experimental results", 
          "real-world setting", 
          "images", 
          "representation", 
          "accuracy", 
          "method", 
          "environment", 
          "viewpoint", 
          "estimation", 
          "results", 
          "difficulties", 
          "number", 
          "occlusion", 
          "state", 
          "ability", 
          "setting", 
          "layer", 
          "addition", 
          "depth", 
          "loss", 
          "changes", 
          "region", 
          "paper", 
          "problem"
        ], 
        "name": "Deep-plane sweep generative adversarial network for consistent multi-view depth estimation", 
        "pagination": "5", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1142611724"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s00138-021-01258-7"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s00138-021-01258-7", 
          "https://app.dimensions.ai/details/publication/pub.1142611724"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-05-20T07:39", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/article/article_899.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s00138-021-01258-7"
      }
    ]
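Since JSON-LD is fully compatible with JSON, the record above can be processed with an ordinary JSON parser. As an illustration, here is a minimal Python sketch (standard library only) that extracts the title, authors, and URL from a trimmed copy of this record; the `summarize` helper is a hypothetical name introduced for this example, not part of any SciGraph tooling:

```python
import json

# Trimmed copy of the SciGraph JSON-LD record shown above.
RECORD_JSONLD = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "author": [
      {"familyName": "Shu", "givenName": "Dong Wook", "type": "Person"},
      {"familyName": "Kwon", "givenName": "Junseok", "type": "Person"}
    ],
    "datePublished": "2021-11-16",
    "name": "Deep-plane sweep generative adversarial network for consistent multi-view depth estimation",
    "url": "https://doi.org/10.1007/s00138-021-01258-7"
  }
]
"""

def summarize(jsonld_text):
    """Return (title, author names, URL) from a SciGraph article record.

    SciGraph wraps each record in a top-level JSON array, so we take
    the first element before reading its schema.org-style keys.
    """
    record = json.loads(jsonld_text)[0]
    authors = [f"{a['givenName']} {a['familyName']}" for a in record.get("author", [])]
    return record["name"], authors, record["url"]

title, authors, url = summarize(RECORD_JSONLD)
print(title)
print(", ".join(authors))  # prints "Dong Wook Shu, Junseok Kwon"
print(url)
```

For full-fidelity linked-data processing (expansion, compaction, conversion to RDF), a dedicated JSON-LD processor would be needed; plain `json` parsing suffices for reading the fields of a single record like this one.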
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'
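The same content negotiation shown in the curl commands above can be sketched with Python's standard library. The `build_request` helper below is a hypothetical name for this example; note that actually calling `urllib.request.urlopen(req)` requires network access, so only the request construction is shown here:

```python
import urllib.request

# SciGraph record URL, as used in the curl examples above.
SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7"

def build_request(accept="application/ld+json"):
    """Build a GET request asking SciGraph for a specific RDF serialization.

    The Accept header selects the format, mirroring `curl -H 'Accept: ...'`:
    application/ld+json, application/n-triples, text/turtle, or
    application/rdf+xml.
    """
    return urllib.request.Request(SCIGRAPH_URL, headers={"Accept": accept})

req = build_request("text/turtle")
print(req.get_header("Accept"))  # prints "text/turtle"
print(req.full_url)
# To actually fetch: body = urllib.request.urlopen(req).read()
```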


     

    This table displays all metadata directly associated to this object as RDF triples.

    147 TRIPLES      22 PREDICATES      77 URIs      66 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s00138-021-01258-7 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Neff89740aa754cdebb221fb17c799a5e
    4 schema:citation sg:pub.10.1007/978-3-030-01237-3_47
    5 sg:pub.10.1007/978-3-030-11009-3_20
    6 sg:pub.10.1007/978-3-319-10578-9_23
    7 schema:datePublished 2021-11-16
    8 schema:datePublishedReg 2021-11-16
    9 schema:description Owing to the improved representation ability, recent deep learning-based methods enable to estimate scene depths accurately. However, these methods still have difficulty in estimating consistent scene depths under real-world environments containing severe illumination changes, occlusions, and texture-less regions. To solve this problem, in this paper, we propose a novel depth-estimation method for unstructured multi-view images. Accordingly, we present a plane sweep generative adversarial network, where the proposed adversarial loss significantly improves the depth-estimation accuracy under real-world settings, and the consistency loss makes the depth-estimation results insensitive to the changes in viewpoints and the number of input images. In addition, 3D convolution layers are inserted into the network to enrich feature representation. Experimental results indicate that the proposed plane sweep generative adversarial network quantitatively and qualitatively outperforms state-of-the-art methods.
    10 schema:genre article
    11 schema:inLanguage en
    12 schema:isAccessibleForFree false
    13 schema:isPartOf N32966d1b08c845c7951b6a9421ed5452
    14 N9daa4a3a98664450ac9fef5d7f1a97cb
    15 sg:journal.1045266
    16 schema:keywords ability
    17 accuracy
    18 addition
    19 adversarial loss
    20 adversarial network
    21 art methods
    22 changes
    23 consistency loss
    24 convolution layers
    25 deep learning-based methods
    26 depth
    27 depth estimation
    28 depth estimation accuracy
    29 depth estimation method
    30 depth estimation results
    31 difficulties
    32 environment
    33 estimation
    34 experimental results
    35 feature representation
    36 generative adversarial network
    37 illumination changes
    38 images
    39 input image
    40 layer
    41 learning-based methods
    42 loss
    43 method
    44 multi-view depth estimation
    45 multi-view images
    46 network
    47 novel depth estimation method
    48 number
    49 occlusion
    50 paper
    51 problem
    52 real-world environments
    53 real-world setting
    54 recent deep learning-based methods
    55 region
    56 representation
    57 representation ability
    58 results
    59 scene depth
    60 setting
    61 severe illumination changes
    62 state
    63 texture-less regions
    64 viewpoint
    65 schema:name Deep-plane sweep generative adversarial network for consistent multi-view depth estimation
    66 schema:pagination 5
    67 schema:productId N6c6795aa84024d74b9a32dfbac512bca
    68 Nd4315b518dab486ab8652cc67b9c46f8
    69 schema:sameAs https://app.dimensions.ai/details/publication/pub.1142611724
    70 https://doi.org/10.1007/s00138-021-01258-7
    71 schema:sdDatePublished 2022-05-20T07:39
    72 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    73 schema:sdPublisher N38b2c891f10a4da38bc285ac71fac625
    74 schema:url https://doi.org/10.1007/s00138-021-01258-7
    75 sgo:license sg:explorer/license/
    76 sgo:sdDataset articles
    77 rdf:type schema:ScholarlyArticle
    78 N2361713bd44340c7b3e455d353ec334d schema:affiliation grid-institutes:grid.254224.7
    79 schema:familyName Jang
    80 schema:givenName Wonbeom
    81 rdf:type schema:Person
    82 N32966d1b08c845c7951b6a9421ed5452 schema:volumeNumber 33
    83 rdf:type schema:PublicationVolume
    84 N38b2c891f10a4da38bc285ac71fac625 schema:name Springer Nature - SN SciGraph project
    85 rdf:type schema:Organization
    86 N6c6795aa84024d74b9a32dfbac512bca schema:name doi
    87 schema:value 10.1007/s00138-021-01258-7
    88 rdf:type schema:PropertyValue
    89 N7824e13f25bf40f1988ad1d472105cf2 schema:affiliation grid-institutes:grid.36303.35
    90 schema:familyName Shin
    91 schema:givenName Hong-Chang
    92 rdf:type schema:Person
    93 N932c53fd113f444b9f2d50011ce5aae1 rdf:first N7824e13f25bf40f1988ad1d472105cf2
    94 rdf:rest Nfef24865cf4045489410cab4dbe63cec
    95 N9daa4a3a98664450ac9fef5d7f1a97cb schema:issueNumber 1
    96 rdf:type schema:PublicationIssue
    97 Na1cf4b9cee484ff08a04bd758eadeb6d rdf:first N2361713bd44340c7b3e455d353ec334d
    98 rdf:rest Nb811beb8950945cab1de7b35e9b878c3
    99 Nb811beb8950945cab1de7b35e9b878c3 rdf:first Nf429cdc1af964f2ea8a6d23412cc2dbf
    100 rdf:rest N932c53fd113f444b9f2d50011ce5aae1
    101 Nd4315b518dab486ab8652cc67b9c46f8 schema:name dimensions_id
    102 schema:value pub.1142611724
    103 rdf:type schema:PropertyValue
    104 Neff89740aa754cdebb221fb17c799a5e rdf:first sg:person.07571233775.64
    105 rdf:rest Na1cf4b9cee484ff08a04bd758eadeb6d
    106 Nf429cdc1af964f2ea8a6d23412cc2dbf schema:affiliation grid-institutes:grid.254224.7
    107 schema:familyName Yoo
    108 schema:givenName Heebin
    109 rdf:type schema:Person
    110 Nfef24865cf4045489410cab4dbe63cec rdf:first sg:person.015767414061.20
    111 rdf:rest rdf:nil
    112 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    113 schema:name Information and Computing Sciences
    114 rdf:type schema:DefinedTerm
    115 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    116 schema:name Artificial Intelligence and Image Processing
    117 rdf:type schema:DefinedTerm
    118 sg:journal.1045266 schema:issn 0932-8092
    119 1432-1769
    120 schema:name Machine Vision and Applications
    121 schema:publisher Springer Nature
    122 rdf:type schema:Periodical
    123 sg:person.015767414061.20 schema:affiliation grid-institutes:grid.254224.7
    124 schema:familyName Kwon
    125 schema:givenName Junseok
    126 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015767414061.20
    127 rdf:type schema:Person
    128 sg:person.07571233775.64 schema:affiliation grid-institutes:grid.254224.7
    129 schema:familyName Shu
    130 schema:givenName Dong Wook
    131 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07571233775.64
    132 rdf:type schema:Person
    133 sg:pub.10.1007/978-3-030-01237-3_47 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107463332
    134 https://doi.org/10.1007/978-3-030-01237-3_47
    135 rdf:type schema:CreativeWork
    136 sg:pub.10.1007/978-3-030-11009-3_20 schema:sameAs https://app.dimensions.ai/details/publication/pub.1111703316
    137 https://doi.org/10.1007/978-3-030-11009-3_20
    138 rdf:type schema:CreativeWork
    139 sg:pub.10.1007/978-3-319-10578-9_23 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030406568
    140 https://doi.org/10.1007/978-3-319-10578-9_23
    141 rdf:type schema:CreativeWork
    142 grid-institutes:grid.254224.7 schema:alternateName School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
    143 schema:name School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
    144 rdf:type schema:Organization
    145 grid-institutes:grid.36303.35 schema:alternateName Electronics and Telecommunications Research Institute, Seoul, Korea
    146 schema:name Electronics and Telecommunications Research Institute, Seoul, Korea
    147 rdf:type schema:Organization
     



