Ontology type: schema:ScholarlyArticle
Published: 2021-11-16
Authors: Dong Wook Shu, Wonbeom Jang, Heebin Yoo, Hong-Chang Shin, Junseok Kwon
Abstract: Owing to their improved representation ability, recent deep learning-based methods can estimate scene depths accurately. However, these methods still have difficulty estimating consistent scene depths in real-world environments containing severe illumination changes, occlusions, and texture-less regions. To solve this problem, we propose a novel depth-estimation method for unstructured multi-view images. Specifically, we present a plane-sweep generative adversarial network, in which the proposed adversarial loss significantly improves depth-estimation accuracy under real-world settings, and the consistency loss makes the depth-estimation results insensitive to changes in viewpoint and in the number of input images. In addition, 3D convolution layers are inserted into the network to enrich the feature representation. Experimental results indicate that the proposed plane-sweep generative adversarial network quantitatively and qualitatively outperforms state-of-the-art methods.
Pages: 5
URL: http://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7
DOI: http://dx.doi.org/10.1007/s00138-021-01258-7
Dimensions: https://app.dimensions.ai/details/publication/pub.1142611724
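The plane-sweep construction mentioned in the abstract above can be illustrated with a classical, non-learned sketch: for a rectified image pair, each disparity hypothesis defines a fronto-parallel plane, and stacking per-hypothesis matching costs yields a cost volume whose per-pixel argmin is the depth (here, disparity) estimate. This is a generic illustration, not the authors' implementation; the function name and the photometric (absolute-difference) cost are illustrative choices.

```python
import numpy as np

def plane_sweep_disparity(ref, src, max_disp):
    """Winner-take-all plane-sweep stereo for a rectified image pair.

    For each disparity hypothesis d, compare reference pixel (y, x)
    against source pixel (y, x - d) and stack the absolute differences
    into a cost volume of shape (max_disp + 1, H, W); the argmin along
    the hypothesis axis is the per-pixel disparity estimate.
    """
    h, w = ref.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # Columns x < d have no valid match at this hypothesis; they
        # keep cost = inf and are ignored by the argmin.
        cost[d, :, d:] = np.abs(ref[:, d:] - src[:, :w - d])
    return np.argmin(cost, axis=0)
```

Learned multi-view methods follow the same recipe but replace the photometric difference with deep-feature similarity and regularize the cost volume with 3D convolutions, which is where the 3D convolution layers mentioned in the abstract operate.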
JSON-LD is the canonical representation for SciGraph data.
TIP: You can inspect this SciGraph record with an external JSON-LD service, such as the JSON-LD Playground or Google's Structured Data Testing Tool.
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea",
"id": "http://www.grid.ac/institutes/grid.254224.7",
"name": [
"School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
],
"type": "Organization"
},
"familyName": "Shu",
"givenName": "Dong Wook",
"id": "sg:person.07571233775.64",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07571233775.64"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea",
"id": "http://www.grid.ac/institutes/grid.254224.7",
"name": [
"School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
],
"type": "Organization"
},
"familyName": "Jang",
"givenName": "Wonbeom",
"type": "Person"
},
{
"affiliation": {
"alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea",
"id": "http://www.grid.ac/institutes/grid.254224.7",
"name": [
"School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
],
"type": "Organization"
},
"familyName": "Yoo",
"givenName": "Heebin",
"type": "Person"
},
{
"affiliation": {
"alternateName": "Electronics and Telecommunications Research Institute, Seoul, Korea",
"id": "http://www.grid.ac/institutes/grid.36303.35",
"name": [
"Electronics and Telecommunications Research Institute, Seoul, Korea"
],
"type": "Organization"
},
"familyName": "Shin",
"givenName": "Hong-Chang",
"type": "Person"
},
{
"affiliation": {
"alternateName": "School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea",
"id": "http://www.grid.ac/institutes/grid.254224.7",
"name": [
"School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea"
],
"type": "Organization"
},
"familyName": "Kwon",
"givenName": "Junseok",
"id": "sg:person.015767414061.20",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015767414061.20"
],
"type": "Person"
}
],
"citation": [
{
"id": "sg:pub.10.1007/978-3-030-01237-3_47",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1107463332",
"https://doi.org/10.1007/978-3-030-01237-3_47"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-030-11009-3_20",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1111703316",
"https://doi.org/10.1007/978-3-030-11009-3_20"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-319-10578-9_23",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1030406568",
"https://doi.org/10.1007/978-3-319-10578-9_23"
],
"type": "CreativeWork"
}
],
"datePublished": "2021-11-16",
"datePublishedReg": "2021-11-16",
"description": "Owing to the improved representation ability, recent deep learning-based methods enable to estimate scene depths accurately. However, these methods still have difficulty in estimating consistent scene depths under real-world environments containing severe illumination changes, occlusions, and texture-less regions. To solve this problem, in this paper, we propose a novel depth-estimation method for unstructured multi-view images. Accordingly, we present a plane sweep generative adversarial network, where the proposed adversarial loss significantly improves the depth-estimation accuracy under real-world settings, and the consistency loss makes the depth-estimation results insensitive to the changes in viewpoints and the number of input images. In addition, 3D convolution layers are inserted into the network to enrich feature representation. Experimental results indicate that the proposed plane sweep generative adversarial network quantitatively and qualitatively outperforms state-of-the-art methods.",
"genre": "article",
"id": "sg:pub.10.1007/s00138-021-01258-7",
"inLanguage": "en",
"isAccessibleForFree": false,
"isPartOf": [
{
"id": "sg:journal.1045266",
"issn": [
"0932-8092",
"1432-1769"
],
"name": "Machine Vision and Applications",
"publisher": "Springer Nature",
"type": "Periodical"
},
{
"issueNumber": "1",
"type": "PublicationIssue"
},
{
"type": "PublicationVolume",
"volumeNumber": "33"
}
],
"keywords": [
"generative adversarial network",
"adversarial network",
"scene depth",
"recent deep learning-based methods",
"deep learning-based methods",
"multi-view depth estimation",
"depth estimation results",
"multi-view images",
"learning-based methods",
"texture-less regions",
"real-world environments",
"novel depth estimation method",
"depth estimation accuracy",
"depth estimation method",
"severe illumination changes",
"convolution layers",
"feature representation",
"input image",
"consistency loss",
"adversarial loss",
"illumination changes",
"art methods",
"depth estimation",
"representation ability",
"network",
"experimental results",
"real-world setting",
"images",
"representation",
"accuracy",
"method",
"environment",
"viewpoint",
"estimation",
"results",
"difficulties",
"number",
"occlusion",
"state",
"ability",
"setting",
"layer",
"addition",
"depth",
"loss",
"changes",
"region",
"paper",
"problem"
],
"name": "Deep-plane sweep generative adversarial network for consistent multi-view depth estimation",
"pagination": "5",
"productId": [
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1142611724"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/s00138-021-01258-7"
]
}
],
"sameAs": [
"https://doi.org/10.1007/s00138-021-01258-7",
"https://app.dimensions.ai/details/publication/pub.1142611724"
],
"sdDataset": "articles",
"sdDatePublished": "2022-05-20T07:39",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/article/article_899.jsonl",
"type": "ScholarlyArticle",
"url": "https://doi.org/10.1007/s00138-021-01258-7"
}
]
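Once retrieved, the JSON-LD record above is plain JSON and can be mined with ordinary JSON tooling. A minimal sketch, using a toy payload trimmed from the record above (the real payload has the same top-level shape: a list containing one object):

```python
import json

# Trimmed stand-in for the SciGraph record above.
record_text = """
[
  {
    "name": "Deep-plane sweep generative adversarial network for consistent multi-view depth estimation",
    "datePublished": "2021-11-16",
    "author": [
      {"familyName": "Shu", "givenName": "Dong Wook", "type": "Person"},
      {"familyName": "Kwon", "givenName": "Junseok", "type": "Person"}
    ],
    "sameAs": ["https://doi.org/10.1007/s00138-021-01258-7"]
  }
]
"""

article = json.loads(record_text)[0]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in article["author"]]
doi = next(u for u in article["sameAs"] if u.startswith("https://doi.org/"))
print(authors)  # ['Dong Wook Shu', 'Junseok Kwon']
print(doi)      # https://doi.org/10.1007/s00138-021-01258-7
```

The same extraction works unchanged on the full record, since the field names (`author`, `familyName`, `sameAs`, ...) are those shown above.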
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01258-7'
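The same content negotiation can be scripted; a minimal Python sketch using only the standard library (the helper name is illustrative). The serialization is selected purely by the `Accept` header, exactly as in the curl commands above:

```python
import urllib.request

SCIGRAPH = "https://scigraph.springernature.com"

def scigraph_request(record_id, mime_type):
    """Build a request whose Accept header selects the RDF serialization."""
    return urllib.request.Request(
        f"{SCIGRAPH}/{record_id}",
        headers={"Accept": mime_type},
    )

req = scigraph_request("pub.10.1007/s00138-021-01258-7", "application/ld+json")
# urllib.request.urlopen(req) would then fetch the JSON-LD shown above;
# swap in "text/turtle" or "application/n-triples" for other formats.
```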
This table displays all metadata directly associated with this object as RDF triples:
147 TRIPLES · 22 PREDICATES · 77 URIs · 66 LITERALS · 6 BLANK NODES