Ontology type: schema:ScholarlyArticle
Title: BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image
Published: 2021-10-12
Authors: Shaoyi Li, Jian Lin, Xi Yang, Jun Ma, Yifeng Chen
Abstract: In this paper, we propose an image dehazing model based on generative adversarial networks (GAN). The pix2pix framework is taken as the starting point of the proposed model. First, a UNet-like network is employed as the dehazing network in view of the high consistency of the image dehazing problem. A shortcut module is proposed to effectively increase the nonlinear characteristics of the network, which benefits the subsequent image-generation process and stabilizes GAN training. In addition, inspired by the face illumination processing model and the perceptual loss model, a quality vision loss strategy based on peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and perceptual losses is designed to obtain better visual quality in the dehazed image. Experimental results on public datasets show that our network outperforms the compared models on indoor images, and the images dehazed by the proposed model show better chromaticity and qualitative quality.
Pages: 124
URL: http://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9
DOI: http://dx.doi.org/10.1007/s00138-021-01248-9
Dimensions: https://app.dimensions.ai/details/publication/pub.1141821691
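The abstract describes a composite "quality vision loss" built from PSNR, SSIM and perceptual terms. The sketch below is only an illustrative reconstruction of such a loss in PyTorch, not the authors' implementation: the weights, the simplified global SSIM, and the feat_extractor argument are assumptions made for the example.

# A minimal sketch (not the paper's code) of a PSNR + SSIM + perceptual loss.
# Weights and the simplified SSIM are illustrative assumptions only.
import torch
import torch.nn.functional as F

def psnr_loss(pred, target, max_val=1.0):
    # Lower MSE means higher PSNR, so we minimize the negative PSNR.
    mse = F.mse_loss(pred, target)
    psnr = 10.0 * torch.log10(max_val ** 2 / (mse + 1e-8))
    return -psnr

def ssim_loss(pred, target, c1=0.01 ** 2, c2=0.03 ** 2):
    # Global SSIM per image and channel (no sliding window), used here only
    # to illustrate the structure of the term.
    mu_x, mu_y = pred.mean(dim=(2, 3)), target.mean(dim=(2, 3))
    var_x, var_y = pred.var(dim=(2, 3)), target.var(dim=(2, 3))
    cov = ((pred - mu_x[..., None, None]) * (target - mu_y[..., None, None])).mean(dim=(2, 3))
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim.mean()

def quality_vision_loss(pred, target, feat_extractor,
                        w_psnr=0.01, w_ssim=1.0, w_perc=1.0):
    # Perceptual term: L1 distance between features of a frozen network
    # (e.g. pretrained VGG features); feat_extractor is any such callable.
    perc = F.l1_loss(feat_extractor(pred), feat_extractor(target))
    return (w_psnr * psnr_loss(pred, target)
            + w_ssim * ssim_loss(pred, target)
            + w_perc * perc)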
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China",
"id": "http://www.grid.ac/institutes/grid.440588.5",
"name": [
"School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
],
"type": "Organization"
},
"familyName": "Li",
"givenName": "Shaoyi",
"id": "sg:person.010025723405.06",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010025723405.06"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China",
"id": "http://www.grid.ac/institutes/grid.440588.5",
"name": [
"Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
],
"type": "Organization"
},
"familyName": "Lin",
"givenName": "Jian",
"id": "sg:person.015312722714.60",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015312722714.60"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China",
"id": "http://www.grid.ac/institutes/grid.440588.5",
"name": [
"School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
],
"type": "Organization"
},
"familyName": "Yang",
"givenName": "Xi",
"id": "sg:person.013577320437.10",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013577320437.10"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "Xi\u2019an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi\u2019an, China",
"id": "http://www.grid.ac/institutes/grid.464234.3",
"name": [
"Xi\u2019an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi\u2019an, China"
],
"type": "Organization"
},
"familyName": "Ma",
"givenName": "Jun",
"id": "sg:person.07445653431.03",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07445653431.03"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China",
"id": "http://www.grid.ac/institutes/None",
"name": [
"China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China"
],
"type": "Organization"
},
"familyName": "Chen",
"givenName": "Yifeng",
"type": "Person"
}
],
"citation": [
{
"id": "sg:pub.10.1007/978-3-319-46475-6_10",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1033380566",
"https://doi.org/10.1007/978-3-319-46475-6_10"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-030-20887-5_13",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1115900444",
"https://doi.org/10.1007/978-3-030-20887-5_13"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-319-46475-6_43",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1018034649",
"https://doi.org/10.1007/978-3-319-46475-6_43"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s10851-019-00909-9",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1121224867",
"https://doi.org/10.1007/s10851-019-00909-9"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-030-01234-2_43",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1107454608",
"https://doi.org/10.1007/978-3-030-01234-2_43"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s11760-013-0500-z",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1021850467",
"https://doi.org/10.1007/s11760-013-0500-z"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-319-24574-4_28",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1017774818",
"https://doi.org/10.1007/978-3-319-24574-4_28"
],
"type": "CreativeWork"
}
],
"datePublished": "2021-10-12",
"datePublishedReg": "2021-10-12",
"description": "In this paper, we propose an image dehazing model based on the generative adversarial networks (GAN). The pix2pix framework is taken as the starting point in the proposed model. First, a UNet-like network is employed as the dehazing network in view of the high consistency of the image dehazing problem. In the proposed model, a shortcut module is proposed to effectively increase the nonlinear characteristics of the network, which is beneficial for subsequent processes of image generation and stabilizing the training process of the GAN network. Also, inspired by the face illumination processing model and the perceptual loss model, the quality vision loss strategy is designed to obtain a better visual quality of the dehazed image, based on peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and perceptual losses. The experimental results on public datasets show that our network demonstrates the superiority over the compared models on indoor images. Also, the dehazed image by the proposed model shows better chromaticity and qualitative quality.",
"genre": "article",
"id": "sg:pub.10.1007/s00138-021-01248-9",
"inLanguage": "en",
"isAccessibleForFree": false,
"isFundedItemOf": [
{
"id": "sg:grant.8308967",
"type": "MonetaryGrant"
}
],
"isPartOf": [
{
"id": "sg:journal.1045266",
"issn": [
"0932-8092",
"1432-1769"
],
"name": "Machine Vision and Applications",
"publisher": "Springer Nature",
"type": "Periodical"
},
{
"issueNumber": "6",
"type": "PublicationIssue"
},
{
"type": "PublicationVolume",
"volumeNumber": "32"
}
],
"keywords": [
"generative adversarial network",
"pix2pix framework",
"UNet-like network",
"better visual quality",
"indoor images",
"GAN network",
"Dehazing Network",
"adversarial network",
"image generation",
"public datasets",
"single image",
"perceptual loss",
"visual quality",
"training process",
"peak signal",
"processing model",
"network",
"qualitative quality",
"images",
"experimental results",
"nonlinear characteristics",
"framework",
"dataset",
"structural similarity",
"loss model",
"good chromaticity",
"noise ratio",
"subsequent processes",
"module",
"model",
"high consistency",
"superiority",
"quality",
"starting point",
"process",
"consistency",
"similarity",
"view",
"generation",
"signals",
"chromaticity",
"point",
"strategies",
"characteristics",
"ratio",
"loss strategies",
"results",
"problem",
"loss",
"paper"
],
"name": "BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image",
"pagination": "124",
"productId": [
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1141821691"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/s00138-021-01248-9"
]
}
],
"sameAs": [
"https://doi.org/10.1007/s00138-021-01248-9",
"https://app.dimensions.ai/details/publication/pub.1141821691"
],
"sdDataset": "articles",
"sdDatePublished": "2022-05-20T07:38",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/article/article_890.jsonl",
"type": "ScholarlyArticle",
"url": "https://doi.org/10.1007/s00138-021-01248-9"
}
]
Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML using the examples below.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'
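The same content negotiation shown in the curl commands can be done from Python. The snippet below is a small illustrative sketch (it assumes the requests package is installed) that fetches the JSON-LD record and prints a few of the fields shown above; as in the listing above, the record is serialized as a one-element list.

# Sketch: fetch this record as JSON-LD and read a few fields from it.
import requests

URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9"

resp = requests.get(URL, headers={"Accept": "application/ld+json"})
resp.raise_for_status()
records = resp.json()            # one-element list, as shown above

article = records[0]
print(article["name"])           # "BPFD-Net: enhanced dehazing model ..."
print(article["datePublished"])  # "2021-10-12"
for person in article["author"]:
    print(person["givenName"], person["familyName"])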
Summary of the RDF triples directly associated with this object:
173 triples
22 predicates
82 URIs
67 literals
6 blank nodes
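These counts can be reproduced by parsing one of the serializations above. The sketch below assumes the rdflib and requests packages: it loads the Turtle serialization and counts triples and distinct predicates.

# Sketch: parse the Turtle serialization and recount triples and predicates.
import requests
from rdflib import Graph

URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9"

resp = requests.get(URL, headers={"Accept": "text/turtle"})
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="turtle")

print(len(g), "triples")                                 # expected: 173
print(len({p for _, p, _ in g}), "distinct predicates")  # expected: 22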