BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2021-10-12

AUTHORS

Shaoyi Li, Jian Lin, Xi Yang, Jun Ma, Yifeng Chen

ABSTRACT

In this paper, we propose an image dehazing model based on generative adversarial networks (GANs). The pix2pix framework is taken as the starting point for the proposed model. First, a U-Net-like network is employed as the dehazing network, in view of the high consistency required by the image dehazing problem. A shortcut module is proposed to effectively increase the nonlinear characteristics of the network, which benefits the subsequent image-generation process and stabilizes GAN training. Also, inspired by the face illumination processing model and the perceptual loss model, a quality vision loss strategy is designed to obtain better visual quality in the dehazed image, based on peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and perceptual losses. Experimental results on public datasets show that our network outperforms the compared models on indoor images, and the images dehazed by the proposed model show better chromaticity and qualitative quality.
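The abstract's "quality vision loss" combines PSNR-, SSIM-, and perceptual-loss terms. A minimal sketch of how PSNR- and SSIM-based terms can be folded into one scalar loss is below; the weights `w_ssim`/`w_psnr`, the `psnr_cap` normalization, the global (non-windowed) SSIM, and the omission of the perceptual feature term are all assumptions of this sketch, not the paper's actual formulation.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    # Peak signal-to-noise ratio in dB between a reference and a test image.
    mse = np.mean((ref - img) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    # Simplified SSIM using global image statistics (no sliding window).
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = np.mean((ref - mu_x) * (img - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def quality_loss(ref, img, w_ssim=0.5, w_psnr=0.5, psnr_cap=50.0):
    # Hypothetical combination: the paper's actual weights and its
    # perceptual (feature-space) term are not given in this record.
    p = min(psnr(ref, img), psnr_cap)
    s = ssim_global(ref, img)
    return w_ssim * (1.0 - s) + w_psnr * (1.0 - p / psnr_cap)
```

Both terms are normalized so the loss is 0 for identical images and grows as fidelity drops, which lets them be summed on a common scale.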

PAGES

124

References to SciGraph publications

  • 2016-09-17. Single Image Dehazing via Multi-scale Convolutional Neural Networks in COMPUTER VISION – ECCV 2016
  • 2019-09-24. A Novel Total Generalized Variation Model for Image Dehazing in JOURNAL OF MATHEMATICAL IMAGING AND VISION
  • 2018-10-06. Proximal Dehaze-Net: A Prior Learning-Based Deep Network for Single Image Dehazing in COMPUTER VISION – ECCV 2018
  • 2016-09-17. Perceptual Losses for Real-Time Style Transfer and Super-Resolution in COMPUTER VISION – ECCV 2016
  • 2019-05-28. Progressive Feature Fusion Network for Realistic Image Dehazing in COMPUTER VISION – ACCV 2018
  • 2013-06-15. A new histogram equalization method for digital image enhancement and brightness preservation in SIGNAL, IMAGE AND VIDEO PROCESSING
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9

    DOI

    http://dx.doi.org/10.1007/s00138-021-01248-9

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1141821691



    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service, such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China", 
              "id": "http://www.grid.ac/institutes/grid.440588.5", 
              "name": [
                "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Li", 
            "givenName": "Shaoyi", 
            "id": "sg:person.010025723405.06", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010025723405.06"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China", 
              "id": "http://www.grid.ac/institutes/grid.440588.5", 
              "name": [
                "Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Lin", 
            "givenName": "Jian", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China", 
              "id": "http://www.grid.ac/institutes/grid.440588.5", 
              "name": [
                "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Yang", 
            "givenName": "Xi", 
            "id": "sg:person.013577320437.10", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013577320437.10"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Xi\u2019an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi\u2019an, China", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Xi\u2019an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi\u2019an, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ma", 
            "givenName": "Jun", 
            "id": "sg:person.07445653431.03", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07445653431.03"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Chen", 
            "givenName": "Yifeng", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-46475-6_10", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1033380566", 
              "https://doi.org/10.1007/978-3-319-46475-6_10"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01234-2_43", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454608", 
              "https://doi.org/10.1007/978-3-030-01234-2_43"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11760-013-0500-z", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021850467", 
              "https://doi.org/10.1007/s11760-013-0500-z"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-20887-5_13", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1115900444", 
              "https://doi.org/10.1007/978-3-030-20887-5_13"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46475-6_43", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1018034649", 
              "https://doi.org/10.1007/978-3-319-46475-6_43"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10851-019-00909-9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1121224867", 
              "https://doi.org/10.1007/s10851-019-00909-9"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2021-10-12", 
        "datePublishedReg": "2021-10-12", 
        "description": "In this paper, we propose an image dehazing model based on the generative adversarial networks (GAN). The pix2pix framework is taken as the starting point in the proposed model. First, a UNet-like network is employed as the dehazing network in view of the high consistency of the image dehazing problem. In the proposed model, a shortcut module is proposed to effectively increase the nonlinear characteristics of the network, which is beneficial for subsequent processes of image generation and stabilizing the training process of the GAN network. Also, inspired by the face illumination processing model and the perceptual loss model, the quality vision loss strategy is designed to obtain a better visual quality of the dehazed image, based on peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and perceptual losses. The experimental results on public datasets show that our network demonstrates the superiority over the compared models on indoor images. Also, the dehazed image by the proposed model shows better chromaticity and qualitative quality.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s00138-021-01248-9", 
        "inLanguage": "en", 
        "isAccessibleForFree": false, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8308967", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1045266", 
            "issn": [
              "0932-8092", 
              "1432-1769"
            ], 
            "name": "Machine Vision and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "6", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "32"
          }
        ], 
        "keywords": [
          "generative adversarial network", 
          "better visual quality", 
          "indoor images", 
          "GAN network", 
          "adversarial network", 
          "Dehazing Network", 
          "image generation", 
          "public datasets", 
          "single image", 
          "perceptual loss", 
          "visual quality", 
          "training process", 
          "peak signal", 
          "processing model", 
          "network", 
          "images", 
          "qualitative quality", 
          "experimental results", 
          "nonlinear characteristics", 
          "framework", 
          "dataset", 
          "loss model", 
          "structural similarity", 
          "noise ratio", 
          "good chromaticity", 
          "subsequent processes", 
          "module", 
          "model", 
          "high consistency", 
          "quality", 
          "superiority", 
          "starting point", 
          "process", 
          "consistency", 
          "similarity", 
          "view", 
          "generation", 
          "signals", 
          "point", 
          "chromaticity", 
          "strategies", 
          "characteristics", 
          "ratio", 
          "loss strategies", 
          "results", 
          "problem", 
          "loss", 
          "paper", 
          "pix2pix framework", 
          "UNet-like network", 
          "shortcut module", 
          "face illumination processing model", 
          "illumination processing model", 
          "perceptual loss model", 
          "quality vision loss strategy", 
          "vision loss strategy"
        ], 
        "name": "BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image", 
        "pagination": "124", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1141821691"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s00138-021-01248-9"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s00138-021-01248-9", 
          "https://app.dimensions.ai/details/publication/pub.1141821691"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-01-01T19:00", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/article/article_877.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s00138-021-01248-9"
      }
    ]
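Because JSON-LD is plain JSON, the record above can be consumed with ordinary JSON tooling. A small sketch that extracts the DOI and author names follows; the `record` literal is a trimmed copy of the record above, reproducing only the fields this sketch reads.

```python
# Trimmed copy of the SciGraph record above (only the fields read below).
record = {
    "name": "BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image",
    "datePublished": "2021-10-12",
    "author": [
        {"familyName": "Li", "givenName": "Shaoyi", "type": "Person"},
        {"familyName": "Lin", "givenName": "Jian", "type": "Person"},
    ],
    "productId": [
        {"name": "doi", "type": "PropertyValue",
         "value": ["10.1007/s00138-021-01248-9"]},
    ],
}

def doi_of(rec):
    # The DOI sits in the productId list as a name/value PropertyValue pair.
    for pid in rec.get("productId", []):
        if pid.get("name") == "doi":
            return pid["value"][0]
    return None

def author_names(rec):
    # Authors are an ordered list of Person objects with split name fields.
    return [f"{a['givenName']} {a['familyName']}" for a in rec.get("author", [])]
```

The same traversal works on the full record fetched from the endpoint, since the field layout is identical.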
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'
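The curl commands above all hit the same URL and select a serialization via the Accept header (HTTP content negotiation). A stdlib-only Python equivalent is sketched below; the `FORMATS` mapping mirrors the four media types shown above, while the function names are this sketch's own.

```python
from urllib.request import Request, urlopen

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9"

# One endpoint, four serializations, chosen by Accept header
# (same media types as the curl examples above).
FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt="json-ld", url=SCIGRAPH_URL):
    # Construct the request without sending it, so the header
    # logic can be exercised offline.
    return Request(url, headers={"Accept": FORMATS[fmt]})

def fetch(fmt="json-ld", url=SCIGRAPH_URL):
    # Network call; in real use, handle HTTPError / URLError.
    with urlopen(build_request(fmt, url)) as resp:
        return resp.read().decode("utf-8")
```

Separating `build_request` from `fetch` keeps the content-negotiation logic testable without network access.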


     

    This table displays all metadata directly associated with this object as RDF triples.

    177 TRIPLES      22 PREDICATES      88 URIs      73 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s00138-021-01248-9 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N2ca1d70884784a45b91173fa36488496
    4 schema:citation sg:pub.10.1007/978-3-030-01234-2_43
    5 sg:pub.10.1007/978-3-030-20887-5_13
    6 sg:pub.10.1007/978-3-319-24574-4_28
    7 sg:pub.10.1007/978-3-319-46475-6_10
    8 sg:pub.10.1007/978-3-319-46475-6_43
    9 sg:pub.10.1007/s10851-019-00909-9
    10 sg:pub.10.1007/s11760-013-0500-z
    11 schema:datePublished 2021-10-12
    12 schema:datePublishedReg 2021-10-12
    13 schema:description In this paper, we propose an image dehazing model based on the generative adversarial networks (GAN). The pix2pix framework is taken as the starting point in the proposed model. First, a UNet-like network is employed as the dehazing network in view of the high consistency of the image dehazing problem. In the proposed model, a shortcut module is proposed to effectively increase the nonlinear characteristics of the network, which is beneficial for subsequent processes of image generation and stabilizing the training process of the GAN network. Also, inspired by the face illumination processing model and the perceptual loss model, the quality vision loss strategy is designed to obtain a better visual quality of the dehazed image, based on peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and perceptual losses. The experimental results on public datasets show that our network demonstrates the superiority over the compared models on indoor images. Also, the dehazed image by the proposed model shows better chromaticity and qualitative quality.
    14 schema:genre article
    15 schema:inLanguage en
    16 schema:isAccessibleForFree false
    17 schema:isPartOf N49334f63d3e940cb8001c75d3b0c5bd2
    18 N78049ef0f88d457da6ea0ef7a3a49fb1
    19 sg:journal.1045266
    20 schema:keywords Dehazing Network
    21 GAN network
    22 UNet-like network
    23 adversarial network
    24 better visual quality
    25 characteristics
    26 chromaticity
    27 consistency
    28 dataset
    29 experimental results
    30 face illumination processing model
    31 framework
    32 generation
    33 generative adversarial network
    34 good chromaticity
    35 high consistency
    36 illumination processing model
    37 image generation
    38 images
    39 indoor images
    40 loss
    41 loss model
    42 loss strategies
    43 model
    44 module
    45 network
    46 noise ratio
    47 nonlinear characteristics
    48 paper
    49 peak signal
    50 perceptual loss
    51 perceptual loss model
    52 pix2pix framework
    53 point
    54 problem
    55 process
    56 processing model
    57 public datasets
    58 qualitative quality
    59 quality
    60 quality vision loss strategy
    61 ratio
    62 results
    63 shortcut module
    64 signals
    65 similarity
    66 single image
    67 starting point
    68 strategies
    69 structural similarity
    70 subsequent processes
    71 superiority
    72 training process
    73 view
    74 vision loss strategy
    75 visual quality
    76 schema:name BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image
    77 schema:pagination 124
    78 schema:productId N052bd09152114839bcba72daea93d000
    79 N388303755b9e4f7cb487e84253eb7616
    80 schema:sameAs https://app.dimensions.ai/details/publication/pub.1141821691
    81 https://doi.org/10.1007/s00138-021-01248-9
    82 schema:sdDatePublished 2022-01-01T19:00
    83 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    84 schema:sdPublisher N3f5d0d5a0fb145088be7ea55cd099f55
    85 schema:url https://doi.org/10.1007/s00138-021-01248-9
    86 sgo:license sg:explorer/license/
    87 sgo:sdDataset articles
    88 rdf:type schema:ScholarlyArticle
    89 N052bd09152114839bcba72daea93d000 schema:name dimensions_id
    90 schema:value pub.1141821691
    91 rdf:type schema:PropertyValue
    92 N2ca1d70884784a45b91173fa36488496 rdf:first sg:person.010025723405.06
    93 rdf:rest N667c9349d2fc4d05bb6ff20cea211727
    94 N388303755b9e4f7cb487e84253eb7616 schema:name doi
    95 schema:value 10.1007/s00138-021-01248-9
    96 rdf:type schema:PropertyValue
    97 N3f5d0d5a0fb145088be7ea55cd099f55 schema:name Springer Nature - SN SciGraph project
    98 rdf:type schema:Organization
    99 N47ab748cbf8846b4bbe3f4c378c8c45d schema:affiliation grid-institutes:grid.440588.5
    100 schema:familyName Lin
    101 schema:givenName Jian
    102 rdf:type schema:Person
    103 N49334f63d3e940cb8001c75d3b0c5bd2 schema:volumeNumber 32
    104 rdf:type schema:PublicationVolume
    105 N50a1b63f46924cb0a0c53af9a478c6b5 rdf:first sg:person.013577320437.10
    106 rdf:rest N5fbaa4a0b9604f44a1f5e96d847dafa1
    107 N5fbaa4a0b9604f44a1f5e96d847dafa1 rdf:first sg:person.07445653431.03
    108 rdf:rest Nac04f85668eb4a2f9d0b9c277b826406
    109 N667c9349d2fc4d05bb6ff20cea211727 rdf:first N47ab748cbf8846b4bbe3f4c378c8c45d
    110 rdf:rest N50a1b63f46924cb0a0c53af9a478c6b5
    111 N78049ef0f88d457da6ea0ef7a3a49fb1 schema:issueNumber 6
    112 rdf:type schema:PublicationIssue
    113 N883aa834da7748f1844761b1f97bc76e schema:affiliation grid-institutes:None
    114 schema:familyName Chen
    115 schema:givenName Yifeng
    116 rdf:type schema:Person
    117 Nac04f85668eb4a2f9d0b9c277b826406 rdf:first N883aa834da7748f1844761b1f97bc76e
    118 rdf:rest rdf:nil
    119 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    120 schema:name Information and Computing Sciences
    121 rdf:type schema:DefinedTerm
    122 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    123 schema:name Artificial Intelligence and Image Processing
    124 rdf:type schema:DefinedTerm
    125 sg:grant.8308967 http://pending.schema.org/fundedItem sg:pub.10.1007/s00138-021-01248-9
    126 rdf:type schema:MonetaryGrant
    127 sg:journal.1045266 schema:issn 0932-8092
    128 1432-1769
    129 schema:name Machine Vision and Applications
    130 schema:publisher Springer Nature
    131 rdf:type schema:Periodical
    132 sg:person.010025723405.06 schema:affiliation grid-institutes:grid.440588.5
    133 schema:familyName Li
    134 schema:givenName Shaoyi
    135 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010025723405.06
    136 rdf:type schema:Person
    137 sg:person.013577320437.10 schema:affiliation grid-institutes:grid.440588.5
    138 schema:familyName Yang
    139 schema:givenName Xi
    140 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013577320437.10
    141 rdf:type schema:Person
    142 sg:person.07445653431.03 schema:affiliation grid-institutes:None
    143 schema:familyName Ma
    144 schema:givenName Jun
    145 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07445653431.03
    146 rdf:type schema:Person
    147 sg:pub.10.1007/978-3-030-01234-2_43 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107454608
    148 https://doi.org/10.1007/978-3-030-01234-2_43
    149 rdf:type schema:CreativeWork
    150 sg:pub.10.1007/978-3-030-20887-5_13 schema:sameAs https://app.dimensions.ai/details/publication/pub.1115900444
    151 https://doi.org/10.1007/978-3-030-20887-5_13
    152 rdf:type schema:CreativeWork
    153 sg:pub.10.1007/978-3-319-24574-4_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017774818
    154 https://doi.org/10.1007/978-3-319-24574-4_28
    155 rdf:type schema:CreativeWork
    156 sg:pub.10.1007/978-3-319-46475-6_10 schema:sameAs https://app.dimensions.ai/details/publication/pub.1033380566
    157 https://doi.org/10.1007/978-3-319-46475-6_10
    158 rdf:type schema:CreativeWork
    159 sg:pub.10.1007/978-3-319-46475-6_43 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018034649
    160 https://doi.org/10.1007/978-3-319-46475-6_43
    161 rdf:type schema:CreativeWork
    162 sg:pub.10.1007/s10851-019-00909-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1121224867
    163 https://doi.org/10.1007/s10851-019-00909-9
    164 rdf:type schema:CreativeWork
    165 sg:pub.10.1007/s11760-013-0500-z schema:sameAs https://app.dimensions.ai/details/publication/pub.1021850467
    166 https://doi.org/10.1007/s11760-013-0500-z
    167 rdf:type schema:CreativeWork
    168 grid-institutes:None schema:alternateName China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China
    169 Xi’an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi’an, China
    170 schema:name China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China
    171 Xi’an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi’an, China
    172 rdf:type schema:Organization
    173 grid-institutes:grid.440588.5 schema:alternateName School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi’an, China
    174 Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi’an, China
    175 schema:name School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi’an, China
    176 Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi’an, China
    177 rdf:type schema:Organization
     



