BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2021-10-12

AUTHORS

Shaoyi Li, Jian Lin, Xi Yang, Jun Ma, Yifeng Chen

ABSTRACT

In this paper, we propose an image dehazing model based on generative adversarial networks (GANs). The pix2pix framework is taken as the starting point of the proposed model. First, a UNet-like network is employed as the dehazing network, in view of the high input-output consistency of the image dehazing problem. In the proposed model, a shortcut module is introduced to effectively increase the nonlinearity of the network, which benefits the subsequent image generation process and stabilizes GAN training. Also, inspired by a face illumination processing model and the perceptual loss model, a quality vision loss is designed to obtain better visual quality in the dehazed image, based on peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and perceptual losses. Experimental results on public datasets show that our network outperforms the compared models on indoor images, and images dehazed by the proposed model show better chromaticity and qualitative quality. (A minimal sketch of the composite loss follows the article info below.)

PAGES

124
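
The quality vision loss named in the abstract combines three measurable image-quality terms. Below is a minimal PyTorch sketch of such a composite loss; the term weights, the simplified whole-image SSIM, and the feat_extractor argument (e.g., a frozen VGG feature network for the perceptual term) are illustrative assumptions, not the paper's published formulation.

    import torch
    import torch.nn.functional as F

    def psnr_loss(pred, target, max_val=1.0):
        # PSNR = 10 * log10(MAX^2 / MSE); PSNR rises with quality,
        # so it is negated to serve as a minimizable loss term.
        mse = F.mse_loss(pred, target)
        return -10.0 * torch.log10(max_val ** 2 / (mse + 1e-8))

    def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        # Simplified whole-image SSIM (no sliding window), for illustration.
        mu_x, mu_y = x.mean(), y.mean()
        var_x, var_y = x.var(), y.var()
        cov = ((x - mu_x) * (y - mu_y)).mean()
        return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
            (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

    def quality_vision_loss(pred, target, feat_extractor,
                            w_psnr=1.0, w_ssim=1.0, w_perc=1.0):
        # Hypothetical weighting of the three terms named in the abstract:
        # PSNR, SSIM, and a perceptual loss over deep features.
        l_psnr = psnr_loss(pred, target)
        l_ssim = 1.0 - ssim_global(pred, target)  # SSIM of 1 means identical
        l_perc = F.l1_loss(feat_extractor(pred), feat_extractor(target))
        return w_psnr * l_psnr + w_ssim * l_ssim + w_perc * l_perc

In a pix2pix-style setup, this term would be added to the adversarial loss of the generator; the balance between the two is a training hyperparameter.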

References to SciGraph publications

  • 2016-09-17. Single Image Dehazing via Multi-scale Convolutional Neural Networks in COMPUTER VISION – ECCV 2016
  • 2019-09-24. A Novel Total Generalized Variation Model for Image Dehazing in JOURNAL OF MATHEMATICAL IMAGING AND VISION
  • 2018-10-06. Proximal Dehaze-Net: A Prior Learning-Based Deep Network for Single Image Dehazing in COMPUTER VISION – ECCV 2018
  • 2016-09-17. Perceptual Losses for Real-Time Style Transfer and Super-Resolution in COMPUTER VISION – ECCV 2016
  • 2019-05-28. Progressive Feature Fusion Network for Realistic Image Dehazing in COMPUTER VISION – ACCV 2018
  • 2013-06-15. A new histogram equalization method for digital image enhancement and brightness preservation in SIGNAL, IMAGE AND VIDEO PROCESSING
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9

    DOI

    http://dx.doi.org/10.1007/s00138-021-01248-9

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1141821691


    Indexing status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China", 
              "id": "http://www.grid.ac/institutes/grid.440588.5", 
              "name": [
                "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Li", 
            "givenName": "Shaoyi", 
            "id": "sg:person.010025723405.06", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010025723405.06"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China", 
              "id": "http://www.grid.ac/institutes/grid.440588.5", 
              "name": [
                "Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Lin", 
            "givenName": "Jian", 
            "id": "sg:person.015312722714.60", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015312722714.60"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China", 
              "id": "http://www.grid.ac/institutes/grid.440588.5", 
              "name": [
                "School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi\u2019an, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Yang", 
            "givenName": "Xi", 
            "id": "sg:person.013577320437.10", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013577320437.10"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Xi\u2019an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi\u2019an, China", 
              "id": "http://www.grid.ac/institutes/grid.464234.3", 
              "name": [
                "Xi\u2019an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi\u2019an, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ma", 
            "givenName": "Jun", 
            "id": "sg:person.07445653431.03", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07445653431.03"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Chen", 
            "givenName": "Yifeng", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-46475-6_10", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1033380566", 
              "https://doi.org/10.1007/978-3-319-46475-6_10"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-20887-5_13", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1115900444", 
              "https://doi.org/10.1007/978-3-030-20887-5_13"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46475-6_43", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1018034649", 
              "https://doi.org/10.1007/978-3-319-46475-6_43"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10851-019-00909-9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1121224867", 
              "https://doi.org/10.1007/s10851-019-00909-9"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01234-2_43", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454608", 
              "https://doi.org/10.1007/978-3-030-01234-2_43"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11760-013-0500-z", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021850467", 
              "https://doi.org/10.1007/s11760-013-0500-z"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2021-10-12", 
        "datePublishedReg": "2021-10-12", 
        "description": "In this paper, we propose an image dehazing model based on the generative adversarial networks (GAN). The pix2pix framework is taken as the starting point in the proposed model. First, a UNet-like network is employed as the dehazing network in view of the high consistency of the image dehazing problem. In the proposed model, a shortcut module is proposed to effectively increase the nonlinear characteristics of the network, which is beneficial for subsequent processes of image generation and stabilizing the training process of the GAN network. Also, inspired by the face illumination processing model and the perceptual loss model, the quality vision loss strategy is designed to obtain a better visual quality of the dehazed image, based on peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and perceptual losses. The experimental results on public datasets show that our network demonstrates the superiority over the compared models on indoor images. Also, the dehazed image by the proposed model shows better chromaticity and qualitative quality.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s00138-021-01248-9", 
        "inLanguage": "en", 
        "isAccessibleForFree": false, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8308967", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1045266", 
            "issn": [
              "0932-8092", 
              "1432-1769"
            ], 
            "name": "Machine Vision and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "6", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "32"
          }
        ], 
        "keywords": [
          "generative adversarial network", 
          "pix2pix framework", 
          "UNet-like network", 
          "better visual quality", 
          "indoor images", 
          "GAN network", 
          "Dehazing Network", 
          "adversarial network", 
          "image generation", 
          "public datasets", 
          "single image", 
          "perceptual loss", 
          "visual quality", 
          "training process", 
          "peak signal", 
          "processing model", 
          "network", 
          "qualitative quality", 
          "images", 
          "experimental results", 
          "nonlinear characteristics", 
          "framework", 
          "dataset", 
          "structural similarity", 
          "loss model", 
          "good chromaticity", 
          "noise ratio", 
          "subsequent processes", 
          "module", 
          "model", 
          "high consistency", 
          "superiority", 
          "quality", 
          "starting point", 
          "process", 
          "consistency", 
          "similarity", 
          "view", 
          "generation", 
          "signals", 
          "chromaticity", 
          "point", 
          "strategies", 
          "characteristics", 
          "ratio", 
          "loss strategies", 
          "results", 
          "problem", 
          "loss", 
          "paper"
        ], 
        "name": "BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image", 
        "pagination": "124", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1141821691"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s00138-021-01248-9"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s00138-021-01248-9", 
          "https://app.dimensions.ai/details/publication/pub.1141821691"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-05-20T07:38", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/article/article_890.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s00138-021-01248-9"
      }
    ]
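
    As a minimal sketch, the record above can be loaded with Python's standard json module and its key bibliographic fields pulled out; the file name below is hypothetical (any local copy of this JSON-LD works).

    import json

    # Load a local copy of the JSON-LD record shown above
    # (the file name is hypothetical).
    with open("sg_pub_s00138-021-01248-9.json") as fh:
        records = json.load(fh)

    record = records[0]  # the JSON-LD document is a one-element list

    title = record["name"]
    doi = next(p["value"][0] for p in record["productId"]
               if p["name"] == "doi")
    authors = ["{} {}".format(a["givenName"], a["familyName"])
               for a in record["author"]]

    print(title)    # BPFD-Net: enhanced dehazing model based on Pix2pix ...
    print(doi)      # 10.1007/s00138-021-01248-9
    print(authors)  # ['Shaoyi Li', 'Jian Lin', 'Xi Yang', 'Jun Ma', 'Yifeng Chen']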
     

    The RDF metadata can be downloaded as JSON-LD, N-Triples, Turtle, or RDF/XML; license information is available at https://scigraph.springernature.com/explorer/license/.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9'
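
    The same content negotiation works from Python with the requests library; only the Accept header changes between the four formats above.

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01248-9"

    # Ask for JSON-LD; swap the Accept header for application/n-triples,
    # text/turtle, or application/rdf+xml to get the other formats.
    resp = requests.get(URL, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()

    record = resp.json()[0]  # assuming the one-element JSON-LD list shown above
    print(record["datePublished"])  # 2021-10-12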


     

    This table displays all metadata directly associated with this object, expressed as RDF triples.

    173 TRIPLES      22 PREDICATES      82 URIs      67 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s00138-021-01248-9 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N146b501f32334ed68e0b0256a4e2ceb7
    4 schema:citation sg:pub.10.1007/978-3-030-01234-2_43
    5 sg:pub.10.1007/978-3-030-20887-5_13
    6 sg:pub.10.1007/978-3-319-24574-4_28
    7 sg:pub.10.1007/978-3-319-46475-6_10
    8 sg:pub.10.1007/978-3-319-46475-6_43
    9 sg:pub.10.1007/s10851-019-00909-9
    10 sg:pub.10.1007/s11760-013-0500-z
    11 schema:datePublished 2021-10-12
    12 schema:datePublishedReg 2021-10-12
    13 schema:description In this paper, we propose an image dehazing model based on the generative adversarial networks (GAN). The pix2pix framework is taken as the starting point in the proposed model. First, a UNet-like network is employed as the dehazing network in view of the high consistency of the image dehazing problem. In the proposed model, a shortcut module is proposed to effectively increase the nonlinear characteristics of the network, which is beneficial for subsequent processes of image generation and stabilizing the training process of the GAN network. Also, inspired by the face illumination processing model and the perceptual loss model, the quality vision loss strategy is designed to obtain a better visual quality of the dehazed image, based on peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and perceptual losses. The experimental results on public datasets show that our network demonstrates the superiority over the compared models on indoor images. Also, the dehazed image by the proposed model shows better chromaticity and qualitative quality.
    14 schema:genre article
    15 schema:inLanguage en
    16 schema:isAccessibleForFree false
    17 schema:isPartOf N8f0df2793d734bf18c2bbacbe5c1bb73
    18 Nbdb9899c4d2742ac90950b070c3d86c7
    19 sg:journal.1045266
    20 schema:keywords Dehazing Network
    21 GAN network
    22 UNet-like network
    23 adversarial network
    24 better visual quality
    25 characteristics
    26 chromaticity
    27 consistency
    28 dataset
    29 experimental results
    30 framework
    31 generation
    32 generative adversarial network
    33 good chromaticity
    34 high consistency
    35 image generation
    36 images
    37 indoor images
    38 loss
    39 loss model
    40 loss strategies
    41 model
    42 module
    43 network
    44 noise ratio
    45 nonlinear characteristics
    46 paper
    47 peak signal
    48 perceptual loss
    49 pix2pix framework
    50 point
    51 problem
    52 process
    53 processing model
    54 public datasets
    55 qualitative quality
    56 quality
    57 ratio
    58 results
    59 signals
    60 similarity
    61 single image
    62 starting point
    63 strategies
    64 structural similarity
    65 subsequent processes
    66 superiority
    67 training process
    68 view
    69 visual quality
    70 schema:name BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image
    71 schema:pagination 124
    72 schema:productId Na5ec349e2f9543ec83d13b531bba7cce
    73 Neaf149fd5cdf41ee9ddcde9d1f9ec701
    74 schema:sameAs https://app.dimensions.ai/details/publication/pub.1141821691
    75 https://doi.org/10.1007/s00138-021-01248-9
    76 schema:sdDatePublished 2022-05-20T07:38
    77 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    78 schema:sdPublisher Ne9b0aa99b9e448a4a9a38d05c54a452e
    79 schema:url https://doi.org/10.1007/s00138-021-01248-9
    80 sgo:license sg:explorer/license/
    81 sgo:sdDataset articles
    82 rdf:type schema:ScholarlyArticle
    83 N0e65ed69f337493fa7f8097792de9c15 rdf:first Nb0eb5d52d20844d4abb4cb4dd10ae784
    84 rdf:rest rdf:nil
    85 N146b501f32334ed68e0b0256a4e2ceb7 rdf:first sg:person.010025723405.06
    86 rdf:rest N32970b24b7064e03a56c3507cea8a618
    87 N32970b24b7064e03a56c3507cea8a618 rdf:first sg:person.015312722714.60
    88 rdf:rest N58f5aac4e1ed41298dfb112634e588da
    89 N58059718f4a945be94f719efd3d1bf62 rdf:first sg:person.07445653431.03
    90 rdf:rest N0e65ed69f337493fa7f8097792de9c15
    91 N58f5aac4e1ed41298dfb112634e588da rdf:first sg:person.013577320437.10
    92 rdf:rest N58059718f4a945be94f719efd3d1bf62
    93 N8f0df2793d734bf18c2bbacbe5c1bb73 schema:volumeNumber 32
    94 rdf:type schema:PublicationVolume
    95 Na5ec349e2f9543ec83d13b531bba7cce schema:name doi
    96 schema:value 10.1007/s00138-021-01248-9
    97 rdf:type schema:PropertyValue
    98 Nb0eb5d52d20844d4abb4cb4dd10ae784 schema:affiliation grid-institutes:None
    99 schema:familyName Chen
    100 schema:givenName Yifeng
    101 rdf:type schema:Person
    102 Nbdb9899c4d2742ac90950b070c3d86c7 schema:issueNumber 6
    103 rdf:type schema:PublicationIssue
    104 Ne9b0aa99b9e448a4a9a38d05c54a452e schema:name Springer Nature - SN SciGraph project
    105 rdf:type schema:Organization
    106 Neaf149fd5cdf41ee9ddcde9d1f9ec701 schema:name dimensions_id
    107 schema:value pub.1141821691
    108 rdf:type schema:PropertyValue
    109 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    110 schema:name Information and Computing Sciences
    111 rdf:type schema:DefinedTerm
    112 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    113 schema:name Artificial Intelligence and Image Processing
    114 rdf:type schema:DefinedTerm
    115 sg:grant.8308967 http://pending.schema.org/fundedItem sg:pub.10.1007/s00138-021-01248-9
    116 rdf:type schema:MonetaryGrant
    117 sg:journal.1045266 schema:issn 0932-8092
    118 1432-1769
    119 schema:name Machine Vision and Applications
    120 schema:publisher Springer Nature
    121 rdf:type schema:Periodical
    122 sg:person.010025723405.06 schema:affiliation grid-institutes:grid.440588.5
    123 schema:familyName Li
    124 schema:givenName Shaoyi
    125 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010025723405.06
    126 rdf:type schema:Person
    127 sg:person.013577320437.10 schema:affiliation grid-institutes:grid.440588.5
    128 schema:familyName Yang
    129 schema:givenName Xi
    130 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013577320437.10
    131 rdf:type schema:Person
    132 sg:person.015312722714.60 schema:affiliation grid-institutes:grid.440588.5
    133 schema:familyName Lin
    134 schema:givenName Jian
    135 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015312722714.60
    136 rdf:type schema:Person
    137 sg:person.07445653431.03 schema:affiliation grid-institutes:grid.464234.3
    138 schema:familyName Ma
    139 schema:givenName Jun
    140 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07445653431.03
    141 rdf:type schema:Person
    142 sg:pub.10.1007/978-3-030-01234-2_43 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107454608
    143 https://doi.org/10.1007/978-3-030-01234-2_43
    144 rdf:type schema:CreativeWork
    145 sg:pub.10.1007/978-3-030-20887-5_13 schema:sameAs https://app.dimensions.ai/details/publication/pub.1115900444
    146 https://doi.org/10.1007/978-3-030-20887-5_13
    147 rdf:type schema:CreativeWork
    148 sg:pub.10.1007/978-3-319-24574-4_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017774818
    149 https://doi.org/10.1007/978-3-319-24574-4_28
    150 rdf:type schema:CreativeWork
    151 sg:pub.10.1007/978-3-319-46475-6_10 schema:sameAs https://app.dimensions.ai/details/publication/pub.1033380566
    152 https://doi.org/10.1007/978-3-319-46475-6_10
    153 rdf:type schema:CreativeWork
    154 sg:pub.10.1007/978-3-319-46475-6_43 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018034649
    155 https://doi.org/10.1007/978-3-319-46475-6_43
    156 rdf:type schema:CreativeWork
    157 sg:pub.10.1007/s10851-019-00909-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1121224867
    158 https://doi.org/10.1007/s10851-019-00909-9
    159 rdf:type schema:CreativeWork
    160 sg:pub.10.1007/s11760-013-0500-z schema:sameAs https://app.dimensions.ai/details/publication/pub.1021850467
    161 https://doi.org/10.1007/s11760-013-0500-z
    162 rdf:type schema:CreativeWork
    163 grid-institutes:None schema:alternateName China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China
    164 schema:name China Airborne Missile Academy, No. 166 Jiefang Road, 471000, Luoyang, China
    165 rdf:type schema:Organization
    166 grid-institutes:grid.440588.5 schema:alternateName School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi’an, China
    167 Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi’an, China
    168 schema:name School of Astronautics, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi’an, China
    169 Unmanned System Technology Research Institute, Northwestern Polytechnical University, No. 127 Youyi West Road, 710072, Xi’an, China
    170 rdf:type schema:Organization
    171 grid-institutes:grid.464234.3 schema:alternateName Xi’an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi’an, China
    172 schema:name Xi’an Modern Control Technology Research Institute, No. 10 Zhangba East Road, 710065, Xi’an, China
    173 rdf:type schema:Organization
     



