Facial UV map completion for pose-invariant face recognition: a novel adversarial approach based on coupled attention residual UNets


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2020-11-10

AUTHORS

In Seop Na, Chung Tran, Dung Nguyen, Sang Dinh

ABSTRACT

Pose-invariant face recognition refers to the problem of identifying or verifying a person by analyzing face images captured from different poses. This problem is challenging due to the large variation of pose, illumination and facial expression. A promising approach to deal with pose variation is to fill in incomplete UV maps extracted from in-the-wild faces, then attach the completed UV map to a fitted 3D mesh and finally generate 2D faces of arbitrary poses. The synthesized faces increase the pose variation for training deep face recognition models and reduce the pose discrepancy during the testing phase. In this paper, we propose a novel generative model called Attention ResCUNet-GAN to improve UV map completion. We enhance the original UV-GAN by using two coupled U-Nets. In particular, the skip connections within each U-Net are boosted by attention gates, while the features from the two U-Nets are fused with trainable scalar weights. Experiments on popular benchmarks, including the Multi-PIE, LFW, CPLFW and CFP datasets, show that the proposed method yields superior performance compared to existing methods.
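The two mechanisms named above are attention gates that re-weight each U-Net's skip connections, and trainable scalar weights that fuse corresponding features across the two coupled U-Nets. Below is a minimal PyTorch sketch of both mechanisms, written from the abstract's description alone; the class names, channel sizes and the sigmoid constraint on the mixing weight are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    # Suppresses irrelevant skip-connection features using a gating
    # signal from the coarser decoder level (Attention U-Net style).
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # projects skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # projects gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # produces attention map

    def forward(self, skip, gate):
        # Bring the gate up to the skip's spatial size before combining.
        gate = F.interpolate(gate, size=skip.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # re-weighted skip features

class WeightedFusion(nn.Module):
    # Fuses corresponding features of the two coupled U-Nets with a
    # single trainable scalar weight, as the abstract describes.
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learned during training

    def forward(self, feat_a, feat_b):
        w = torch.sigmoid(self.alpha)  # keeping w in (0, 1) is our assumption
        return w * feat_a + (1.0 - w) * feat_b

# Example: gate a 64-channel skip tensor with a 128-channel decoder tensor.
ag = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
out = ag(torch.randn(1, 64, 128, 128), torch.randn(1, 128, 64, 64))  # same shape as the skip input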

PAGES

45

References to SciGraph publications

  • 2019-11-05. Recognizing Profile Faces by Imagining Frontal View in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2016-09-16. Do We Really Need to Collect Millions of Faces for Effective Face Recognition? in COMPUTER VISION – ECCV 2016
  • 2015-08-08. Time evolution of face recognition in accessible scenarios in HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES
  • 2018-11-24. 3D face recognition: a survey in HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES
  • 2018-09-20. UNet++: A Nested U-Net Architecture for Medical Image Segmentation in DEEP LEARNING IN MEDICAL IMAGE ANALYSIS AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT
  • 2016-09-17. Stacked Hourglass Networks for Human Pose Estimation in COMPUTER VISION – ECCV 2016
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1186/s13673-020-00250-w

    DOI

    http://dx.doi.org/10.1186/s13673-020-00250-w

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1132496098


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0805", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Distributed Computing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0806", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information Systems", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Chosun University, 309 Pilmun-daero, 61452, Gwangju, South Korea", 
              "id": "http://www.grid.ac/institutes/grid.254187.d", 
              "name": [
                "Chosun University, 309 Pilmun-daero, 61452, Gwangju, South Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Na", 
            "givenName": "In Seop", 
            "id": "sg:person.01240056113.01", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01240056113.01"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Hanoi University of Science and Technology, 1 Dai Co Viet, Hanoi, Vietnam", 
              "id": "http://www.grid.ac/institutes/grid.440792.c", 
              "name": [
                "Hanoi University of Science and Technology, 1 Dai Co Viet, Hanoi, Vietnam"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Tran", 
            "givenName": "Chung", 
            "id": "sg:person.07610141411.61", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07610141411.61"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi, Vietnam", 
              "id": "http://www.grid.ac/institutes/grid.267849.6", 
              "name": [
                "Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi, Vietnam"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Nguyen", 
            "givenName": "Dung", 
            "id": "sg:person.010405522011.96", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010405522011.96"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Hanoi University of Science and Technology, 1 Dai Co Viet, Hanoi, Vietnam", 
              "id": "http://www.grid.ac/institutes/grid.440792.c", 
              "name": [
                "Hanoi University of Science and Technology, 1 Dai Co Viet, Hanoi, Vietnam"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Dinh", 
            "givenName": "Sang", 
            "id": "sg:person.013126736220.32", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013126736220.32"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/s11263-019-01252-7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1122328854", 
              "https://doi.org/10.1007/s11263-019-01252-7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46484-8_29", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1049647714", 
              "https://doi.org/10.1007/978-3-319-46484-8_29"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46454-1_35", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1035557946", 
              "https://doi.org/10.1007/978-3-319-46454-1_35"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/s13673-015-0043-0", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1006749399", 
              "https://doi.org/10.1186/s13673-015-0043-0"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-00889-5_1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107102652", 
              "https://doi.org/10.1007/978-3-030-00889-5_1"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/s13673-018-0157-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1110209896", 
              "https://doi.org/10.1186/s13673-018-0157-2"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2020-11-10", 
        "datePublishedReg": "2020-11-10", 
        "description": "Pose-invariant face recognition refers to the problem of identifying or verifying a person by analyzing face images captured from different poses. This problem is challenging due to the large variation of pose, illumination and facial expression. A promising approach to deal with pose variation is to fulfill incomplete UV maps extracted from in-the-wild faces, then attach the completed UV map to a fitted 3D mesh and finally generate different 2D faces of arbitrary poses. The synthesized faces increase the pose variation for training deep face recognition models and reduce the pose discrepancy during the testing phase. In this paper, we propose a novel generative model called Attention ResCUNet-GAN to improve the UV map completion. We enhance the original UV-GAN by using a couple of U-Nets. Particularly, the skip connections within each U-Net are boosted by attention gates. Meanwhile, the features from two U-Nets are fused with trainable scalar weights. The experiments on the popular benchmarks, including Multi-PIE, LFW, CPLWF and CFP datasets, show that the proposed method yields superior performance compared to other existing methods.", 
        "genre": "article", 
        "id": "sg:pub.10.1186/s13673-020-00250-w", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1136381", 
            "issn": [
              "2192-1962"
            ], 
            "name": "Human-centric Computing and Information Sciences", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "10"
          }
        ], 
        "keywords": [
          "pose-invariant face recognition", 
          "face recognition", 
          "facial expressions", 
          "face recognition model", 
          "attention gate", 
          "deep face recognition models", 
          "recognition model", 
          "pose variations", 
          "face images", 
          "generative model", 
          "popular benchmarks", 
          "different poses", 
          "Multi-PIE", 
          "face", 
          "U-Net", 
          "testing phase", 
          "novel generative model", 
          "recognition", 
          "map completion", 
          "CFP datasets", 
          "UV maps", 
          "arbitrary poses", 
          "skip connections", 
          "persons", 
          "residual UNet", 
          "LFW", 
          "synthesized face", 
          "completion", 
          "adversarial approach", 
          "pose", 
          "couples", 
          "superior performance", 
          "wild faces", 
          "performance", 
          "problem", 
          "model", 
          "approach", 
          "UNet", 
          "promising approach", 
          "scalar weights", 
          "discrepancy", 
          "dataset", 
          "connection", 
          "images", 
          "benchmarks", 
          "maps", 
          "experiments", 
          "features", 
          "method", 
          "variation", 
          "mesh", 
          "paper", 
          "illumination", 
          "gate", 
          "large variation", 
          "expression", 
          "phase", 
          "weight", 
          "UV GaN"
        ], 
        "name": "Facial UV map completion for pose-invariant face recognition: a novel adversarial approach based on coupled attention residual UNets", 
        "pagination": "45", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1132496098"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s13673-020-00250-w"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s13673-020-00250-w", 
          "https://app.dimensions.ai/details/publication/pub.1132496098"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-11-24T21:06", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221124/entities/gbq_results/article/article_851.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1186/s13673-020-00250-w"
      }
    ]
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular linked-data format that is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s13673-020-00250-w'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s13673-020-00250-w'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s13673-020-00250-w'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s13673-020-00250-w'
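
    The same requests can be issued from Python. A minimal sketch using the requests library (an assumption: requests is installed, it is not part of this page):

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1186/s13673-020-00250-w"

    # Swap the Accept header for the other serializations listed above:
    # application/n-triples, text/turtle, application/rdf+xml.
    resp = requests.get(URL, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()
    record = resp.json()
    print(record[0]["name"])  # the article title, as in the JSON-LD above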


     

    This table displays all metadata directly associated with this object as RDF triples.

    178 TRIPLES      21 PREDICATES      92 URIs      75 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s13673-020-00250-w schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 anzsrc-for:0805
    4 anzsrc-for:0806
    5 schema:author Ndfe8d1ff57ed4c8abf8f8dd79d8084a0
    6 schema:citation sg:pub.10.1007/978-3-030-00889-5_1
    7 sg:pub.10.1007/978-3-319-24574-4_28
    8 sg:pub.10.1007/978-3-319-46454-1_35
    9 sg:pub.10.1007/978-3-319-46484-8_29
    10 sg:pub.10.1007/s11263-019-01252-7
    11 sg:pub.10.1186/s13673-015-0043-0
    12 sg:pub.10.1186/s13673-018-0157-2
    13 schema:datePublished 2020-11-10
    14 schema:datePublishedReg 2020-11-10
    15 schema:description Pose-invariant face recognition refers to the problem of identifying or verifying a person by analyzing face images captured from different poses. This problem is challenging due to the large variation of pose, illumination and facial expression. A promising approach to deal with pose variation is to fulfill incomplete UV maps extracted from in-the-wild faces, then attach the completed UV map to a fitted 3D mesh and finally generate different 2D faces of arbitrary poses. The synthesized faces increase the pose variation for training deep face recognition models and reduce the pose discrepancy during the testing phase. In this paper, we propose a novel generative model called Attention ResCUNet-GAN to improve the UV map completion. We enhance the original UV-GAN by using a couple of U-Nets. Particularly, the skip connections within each U-Net are boosted by attention gates. Meanwhile, the features from two U-Nets are fused with trainable scalar weights. The experiments on the popular benchmarks, including Multi-PIE, LFW, CPLWF and CFP datasets, show that the proposed method yields superior performance compared to other existing methods.
    16 schema:genre article
    17 schema:isAccessibleForFree true
    18 schema:isPartOf Na3270ba289a14f13831b4c0a9667e980
    19 Ne8d7e6e96f9e4dbca3982a098f6bb9ae
    20 sg:journal.1136381
    21 schema:keywords CFP datasets
    22 LFW
    23 Multi-PIE
    24 U-Net
    25 UNet
    26 UV GaN
    27 UV maps
    28 adversarial approach
    29 approach
    30 arbitrary poses
    31 attention gate
    32 benchmarks
    33 completion
    34 connection
    35 couples
    36 dataset
    37 deep face recognition models
    38 different poses
    39 discrepancy
    40 experiments
    41 expression
    42 face
    43 face images
    44 face recognition
    45 face recognition model
    46 facial expressions
    47 features
    48 gate
    49 generative model
    50 illumination
    51 images
    52 large variation
    53 map completion
    54 maps
    55 mesh
    56 method
    57 model
    58 novel generative model
    59 paper
    60 performance
    61 persons
    62 phase
    63 popular benchmarks
    64 pose
    65 pose variations
    66 pose-invariant face recognition
    67 problem
    68 promising approach
    69 recognition
    70 recognition model
    71 residual UNet
    72 scalar weights
    73 skip connections
    74 superior performance
    75 synthesized face
    76 testing phase
    77 variation
    78 weight
    79 wild faces
    80 schema:name Facial UV map completion for pose-invariant face recognition: a novel adversarial approach based on coupled attention residual UNets
    81 schema:pagination 45
    82 schema:productId N7e98afe172044124bfae2428dd9afa30
    83 N99a73c593c5c4a8fb2e6af5ba38453c7
    84 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132496098
    85 https://doi.org/10.1186/s13673-020-00250-w
    86 schema:sdDatePublished 2022-11-24T21:06
    87 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    88 schema:sdPublisher N48bdad2927374e5797bcce5ec96c5e5c
    89 schema:url https://doi.org/10.1186/s13673-020-00250-w
    90 sgo:license sg:explorer/license/
    91 sgo:sdDataset articles
    92 rdf:type schema:ScholarlyArticle
    93 N454428ae5a3e4593abe9c76331757651 rdf:first sg:person.07610141411.61
    94 rdf:rest Ne6336ca29caf4a04a1a707234c7728e7
    95 N48bdad2927374e5797bcce5ec96c5e5c schema:name Springer Nature - SN SciGraph project
    96 rdf:type schema:Organization
    97 N7e98afe172044124bfae2428dd9afa30 schema:name dimensions_id
    98 schema:value pub.1132496098
    99 rdf:type schema:PropertyValue
    100 N99a73c593c5c4a8fb2e6af5ba38453c7 schema:name doi
    101 schema:value 10.1186/s13673-020-00250-w
    102 rdf:type schema:PropertyValue
    103 Na3270ba289a14f13831b4c0a9667e980 schema:issueNumber 1
    104 rdf:type schema:PublicationIssue
    105 Ndfe8d1ff57ed4c8abf8f8dd79d8084a0 rdf:first sg:person.01240056113.01
    106 rdf:rest N454428ae5a3e4593abe9c76331757651
    107 Ne6336ca29caf4a04a1a707234c7728e7 rdf:first sg:person.010405522011.96
    108 rdf:rest Nfe02ea3c94d54178adefaad653427bc9
    109 Ne8d7e6e96f9e4dbca3982a098f6bb9ae schema:volumeNumber 10
    110 rdf:type schema:PublicationVolume
    111 Nfe02ea3c94d54178adefaad653427bc9 rdf:first sg:person.013126736220.32
    112 rdf:rest rdf:nil
    113 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    114 schema:name Information and Computing Sciences
    115 rdf:type schema:DefinedTerm
    116 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    117 schema:name Artificial Intelligence and Image Processing
    118 rdf:type schema:DefinedTerm
    119 anzsrc-for:0805 schema:inDefinedTermSet anzsrc-for:
    120 schema:name Distributed Computing
    121 rdf:type schema:DefinedTerm
    122 anzsrc-for:0806 schema:inDefinedTermSet anzsrc-for:
    123 schema:name Information Systems
    124 rdf:type schema:DefinedTerm
    125 sg:journal.1136381 schema:issn 2192-1962
    126 schema:name Human-centric Computing and Information Sciences
    127 schema:publisher Springer Nature
    128 rdf:type schema:Periodical
    129 sg:person.010405522011.96 schema:affiliation grid-institutes:grid.267849.6
    130 schema:familyName Nguyen
    131 schema:givenName Dung
    132 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010405522011.96
    133 rdf:type schema:Person
    134 sg:person.01240056113.01 schema:affiliation grid-institutes:grid.254187.d
    135 schema:familyName Na
    136 schema:givenName In Seop
    137 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01240056113.01
    138 rdf:type schema:Person
    139 sg:person.013126736220.32 schema:affiliation grid-institutes:grid.440792.c
    140 schema:familyName Dinh
    141 schema:givenName Sang
    142 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013126736220.32
    143 rdf:type schema:Person
    144 sg:person.07610141411.61 schema:affiliation grid-institutes:grid.440792.c
    145 schema:familyName Tran
    146 schema:givenName Chung
    147 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07610141411.61
    148 rdf:type schema:Person
    149 sg:pub.10.1007/978-3-030-00889-5_1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107102652
    150 https://doi.org/10.1007/978-3-030-00889-5_1
    151 rdf:type schema:CreativeWork
    152 sg:pub.10.1007/978-3-319-24574-4_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017774818
    153 https://doi.org/10.1007/978-3-319-24574-4_28
    154 rdf:type schema:CreativeWork
    155 sg:pub.10.1007/978-3-319-46454-1_35 schema:sameAs https://app.dimensions.ai/details/publication/pub.1035557946
    156 https://doi.org/10.1007/978-3-319-46454-1_35
    157 rdf:type schema:CreativeWork
    158 sg:pub.10.1007/978-3-319-46484-8_29 schema:sameAs https://app.dimensions.ai/details/publication/pub.1049647714
    159 https://doi.org/10.1007/978-3-319-46484-8_29
    160 rdf:type schema:CreativeWork
    161 sg:pub.10.1007/s11263-019-01252-7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1122328854
    162 https://doi.org/10.1007/s11263-019-01252-7
    163 rdf:type schema:CreativeWork
    164 sg:pub.10.1186/s13673-015-0043-0 schema:sameAs https://app.dimensions.ai/details/publication/pub.1006749399
    165 https://doi.org/10.1186/s13673-015-0043-0
    166 rdf:type schema:CreativeWork
    167 sg:pub.10.1186/s13673-018-0157-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1110209896
    168 https://doi.org/10.1186/s13673-018-0157-2
    169 rdf:type schema:CreativeWork
    170 grid-institutes:grid.254187.d schema:alternateName Chosun University, 309 Pilmun-daero, 61452, Gwangju, South Korea
    171 schema:name Chosun University, 309 Pilmun-daero, 61452, Gwangju, South Korea
    172 rdf:type schema:Organization
    173 grid-institutes:grid.267849.6 schema:alternateName Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi, Vietnam
    174 schema:name Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi, Vietnam
    175 rdf:type schema:Organization
    176 grid-institutes:grid.440792.c schema:alternateName Hanoi University of Science and Technology, 1 Dai Co Viet, Hanoi, Vietnam
    177 schema:name Hanoi University of Science and Technology, 1 Dai Co Viet, Hanoi, Vietnam
    178 rdf:type schema:Organization
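
    For batch processing of the triples above, a hedged sketch using rdflib (assumed installed; it is not referenced on this page) that fetches the N-Triples serialization and loads it into a graph:

    import requests
    from rdflib import Graph

    URL = "https://scigraph.springernature.com/pub.10.1186/s13673-020-00250-w"

    # Fetch the N-Triples serialization via the same content negotiation
    # as the curl commands earlier, then load it into an rdflib graph.
    nt = requests.get(URL, headers={"Accept": "application/n-triples"}).text
    g = Graph().parse(data=nt, format="nt")

    print(len(g))                     # should match the 178 triples listed above
    print(len({p for _, p, _ in g}))  # and the 21 distinct predicates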
     



