A Viewpoint Invariant, Sparsely Registered, Patch Based, Face Verifier


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2008-10

AUTHORS

Simon Lucey, Tsuhan Chen

ABSTRACT

Sparsely registering a face (i.e., locating 2–3 fiducial points) is considered a much easier task than densely registering one; especially with varying viewpoints. Unfortunately, the converse tends to be true for the task of viewpoint-invariant face verification; the more registration points one has the better the performance. In this paper we present a novel approach to viewpoint invariant face verification which we refer to as the “patch-whole” algorithm. The algorithm is able to obtain good verification performance with sparsely registered faces. Good performance is achieved by not assuming any alignment between gallery and probe view faces, but instead trying to learn the joint likelihood functions for faces of similar and dissimilar identities. Generalization is encouraged by factorizing the joint gallery and probe appearance likelihood, for each class, into an ensemble of “patch-whole” likelihoods. We make an additional contribution in this paper by reviewing existing approaches to viewpoint-invariant face verification and demonstrating how most of them fall into one of two categories; namely viewpoint-generative or viewpoint-discriminative. This categorization is instructive as it enables us to compare our “patch-whole” algorithm to other paradigms in viewpoint-invariant face verification and also gives deeper insights into why the algorithm performs so well.

PAGES

58-71

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z

DOI

http://dx.doi.org/10.1007/s11263-007-0119-z

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1034714539



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology and Cognitive Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Carnegie Mellon University", 
          "id": "https://www.grid.ac/institutes/grid.147455.6", 
          "name": [
            "The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, 15213, Pittsburgh, PA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Lucey", 
        "givenName": "Simon", 
        "id": "sg:person.0754071362.25", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0754071362.25"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Carnegie Mellon University", 
          "id": "https://www.grid.ac/institutes/grid.147455.6", 
          "name": [
            "The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, 15213, Pittsburgh, PA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Chen", 
        "givenName": "Tsuhan", 
        "id": "sg:person.012245072625.31", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012245072625.31"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1016/j.patcog.2005.07.001", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1022406760"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.598227", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061156616"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.598228", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061156617"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.667881", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061156743"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.879790", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061157156"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2003.1227983", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061742556"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2004.1265861", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061742676"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2005.58", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061742920"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/fgr.2006.90", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093453500"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cira.2003.1222308", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094199719"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/afgr.2000.840648", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094971766"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2006.172", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095063988"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.150", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095318561"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/fgr.2006.42", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095422067"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.276", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095587336"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2008-10", 
    "datePublishedReg": "2008-10-01", 
    "description": "Sparsely registering a face (i.e., locating 2\u20133 fiducial points) is considered a much easier task than densely registering one; especially with varying viewpoints. Unfortunately, the converse tends to be true for the task of viewpoint-invariant face verification; the more registration points one has the better the performance. In this paper we present a novel approach to viewpoint invariant face verification which we refer to as the \u201cpatch-whole\u201d algorithm. The algorithm is able to obtain good verification performance with sparsely registered faces. Good performance is achieved by not assuming any alignment between gallery and probe view faces, but instead trying to learn the joint likelihood functions for faces of similar and dissimilar identities. Generalization is encouraged by factorizing the joint gallery and probe appearance likelihood, for each class, into an ensemble of \u201cpatch-whole\u201d likelihoods. We make an additional contribution in this paper by reviewing existing approaches to viewpoint-invariant face verification and demonstrating how most of them fall into one of two categories; namely viewpoint-generative or viewpoint-discriminative. This categorization is instructive as it enables us to compare our \u201cpatch-whole\u201d algorithm to other paradigms in viewpoint-invariant face verification and also gives deeper insights into why the algorithm performs so well.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1007/s11263-007-0119-z", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isPartOf": [
      {
        "id": "sg:journal.1032807", 
        "issn": [
          "0920-5691", 
          "1573-1405"
        ], 
        "name": "International Journal of Computer Vision", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "1", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "80"
      }
    ], 
    "name": "A Viewpoint Invariant, Sparsely Registered, Patch Based, Face Verifier", 
    "pagination": "58-71", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "a55e0020bff2513b755320a85f4fa9851194855fe27bff7471353a332c547267"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s11263-007-0119-z"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1034714539"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s11263-007-0119-z", 
      "https://app.dimensions.ai/details/publication/pub.1034714539"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-10T14:12", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8660_00000523.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "http://link.springer.com/10.1007%2Fs11263-007-0119-z"
  }
]
 

Download the RDF metadata as: json-ld, nt, turtle, or xml.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z'
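
For instance, the JSON-LD response can be fetched and unpacked in a few lines of Python. This is a minimal sketch, assuming the third-party requests library is installed; the field names follow the record shown above.

import requests

URL = "https://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z"

# Content negotiation: explicitly ask the endpoint for JSON-LD.
resp = requests.get(URL, headers={"Accept": "application/ld+json"})
resp.raise_for_status()

record = resp.json()[0]  # the payload is a one-element JSON array

print(record["name"])            # article title
print(record["datePublished"])   # "2008-10"
for person in record["author"]:
    print(person["givenName"], person["familyName"])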

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z'
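
Because every N-Triples line is one self-contained triple, a batch job can stream and filter the response line by line without a full RDF parser. A minimal sketch using only the Python standard library; the schema.org predicate URI is an assumption based on the schema:citation entries in the triple table below.

from urllib.request import Request, urlopen

URL = "https://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z"
req = Request(URL, headers={"Accept": "application/n-triples"})

with urlopen(req) as resp:
    for raw in resp:
        line = raw.decode("utf-8").strip()
        # Keep only the citation triples (assumed predicate URI).
        if "<http://schema.org/citation>" in line:
            print(line)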

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z'
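
To work with the Turtle serialization as a proper graph rather than raw text, a library such as rdflib can parse it and answer SPARQL queries. A minimal sketch, assuming rdflib is installed; the subject URI is the publication URI listed under Identifiers above.

from urllib.request import Request, urlopen
from rdflib import Graph

URL = "https://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z"
req = Request(URL, headers={"Accept": "text/turtle"})
with urlopen(req) as resp:
    turtle = resp.read().decode("utf-8")

g = Graph()
g.parse(data=turtle, format="turtle")

# List every predicate/object pair attached to the publication node.
query = """
    SELECT ?p ?o WHERE {
        <http://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z> ?p ?o .
    }
"""
for p, o in g.query(query):
    print(p, o)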

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-007-0119-z'


 

This table displays all metadata directly associated with this object as RDF triples.

113 TRIPLES      21 PREDICATES      42 URIs      19 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s11263-007-0119-z schema:about anzsrc-for:17
2 anzsrc-for:1701
3 schema:author Na97c5660875f4ca0b130d889c3e5794d
4 schema:citation https://doi.org/10.1016/j.patcog.2005.07.001
5 https://doi.org/10.1109/34.598227
6 https://doi.org/10.1109/34.598228
7 https://doi.org/10.1109/34.667881
8 https://doi.org/10.1109/34.879790
9 https://doi.org/10.1109/afgr.2000.840648
10 https://doi.org/10.1109/cira.2003.1222308
11 https://doi.org/10.1109/cvpr.2005.150
12 https://doi.org/10.1109/cvpr.2005.276
13 https://doi.org/10.1109/cvpr.2006.172
14 https://doi.org/10.1109/fgr.2006.42
15 https://doi.org/10.1109/fgr.2006.90
16 https://doi.org/10.1109/tpami.2003.1227983
17 https://doi.org/10.1109/tpami.2004.1265861
18 https://doi.org/10.1109/tpami.2005.58
19 schema:datePublished 2008-10
20 schema:datePublishedReg 2008-10-01
21 schema:description Sparsely registering a face (i.e., locating 2–3 fiducial points) is considered a much easier task than densely registering one; especially with varying viewpoints. Unfortunately, the converse tends to be true for the task of viewpoint-invariant face verification; the more registration points one has the better the performance. In this paper we present a novel approach to viewpoint invariant face verification which we refer to as the “patch-whole” algorithm. The algorithm is able to obtain good verification performance with sparsely registered faces. Good performance is achieved by not assuming any alignment between gallery and probe view faces, but instead trying to learn the joint likelihood functions for faces of similar and dissimilar identities. Generalization is encouraged by factorizing the joint gallery and probe appearance likelihood, for each class, into an ensemble of “patch-whole” likelihoods. We make an additional contribution in this paper by reviewing existing approaches to viewpoint-invariant face verification and demonstrating how most of them fall into one of two categories; namely viewpoint-generative or viewpoint-discriminative. This categorization is instructive as it enables us to compare our “patch-whole” algorithm to other paradigms in viewpoint-invariant face verification and also gives deeper insights into why the algorithm performs so well.
22 schema:genre research_article
23 schema:inLanguage en
24 schema:isAccessibleForFree true
25 schema:isPartOf N0ef471940e564be7be8149f1ed99dc42
26 Nc1cb8727b477485ab10df635faf72e09
27 sg:journal.1032807
28 schema:name A Viewpoint Invariant, Sparsely Registered, Patch Based, Face Verifier
29 schema:pagination 58-71
30 schema:productId N09ed5c25a13244d5a73d579065e8bd2c
31 N5735c046ed2b4bb99ffb6a1d76fc41bc
32 Nebfa30548bc14737b302c276bbb5ab40
33 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034714539
34 https://doi.org/10.1007/s11263-007-0119-z
35 schema:sdDatePublished 2019-04-10T14:12
36 schema:sdLicense https://scigraph.springernature.com/explorer/license/
37 schema:sdPublisher N1dd843fb67be4813ac0a7fad4167108b
38 schema:url http://link.springer.com/10.1007%2Fs11263-007-0119-z
39 sgo:license sg:explorer/license/
40 sgo:sdDataset articles
41 rdf:type schema:ScholarlyArticle
42 N09ed5c25a13244d5a73d579065e8bd2c schema:name dimensions_id
43 schema:value pub.1034714539
44 rdf:type schema:PropertyValue
45 N0ef471940e564be7be8149f1ed99dc42 schema:issueNumber 1
46 rdf:type schema:PublicationIssue
47 N1dd843fb67be4813ac0a7fad4167108b schema:name Springer Nature - SN SciGraph project
48 rdf:type schema:Organization
49 N5735c046ed2b4bb99ffb6a1d76fc41bc schema:name readcube_id
50 schema:value a55e0020bff2513b755320a85f4fa9851194855fe27bff7471353a332c547267
51 rdf:type schema:PropertyValue
52 N699370e03da945cbaddc3e48e8bdaca7 rdf:first sg:person.012245072625.31
53 rdf:rest rdf:nil
54 Na97c5660875f4ca0b130d889c3e5794d rdf:first sg:person.0754071362.25
55 rdf:rest N699370e03da945cbaddc3e48e8bdaca7
56 Nc1cb8727b477485ab10df635faf72e09 schema:volumeNumber 80
57 rdf:type schema:PublicationVolume
58 Nebfa30548bc14737b302c276bbb5ab40 schema:name doi
59 schema:value 10.1007/s11263-007-0119-z
60 rdf:type schema:PropertyValue
61 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
62 schema:name Psychology and Cognitive Sciences
63 rdf:type schema:DefinedTerm
64 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
65 schema:name Psychology
66 rdf:type schema:DefinedTerm
67 sg:journal.1032807 schema:issn 0920-5691
68 1573-1405
69 schema:name International Journal of Computer Vision
70 rdf:type schema:Periodical
71 sg:person.012245072625.31 schema:affiliation https://www.grid.ac/institutes/grid.147455.6
72 schema:familyName Chen
73 schema:givenName Tsuhan
74 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012245072625.31
75 rdf:type schema:Person
76 sg:person.0754071362.25 schema:affiliation https://www.grid.ac/institutes/grid.147455.6
77 schema:familyName Lucey
78 schema:givenName Simon
79 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0754071362.25
80 rdf:type schema:Person
81 https://doi.org/10.1016/j.patcog.2005.07.001 schema:sameAs https://app.dimensions.ai/details/publication/pub.1022406760
82 rdf:type schema:CreativeWork
83 https://doi.org/10.1109/34.598227 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156616
84 rdf:type schema:CreativeWork
85 https://doi.org/10.1109/34.598228 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156617
86 rdf:type schema:CreativeWork
87 https://doi.org/10.1109/34.667881 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156743
88 rdf:type schema:CreativeWork
89 https://doi.org/10.1109/34.879790 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061157156
90 rdf:type schema:CreativeWork
91 https://doi.org/10.1109/afgr.2000.840648 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094971766
92 rdf:type schema:CreativeWork
93 https://doi.org/10.1109/cira.2003.1222308 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094199719
94 rdf:type schema:CreativeWork
95 https://doi.org/10.1109/cvpr.2005.150 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095318561
96 rdf:type schema:CreativeWork
97 https://doi.org/10.1109/cvpr.2005.276 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095587336
98 rdf:type schema:CreativeWork
99 https://doi.org/10.1109/cvpr.2006.172 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095063988
100 rdf:type schema:CreativeWork
101 https://doi.org/10.1109/fgr.2006.42 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095422067
102 rdf:type schema:CreativeWork
103 https://doi.org/10.1109/fgr.2006.90 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093453500
104 rdf:type schema:CreativeWork
105 https://doi.org/10.1109/tpami.2003.1227983 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061742556
106 rdf:type schema:CreativeWork
107 https://doi.org/10.1109/tpami.2004.1265861 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061742676
108 rdf:type schema:CreativeWork
109 https://doi.org/10.1109/tpami.2005.58 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061742920
110 rdf:type schema:CreativeWork
111 https://www.grid.ac/institutes/grid.147455.6 schema:alternateName Carnegie Mellon University
112 schema:name The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, 15213, Pittsburgh, PA, USA
113 rdf:type schema:Organization
 



