An Efficient Iris and Eye Corners Extraction Method


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2010

AUTHORS

Nesli Erdogmus, Jean-Luc Dugelay

ABSTRACT

Eye features are one of the most important clues for many computer vision applications. In this paper, an efficient method to automatically extract eye features is presented. The extraction is highly based on the usage of the common knowledge about face and eye structure. With the assumption of frontal face images, firstly coarse eye regions are extracted by removing skin pixels in the upper part of the face. Then, iris circle position and radius are detected by using Hough transform in a coarse-to-fine fashion. In the final step, edges created by upper and lower eyelids are detected and polynomials are fitted to those edges so that their intersection points are labeled as eye corners. The algorithm is experimented on the Bosphorus database and the obtained results demonstrate that it can locate eye features very accurately. The strength of the proposed method stems from its reproducibility due to the utilization of simple and efficient image processing methods while achieving remarkable results without any need of training.
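The iris localization step described in the abstract rests on the circular Hough transform: edge pixels vote for candidate circle centers at each candidate radius, and the accumulator peak identifies the circle. The sketch below is a brute-force illustration of that voting scheme on synthetic edge points, not the authors' implementation (which works coarse-to-fine on real eye regions); the image size, radii range, and point count are arbitrary choices for the example.

```python
import math

def hough_circles(edge_points, radii, width, height, n_angles=64):
    """Brute-force circular Hough transform: every edge point votes for
    the centers of all circles of radius r that could pass through it;
    the (cx, cy, r) cell with the most votes wins."""
    best_votes, best_circle = 0, None
    for r in radii:
        acc = {}  # (cx, cy) -> vote count for this radius
        for (x, y) in edge_points:
            for k in range(n_angles):
                t = 2 * math.pi * k / n_angles
                cx = round(x - r * math.cos(t))
                cy = round(y - r * math.sin(t))
                if 0 <= cx < width and 0 <= cy < height:
                    acc[(cx, cy)] = acc.get((cx, cy), 0) + 1
        (cx, cy), votes = max(acc.items(), key=lambda kv: kv[1])
        if votes > best_votes:
            best_votes, best_circle = votes, (cx, cy, r)
    return best_circle

# Synthetic "iris" boundary: 120 edge points on a circle of radius 20
# centred at (50, 40) in a 100x100 image.
pts = [(50 + round(20 * math.cos(2 * math.pi * i / 120)),
        40 + round(20 * math.sin(2 * math.pi * i / 120)))
       for i in range(120)]
cx, cy, r = hough_circles(pts, radii=range(15, 26), width=100, height=100)
print(cx, cy, r)  # peak lands at (or within a pixel of) the true circle
```

A coarse-to-fine strategy, as in the paper, would first run this on a downsampled image with a wide radius range, then refine center and radius in a small neighborhood at full resolution.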

PAGES

549-558

Book

TITLE

Structural, Syntactic, and Statistical Pattern Recognition

ISBN

978-3-642-14979-5
978-3-642-14980-1

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54

DOI

http://dx.doi.org/10.1007/978-3-642-14980-1_54

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1026216629


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Eurecom, Multimedia Communications Department, 2229 Routes des Cr\u00eates, 06904, Sophia Antipolis, France", 
          "id": "http://www.grid.ac/institutes/grid.28848.3e", 
          "name": [
            "Eurecom, Multimedia Communications Department, 2229 Routes des Cr\u00eates, 06904, Sophia Antipolis, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Erdogmus", 
        "givenName": "Nesli", 
        "id": "sg:person.013122633325.50", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013122633325.50"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Eurecom, Multimedia Communications Department, 2229 Routes des Cr\u00eates, 06904, Sophia Antipolis, France", 
          "id": "http://www.grid.ac/institutes/grid.28848.3e", 
          "name": [
            "Eurecom, Multimedia Communications Department, 2229 Routes des Cr\u00eates, 06904, Sophia Antipolis, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Dugelay", 
        "givenName": "Jean-Luc", 
        "id": "sg:person.015053427343.37", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015053427343.37"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2010", 
    "datePublishedReg": "2010-01-01", 
    "description": "Eye features are one of the most important clues for many computer vision applications. In this paper, an efficient method to automatically extract eye features is presented. The extraction is highly based on the usage of the common knowledge about face and eye structure. With the assumption of frontal face images, firstly coarse eye regions are extracted by removing skin pixels in the upper part of the face. Then, iris circle position and radius are detected by using Hough transform in a coarse-to-fine fashion. In the final step, edges created by upper and lower eyelids are detected and polynomials are fitted to those edges so that their intersection points are labeled as eye corners. The algorithm is experimented on the Bosphorus database and the obtained results demonstrate that it can locate eye features very accurately. The strength of the proposed method stems from its reproducibility due to the utilization of simple and efficient image processing methods while achieving remarkable results without any need of training.", 
    "editor": [
      {
        "familyName": "Hancock", 
        "givenName": "Edwin R.", 
        "type": "Person"
      }, 
      {
        "familyName": "Wilson", 
        "givenName": "Richard C.", 
        "type": "Person"
      }, 
      {
        "familyName": "Windeatt", 
        "givenName": "Terry", 
        "type": "Person"
      }, 
      {
        "familyName": "Ulusoy", 
        "givenName": "Ilkay", 
        "type": "Person"
      }, 
      {
        "familyName": "Escolano", 
        "givenName": "Francisco", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-642-14980-1_54", 
    "inLanguage": "en", 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-642-14979-5", 
        "978-3-642-14980-1"
      ], 
      "name": "Structural, Syntactic, and Statistical Pattern Recognition", 
      "type": "Book"
    }, 
    "keywords": [
      "eye features", 
      "computer vision applications", 
      "efficient image processing method", 
      "frontal face images", 
      "image processing methods", 
      "vision applications", 
      "Bosphorus database", 
      "eye corners", 
      "face images", 
      "fine fashion", 
      "skin pixels", 
      "Hough transform", 
      "Efficient Iris", 
      "need of training", 
      "processing methods", 
      "eye region", 
      "extraction method", 
      "remarkable results", 
      "efficient method", 
      "circle position", 
      "algorithm", 
      "features", 
      "common knowledge", 
      "pixels", 
      "intersection points", 
      "images", 
      "usage", 
      "method", 
      "database", 
      "edge", 
      "iris", 
      "extraction", 
      "applications", 
      "final step", 
      "transform", 
      "training", 
      "face", 
      "utilization", 
      "knowledge", 
      "step", 
      "need", 
      "results", 
      "fashion", 
      "eye structures", 
      "point", 
      "polynomials", 
      "assumption", 
      "part", 
      "corner", 
      "position", 
      "important clues", 
      "structure", 
      "clues", 
      "reproducibility", 
      "region", 
      "strength", 
      "eyelid", 
      "radius", 
      "lower eyelid", 
      "upper part", 
      "paper"
    ], 
    "name": "An Efficient Iris and Eye Corners Extraction Method", 
    "pagination": "549-558", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1026216629"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-642-14980-1_54"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-642-14980-1_54", 
      "https://app.dimensions.ai/details/publication/pub.1026216629"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-05-20T07:46", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/chapter/chapter_342.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-642-14980-1_54"
  }
]
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54'
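The curl commands above can be mirrored with Python's standard library; this is a minimal sketch of the same content negotiation via the Accept header (the actual fetch is commented out because it requires network access):

```python
import urllib.request

# Media types accepted by the SciGraph endpoint, as listed above.
FORMATS = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "xml": "application/rdf+xml",
}

def scigraph_request(pub_id, fmt="json-ld"):
    """Build a content-negotiated request for a SciGraph record."""
    url = "https://scigraph.springernature.com/" + pub_id
    return urllib.request.Request(url, headers={"Accept": FORMATS[fmt]})

req = scigraph_request("pub.10.1007/978-3-642-14980-1_54", fmt="turtle")
print(req.full_url, req.get_header("Accept"))

# To actually download the record (network required):
# with urllib.request.urlopen(req) as resp:
#     data = resp.read().decode("utf-8")
```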


 

The full RDF graph for this record comprises 148 triples, 23 predicates, 87 URIs, 80 literals, and 7 blank nodes; the JSON-LD above is the canonical serialization of the same data.
 



