An Efficient Iris and Eye Corners Extraction Method


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2010

AUTHORS

Nesli Erdogmus , Jean-Luc Dugelay

ABSTRACT

Eye features are one of the most important clues for many computer vision applications. In this paper, an efficient method to automatically extract eye features is presented. The extraction is highly based on the usage of the common knowledge about face and eye structure. With the assumption of frontal face images, firstly coarse eye regions are extracted by removing skin pixels in the upper part of the face. Then, iris circle position and radius are detected by using Hough transform in a coarse-to-fine fashion. In the final step, edges created by upper and lower eyelids are detected and polynomials are fitted to those edges so that their intersection points are labeled as eye corners. The algorithm is experimented on the Bosphorus database and the obtained results demonstrate that it can locate eye features very accurately. The strength of the proposed method stems from its reproducibility due to the utilization of simple and efficient image processing methods while achieving remarkable results without any need of training.
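The final step described in the abstract (fit polynomials to the upper and lower eyelid edges, then label the intersection points as eye corners) can be sketched numerically. This is an illustrative toy example with synthetic edge points, not the authors' implementation; the curve shapes and coordinates are hypothetical.

```python
import numpy as np

# Synthetic eyelid edge points in pixel coordinates (image y grows downward).
# In the paper these come from edge detection inside the coarse eye region.
x = np.linspace(25, 75, 30)
upper_edge = 0.04 * (x - 50) ** 2 + 24   # hypothetical upper-eyelid edge
lower_edge = -0.04 * (x - 50) ** 2 + 56  # hypothetical lower-eyelid edge

# Fit a quadratic polynomial to each edge, as in the paper's final step.
p_upper = np.polyfit(x, upper_edge, 2)
p_lower = np.polyfit(x, lower_edge, 2)

# The eye-corner candidates are where the two fitted polynomials intersect,
# i.e. the real roots of their difference.
roots = np.roots(p_upper - p_lower)
corners = [(float(r), float(np.polyval(p_upper, r))) for r in np.sort(roots)]
print(corners)  # two corner candidates, ≈ (30.0, 40.0) and (70.0, 40.0)
```

With real edge maps the two fits can intersect at more than two points or at none, so a practical implementation would keep only intersections inside the eye region.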

PAGES

549-558

Book

TITLE

Structural, Syntactic, and Statistical Pattern Recognition

ISBN

978-3-642-14979-5
978-3-642-14980-1

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54

DOI

http://dx.doi.org/10.1007/978-3-642-14980-1_54

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1026216629



JSON-LD is the canonical representation for SciGraph data.


[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Eurecom, Multimedia Communications Department, 2229 Routes des Cr\u00eates, 06904, Sophia Antipolis, France", 
          "id": "http://www.grid.ac/institutes/grid.28848.3e", 
          "name": [
            "Eurecom, Multimedia Communications Department, 2229 Routes des Cr\u00eates, 06904, Sophia Antipolis, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Erdogmus", 
        "givenName": "Nesli", 
        "id": "sg:person.013122633325.50", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013122633325.50"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Eurecom, Multimedia Communications Department, 2229 Routes des Cr\u00eates, 06904, Sophia Antipolis, France", 
          "id": "http://www.grid.ac/institutes/grid.28848.3e", 
          "name": [
            "Eurecom, Multimedia Communications Department, 2229 Routes des Cr\u00eates, 06904, Sophia Antipolis, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Dugelay", 
        "givenName": "Jean-Luc", 
        "id": "sg:person.015053427343.37", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015053427343.37"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2010", 
    "datePublishedReg": "2010-01-01", 
    "description": "Eye features are one of the most important clues for many computer vision applications. In this paper, an efficient method to automatically extract eye features is presented. The extraction is highly based on the usage of the common knowledge about face and eye structure. With the assumption of frontal face images, firstly coarse eye regions are extracted by removing skin pixels in the upper part of the face. Then, iris circle position and radius are detected by using Hough transform in a coarse-to-fine fashion. In the final step, edges created by upper and lower eyelids are detected and polynomials are fitted to those edges so that their intersection points are labeled as eye corners. The algorithm is experimented on the Bosphorus database and the obtained results demonstrate that it can locate eye features very accurately. The strength of the proposed method stems from its reproducibility due to the utilization of simple and efficient image processing methods while achieving remarkable results without any need of training.", 
    "editor": [
      {
        "familyName": "Hancock", 
        "givenName": "Edwin R.", 
        "type": "Person"
      }, 
      {
        "familyName": "Wilson", 
        "givenName": "Richard C.", 
        "type": "Person"
      }, 
      {
        "familyName": "Windeatt", 
        "givenName": "Terry", 
        "type": "Person"
      }, 
      {
        "familyName": "Ulusoy", 
        "givenName": "Ilkay", 
        "type": "Person"
      }, 
      {
        "familyName": "Escolano", 
        "givenName": "Francisco", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-642-14980-1_54", 
    "inLanguage": "en", 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-642-14979-5", 
        "978-3-642-14980-1"
      ], 
      "name": "Structural, Syntactic, and Statistical Pattern Recognition", 
      "type": "Book"
    }, 
    "keywords": [
      "eye features", 
      "computer vision applications", 
      "efficient image processing method", 
      "frontal face images", 
      "image processing methods", 
      "vision applications", 
      "Bosphorus database", 
      "eye corners", 
      "face images", 
      "fine fashion", 
      "Hough transform", 
      "skin pixels", 
      "Efficient Iris", 
      "need of training", 
      "processing methods", 
      "eye region", 
      "extraction method", 
      "remarkable results", 
      "efficient method", 
      "algorithm", 
      "common knowledge", 
      "features", 
      "pixels", 
      "intersection points", 
      "images", 
      "usage", 
      "database", 
      "method", 
      "iris", 
      "edge", 
      "applications", 
      "extraction", 
      "final step", 
      "transform", 
      "circle position", 
      "face", 
      "training", 
      "utilization", 
      "knowledge", 
      "step", 
      "need", 
      "results", 
      "fashion", 
      "point", 
      "eye structures", 
      "polynomials", 
      "assumption", 
      "part", 
      "corner", 
      "position", 
      "important clues", 
      "structure", 
      "clues", 
      "reproducibility", 
      "region", 
      "strength", 
      "eyelid", 
      "radius", 
      "lower eyelid", 
      "upper part", 
      "paper", 
      "coarse eye regions", 
      "iris circle position", 
      "Eye Corners Extraction Method", 
      "Corners Extraction Method"
    ], 
    "name": "An Efficient Iris and Eye Corners Extraction Method", 
    "pagination": "549-558", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1026216629"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-642-14980-1_54"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-642-14980-1_54", 
      "https://app.dimensions.ai/details/publication/pub.1026216629"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-01-01T19:24", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/chapter/chapter_427.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-642-14980-1_54"
  }
]
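Because the record above is plain JSON-LD, its bibliographic fields can be read with an ordinary JSON parser. The sketch below parses a minimal excerpt of the record (only a few of its keys) and pulls out the title, authors, and DOI; the excerpt is abbreviated from the full record shown above.

```python
import json

# Minimal excerpt of the SciGraph JSON-LD record above; the full record
# carries many more keys (keywords, editors, publisher, ...).
record_jsonld = """
[{"id": "sg:pub.10.1007/978-3-642-14980-1_54",
  "name": "An Efficient Iris and Eye Corners Extraction Method",
  "datePublished": "2010",
  "author": [{"familyName": "Erdogmus", "givenName": "Nesli", "type": "Person"},
             {"familyName": "Dugelay", "givenName": "Jean-Luc", "type": "Person"}],
  "productId": [{"name": "doi", "type": "PropertyValue",
                 "value": ["10.1007/978-3-642-14980-1_54"]}]}]
"""

record = json.loads(record_jsonld)[0]  # the top level is a one-element array
authors = [f"{a['givenName']} {a['familyName']}" for a in record["author"]]
doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")

print(record["name"])  # An Efficient Iris and Eye Corners Extraction Method
print(authors)         # ['Nesli Erdogmus', 'Jean-Luc Dugelay']
print(doi)             # 10.1007/978-3-642-14980-1_54
```

Note that `productId` values are arrays even when a single identifier is present, so the DOI is read from `value[0]`.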
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54'
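The curl commands above all hit the same URL and select the serialization via the HTTP Accept header (content negotiation). A stdlib Python equivalent is sketched below; it only builds the content-negotiated request (calling `urllib.request.urlopen(req)` would perform the actual fetch), and the `FORMATS` mapping is just a convenience name introduced here.

```python
import urllib.request

# The four serializations offered above, keyed by a short label.
FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt: str) -> urllib.request.Request:
    """Build a content-negotiated request for this chapter's SciGraph record."""
    url = "https://scigraph.springernature.com/pub.10.1007/978-3-642-14980-1_54"
    return urllib.request.Request(url, headers={"Accept": FORMATS[fmt]})

req = build_request("turtle")
print(req.get_header("Accept"))  # text/turtle
```

Each format targets a different use: JSON-LD for JSON tooling, N-Triples for line-oriented batch processing, Turtle for human reading, RDF/XML for XML pipelines.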


 

This table displays all metadata directly associated to this object as RDF triples.

152 TRIPLES      23 PREDICATES      91 URIs      84 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-642-14980-1_54 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Nf56d47c341584ef0906326f93522b8dc
4 schema:datePublished 2010
5 schema:datePublishedReg 2010-01-01
6 schema:description Eye features are one of the most important clues for many computer vision applications. In this paper, an efficient method to automatically extract eye features is presented. The extraction is highly based on the usage of the common knowledge about face and eye structure. With the assumption of frontal face images, firstly coarse eye regions are extracted by removing skin pixels in the upper part of the face. Then, iris circle position and radius are detected by using Hough transform in a coarse-to-fine fashion. In the final step, edges created by upper and lower eyelids are detected and polynomials are fitted to those edges so that their intersection points are labeled as eye corners. The algorithm is experimented on the Bosphorus database and the obtained results demonstrate that it can locate eye features very accurately. The strength of the proposed method stems from its reproducibility due to the utilization of simple and efficient image processing methods while achieving remarkable results without any need of training.
7 schema:editor N0e761543f21c420580a56b96f85057e7
8 schema:genre chapter
9 schema:inLanguage en
10 schema:isAccessibleForFree true
11 schema:isPartOf N79b240883d83410fbcf7664546f31935
12 schema:keywords Bosphorus database
13 Corners Extraction Method
14 Efficient Iris
15 Eye Corners Extraction Method
16 Hough transform
17 algorithm
18 applications
19 assumption
20 circle position
21 clues
22 coarse eye regions
23 common knowledge
24 computer vision applications
25 corner
26 database
27 edge
28 efficient image processing method
29 efficient method
30 extraction
31 extraction method
32 eye corners
33 eye features
34 eye region
35 eye structures
36 eyelid
37 face
38 face images
39 fashion
40 features
41 final step
42 fine fashion
43 frontal face images
44 image processing methods
45 images
46 important clues
47 intersection points
48 iris
49 iris circle position
50 knowledge
51 lower eyelid
52 method
53 need
54 need of training
55 paper
56 part
57 pixels
58 point
59 polynomials
60 position
61 processing methods
62 radius
63 region
64 remarkable results
65 reproducibility
66 results
67 skin pixels
68 step
69 strength
70 structure
71 training
72 transform
73 upper part
74 usage
75 utilization
76 vision applications
77 schema:name An Efficient Iris and Eye Corners Extraction Method
78 schema:pagination 549-558
79 schema:productId N17114cc85b284fc184723091679ae848
80 N46f0cb4c5f544546b3042b2c494673eb
81 schema:publisher N023abc5605124c6dbeb956881c0f040a
82 schema:sameAs https://app.dimensions.ai/details/publication/pub.1026216629
83 https://doi.org/10.1007/978-3-642-14980-1_54
84 schema:sdDatePublished 2022-01-01T19:24
85 schema:sdLicense https://scigraph.springernature.com/explorer/license/
86 schema:sdPublisher Neb90332ccfce4bd6a2066e9a9a18c3ee
87 schema:url https://doi.org/10.1007/978-3-642-14980-1_54
88 sgo:license sg:explorer/license/
89 sgo:sdDataset chapters
90 rdf:type schema:Chapter
91 N023abc5605124c6dbeb956881c0f040a schema:name Springer Nature
92 rdf:type schema:Organisation
93 N02f18ae7bef94f719cec820e1b6035ba schema:familyName Windeatt
94 schema:givenName Terry
95 rdf:type schema:Person
96 N0e761543f21c420580a56b96f85057e7 rdf:first Ne4421a5c25894c56be0f8d09ef44bba7
97 rdf:rest N521445b53e614798ab1b8f0e73d6192f
98 N17114cc85b284fc184723091679ae848 schema:name doi
99 schema:value 10.1007/978-3-642-14980-1_54
100 rdf:type schema:PropertyValue
101 N1a06612ac9d04f9190ea6573d06de594 schema:familyName Wilson
102 schema:givenName Richard C.
103 rdf:type schema:Person
104 N46f0cb4c5f544546b3042b2c494673eb schema:name dimensions_id
105 schema:value pub.1026216629
106 rdf:type schema:PropertyValue
107 N521445b53e614798ab1b8f0e73d6192f rdf:first N1a06612ac9d04f9190ea6573d06de594
108 rdf:rest N7fdcf9e92f46412e8528ff52aef11111
109 N755eb06023774d5d9d9b830b9e3dbf48 rdf:first Nf82fec16f69948c8b9a5e1e4e90d5a66
110 rdf:rest N8b4ffe2273084f62838682dec3dba850
111 N79b240883d83410fbcf7664546f31935 schema:isbn 978-3-642-14979-5
112 978-3-642-14980-1
113 schema:name Structural, Syntactic, and Statistical Pattern Recognition
114 rdf:type schema:Book
115 N7fdcf9e92f46412e8528ff52aef11111 rdf:first N02f18ae7bef94f719cec820e1b6035ba
116 rdf:rest N755eb06023774d5d9d9b830b9e3dbf48
117 N8b4ffe2273084f62838682dec3dba850 rdf:first Na47eec50378b413daa49cd0a027ba379
118 rdf:rest rdf:nil
119 Na47eec50378b413daa49cd0a027ba379 schema:familyName Escolano
120 schema:givenName Francisco
121 rdf:type schema:Person
122 Naef75725d3c24a928e14b8dbf4a38e74 rdf:first sg:person.015053427343.37
123 rdf:rest rdf:nil
124 Ne4421a5c25894c56be0f8d09ef44bba7 schema:familyName Hancock
125 schema:givenName Edwin R.
126 rdf:type schema:Person
127 Neb90332ccfce4bd6a2066e9a9a18c3ee schema:name Springer Nature - SN SciGraph project
128 rdf:type schema:Organization
129 Nf56d47c341584ef0906326f93522b8dc rdf:first sg:person.013122633325.50
130 rdf:rest Naef75725d3c24a928e14b8dbf4a38e74
131 Nf82fec16f69948c8b9a5e1e4e90d5a66 schema:familyName Ulusoy
132 schema:givenName Ilkay
133 rdf:type schema:Person
134 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
135 schema:name Information and Computing Sciences
136 rdf:type schema:DefinedTerm
137 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
138 schema:name Artificial Intelligence and Image Processing
139 rdf:type schema:DefinedTerm
140 sg:person.013122633325.50 schema:affiliation grid-institutes:grid.28848.3e
141 schema:familyName Erdogmus
142 schema:givenName Nesli
143 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013122633325.50
144 rdf:type schema:Person
145 sg:person.015053427343.37 schema:affiliation grid-institutes:grid.28848.3e
146 schema:familyName Dugelay
147 schema:givenName Jean-Luc
148 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015053427343.37
149 rdf:type schema:Person
150 grid-institutes:grid.28848.3e schema:alternateName Eurecom, Multimedia Communications Department, 2229 Routes des Crêtes, 06904, Sophia Antipolis, France
151 schema:name Eurecom, Multimedia Communications Department, 2229 Routes des Crêtes, 06904, Sophia Antipolis, France
152 rdf:type schema:Organization
 



