A procedure to locate the eyelid position in noisy videokeratoscopic images


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2016-12

AUTHORS

Tim Schäck, Michael Muma, Weaam Alkhaldi, Abdelhak M. Zoubir

ABSTRACT

In this paper, we propose a new procedure to robustly determine the eyelid position in high-speed videokeratoscopic images. This knowledge is crucial in videokeratoscopy to study the effects of the eyelids on the cornea and on the tear film dynamics. Difficulties arise due to the very low contrast of videokeratoscopic images and because of the occlusions caused by the eyelashes. The proposed procedure uses robust M-estimation to fit a parametric model to a set of eyelid edge candidate pixels. To detect these pixels, firstly, nonlinear image filtering operations are performed to remove the eyelashes. Secondly, we propose an image segmentation approach based on morphological operations and active contours to provide the set of candidate pixels. Subsequently, a verification procedure reduces this set to pixels that are likely to contribute to an accurate fit of the eyelid edge. We propose a complete framework, for which each stage is evaluated using real-world videokeratoscopic images. This methodology allows for automatic localization of the eyelid edges and is applicable to replace the currently used time-consuming manual labeling approach, while maintaining its accuracy.
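
As an illustration of the fitting stage described in the abstract, below is a minimal Python sketch of a robust M-estimation fit computed by iteratively reweighted least squares (IRLS) with Huber weights, applied to a quadratic eyelid-edge model. The quadratic model, the Huber tuning constant, and all variable names are illustrative assumptions, not the authors' exact implementation.

import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber weight function: 1 inside [-k, k], k/|r| outside."""
    abs_r = np.abs(residuals)
    w = np.ones_like(abs_r)
    mask = abs_r > k
    w[mask] = k / abs_r[mask]
    return w

def robust_quadratic_fit(x, y, n_iter=20, tol=1e-6):
    """Fit y ~ a*x^2 + b*x + c with an M-estimator via IRLS."""
    X = np.column_stack([x**2, x, np.ones_like(x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least-squares start
    for _ in range(n_iter):
        r = y - X @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        w = huber_weights(r / scale)
        sw = np.sqrt(w)
        beta_new, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Candidate eyelid-edge pixels with a few gross outliers from occluding eyelashes.
rng = np.random.default_rng(0)
x = np.linspace(-100, 100, 200)
y = 0.002 * x**2 + 0.1 * x + 50 + rng.normal(0, 1, x.size)
y[::25] += 40
a, b, c = robust_quadratic_fit(x, y)
print(f"fitted eyelid model: y = {a:.4f}*x^2 + {b:.4f}*x + {c:.2f}")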

PAGES

136

Identifiers

URI

http://scigraph.springernature.com/pub.10.1186/s13634-016-0433-0

DOI

http://dx.doi.org/10.1186/s13634-016-0433-0

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1037047903


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Technical University of Darmstadt", 
          "id": "https://www.grid.ac/institutes/grid.6546.1", 
          "name": [
            "Signal Processing Group, Technische Universit\u00e4t Darmstadt, 64283, Darmstadt, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Sch\u00e4ck", 
        "givenName": "Tim", 
        "id": "sg:person.016527565157.52", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016527565157.52"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Technical University of Darmstadt", 
          "id": "https://www.grid.ac/institutes/grid.6546.1", 
          "name": [
            "Signal Processing Group, Technische Universit\u00e4t Darmstadt, 64283, Darmstadt, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Muma", 
        "givenName": "Michael", 
        "id": "sg:person.010645765323.85", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010645765323.85"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Technical University of Darmstadt", 
          "id": "https://www.grid.ac/institutes/grid.6546.1", 
          "name": [
            "Signal Processing Group, Technische Universit\u00e4t Darmstadt, 64283, Darmstadt, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Alkhaldi", 
        "givenName": "Weaam", 
        "id": "sg:person.016576506775.46", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016576506775.46"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Technical University of Darmstadt", 
          "id": "https://www.grid.ac/institutes/grid.6546.1", 
          "name": [
            "Signal Processing Group, Technische Universit\u00e4t Darmstadt, 64283, Darmstadt, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Zoubir", 
        "givenName": "Abdelhak M.", 
        "id": "sg:person.013316510015.38", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013316510015.38"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1016/j.patrec.2008.05.001", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1001703703"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1002/j.1538-7305.1945.tb00453.x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1004447500"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.cmpb.2013.06.003", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1008411608"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1093/biomet/83.4.715", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1008663110"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.imavis.2009.04.010", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1015584288"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00133570", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1016330466", 
          "https://doi.org/10.1007/bf00133570"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.imavis.2009.04.001", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1018512742"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1097/opx.0b013e318250192d", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1019193005"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf02733175", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1027964024", 
          "https://doi.org/10.1007/bf02733175"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1111/j.1444-0938.2005.tb06700.x", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1039448051"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1117/1.3598837", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1039958279"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/0031-3203(81)90009-1", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1040477036"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1097/opx.0000000000000876", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1042097978"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1090/qam/10666", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1059346793"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.295913", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061156009"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/msp.2012.2183773", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061423776"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tbme.2005.856253", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061526472"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tbme.2008.2005997", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061527371"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tbme.2010.2050770", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061528035"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.1986.4767851", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061742261"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1137/0111030", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1062837892"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1137/s1064827595289108", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1062884340"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1214/aoms/1177703732", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1064400228"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://app.dimensions.ai/details/publication/pub.1075060965", 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2016-12", 
    "datePublishedReg": "2016-12-01", 
    "description": "In this paper, we propose a new procedure to robustly determine the eyelid position in high-speed videokeratoscopic images. This knowledge is crucial in videokeratoscopy to study the effects of the eyelids on the cornea and on the tear film dynamics. Difficulties arise due to the very low contrast of videokeratoscopic images and because of the occlusions caused by the eyelashes. The proposed procedure uses robust M-estimation to fit a parametric model to a set of eyelid edge candidate pixels. To detect these pixels, firstly, nonlinear image filtering operations are performed to remove the eyelashes. Secondly, we propose an image segmentation approach based on morphological operations and active contours to provide the set of candidate pixels. Subsequently, a verification procedure reduces this set to pixels that are likely to contribute to an accurate fit of the eyelid edge. We propose a complete framework, for which each stage is evaluated using real-world videokeratoscopic images. This methodology allows for automatic localization of the eyelid edges and is applicable to replace the currently used time-consuming manual labeling approach, while maintaining its accuracy.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1186/s13634-016-0433-0", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isPartOf": [
      {
        "id": "sg:journal.1357355", 
        "issn": [
          "1687-6172", 
          "1687-0433"
        ], 
        "name": "Applied Signal Processing", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "1", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "2016"
      }
    ], 
    "name": "A procedure to locate the eyelid position in noisy videokeratoscopic images", 
    "pagination": "136", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "b07baa35a4f43560fd436199e3ea0c717619cb0f68e61df02fc195769bbafdaf"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1186/s13634-016-0433-0"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1037047903"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1186/s13634-016-0433-0", 
      "https://app.dimensions.ai/details/publication/pub.1037047903"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-10T23:17", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8693_00000482.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "http://link.springer.com/10.1186/s13634-016-0433-0"
  }
]
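
For reference, the following is a minimal Python sketch of reading the JSON-LD record above after saving it locally (the filename record.json is a hypothetical assumption). It uses only the standard library and the key names exactly as they appear in the record.

import json

with open("record.json", encoding="utf-8") as f:
    records = json.load(f)  # the record is a one-element JSON array

pub = records[0]
print("Title:  ", pub["name"])
print("ID:     ", pub["id"])
authors = [f'{a["givenName"]} {a["familyName"]}' for a in pub["author"]]
print("Authors:", ", ".join(authors))
journal = next(p for p in pub["isPartOf"] if p.get("type") == "Periodical")
print("Journal:", journal["name"], "ISSN:", ", ".join(journal["issn"]))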
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s13634-016-0433-0'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s13634-016-0433-0'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s13634-016-0433-0'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s13634-016-0433-0'
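
The same content negotiation can be done from Python using only the standard library; the snippet below is a sketch equivalent to the curl commands above, where the Accept header selects the serialization.

import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1186/s13634-016-0433-0"
FORMATS = {
    "JSON-LD":   "application/ld+json",
    "N-Triples": "application/n-triples",
    "Turtle":    "text/turtle",
    "RDF/XML":   "application/rdf+xml",
}

def fetch(accept):
    """Fetch the record in the serialization selected by the Accept header."""
    req = urllib.request.Request(URL, headers={"Accept": accept})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    print(fetch(FORMATS["JSON-LD"])[:200])  # first few hundred characters of the JSON-LD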


 

This table displays all metadata directly associated with this object as RDF triples.

155 TRIPLES      21 PREDICATES      51 URIs      19 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1186/s13634-016-0433-0 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N17bf76a367c142e58000f2052ffad4be
4 schema:citation sg:pub.10.1007/bf00133570
5 sg:pub.10.1007/bf02733175
6 https://app.dimensions.ai/details/publication/pub.1075060965
7 https://doi.org/10.1002/j.1538-7305.1945.tb00453.x
8 https://doi.org/10.1016/0031-3203(81)90009-1
9 https://doi.org/10.1016/j.cmpb.2013.06.003
10 https://doi.org/10.1016/j.imavis.2009.04.001
11 https://doi.org/10.1016/j.imavis.2009.04.010
12 https://doi.org/10.1016/j.patrec.2008.05.001
13 https://doi.org/10.1090/qam/10666
14 https://doi.org/10.1093/biomet/83.4.715
15 https://doi.org/10.1097/opx.0000000000000876
16 https://doi.org/10.1097/opx.0b013e318250192d
17 https://doi.org/10.1109/34.295913
18 https://doi.org/10.1109/msp.2012.2183773
19 https://doi.org/10.1109/tbme.2005.856253
20 https://doi.org/10.1109/tbme.2008.2005997
21 https://doi.org/10.1109/tbme.2010.2050770
22 https://doi.org/10.1109/tpami.1986.4767851
23 https://doi.org/10.1111/j.1444-0938.2005.tb06700.x
24 https://doi.org/10.1117/1.3598837
25 https://doi.org/10.1137/0111030
26 https://doi.org/10.1137/s1064827595289108
27 https://doi.org/10.1214/aoms/1177703732
28 schema:datePublished 2016-12
29 schema:datePublishedReg 2016-12-01
30 schema:description In this paper, we propose a new procedure to robustly determine the eyelid position in high-speed videokeratoscopic images. This knowledge is crucial in videokeratoscopy to study the effects of the eyelids on the cornea and on the tear film dynamics. Difficulties arise due to the very low contrast of videokeratoscopic images and because of the occlusions caused by the eyelashes. The proposed procedure uses robust M-estimation to fit a parametric model to a set of eyelid edge candidate pixels. To detect these pixels, firstly, nonlinear image filtering operations are performed to remove the eyelashes. Secondly, we propose an image segmentation approach based on morphological operations and active contours to provide the set of candidate pixels. Subsequently, a verification procedure reduces this set to pixels that are likely to contribute to an accurate fit of the eyelid edge. We propose a complete framework, for which each stage is evaluated using real-world videokeratoscopic images. This methodology allows for automatic localization of the eyelid edges and is applicable to replace the currently used time-consuming manual labeling approach, while maintaining its accuracy.
31 schema:genre research_article
32 schema:inLanguage en
33 schema:isAccessibleForFree true
34 schema:isPartOf N22a0a6762b7946178cdafeaae3d03fa3
35 N5296faa4f8ac48b99a62a8d768530672
36 sg:journal.1357355
37 schema:name A procedure to locate the eyelid position in noisy videokeratoscopic images
38 schema:pagination 136
39 schema:productId N4671e6d7670c4ebcb5e148196d3b0215
40 N6cb3fc4865b1431195ab3ab4603f84a2
41 Ncc597c864db24bb38a8ac1965ec2af86
42 schema:sameAs https://app.dimensions.ai/details/publication/pub.1037047903
43 https://doi.org/10.1186/s13634-016-0433-0
44 schema:sdDatePublished 2019-04-10T23:17
45 schema:sdLicense https://scigraph.springernature.com/explorer/license/
46 schema:sdPublisher N9124f527f44d46188e3d10bd6a263699
47 schema:url http://link.springer.com/10.1186/s13634-016-0433-0
48 sgo:license sg:explorer/license/
49 sgo:sdDataset articles
50 rdf:type schema:ScholarlyArticle
51 N17bf76a367c142e58000f2052ffad4be rdf:first sg:person.016527565157.52
52 rdf:rest N42217db49d6a48019165138adcab4170
53 N22a0a6762b7946178cdafeaae3d03fa3 schema:issueNumber 1
54 rdf:type schema:PublicationIssue
55 N42217db49d6a48019165138adcab4170 rdf:first sg:person.010645765323.85
56 rdf:rest Ne70ec1724412451dba8d2e51906dc5ca
57 N4671e6d7670c4ebcb5e148196d3b0215 schema:name doi
58 schema:value 10.1186/s13634-016-0433-0
59 rdf:type schema:PropertyValue
60 N4676369b19304136a1f54eefa798f0d8 rdf:first sg:person.013316510015.38
61 rdf:rest rdf:nil
62 N5296faa4f8ac48b99a62a8d768530672 schema:volumeNumber 2016
63 rdf:type schema:PublicationVolume
64 N6cb3fc4865b1431195ab3ab4603f84a2 schema:name dimensions_id
65 schema:value pub.1037047903
66 rdf:type schema:PropertyValue
67 N9124f527f44d46188e3d10bd6a263699 schema:name Springer Nature - SN SciGraph project
68 rdf:type schema:Organization
69 Ncc597c864db24bb38a8ac1965ec2af86 schema:name readcube_id
70 schema:value b07baa35a4f43560fd436199e3ea0c717619cb0f68e61df02fc195769bbafdaf
71 rdf:type schema:PropertyValue
72 Ne70ec1724412451dba8d2e51906dc5ca rdf:first sg:person.016576506775.46
73 rdf:rest N4676369b19304136a1f54eefa798f0d8
74 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
75 schema:name Information and Computing Sciences
76 rdf:type schema:DefinedTerm
77 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
78 schema:name Artificial Intelligence and Image Processing
79 rdf:type schema:DefinedTerm
80 sg:journal.1357355 schema:issn 1687-0433
81 1687-6172
82 schema:name Applied Signal Processing
83 rdf:type schema:Periodical
84 sg:person.010645765323.85 schema:affiliation https://www.grid.ac/institutes/grid.6546.1
85 schema:familyName Muma
86 schema:givenName Michael
87 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010645765323.85
88 rdf:type schema:Person
89 sg:person.013316510015.38 schema:affiliation https://www.grid.ac/institutes/grid.6546.1
90 schema:familyName Zoubir
91 schema:givenName Abdelhak M.
92 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013316510015.38
93 rdf:type schema:Person
94 sg:person.016527565157.52 schema:affiliation https://www.grid.ac/institutes/grid.6546.1
95 schema:familyName Schäck
96 schema:givenName Tim
97 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016527565157.52
98 rdf:type schema:Person
99 sg:person.016576506775.46 schema:affiliation https://www.grid.ac/institutes/grid.6546.1
100 schema:familyName Alkhaldi
101 schema:givenName Weaam
102 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016576506775.46
103 rdf:type schema:Person
104 sg:pub.10.1007/bf00133570 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016330466
105 https://doi.org/10.1007/bf00133570
106 rdf:type schema:CreativeWork
107 sg:pub.10.1007/bf02733175 schema:sameAs https://app.dimensions.ai/details/publication/pub.1027964024
108 https://doi.org/10.1007/bf02733175
109 rdf:type schema:CreativeWork
110 https://app.dimensions.ai/details/publication/pub.1075060965 rdf:type schema:CreativeWork
111 https://doi.org/10.1002/j.1538-7305.1945.tb00453.x schema:sameAs https://app.dimensions.ai/details/publication/pub.1004447500
112 rdf:type schema:CreativeWork
113 https://doi.org/10.1016/0031-3203(81)90009-1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040477036
114 rdf:type schema:CreativeWork
115 https://doi.org/10.1016/j.cmpb.2013.06.003 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008411608
116 rdf:type schema:CreativeWork
117 https://doi.org/10.1016/j.imavis.2009.04.001 schema:sameAs https://app.dimensions.ai/details/publication/pub.1018512742
118 rdf:type schema:CreativeWork
119 https://doi.org/10.1016/j.imavis.2009.04.010 schema:sameAs https://app.dimensions.ai/details/publication/pub.1015584288
120 rdf:type schema:CreativeWork
121 https://doi.org/10.1016/j.patrec.2008.05.001 schema:sameAs https://app.dimensions.ai/details/publication/pub.1001703703
122 rdf:type schema:CreativeWork
123 https://doi.org/10.1090/qam/10666 schema:sameAs https://app.dimensions.ai/details/publication/pub.1059346793
124 rdf:type schema:CreativeWork
125 https://doi.org/10.1093/biomet/83.4.715 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008663110
126 rdf:type schema:CreativeWork
127 https://doi.org/10.1097/opx.0000000000000876 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042097978
128 rdf:type schema:CreativeWork
129 https://doi.org/10.1097/opx.0b013e318250192d schema:sameAs https://app.dimensions.ai/details/publication/pub.1019193005
130 rdf:type schema:CreativeWork
131 https://doi.org/10.1109/34.295913 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156009
132 rdf:type schema:CreativeWork
133 https://doi.org/10.1109/msp.2012.2183773 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061423776
134 rdf:type schema:CreativeWork
135 https://doi.org/10.1109/tbme.2005.856253 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061526472
136 rdf:type schema:CreativeWork
137 https://doi.org/10.1109/tbme.2008.2005997 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061527371
138 rdf:type schema:CreativeWork
139 https://doi.org/10.1109/tbme.2010.2050770 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061528035
140 rdf:type schema:CreativeWork
141 https://doi.org/10.1109/tpami.1986.4767851 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061742261
142 rdf:type schema:CreativeWork
143 https://doi.org/10.1111/j.1444-0938.2005.tb06700.x schema:sameAs https://app.dimensions.ai/details/publication/pub.1039448051
144 rdf:type schema:CreativeWork
145 https://doi.org/10.1117/1.3598837 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039958279
146 rdf:type schema:CreativeWork
147 https://doi.org/10.1137/0111030 schema:sameAs https://app.dimensions.ai/details/publication/pub.1062837892
148 rdf:type schema:CreativeWork
149 https://doi.org/10.1137/s1064827595289108 schema:sameAs https://app.dimensions.ai/details/publication/pub.1062884340
150 rdf:type schema:CreativeWork
151 https://doi.org/10.1214/aoms/1177703732 schema:sameAs https://app.dimensions.ai/details/publication/pub.1064400228
152 rdf:type schema:CreativeWork
153 https://www.grid.ac/institutes/grid.6546.1 schema:alternateName Technical University of Darmstadt
154 schema:name Signal Processing Group, Technische Universität Darmstadt, 64283, Darmstadt, Germany
155 rdf:type schema:Organization
 



