Dynamic fixation and active perception


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

1996-02

AUTHORS

Kourosh Pahlavan, Tomas Uhlin, Jan-Olof Eklundh

ABSTRACT

Fixation is the link between the physical environment and the visual observer, both of which can be dynamic. That is, dynamic fixation serves the task of preserving a reference point in the world despite relative motion. In this respect, fixation is dynamic in two senses: in response to voluntary changes of fixation point or attentive cues (gaze shifting), and in response to the need to compensate for retinal slip (gaze holding). The work presented here addresses the vergence movement and the preservation of binocular fixation during smooth pursuit; this movement is a crucial component of fixation. The two vergence processes, disparity vergence and accommodative vergence, are described; a novel algorithm for robust disparity vergence and an active approach to blur detection and depth from defocus are presented. The main characteristics of the disparity vergence technique are the simplicity of the algorithm, the influence of both the left and right images in the course of fixation, and the agreement with the fixation model of primates. The major characteristic of the suggested algorithm for blur detection is its active approach, which makes it suitable for achieving qualitative and reasonable depth estimates without unrealistic assumptions about the structures in the images. The paper also covers the integration of the two processes, disparity vergence and accommodative vergence, which is in turn accomplished by an integration of the disparity and blur stimuli. This integration is accounted for in both static and dynamic experiments.

PAGES

113-135

References to SciGraph publications

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/bf00058748

DOI

http://dx.doi.org/10.1007/bf00058748

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1011622144



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Pahlavan", 
        "givenName": "Kourosh", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Uhlin", 
        "givenName": "Tomas", 
        "id": "sg:person.011303253273.54", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011303253273.54"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Eklundh", 
        "givenName": "Jan-Olof", 
        "id": "sg:person.014400652155.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/3-540-55426-2_58", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1000160199", 
          "https://doi.org/10.1007/3-540-55426-2_58"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-1-4899-5379-7", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1005084291", 
          "https://doi.org/10.1007/978-1-4899-5379-7"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-1-4471-3201-1_24", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1044613722", 
          "https://doi.org/10.1007/978-1-4471-3201-1_24"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00336114", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1040115145", 
          "https://doi.org/10.1007/bf00336114"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "1996-02", 
    "datePublishedReg": "1996-02-01", 
    "description": "Fixation is the link between the physical environment and the visual observer, both of which can be dynamic. That is, dynamic fixation serves the task of preserving a reference point in the world, despite relative motion. In this respect, fixation is dynamical in two senses: in response to voluntary changes of fixation point or attentive cues-gaze shiftings, and in response to the desire to compensate for the retinal slip-gaze holding.The work presented here, addresses the vergence movement and preservation of binocular fixation during smooth pursuit. This movement is a crucial component of fixation. The two vergence processes, disparity vergence and accommodative vergence, are described; a novel algorithm for robust disparity vergence and an active approach for blur detection and depth from defocus are presented. The main characteristics of the disparity vergence technique are the simplicity of the algorithm, the influence of both left and right images in the course of fixation and the agreement with the fixation model of primates. The major characteristic of the suggested algorithm for blur detection is its active approach which makes it suitable for achieving qualitative and reasonable depth estimations without unrealistic assumptions about the structures in the images.The paper also covers the integration of the two processes disparity vergence and accommodation vergence which are in turn accomplished by an integration of the disparity and blur stimuli. This integration is accounted for in both static and dynamic experiments.", 
    "genre": "article", 
    "id": "sg:pub.10.1007/bf00058748", 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1032807", 
        "issn": [
          "0920-5691", 
          "1573-1405"
        ], 
        "name": "International Journal of Computer Vision", 
        "publisher": "Springer Nature", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "2", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "17"
      }
    ], 
    "keywords": [
      "active perception", 
      "vergence process", 
      "fixation point", 
      "disparity vergence", 
      "course of fixation", 
      "blur stimulus", 
      "binocular fixation", 
      "blur detection", 
      "smooth pursuit", 
      "active approach", 
      "vergence movements", 
      "accommodative vergence", 
      "vergence", 
      "right images", 
      "physical environment", 
      "voluntary changes", 
      "stimuli", 
      "perception", 
      "task", 
      "senses", 
      "depth estimation", 
      "crucial component", 
      "desire", 
      "integration", 
      "observer", 
      "primates", 
      "movement", 
      "pursuit", 
      "visual observers", 
      "shifting", 
      "link", 
      "reference point", 
      "turn", 
      "response", 
      "images", 
      "major characteristics", 
      "fixation", 
      "influence", 
      "approach", 
      "course", 
      "environment", 
      "assumption", 
      "disparities", 
      "world", 
      "process", 
      "unrealistic assumptions", 
      "experiments", 
      "model", 
      "work", 
      "characteristics", 
      "point", 
      "changes", 
      "components", 
      "relative motion", 
      "fixation model", 
      "respect", 
      "detection", 
      "motion", 
      "main characteristics", 
      "paper", 
      "dynamic fixation", 
      "defocus", 
      "technique", 
      "structure", 
      "algorithm", 
      "depth", 
      "estimation", 
      "holdings", 
      "novel algorithm", 
      "preservation", 
      "simplicity", 
      "agreement", 
      "dynamic experiments"
    ], 
    "name": "Dynamic fixation and active perception", 
    "pagination": "113-135", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1011622144"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/bf00058748"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/bf00058748", 
      "https://app.dimensions.ai/details/publication/pub.1011622144"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2022-12-01T06:21", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_280.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://doi.org/10.1007/bf00058748"
  }
]
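Since JSON-LD is plain JSON, the record above can be inspected with any standard JSON library. As a minimal sketch (Python 3, standard library only), the snippet below embeds a small subset of the record for illustration and pulls out a few bibliographic fields:

```python
import json

# A small subset of the SciGraph JSON-LD record shown above.
record_json = '''
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "Dynamic fixation and active perception",
    "datePublished": "1996-02",
    "pagination": "113-135",
    "isPartOf": [
      {"name": "International Journal of Computer Vision",
       "issn": ["0920-5691", "1573-1405"],
       "type": "Periodical"},
      {"issueNumber": "2", "type": "PublicationIssue"},
      {"type": "PublicationVolume", "volumeNumber": "17"}
    ],
    "url": "https://doi.org/10.1007/bf00058748"
  }
]
'''

record = json.loads(record_json)[0]

# Ordinary key access works; the @context only matters once the
# document is expanded into RDF triples.
journal = next(p for p in record["isPartOf"] if p.get("type") == "Periodical")
print(record["name"])        # article title
print(journal["name"])       # journal name
print(record["pagination"])  # page range
```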
 

The RDF metadata can be downloaded as JSON-LD, N-Triples, Turtle, or RDF/XML; see the license info for terms of use.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/bf00058748'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/bf00058748'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/bf00058748'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/bf00058748'
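The same content negotiation the curl commands perform can be done from Python's standard library. This sketch only builds the request so it can be inspected offline; the actual fetch is left as a commented-out call:

```python
import urllib.request

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/bf00058748"

# Map each RDF serialization to the Accept header used above.
FORMATS = {
    "json-ld":   "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle":    "text/turtle",
    "rdf-xml":   "application/rdf+xml",
}

def build_request(fmt: str) -> urllib.request.Request:
    """Prepare a content-negotiated GET request for one serialization."""
    return urllib.request.Request(SCIGRAPH_URL,
                                  headers={"Accept": FORMATS[fmt]})

req = build_request("turtle")
print(req.full_url)
print(req.get_header("Accept"))
# To actually fetch: data = urllib.request.urlopen(req).read()
```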


 
