Active fixation for scene exploration


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

1996-02

AUTHORS

Kjell Brunnström, Jan-Olof Eklundh, Tomas Uhlin

ABSTRACT

It is well known that active selection of fixation points in humans is highly context- and task-dependent. It is therefore likely that successful computational processes for fixation in active vision should be so too. We consider active fixation in the context of recognition of man-made objects characterized by their shapes. In this situation the qualitative shape and type of the observed junctions play an important role. The fixations are driven by a grouping strategy, which forms sets of connected junctions separated from the surroundings at depth discontinuities. We have furthermore developed a methodology for rapid active detection and classification of junctions by selection of fixation points. The approach is based on direct computations from image data and allows integration of stereo and accommodation cues with luminance information. This work forms part of an effort to perform active recognition of generic objects, in the spirit of Malik and Biederman, but on real imagery rather than on line drawings.

PAGES

137-162

References to SciGraph publications

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/bf00058749

DOI

http://dx.doi.org/10.1007/bf00058749

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1012056551



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Brunnstr\u00f6m", 
        "givenName": "Kjell", 
        "id": "sg:person.07511673053.86", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07511673053.86"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Eklundh", 
        "givenName": "Jan-Olof", 
        "id": "sg:person.014400652155.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Uhlin", 
        "givenName": "Tomas", 
        "id": "sg:person.011303253273.54", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011303253273.54"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/bf01469346", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1022603165", 
          "https://doi.org/10.1007/bf01469346"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/3-540-55426-2_60", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1005688664", 
          "https://doi.org/10.1007/3-540-55426-2_60"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/3-540-55426-2_59", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1009744907", 
          "https://doi.org/10.1007/3-540-55426-2_59"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/3-540-55426-2_58", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1000160199", 
          "https://doi.org/10.1007/3-540-55426-2_58"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00203452", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1008061227", 
          "https://doi.org/10.1007/bf00203452"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-1-4757-6465-9", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1012010063", 
          "https://doi.org/10.1007/978-1-4757-6465-9"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/3-540-55426-2_77", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1019653141", 
          "https://doi.org/10.1007/3-540-55426-2_77"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00128527", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1041387124", 
          "https://doi.org/10.1007/bf00128527"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "1996-02", 
    "datePublishedReg": "1996-02-01", 
    "description": "It is well-known that active selection of fixation points in humans is highly context and task dependent. It is therefore likely that successful computational processes for fixation in active vision should be so too. We are considering active fixation in the context of recognition of man-made objects characterized by their shapes. In this situation the qualitative shape and type of observed junctions play an important role. The fixations are driven by a grouping strategy, which forms sets of connected junctions separated from the surrounding at depth discontinuities. We have furthermore developed a methodology for rapid active detection and classification of junctions by selection of fixation points. The approach is based on direct computations from image data and allows integration of stereo and accommodation cues with luminance information. This work form a part of an effort to perform active recognition of generic objects, in the spirit of Malik and Biederman, but on real imagery rather than on line-drawings.", 
    "genre": "article", 
    "id": "sg:pub.10.1007/bf00058749", 
    "isAccessibleForFree": true, 
    "isPartOf": [
      {
        "id": "sg:journal.1032807", 
        "issn": [
          "0920-5691", 
          "1573-1405"
        ], 
        "name": "International Journal of Computer Vision", 
        "publisher": "Springer Nature", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "2", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "17"
      }
    ], 
    "keywords": [
      "classification of junctions", 
      "integration of stereo", 
      "man-made objects", 
      "generic objects", 
      "image data", 
      "real imagery", 
      "active vision", 
      "scene exploration", 
      "depth discontinuities", 
      "computational process", 
      "luminance information", 
      "grouping strategy", 
      "observed junctions", 
      "accommodation cues", 
      "active recognition", 
      "context of recognition", 
      "connected junctions", 
      "active selection", 
      "active detection", 
      "objects", 
      "recognition", 
      "direct computation", 
      "fixation point", 
      "stereo", 
      "task", 
      "computation", 
      "vision", 
      "classification", 
      "selection", 
      "qualitative shape", 
      "information", 
      "set", 
      "integration", 
      "imagery", 
      "detection", 
      "methodology", 
      "Biederman", 
      "exploration", 
      "point", 
      "work", 
      "context", 
      "situation", 
      "data", 
      "efforts", 
      "process", 
      "strategies", 
      "shape", 
      "important role", 
      "cues", 
      "part", 
      "humans", 
      "Malik", 
      "types", 
      "discontinuities", 
      "spirit", 
      "role", 
      "active fixation", 
      "fixation", 
      "junction", 
      "approach"
    ], 
    "name": "Active fixation for scene exploration", 
    "pagination": "137-162", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1012056551"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/bf00058749"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/bf00058749", 
      "https://app.dimensions.ai/details/publication/pub.1012056551"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2022-12-01T06:21", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_267.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://doi.org/10.1007/bf00058749"
  }
]
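
Once fetched (see the curl calls below), the record can be handled as ordinary JSON. A minimal Python sketch, assuming the response has been saved to a local file (the filename is illustrative):

import json

# The payload is a one-element JSON array; the record is its first item.
with open("pub.10.1007_bf00058749.json") as f:
    record = json.load(f)[0]

print(record["name"])           # Active fixation for scene exploration
print(record["datePublished"])  # 1996-02

# Authors keep their order in the JSON-LD list.
for author in record["author"]:
    print(author["givenName"], author["familyName"])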
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/bf00058749'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/bf00058749'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/bf00058749'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/bf00058749'
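
The same content negotiation works from any HTTP client. A minimal Python sketch equivalent to the first curl call above, assuming the requests package is available:

import requests

URL = "https://scigraph.springernature.com/pub.10.1007/bf00058749"

# The serialization is selected via the HTTP Accept header,
# exactly as in the curl examples above.
resp = requests.get(URL, headers={"Accept": "application/ld+json"})
resp.raise_for_status()

record = resp.json()[0]  # the payload is a one-element JSON array
print(record["name"])    # Active fixation for scene exploration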


 

This table displays all metadata directly associated with this object as RDF triples.

163 TRIPLES      21 PREDICATES      93 URIs      77 LITERALS      6 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/bf00058749 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N3cb46ff1f1134033bbf1b2dd21175ed5
4 schema:citation sg:pub.10.1007/3-540-55426-2_58
5 sg:pub.10.1007/3-540-55426-2_59
6 sg:pub.10.1007/3-540-55426-2_60
7 sg:pub.10.1007/3-540-55426-2_77
8 sg:pub.10.1007/978-1-4757-6465-9
9 sg:pub.10.1007/bf00128527
10 sg:pub.10.1007/bf00203452
11 sg:pub.10.1007/bf01469346
12 schema:datePublished 1996-02
13 schema:datePublishedReg 1996-02-01
14 schema:description It is well-known that active selection of fixation points in humans is highly context and task dependent. It is therefore likely that successful computational processes for fixation in active vision should be so too. We are considering active fixation in the context of recognition of man-made objects characterized by their shapes. In this situation the qualitative shape and type of observed junctions play an important role. The fixations are driven by a grouping strategy, which forms sets of connected junctions separated from the surrounding at depth discontinuities. We have furthermore developed a methodology for rapid active detection and classification of junctions by selection of fixation points. The approach is based on direct computations from image data and allows integration of stereo and accommodation cues with luminance information. This work form a part of an effort to perform active recognition of generic objects, in the spirit of Malik and Biederman, but on real imagery rather than on line-drawings.
15 schema:genre article
16 schema:isAccessibleForFree true
17 schema:isPartOf Nc3f63830a97e4fe6a5b43319e4367575
18 Nc9bac2e67e904119847c0af45874bf69
19 sg:journal.1032807
20 schema:keywords Biederman
21 Malik
22 accommodation cues
23 active detection
24 active fixation
25 active recognition
26 active selection
27 active vision
28 approach
29 classification
30 classification of junctions
31 computation
32 computational process
33 connected junctions
34 context
35 context of recognition
36 cues
37 data
38 depth discontinuities
39 detection
40 direct computation
41 discontinuities
42 efforts
43 exploration
44 fixation
45 fixation point
46 generic objects
47 grouping strategy
48 humans
49 image data
50 imagery
51 important role
52 information
53 integration
54 integration of stereo
55 junction
56 luminance information
57 man-made objects
58 methodology
59 objects
60 observed junctions
61 part
62 point
63 process
64 qualitative shape
65 real imagery
66 recognition
67 role
68 scene exploration
69 selection
70 set
71 shape
72 situation
73 spirit
74 stereo
75 strategies
76 task
77 types
78 vision
79 work
80 schema:name Active fixation for scene exploration
81 schema:pagination 137-162
82 schema:productId N2d43ba0e8bce4d22b2b29dcc8151d952
83 N6fe7c81039374c61bf6a1342c331fed4
84 schema:sameAs https://app.dimensions.ai/details/publication/pub.1012056551
85 https://doi.org/10.1007/bf00058749
86 schema:sdDatePublished 2022-12-01T06:21
87 schema:sdLicense https://scigraph.springernature.com/explorer/license/
88 schema:sdPublisher N9521135e86a74b239a59609afc6b4f30
89 schema:url https://doi.org/10.1007/bf00058749
90 sgo:license sg:explorer/license/
91 sgo:sdDataset articles
92 rdf:type schema:ScholarlyArticle
93 N2d43ba0e8bce4d22b2b29dcc8151d952 schema:name doi
94 schema:value 10.1007/bf00058749
95 rdf:type schema:PropertyValue
96 N3cb46ff1f1134033bbf1b2dd21175ed5 rdf:first sg:person.07511673053.86
97 rdf:rest Nb8fd406bb7c2424cad99e1ddb6378009
98 N6fe7c81039374c61bf6a1342c331fed4 schema:name dimensions_id
99 schema:value pub.1012056551
100 rdf:type schema:PropertyValue
101 N93585012f5d64b528bc37aa45071cce7 rdf:first sg:person.011303253273.54
102 rdf:rest rdf:nil
103 N9521135e86a74b239a59609afc6b4f30 schema:name Springer Nature - SN SciGraph project
104 rdf:type schema:Organization
105 Nb8fd406bb7c2424cad99e1ddb6378009 rdf:first sg:person.014400652155.17
106 rdf:rest N93585012f5d64b528bc37aa45071cce7
107 Nc3f63830a97e4fe6a5b43319e4367575 schema:issueNumber 2
108 rdf:type schema:PublicationIssue
109 Nc9bac2e67e904119847c0af45874bf69 schema:volumeNumber 17
110 rdf:type schema:PublicationVolume
111 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
112 schema:name Information and Computing Sciences
113 rdf:type schema:DefinedTerm
114 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
115 schema:name Artificial Intelligence and Image Processing
116 rdf:type schema:DefinedTerm
117 sg:journal.1032807 schema:issn 0920-5691
118 1573-1405
119 schema:name International Journal of Computer Vision
120 schema:publisher Springer Nature
121 rdf:type schema:Periodical
122 sg:person.011303253273.54 schema:affiliation grid-institutes:grid.5037.1
123 schema:familyName Uhlin
124 schema:givenName Tomas
125 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011303253273.54
126 rdf:type schema:Person
127 sg:person.014400652155.17 schema:affiliation grid-institutes:grid.5037.1
128 schema:familyName Eklundh
129 schema:givenName Jan-Olof
130 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17
131 rdf:type schema:Person
132 sg:person.07511673053.86 schema:affiliation grid-institutes:grid.5037.1
133 schema:familyName Brunnström
134 schema:givenName Kjell
135 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07511673053.86
136 rdf:type schema:Person
137 sg:pub.10.1007/3-540-55426-2_58 schema:sameAs https://app.dimensions.ai/details/publication/pub.1000160199
138 https://doi.org/10.1007/3-540-55426-2_58
139 rdf:type schema:CreativeWork
140 sg:pub.10.1007/3-540-55426-2_59 schema:sameAs https://app.dimensions.ai/details/publication/pub.1009744907
141 https://doi.org/10.1007/3-540-55426-2_59
142 rdf:type schema:CreativeWork
143 sg:pub.10.1007/3-540-55426-2_60 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005688664
144 https://doi.org/10.1007/3-540-55426-2_60
145 rdf:type schema:CreativeWork
146 sg:pub.10.1007/3-540-55426-2_77 schema:sameAs https://app.dimensions.ai/details/publication/pub.1019653141
147 https://doi.org/10.1007/3-540-55426-2_77
148 rdf:type schema:CreativeWork
149 sg:pub.10.1007/978-1-4757-6465-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1012010063
150 https://doi.org/10.1007/978-1-4757-6465-9
151 rdf:type schema:CreativeWork
152 sg:pub.10.1007/bf00128527 schema:sameAs https://app.dimensions.ai/details/publication/pub.1041387124
153 https://doi.org/10.1007/bf00128527
154 rdf:type schema:CreativeWork
155 sg:pub.10.1007/bf00203452 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008061227
156 https://doi.org/10.1007/bf00203452
157 rdf:type schema:CreativeWork
158 sg:pub.10.1007/bf01469346 schema:sameAs https://app.dimensions.ai/details/publication/pub.1022603165
159 https://doi.org/10.1007/bf01469346
160 rdf:type schema:CreativeWork
161 grid-institutes:grid.5037.1 schema:alternateName Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44, Stockholm, Sweden
162 schema:name Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computing Science, KTH (Royal Institute of Technology), S-100 44, Stockholm, Sweden
163 rdf:type schema:Organization
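
As a sanity check, the triple count reported above can be reproduced by loading one of the serializations into an RDF library. A sketch using Python's rdflib (an assumption; any RDF toolkit would do):

import requests
from rdflib import Graph

URL = "https://scigraph.springernature.com/pub.10.1007/bf00058749"

# Fetch the Turtle serialization and parse it into a graph;
# len(graph) is the number of triples and should match the summary above (163).
turtle = requests.get(URL, headers={"Accept": "text/turtle"}).text
graph = Graph()
graph.parse(data=turtle, format="turtle")
print(len(graph))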
 



