Active fixation for junction classification


Ontology type: schema:Chapter     


Chapter Info

DATE

1993

AUTHORS

Kjell Brunnström, Jan-Olof Eklundh

ABSTRACT

It is well-known that active selection of fixation points in humans is highly context and task dependent. It is therefore likely that successful computational processes for fixation in active vision should be so too. We are considering active fixation in the context of recognition of man-made objects characterized by their shapes. In this situation the qualitative shape and type of observed junctions play an important role. We have developed a methodology for rapid active detection and classification of junctions by selection of fixation points. The approach is based on direct computations from image data and allows integration of stereo and accommodation cues with luminance information. This work forms part of an effort to perform active recognition of generic objects, in the spirit of Malik and Biederman, but on real imagery rather than on line drawings.

PAGES

452-459

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/3-540-57233-3_59

DOI

http://dx.doi.org/10.1007/3-540-57233-3_59

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1016408662



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory (CVAP), Royal Institute of Technology (KTH), Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory (CVAP), Royal Institute of Technology (KTH), Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Brunnstr\u00f6m", 
        "givenName": "Kjell", 
        "id": "sg:person.07511673053.86", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07511673053.86"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception Laboratory (CVAP), Royal Institute of Technology (KTH), Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception Laboratory (CVAP), Royal Institute of Technology (KTH), Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Eklundh", 
        "givenName": "Jan-Olof", 
        "id": "sg:person.014400652155.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "1993", 
    "datePublishedReg": "1993-01-01", 
    "description": "It is well-known that active selection of fixation points in humans is highly context and task dependent. It is therefore likely that successful computational processes for fixation in active vision should be so too. We are considering active fixation in the context of recognition of man-made objects characterized by their shapes. In this situation the qualitative shape and type of observed junctions play an important role. We have developed a methodology for rapid active detection and classification of junctions by selection of fixation points. The approach is based on direct computations from image data and allows integration of stereo and accommodation cues with luminance information. This work form a part of an effort to perform active recognition of generic objects, in the spirit of Malik and Biederman, but on real imagery rather than on line-drawings.", 
    "editor": [
      {
        "familyName": "Chetverikov", 
        "givenName": "Dmitry", 
        "type": "Person"
      }, 
      {
        "familyName": "Kropatsch", 
        "givenName": "Walter G.", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/3-540-57233-3_59", 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-540-57233-6", 
        "978-3-540-47980-2"
      ], 
      "name": "Computer Analysis of Images and Patterns", 
      "type": "Book"
    }, 
    "keywords": [
      "classification of junctions", 
      "integration of stereo", 
      "man-made objects", 
      "generic objects", 
      "image data", 
      "real imagery", 
      "active vision", 
      "computational process", 
      "luminance information", 
      "observed junctions", 
      "junction classification", 
      "accommodation cues", 
      "active recognition", 
      "context of recognition", 
      "active selection", 
      "active detection", 
      "objects", 
      "classification", 
      "recognition", 
      "direct computation", 
      "fixation point", 
      "stereo", 
      "task", 
      "computation", 
      "vision", 
      "selection", 
      "qualitative shape", 
      "information", 
      "integration", 
      "imagery", 
      "detection", 
      "methodology", 
      "Biederman", 
      "point", 
      "work", 
      "context", 
      "situation", 
      "data", 
      "efforts", 
      "process", 
      "shape", 
      "important role", 
      "cues", 
      "part", 
      "humans", 
      "Malik", 
      "types", 
      "spirit", 
      "role", 
      "active fixation", 
      "fixation", 
      "approach", 
      "junction"
    ], 
    "name": "Active fixation for junction classification", 
    "pagination": "452-459", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1016408662"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/3-540-57233-3_59"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/3-540-57233-3_59", 
      "https://app.dimensions.ai/details/publication/pub.1016408662"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-11-24T21:14", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221124/entities/gbq_results/chapter/chapter_237.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/3-540-57233-3_59"
  }
]
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/3-540-57233-3_59'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/3-540-57233-3_59'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/3-540-57233-3_59'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/3-540-57233-3_59'
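
Beyond the command line, the same content negotiation can be used from a script. The following is a minimal sketch (assuming Python with the requests package installed; the field names come from the JSON-LD listing above) that fetches the record and prints a few of its fields:

import requests

# Fetch the SciGraph record as JSON-LD via content negotiation.
URL = "https://scigraph.springernature.com/pub.10.1007/3-540-57233-3_59"
response = requests.get(URL, headers={"Accept": "application/ld+json"})
response.raise_for_status()

# The payload is a JSON array containing a single record (see the listing above).
record = response.json()[0]

print(record["name"])              # Active fixation for junction classification
print(record["datePublished"])     # 1993
print(record["isPartOf"]["name"])  # Computer Analysis of Images and Patterns
for author in record["author"]:
    print(author["givenName"], author["familyName"])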


 

This table displays all metadata directly associated with this object as RDF triples.

124 TRIPLES      22 PREDICATES      78 URIs      71 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/3-540-57233-3_59 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Nf30c419a1a7547bf8f39404e5e85b98c
4 schema:datePublished 1993
5 schema:datePublishedReg 1993-01-01
6 schema:description It is well-known that active selection of fixation points in humans is highly context and task dependent. It is therefore likely that successful computational processes for fixation in active vision should be so too. We are considering active fixation in the context of recognition of man-made objects characterized by their shapes. In this situation the qualitative shape and type of observed junctions play an important role. We have developed a methodology for rapid active detection and classification of junctions by selection of fixation points. The approach is based on direct computations from image data and allows integration of stereo and accommodation cues with luminance information. This work form a part of an effort to perform active recognition of generic objects, in the spirit of Malik and Biederman, but on real imagery rather than on line-drawings.
7 schema:editor N64e88e317d784745938262fbdbb8f46d
8 schema:genre chapter
9 schema:isAccessibleForFree false
10 schema:isPartOf Nff82683032ea4768aa0a9aef2f7221d1
11 schema:keywords Biederman
12 Malik
13 accommodation cues
14 active detection
15 active fixation
16 active recognition
17 active selection
18 active vision
19 approach
20 classification
21 classification of junctions
22 computation
23 computational process
24 context
25 context of recognition
26 cues
27 data
28 detection
29 direct computation
30 efforts
31 fixation
32 fixation point
33 generic objects
34 humans
35 image data
36 imagery
37 important role
38 information
39 integration
40 integration of stereo
41 junction
42 junction classification
43 luminance information
44 man-made objects
45 methodology
46 objects
47 observed junctions
48 part
49 point
50 process
51 qualitative shape
52 real imagery
53 recognition
54 role
55 selection
56 shape
57 situation
58 spirit
59 stereo
60 task
61 types
62 vision
63 work
64 schema:name Active fixation for junction classification
65 schema:pagination 452-459
66 schema:productId N3193eb09700f46b4af10c97ffb548b20
67 N68051d4038fb49daa868432ceaf82ef4
68 schema:publisher N0b00327f6cc344dbace277c5182de526
69 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016408662
70 https://doi.org/10.1007/3-540-57233-3_59
71 schema:sdDatePublished 2022-11-24T21:14
72 schema:sdLicense https://scigraph.springernature.com/explorer/license/
73 schema:sdPublisher Nd00d0b6e6e404b96a81aa6e25af655a4
74 schema:url https://doi.org/10.1007/3-540-57233-3_59
75 sgo:license sg:explorer/license/
76 sgo:sdDataset chapters
77 rdf:type schema:Chapter
78 N0b00327f6cc344dbace277c5182de526 schema:name Springer Nature
79 rdf:type schema:Organisation
80 N3193eb09700f46b4af10c97ffb548b20 schema:name dimensions_id
81 schema:value pub.1016408662
82 rdf:type schema:PropertyValue
83 N375c485f24b3467d983b35a39399ef84 schema:familyName Chetverikov
84 schema:givenName Dmitry
85 rdf:type schema:Person
86 N3a5ce8ab87344557867b686f2d283b02 rdf:first sg:person.014400652155.17
87 rdf:rest rdf:nil
88 N3f5bdb8ea0764cefb9c363322f038b53 rdf:first Na7ff06500c9b4f9ba52e9fd48643f847
89 rdf:rest rdf:nil
90 N64e88e317d784745938262fbdbb8f46d rdf:first N375c485f24b3467d983b35a39399ef84
91 rdf:rest N3f5bdb8ea0764cefb9c363322f038b53
92 N68051d4038fb49daa868432ceaf82ef4 schema:name doi
93 schema:value 10.1007/3-540-57233-3_59
94 rdf:type schema:PropertyValue
95 Na7ff06500c9b4f9ba52e9fd48643f847 schema:familyName Kropatsch
96 schema:givenName Walter G.
97 rdf:type schema:Person
98 Nd00d0b6e6e404b96a81aa6e25af655a4 schema:name Springer Nature - SN SciGraph project
99 rdf:type schema:Organization
100 Nf30c419a1a7547bf8f39404e5e85b98c rdf:first sg:person.07511673053.86
101 rdf:rest N3a5ce8ab87344557867b686f2d283b02
102 Nff82683032ea4768aa0a9aef2f7221d1 schema:isbn 978-3-540-47980-2
103 978-3-540-57233-6
104 schema:name Computer Analysis of Images and Patterns
105 rdf:type schema:Book
106 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
107 schema:name Information and Computing Sciences
108 rdf:type schema:DefinedTerm
109 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
110 schema:name Artificial Intelligence and Image Processing
111 rdf:type schema:DefinedTerm
112 sg:person.014400652155.17 schema:affiliation grid-institutes:grid.5037.1
113 schema:familyName Eklundh
114 schema:givenName Jan-Olof
115 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17
116 rdf:type schema:Person
117 sg:person.07511673053.86 schema:affiliation grid-institutes:grid.5037.1
118 schema:familyName Brunnström
119 schema:givenName Kjell
120 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07511673053.86
121 rdf:type schema:Person
122 grid-institutes:grid.5037.1 schema:alternateName Computational Vision and Active Perception Laboratory (CVAP), Royal Institute of Technology (KTH), Stockholm, Sweden
123 schema:name Computational Vision and Active Perception Laboratory (CVAP), Royal Institute of Technology (KTH), Stockholm, Sweden
124 rdf:type schema:Organization
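
The same record can be loaded into an RDF library to work with these triples directly. The sketch below (assuming Python with the requests and rdflib packages; how nodes are tallied may differ slightly from the summary counts above) parses the Turtle serialization and recomputes triple, predicate, and node counts:

import requests
from rdflib import Graph, URIRef, Literal, BNode

# Fetch the Turtle serialization of this record and load it into an RDF graph.
URL = "https://scigraph.springernature.com/pub.10.1007/3-540-57233-3_59"
turtle = requests.get(URL, headers={"Accept": "text/turtle"}).text

graph = Graph()
graph.parse(data=turtle, format="turtle")

# Recompute the kind of summary shown above the table.
predicates = set(graph.predicates())
nodes = set(graph.subjects()) | set(graph.objects())

print(len(graph), "triples")
print(len(predicates), "predicates")
print(sum(isinstance(n, URIRef) for n in nodes), "URIs")
print(sum(isinstance(n, Literal) for n in nodes), "literals")
print(sum(isinstance(n, BNode) for n in nodes), "blank nodes")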
 



