Development of a Biologically Inspired Real-Time Visual Attention System


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2002-02-01

AUTHORS

Olivier Stasse , Yasuo Kuniyoshi , Gordon Cheng

ABSTRACT

This paper presents our attempt at creating a visual system for a humanoid robot that can intervene in non-specific tasks in real time. Because our goal is generic, our models are based on the human visual architecture. Such approaches have usually conflicted with the efficient implementation of real systems because of their demanding computational cost. We show that by using PredN1, a system for developing distributed real-time robotic applications, we are able to build a real-time, scalable visual attention system. The structure of the system, or the underlying hardware, can easily be changed in order to investigate new models. We also equip our system with a number of human visual attributes, such as log-polar retino-cortical mapping and banks of oriented filters providing a generic signature of any object in an image. Additionally, a visual attention mechanism based on a psychophysical model, FeatureGate, is used to elicit a fixation point. The system runs at frame rate, allowing interaction on the same time scale as humans.

PAGES

150-159

Book

TITLE

Biologically Motivated Computer Vision

ISBN

978-3-540-67560-0
978-3-540-45482-3

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/3-540-45482-9_15

DOI

http://dx.doi.org/10.1007/3-540-45482-9_15

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1038127590


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication at opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "National Institute of Advanced Industrial Science and Technology", 
          "id": "https://www.grid.ac/institutes/grid.208504.b", 
          "name": [
            "Humanoid Interaction Laboratory, Intelligent Systems Division, Electrotechnical Laboratory, 1-1-4 Umezono, 305-8568, Tsukuba Ibaraki, Japan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Stasse", 
        "givenName": "Olivier", 
        "id": "sg:person.012501433417.37", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012501433417.37"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "National Institute of Advanced Industrial Science and Technology", 
          "id": "https://www.grid.ac/institutes/grid.208504.b", 
          "name": [
            "Humanoid Interaction Laboratory, Intelligent Systems Division, Electrotechnical Laboratory, 1-1-4 Umezono, 305-8568, Tsukuba Ibaraki, Japan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Kuniyoshi", 
        "givenName": "Yasuo", 
        "id": "sg:person.013372311431.62", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013372311431.62"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "National Institute of Advanced Industrial Science and Technology", 
          "id": "https://www.grid.ac/institutes/grid.208504.b", 
          "name": [
            "Humanoid Interaction Laboratory, Intelligent Systems Division, Electrotechnical Laboratory, 1-1-4 Umezono, 305-8568, Tsukuba Ibaraki, Japan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Cheng", 
        "givenName": "Gordon", 
        "id": "sg:person.011214060205.82", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011214060205.82"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1016/0042-6989(80)90090-5", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1003181299"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1006/cviu.1997.0560", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1014254653"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s004220050518", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1015612823", 
          "https://doi.org/10.1007/s004220050518"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1023/a:1007974208880", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1015659659", 
          "https://doi.org/10.1023/a:1007974208880"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/0004-3702(95)00026-7", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1032482354"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00353955", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1035624759", 
          "https://doi.org/10.1007/bf00353955"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1006/rtim.1996.0057", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1040373063"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1006/rtim.1996.0053", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1042127045"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.93808", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061157293"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1177/088307389100600118", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1063855932"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2002-02-01", 
    "datePublishedReg": "2002-02-01", 
    "description": "The aim of this paper is to present our attempt in creating a visual system for a humanoid robot, which can intervene in non-specific tasks in real-time. Due to the generic aspects of our goal, our models are based around human architecture. Such approaches have usually been contradictory, with the efficient implementation of real systems and its demanding computational cost. We show that by using PredN1, a system for developing distributed real-time robotic applications, we are able to build a real-time scalable visual attention system. It is easy to change the structure of the system, or the hardware in order to investigate new models. In our presentation, we will also present our system with a number of human visual attributes, such as: log-polar retino-cortical mapping, banks of oriented filters providing a generic signature of any object in an image. Additionally, a visual attention mechanism \u2014 a psychophysical model \u2014 FeatureGate, is used in eliciting a fixation point. The system runs at frame rate, allowing interaction of same time scale as humans.", 
    "editor": [
      {
        "familyName": "Lee", 
        "givenName": "Seong-Whan", 
        "type": "Person"
      }, 
      {
        "familyName": "B\u00fclthoff", 
        "givenName": "Heinrich H.", 
        "type": "Person"
      }, 
      {
        "familyName": "Poggio", 
        "givenName": "Tomaso", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/3-540-45482-9_15", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-540-67560-0", 
        "978-3-540-45482-3"
      ], 
      "name": "Biologically Motivated Computer Vision", 
      "type": "Book"
    }, 
    "name": "Development of a Biologically Inspired Real-Time Visual Attention System", 
    "pagination": "150-159", 
    "productId": [
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/3-540-45482-9_15"
        ]
      }, 
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "b5a0fb1eb724e77319d49acf91aec7c70901bb9bea64aa1826e463faba103c58"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1038127590"
        ]
      }
    ], 
    "publisher": {
      "location": "Berlin, Heidelberg", 
      "name": "Springer Berlin Heidelberg", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/3-540-45482-9_15", 
      "https://app.dimensions.ai/details/publication/pub.1038127590"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2019-04-16T05:36", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000346_0000000346/records_99832_00000002.jsonl", 
    "type": "Chapter", 
    "url": "https://link.springer.com/10.1007%2F3-540-45482-9_15"
  }
]
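Because the record above is plain JSON, the standard library's json module is enough to pull out individual fields. The sketch below parses a trimmed copy of the record (an assumption: only the fields used here are reproduced) and extracts the title, author names, and DOI.

```python
import json

# Trimmed copy of the SciGraph JSON-LD record above; only the fields
# needed for this sketch are reproduced here.
record_jsonld = """
[
  {
    "name": "Development of a Biologically Inspired Real-Time Visual Attention System",
    "datePublished": "2002-02-01",
    "author": [
      {"familyName": "Stasse", "givenName": "Olivier", "type": "Person"},
      {"familyName": "Kuniyoshi", "givenName": "Yasuo", "type": "Person"},
      {"familyName": "Cheng", "givenName": "Gordon", "type": "Person"}
    ],
    "productId": [
      {"name": "doi", "type": "PropertyValue", "value": ["10.1007/3-540-45482-9_15"]}
    ]
  }
]
"""

# The record is a JSON array with one object per publication.
records = json.loads(record_jsonld)
chapter = records[0]

title = chapter["name"]
authors = ["%s %s" % (a["givenName"], a["familyName"]) for a in chapter["author"]]
doi = next(p["value"][0] for p in chapter["productId"] if p["name"] == "doi")

print(title)
print(", ".join(authors))
print(doi)
```

A full JSON-LD processor (expansion, @context resolution) is only needed when merging records from different vocabularies; for reading a single known record, plain JSON access like this is sufficient.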
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/3-540-45482-9_15'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/3-540-45482-9_15'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/3-540-45482-9_15'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/3-540-45482-9_15'
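The same content negotiation can be done from Python with the standard library's urllib. The sketch below builds a request carrying the appropriate Accept header for each of the four formats listed above; the actual network call is left commented out.

```python
from urllib.request import Request, urlopen

RECORD_URL = "https://scigraph.springernature.com/pub.10.1007/3-540-45482-9_15"

# MIME types accepted by the SciGraph endpoint, as listed in the curl
# examples above.
FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def make_request(fmt: str) -> Request:
    """Build a content-negotiated request for one of the supported formats."""
    return Request(RECORD_URL, headers={"Accept": FORMATS[fmt]})

req = make_request("turtle")
print(req.get_header("Accept"))  # text/turtle
# data = urlopen(req).read()  # uncomment to actually fetch the record
```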


 

This table displays all metadata directly associated with this object as RDF triples.

122 TRIPLES      23 PREDICATES      36 URIs      19 LITERALS      8 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/3-540-45482-9_15 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N3e544af5b14f4d7e8b27616d47559f16
4 schema:citation sg:pub.10.1007/bf00353955
5 sg:pub.10.1007/s004220050518
6 sg:pub.10.1023/a:1007974208880
7 https://doi.org/10.1006/cviu.1997.0560
8 https://doi.org/10.1006/rtim.1996.0053
9 https://doi.org/10.1006/rtim.1996.0057
10 https://doi.org/10.1016/0004-3702(95)00026-7
11 https://doi.org/10.1016/0042-6989(80)90090-5
12 https://doi.org/10.1109/34.93808
13 https://doi.org/10.1177/088307389100600118
14 schema:datePublished 2002-02-01
15 schema:datePublishedReg 2002-02-01
16 schema:description The aim of this paper is to present our attempt in creating a visual system for a humanoid robot, which can intervene in non-specific tasks in real-time. Due to the generic aspects of our goal, our models are based around human architecture. Such approaches have usually been contradictory, with the efficient implementation of real systems and its demanding computational cost. We show that by using PredN1, a system for developing distributed real-time robotic applications, we are able to build a real-time scalable visual attention system. It is easy to change the structure of the system, or the hardware in order to investigate new models. In our presentation, we will also present our system with a number of human visual attributes, such as: log-polar retino-cortical mapping, banks of oriented filters providing a generic signature of any object in an image. Additionally, a visual attention mechanism — a psychophysical model — FeatureGate, is used in eliciting a fixation point. The system runs at frame rate, allowing interaction of same time scale as humans.
17 schema:editor N85aa440a0776467f84957fc12ad8fca5
18 schema:genre chapter
19 schema:inLanguage en
20 schema:isAccessibleForFree true
21 schema:isPartOf Nba99e01dc3774764af7fed9f37e5b7ee
22 schema:name Development of a Biologically Inspired Real-Time Visual Attention System
23 schema:pagination 150-159
24 schema:productId N56225b2a20bb4cffb620149c433f62e9
25 Ncbc2efc50be84feb942a79ed29591441
26 Nf37bd500d65f402b869df8363f4f8311
27 schema:publisher Nff3db0e3f0884151bd6a44882e2b29e5
28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1038127590
29 https://doi.org/10.1007/3-540-45482-9_15
30 schema:sdDatePublished 2019-04-16T05:36
31 schema:sdLicense https://scigraph.springernature.com/explorer/license/
32 schema:sdPublisher Ne7c2fec1606f454f8381877ccb9249b8
33 schema:url https://link.springer.com/10.1007%2F3-540-45482-9_15
34 sgo:license sg:explorer/license/
35 sgo:sdDataset chapters
36 rdf:type schema:Chapter
37 N06aee3579eea43e7853abe9d84e3f6cd rdf:first sg:person.013372311431.62
38 rdf:rest N7835805f3c054879ae204f399d74d4cf
39 N29dd548291af46b2b3793d4ac8ff07f0 rdf:first N652e6cc0c4164f728d1a1ae5e2f07cf3
40 rdf:rest N2bc958dd7ad2486cbcc1e4a985e08e66
41 N2bc958dd7ad2486cbcc1e4a985e08e66 rdf:first Ne62c23f216464961adeafa98fdf6255a
42 rdf:rest rdf:nil
43 N3e544af5b14f4d7e8b27616d47559f16 rdf:first sg:person.012501433417.37
44 rdf:rest N06aee3579eea43e7853abe9d84e3f6cd
45 N56225b2a20bb4cffb620149c433f62e9 schema:name dimensions_id
46 schema:value pub.1038127590
47 rdf:type schema:PropertyValue
48 N652e6cc0c4164f728d1a1ae5e2f07cf3 schema:familyName Bülthoff
49 schema:givenName Heinrich H.
50 rdf:type schema:Person
51 N7835805f3c054879ae204f399d74d4cf rdf:first sg:person.011214060205.82
52 rdf:rest rdf:nil
53 N85aa440a0776467f84957fc12ad8fca5 rdf:first Ncde84525f43242d3aad25a0e76bb2375
54 rdf:rest N29dd548291af46b2b3793d4ac8ff07f0
55 Nba99e01dc3774764af7fed9f37e5b7ee schema:isbn 978-3-540-45482-3
56 978-3-540-67560-0
57 schema:name Biologically Motivated Computer Vision
58 rdf:type schema:Book
59 Ncbc2efc50be84feb942a79ed29591441 schema:name readcube_id
60 schema:value b5a0fb1eb724e77319d49acf91aec7c70901bb9bea64aa1826e463faba103c58
61 rdf:type schema:PropertyValue
62 Ncde84525f43242d3aad25a0e76bb2375 schema:familyName Lee
63 schema:givenName Seong-Whan
64 rdf:type schema:Person
65 Ne62c23f216464961adeafa98fdf6255a schema:familyName Poggio
66 schema:givenName Tomaso
67 rdf:type schema:Person
68 Ne7c2fec1606f454f8381877ccb9249b8 schema:name Springer Nature - SN SciGraph project
69 rdf:type schema:Organization
70 Nf37bd500d65f402b869df8363f4f8311 schema:name doi
71 schema:value 10.1007/3-540-45482-9_15
72 rdf:type schema:PropertyValue
73 Nff3db0e3f0884151bd6a44882e2b29e5 schema:location Berlin, Heidelberg
74 schema:name Springer Berlin Heidelberg
75 rdf:type schema:Organisation
76 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
77 schema:name Information and Computing Sciences
78 rdf:type schema:DefinedTerm
79 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
80 schema:name Artificial Intelligence and Image Processing
81 rdf:type schema:DefinedTerm
82 sg:person.011214060205.82 schema:affiliation https://www.grid.ac/institutes/grid.208504.b
83 schema:familyName Cheng
84 schema:givenName Gordon
85 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011214060205.82
86 rdf:type schema:Person
87 sg:person.012501433417.37 schema:affiliation https://www.grid.ac/institutes/grid.208504.b
88 schema:familyName Stasse
89 schema:givenName Olivier
90 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012501433417.37
91 rdf:type schema:Person
92 sg:person.013372311431.62 schema:affiliation https://www.grid.ac/institutes/grid.208504.b
93 schema:familyName Kuniyoshi
94 schema:givenName Yasuo
95 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013372311431.62
96 rdf:type schema:Person
97 sg:pub.10.1007/bf00353955 schema:sameAs https://app.dimensions.ai/details/publication/pub.1035624759
98 https://doi.org/10.1007/bf00353955
99 rdf:type schema:CreativeWork
100 sg:pub.10.1007/s004220050518 schema:sameAs https://app.dimensions.ai/details/publication/pub.1015612823
101 https://doi.org/10.1007/s004220050518
102 rdf:type schema:CreativeWork
103 sg:pub.10.1023/a:1007974208880 schema:sameAs https://app.dimensions.ai/details/publication/pub.1015659659
104 https://doi.org/10.1023/a:1007974208880
105 rdf:type schema:CreativeWork
106 https://doi.org/10.1006/cviu.1997.0560 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014254653
107 rdf:type schema:CreativeWork
108 https://doi.org/10.1006/rtim.1996.0053 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042127045
109 rdf:type schema:CreativeWork
110 https://doi.org/10.1006/rtim.1996.0057 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040373063
111 rdf:type schema:CreativeWork
112 https://doi.org/10.1016/0004-3702(95)00026-7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032482354
113 rdf:type schema:CreativeWork
114 https://doi.org/10.1016/0042-6989(80)90090-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1003181299
115 rdf:type schema:CreativeWork
116 https://doi.org/10.1109/34.93808 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061157293
117 rdf:type schema:CreativeWork
118 https://doi.org/10.1177/088307389100600118 schema:sameAs https://app.dimensions.ai/details/publication/pub.1063855932
119 rdf:type schema:CreativeWork
120 https://www.grid.ac/institutes/grid.208504.b schema:alternateName National Institute of Advanced Industrial Science and Technology
121 schema:name Humanoid Interaction Laboratory, Intelligent Systems Division, Electrotechnical Laboratory, 1-1-4 Umezono, 305-8568, Tsukuba Ibaraki, Japan
122 rdf:type schema:Organization
 



