Perception and Developmental Learning of Affordances in Autonomous Robots


Ontology type: schema:Chapter     


Chapter Info

DATE

2007

AUTHORS

Lucas Paletta , Gerald Fritz , Florian Kintzler , Jörg Irran , Georg Dorffner

ABSTRACT

Recently, the aspect of visual perception has been explored in the context of Gibson’s concept of affordances [1] in various ways. We focus in this work on the importance of developmental learning and the perceptual cueing for an agent’s anticipation of opportunities for interaction, in extension to functional views on visual feature representations. The concept for the incremental learning of abstract from basic affordances is presented in relation to learning of complex affordance features. In addition, the work proposes that the originally defined representational concept for the perception of affordances - in terms of using either motion or 3D cues - should be generalized towards using arbitrary visual feature representations. We demonstrate the learning of causal relations between visual cues and associated anticipated interactions by reinforcement learning of predictive perceptual states. We pursue a recently presented framework for cueing and recognition of affordance-based visual entities that obviously plays an important role in robot control architectures, in analogy to human perception. We experimentally verify the concept within a real world robot scenario by learning predictive visual cues using reinforcement signals, proving that features were selected for their relevance in predicting opportunities for interaction.

PAGES

235-250

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19

DOI

http://dx.doi.org/10.1007/978-3-540-74565-5_19

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1047447712


Indexing Status: Check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: Browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool; a short parsing sketch also follows the record below.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology and Cognitive Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria", 
          "id": "http://www.grid.ac/institutes/grid.8684.2", 
          "name": [
            "Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Paletta", 
        "givenName": "Lucas", 
        "id": "sg:person.010060055125.29", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010060055125.29"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria", 
          "id": "http://www.grid.ac/institutes/grid.8684.2", 
          "name": [
            "Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Fritz", 
        "givenName": "Gerald", 
        "id": "sg:person.011015636117.31", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011015636117.31"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "\u00d6sterreichisches Forschungsinstitut f\u00fcr Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria", 
          "id": "http://www.grid.ac/institutes/grid.432019.d", 
          "name": [
            "\u00d6sterreichisches Forschungsinstitut f\u00fcr Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Kintzler", 
        "givenName": "Florian", 
        "id": "sg:person.014632255776.62", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014632255776.62"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "\u00d6sterreichisches Forschungsinstitut f\u00fcr Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria", 
          "id": "http://www.grid.ac/institutes/grid.432019.d", 
          "name": [
            "\u00d6sterreichisches Forschungsinstitut f\u00fcr Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Irran", 
        "givenName": "J\u00f6rg", 
        "id": "sg:person.011613216517.45", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011613216517.45"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "\u00d6sterreichisches Forschungsinstitut f\u00fcr Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria", 
          "id": "http://www.grid.ac/institutes/grid.432019.d", 
          "name": [
            "\u00d6sterreichisches Forschungsinstitut f\u00fcr Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Dorffner", 
        "givenName": "Georg", 
        "id": "sg:person.01121016077.67", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01121016077.67"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2007", 
    "datePublishedReg": "2007-01-01", 
    "description": "Recently, the aspect of visual perception has been explored in the context of Gibson\u2019s concept of affordances [1] in various ways. We focus in this work on the importance of developmental learning and the perceptual cueing for an agent\u2019s anticipation of opportunities for interaction, in extension to functional views on visual feature representations. The concept for the incremental learning of abstract from basic affordances is presented in relation to learning of complex affordance features. In addition, the work proposes that the originally defined representational concept for the perception of affordances - in terms of using either motion or 3D cues - should be generalized towards using arbitrary visual feature representations. We demonstrate the learning of causal relations between visual cues and associated anticipated interactions by reinforcement learning of predictive perceptual states. We pursue a recently presented framework for cueing and recognition of affordance-based visual entities that obviously plays an important role in robot control architectures, in analogy to human perception. We experimentally verify the concept within a real world robot scenario by learning predictive visual cues using reinforcement signals, proving that features were selected for their relevance in predicting opportunities for interaction.", 
    "editor": [
      {
        "familyName": "Hertzberg", 
        "givenName": "Joachim", 
        "type": "Person"
      }, 
      {
        "familyName": "Beetz", 
        "givenName": "Michael", 
        "type": "Person"
      }, 
      {
        "familyName": "Englert", 
        "givenName": "Roman", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-540-74565-5_19", 
    "inLanguage": "en", 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-540-74564-8", 
        "978-3-540-74565-5"
      ], 
      "name": "KI 2007: Advances in Artificial Intelligence", 
      "type": "Book"
    }, 
    "keywords": [
      "visual feature representations", 
      "developmental learning", 
      "visual cues", 
      "feature representation", 
      "predictive visual cue", 
      "perception of affordances", 
      "robot control architecture", 
      "perceptual cueing", 
      "perceptual states", 
      "basic affordances", 
      "visual perception", 
      "affordance features", 
      "representational concepts", 
      "robot scenario", 
      "human perception", 
      "autonomous robots", 
      "Gibson\u2019s concept", 
      "incremental learning", 
      "visual entities", 
      "reinforcement learning", 
      "reinforcement signal", 
      "cues", 
      "control architecture", 
      "perception", 
      "agents' anticipations", 
      "cueing", 
      "learning", 
      "affordances", 
      "functional view", 
      "causal relations", 
      "anticipation", 
      "representation", 
      "robot", 
      "architecture", 
      "concept", 
      "features", 
      "recognition", 
      "scenarios", 
      "framework", 
      "relation", 
      "context", 
      "work", 
      "interaction", 
      "entities", 
      "aspects", 
      "opportunities", 
      "extension", 
      "view", 
      "way", 
      "relevance", 
      "importance", 
      "role", 
      "terms", 
      "motion", 
      "important role", 
      "signals", 
      "analogy", 
      "state", 
      "addition"
    ], 
    "name": "Perception and Developmental Learning of Affordances in Autonomous Robots", 
    "pagination": "235-250", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1047447712"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-540-74565-5_19"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-540-74565-5_19", 
      "https://app.dimensions.ai/details/publication/pub.1047447712"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-06-01T22:33", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20220601/entities/gbq_results/chapter/chapter_383.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-540-74565-5_19"
  }
]
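
Assuming the JSON-LD record above has been saved to a local file (the filename below is illustrative), a minimal sketch of reading a few bibliographic fields with Python's standard json module might look like this:

import json

# Load the one-element JSON-LD array shown above (filename is illustrative).
with open("pub.10.1007_978-3-540-74565-5_19.jsonld", encoding="utf-8") as fh:
    chapter = json.load(fh)[0]

# Basic bibliographic fields.
print(chapter["name"])           # chapter title
print(chapter["datePublished"])  # "2007"
print(chapter["pagination"])     # "235-250"

# Authors are a list of schema:Person objects.
for person in chapter["author"]:
    print(person["givenName"], person["familyName"])

# The DOI sits in the productId list.
doi = next(value
           for pid in chapter["productId"] if pid["name"] == "doi"
           for value in pid["value"])
print(doi)  # 10.1007/978-3-540-74565-5_19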
 

Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML; license information is given in the record's sdLicense field.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19'
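
The same request can be made directly from Python with nothing but the standard library; a small sketch using urllib, mirroring the Accept header of the curl call above:

import json
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19"

# Content negotiation: ask the server for JSON-LD, as in the curl example.
req = urllib.request.Request(URL, headers={"Accept": "application/ld+json"})
with urllib.request.urlopen(req) as resp:
    record = json.loads(resp.read().decode("utf-8"))[0]

print(record["name"])
print(record["url"])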

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19'
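
Because each N-Triples statement is a single line ending in " .", simple line-oriented filtering is often enough for batch jobs, without a full RDF library. A sketch that keeps only the schema:name statements (the http://schema.org/ namespace is an assumption here; check the downloaded data for the exact IRIs):

import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19"

req = urllib.request.Request(URL, headers={"Accept": "application/n-triples"})
with urllib.request.urlopen(req) as resp:
    lines = resp.read().decode("utf-8").splitlines()

# Every non-empty line is one triple: <subject> <predicate> <object> .
# The predicate IRI below assumes the schema.org namespace; adjust it if the
# data expands the schema: prefix differently.
for line in lines:
    if "<http://schema.org/name>" in line:
        print(line)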

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19'
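
For anything beyond simple filtering, a real RDF parser is more robust. A sketch using the third-party rdflib package (an assumption; it is not part of the standard library) to parse the Turtle serialization and print all name statements:

import urllib.request
from rdflib import Graph  # third-party: pip install rdflib (assumed available)

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19"

req = urllib.request.Request(URL, headers={"Accept": "text/turtle"})
with urllib.request.urlopen(req) as resp:
    turtle = resp.read().decode("utf-8")

g = Graph()
g.parse(data=turtle, format="turtle")
print(len(g), "triples parsed")

# Print every triple whose predicate IRI ends in "/name" (i.e. schema:name).
for s, p, o in g:
    if str(p).endswith("/name"):
        print(s, "->", o)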

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19'
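
Since RDF/XML is ordinary XML, the standard library's ElementTree can at least inspect its shape; proper RDF semantics (blank nodes, typed node elements, and so on) still call for an RDF-aware parser. A minimal sketch listing the distinct element names in the document:

import urllib.request
import xml.etree.ElementTree as ET

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19"

req = urllib.request.Request(URL, headers={"Accept": "application/rdf+xml"})
with urllib.request.urlopen(req) as resp:
    root = ET.fromstring(resp.read())

# List the distinct (namespace-expanded) element names that occur in the
# RDF/XML document, just to get a feel for its structure.
for tag in sorted({elem.tag for elem in root.iter()}):
    print(tag)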


 

This table displays all metadata directly associated with this object as RDF triples; a sketch for reproducing the summary counts appears after the table.

168 TRIPLES      23 PREDICATES      87 URIs      78 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-540-74565-5_19 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 anzsrc-for:17
4 anzsrc-for:1701
5 schema:author Ndd56efa3794c46738a7419cd79bb7039
6 schema:datePublished 2007
7 schema:datePublishedReg 2007-01-01
8 schema:description Recently, the aspect of visual perception has been explored in the context of Gibson’s concept of affordances [1] in various ways. We focus in this work on the importance of developmental learning and the perceptual cueing for an agent’s anticipation of opportunities for interaction, in extension to functional views on visual feature representations. The concept for the incremental learning of abstract from basic affordances is presented in relation to learning of complex affordance features. In addition, the work proposes that the originally defined representational concept for the perception of affordances - in terms of using either motion or 3D cues - should be generalized towards using arbitrary visual feature representations. We demonstrate the learning of causal relations between visual cues and associated anticipated interactions by reinforcement learning of predictive perceptual states. We pursue a recently presented framework for cueing and recognition of affordance-based visual entities that obviously plays an important role in robot control architectures, in analogy to human perception. We experimentally verify the concept within a real world robot scenario by learning predictive visual cues using reinforcement signals, proving that features were selected for their relevance in predicting opportunities for interaction.
9 schema:editor Nad9e3007932c4aa9ad7efb73d2831181
10 schema:genre chapter
11 schema:inLanguage en
12 schema:isAccessibleForFree false
13 schema:isPartOf N4fb00b04f0d5481897d080ce37c17e09
14 schema:keywords Gibson’s concept
15 addition
16 affordance features
17 affordances
18 agents' anticipations
19 analogy
20 anticipation
21 architecture
22 aspects
23 autonomous robots
24 basic affordances
25 causal relations
26 concept
27 context
28 control architecture
29 cueing
30 cues
31 developmental learning
32 entities
33 extension
34 feature representation
35 features
36 framework
37 functional view
38 human perception
39 importance
40 important role
41 incremental learning
42 interaction
43 learning
44 motion
45 opportunities
46 perception
47 perception of affordances
48 perceptual cueing
49 perceptual states
50 predictive visual cue
51 recognition
52 reinforcement learning
53 reinforcement signal
54 relation
55 relevance
56 representation
57 representational concepts
58 robot
59 robot control architecture
60 robot scenario
61 role
62 scenarios
63 signals
64 state
65 terms
66 view
67 visual cues
68 visual entities
69 visual feature representations
70 visual perception
71 way
72 work
73 schema:name Perception and Developmental Learning of Affordances in Autonomous Robots
74 schema:pagination 235-250
75 schema:productId Nc93024bb583c45259b31c51239a0c5ab
76 Ncab00b201bf543429398c20dd0bd54f6
77 schema:publisher N41953672c6634712977e52060096eca2
78 schema:sameAs https://app.dimensions.ai/details/publication/pub.1047447712
79 https://doi.org/10.1007/978-3-540-74565-5_19
80 schema:sdDatePublished 2022-06-01T22:33
81 schema:sdLicense https://scigraph.springernature.com/explorer/license/
82 schema:sdPublisher N1b265fb71ea04afba9e84d084d0c1f71
83 schema:url https://doi.org/10.1007/978-3-540-74565-5_19
84 sgo:license sg:explorer/license/
85 sgo:sdDataset chapters
86 rdf:type schema:Chapter
87 N1b265fb71ea04afba9e84d084d0c1f71 schema:name Springer Nature - SN SciGraph project
88 rdf:type schema:Organization
89 N41953672c6634712977e52060096eca2 schema:name Springer Nature
90 rdf:type schema:Organisation
91 N47b7d75259004bcbb7c2b65825fe8a7b rdf:first sg:person.011613216517.45
92 rdf:rest N684374f7c1444f75af4dc6b10d76d847
93 N4fb00b04f0d5481897d080ce37c17e09 schema:isbn 978-3-540-74564-8
94 978-3-540-74565-5
95 schema:name KI 2007: Advances in Artificial Intelligence
96 rdf:type schema:Book
97 N684374f7c1444f75af4dc6b10d76d847 rdf:first sg:person.01121016077.67
98 rdf:rest rdf:nil
99 N701c773a4f514d3191a2de933758dcf1 schema:familyName Beetz
100 schema:givenName Michael
101 rdf:type schema:Person
102 N86aa428825c44dd8a3856e81a3fe58ba rdf:first N8fda9b62722942ae976f4e75ae0a9d4f
103 rdf:rest rdf:nil
104 N8fda9b62722942ae976f4e75ae0a9d4f schema:familyName Englert
105 schema:givenName Roman
106 rdf:type schema:Person
107 Na01c5abc6b8e435684f22eb8a0d8a74f rdf:first sg:person.014632255776.62
108 rdf:rest N47b7d75259004bcbb7c2b65825fe8a7b
109 Naab784a7b3ad4fa68d2900f7dc5ba555 schema:familyName Hertzberg
110 schema:givenName Joachim
111 rdf:type schema:Person
112 Nad9e3007932c4aa9ad7efb73d2831181 rdf:first Naab784a7b3ad4fa68d2900f7dc5ba555
113 rdf:rest Nec58bcaef5b24a49811f60da829889f6
114 Nc93024bb583c45259b31c51239a0c5ab schema:name doi
115 schema:value 10.1007/978-3-540-74565-5_19
116 rdf:type schema:PropertyValue
117 Ncab00b201bf543429398c20dd0bd54f6 schema:name dimensions_id
118 schema:value pub.1047447712
119 rdf:type schema:PropertyValue
120 Ndd56efa3794c46738a7419cd79bb7039 rdf:first sg:person.010060055125.29
121 rdf:rest Ne2c872be1f86484394ca05aee459f9a6
122 Ne2c872be1f86484394ca05aee459f9a6 rdf:first sg:person.011015636117.31
123 rdf:rest Na01c5abc6b8e435684f22eb8a0d8a74f
124 Nec58bcaef5b24a49811f60da829889f6 rdf:first N701c773a4f514d3191a2de933758dcf1
125 rdf:rest N86aa428825c44dd8a3856e81a3fe58ba
126 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
127 schema:name Information and Computing Sciences
128 rdf:type schema:DefinedTerm
129 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
130 schema:name Artificial Intelligence and Image Processing
131 rdf:type schema:DefinedTerm
132 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
133 schema:name Psychology and Cognitive Sciences
134 rdf:type schema:DefinedTerm
135 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
136 schema:name Psychology
137 rdf:type schema:DefinedTerm
138 sg:person.010060055125.29 schema:affiliation grid-institutes:grid.8684.2
139 schema:familyName Paletta
140 schema:givenName Lucas
141 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010060055125.29
142 rdf:type schema:Person
143 sg:person.011015636117.31 schema:affiliation grid-institutes:grid.8684.2
144 schema:familyName Fritz
145 schema:givenName Gerald
146 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011015636117.31
147 rdf:type schema:Person
148 sg:person.01121016077.67 schema:affiliation grid-institutes:grid.432019.d
149 schema:familyName Dorffner
150 schema:givenName Georg
151 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01121016077.67
152 rdf:type schema:Person
153 sg:person.011613216517.45 schema:affiliation grid-institutes:grid.432019.d
154 schema:familyName Irran
155 schema:givenName Jörg
156 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011613216517.45
157 rdf:type schema:Person
158 sg:person.014632255776.62 schema:affiliation grid-institutes:grid.432019.d
159 schema:familyName Kintzler
160 schema:givenName Florian
161 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014632255776.62
162 rdf:type schema:Person
163 grid-institutes:grid.432019.d schema:alternateName Österreichisches Forschungsinstitut für Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria
164 schema:name Österreichisches Forschungsinstitut für Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria
165 rdf:type schema:Organization
166 grid-institutes:grid.8684.2 schema:alternateName Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria
167 schema:name Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria
168 rdf:type schema:Organization
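
A sketch (again assuming the third-party rdflib package) for reproducing the summary counts above from the N-Triples download; the exact counting convention (for instance, whether predicate IRIs are included among the URIs) is an assumption, so the figures may differ slightly from the header:

import urllib.request
from rdflib import Graph, URIRef, Literal, BNode  # pip install rdflib (assumed)

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-74565-5_19"

req = urllib.request.Request(URL, headers={"Accept": "application/n-triples"})
with urllib.request.urlopen(req) as resp:
    g = Graph()
    g.parse(data=resp.read().decode("utf-8"), format="nt")

# Count triples, distinct predicates, and the node types among subjects/objects.
nodes = set(g.subjects()) | set(g.objects())
print("triples:    ", len(g))
print("predicates: ", len(set(g.predicates())))
print("URIs:       ", sum(1 for n in nodes if isinstance(n, URIRef)))
print("literals:   ", sum(1 for n in nodes if isinstance(n, Literal)))
print("blank nodes:", sum(1 for n in nodes if isinstance(n, BNode)))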
 



