Reinforcement Learning of Predictive Features in Affordance Perception


Ontology type: schema:Chapter     


Chapter Info

DATE

2008-01-01

AUTHORS

Lucas Paletta , Gerald Fritz

ABSTRACT

Recently, the aspect of visual perception has been explored in the context of Gibson’s concept of affordances [1] in various ways [4-9]. In extension to existing functional views on visual feature representations, we focus on the importance of learning in perceptual cueing for the anticipation of opportunities for interaction of robotic agents. Furthermore, we propose that the originally defined representational concept for the perception of affordances - in terms of using either optical flow or heuristically determined 3D features of perceptual entities - should be generalized towards using arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and associated anticipated interactions, using visual information within the framework of Markov Decision Processes (MDPs). We emphasize a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. Affordance-like perception should enable systems to react to environment stimuli both more efficiently and autonomously, and provide a potential to plan on the basis of relevant responses to more complex perceptual configurations. We verify the concept with a concrete implementation of learning visual cues by reinforcement, applying state-of-the-art visual descriptors and regions of interest that were extracted from a simulated robot scenario and prove that these features were successfully selected for their relevance in predicting opportunities of robot interaction.
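The abstract describes learning which visual cues predict rewarding robot interactions, framed as an MDP. The chapter's actual features, rewards, and scenario are not reproduced here; the following is only a minimal, self-contained sketch of the general idea using tabular Q-learning, with invented cue names, actions, and rewards:

```python
import random

# Hypothetical discretised visual cues (e.g. codebook indices of local
# descriptors) and candidate interactions. The reward table below is
# invented for illustration; in the chapter, rewards come from a
# simulated robot scenario.
CUES = ["handle-like", "flat-surface", "hole-like"]
ACTIONS = ["grasp", "push", "insert"]
REWARD = {
    ("handle-like", "grasp"): 1.0,
    ("flat-surface", "push"): 1.0,
    ("hole-like", "insert"): 1.0,
}

def train(episodes=2000, alpha=0.5, epsilon=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning over single-step episodes."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in CUES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(CUES)  # a cue appears in the visual field
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)  # explore
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])  # exploit
        r = REWARD.get((s, a), 0.0)
        # single-step episodes, so no bootstrapped next-state term
        q[(s, a)] += alpha * (r - q[(s, a)])
    return q

def policy(q, cue):
    """Greedy interaction choice for a given visual cue."""
    return max(ACTIONS, key=lambda act: q[(cue, act)])
```

After training, `policy(q, "handle-like")` selects the interaction whose reward the cue predicts, which mirrors the paper's claim that features are selected for their relevance in predicting interaction opportunities.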

PAGES

77-90

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-540-77915-5_6

DOI

http://dx.doi.org/10.1007/978-3-540-77915-5_6

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1007502705


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/17", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology and Cognitive Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/1701", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Psychology", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria", 
          "id": "http://www.grid.ac/institutes/grid.8684.2", 
          "name": [
            "Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Paletta", 
        "givenName": "Lucas", 
        "id": "sg:person.010060055125.29", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010060055125.29"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria", 
          "id": "http://www.grid.ac/institutes/grid.8684.2", 
          "name": [
            "Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Fritz", 
        "givenName": "Gerald", 
        "id": "sg:person.011015636117.31", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011015636117.31"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2008-01-01", 
    "datePublishedReg": "2008-01-01", 
    "description": "Recently, the aspect of visual perception has been explored in the context of Gibson\u2019s concept of affordances [1] in various ways [4-9]. In extension to existing functional views on visual feature representations, we focus on the importance of learning in perceptual cueing for the anticipation of opportunities for interaction of robotic agents. Furthermore, we propose that the originally defined representational concept for the perception of affordances - in terms of using either optical flow or heuristically determined 3D features of perceptual entities - should be generalized towards using arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and associated anticipated interactions, using visual information within the framework of Markov Decision Processes (MDPs). We emphasize a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. Affordance-like perception should enable systems to react to environment stimuli both more efficiently and autonomously, and provide a potential to plan on the basis of relevant responses to more complex perceptual configurations. We verify the concept with a concrete implementation of learning visual cues by reinforcement, applying state-of-the-art visual descriptors and regions of interest that were extracted from a simulated robot scenario and prove that these features were successfully selected for their relevance in predicting opportunities of robot interaction.", 
    "editor": [
      {
        "familyName": "Rome", 
        "givenName": "Erich", 
        "type": "Person"
      }, 
      {
        "familyName": "Hertzberg", 
        "givenName": "Joachim", 
        "type": "Person"
      }, 
      {
        "familyName": "Dorffner", 
        "givenName": "Georg", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-540-77915-5_6", 
    "inLanguage": "en", 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-540-77914-8", 
        "978-3-540-77915-5"
      ], 
      "name": "Towards Affordance-Based Robot Control", 
      "type": "Book"
    }, 
    "keywords": [
      "visual feature representations", 
      "visual cues", 
      "arbitrary visual feature representations", 
      "anticipation of opportunities", 
      "perception of affordances", 
      "art visual descriptors", 
      "perceptual entities", 
      "robot control architecture", 
      "perceptual cueing", 
      "affordance perception", 
      "perceptual configurations", 
      "visual perception", 
      "feature representation", 
      "Gibson\u2019s concept", 
      "robot interaction", 
      "representational concepts", 
      "visual information", 
      "visual entities", 
      "environment stimuli", 
      "robotic agents", 
      "perception", 
      "robot scenario", 
      "cueing", 
      "functional view", 
      "cues", 
      "affordances", 
      "learning", 
      "decision process", 
      "Markov decision process", 
      "visual descriptors", 
      "causal relationship", 
      "relevant responses", 
      "representation", 
      "context", 
      "predictive features", 
      "stimuli", 
      "anticipation", 
      "concrete implementation", 
      "control architecture", 
      "concept", 
      "recognition", 
      "optical flow", 
      "region of interest", 
      "new framework", 
      "interaction", 
      "relationship", 
      "framework", 
      "aspects", 
      "reinforcement", 
      "opportunities", 
      "view", 
      "features", 
      "information", 
      "relevance", 
      "way", 
      "importance", 
      "architecture", 
      "role", 
      "process", 
      "implementation", 
      "important role", 
      "descriptors", 
      "entities", 
      "response", 
      "scenarios", 
      "interest", 
      "terms", 
      "basis", 
      "system", 
      "state", 
      "extension", 
      "potential", 
      "configuration", 
      "region", 
      "agents", 
      "flow", 
      "determined 3D features", 
      "affordance-like visual entities", 
      "future robot control architectures", 
      "Affordance-like perception", 
      "complex perceptual configurations", 
      "simulated robot scenario"
    ], 
    "name": "Reinforcement Learning of Predictive Features in Affordance Perception", 
    "pagination": "77-90", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1007502705"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-540-77915-5_6"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-540-77915-5_6", 
      "https://app.dimensions.ai/details/publication/pub.1007502705"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-01-01T19:25", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/chapter/chapter_442.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-540-77915-5_6"
  }
]
 

Download the RDF metadata as: json-ld, nt, turtle, or xml.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-77915-5_6'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-77915-5_6'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-77915-5_6'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-77915-5_6'
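The same content negotiation can be scripted. A minimal sketch using only the Python standard library; it builds the request object (choosing the serialisation via the Accept header) without actually sending it, since fetching requires network access:

```python
from urllib.request import Request

# Accept headers for the serialisations SciGraph offers, matching the
# curl examples above.
FORMATS = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def scigraph_request(pub_id, fmt="json-ld"):
    """Build a content-negotiated request for a SciGraph record."""
    url = f"https://scigraph.springernature.com/{pub_id}"
    return Request(url, headers={"Accept": FORMATS[fmt]})

# Request the Turtle serialisation of this chapter's record:
req = scigraph_request("pub.10.1007/978-3-540-77915-5_6", "turtle")
```

Passing the resulting object to `urllib.request.urlopen(req)` would perform the actual fetch.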


 

This table displays all metadata directly associated to this object as RDF triples.

167 TRIPLES      23 PREDICATES      109 URIs      100 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-540-77915-5_6 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 anzsrc-for:17
4 anzsrc-for:1701
5 schema:author N8a43bd1c06dc41079474ae2ee28b21d3
6 schema:datePublished 2008-01-01
7 schema:datePublishedReg 2008-01-01
8 schema:description Recently, the aspect of visual perception has been explored in the context of Gibson’s concept of affordances [1] in various ways [4-9]. In extension to existing functional views on visual feature representations, we focus on the importance of learning in perceptual cueing for the anticipation of opportunities for interaction of robotic agents. Furthermore, we propose that the originally defined representational concept for the perception of affordances - in terms of using either optical flow or heuristically determined 3D features of perceptual entities - should be generalized towards using arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and associated anticipated interactions, using visual information within the framework of Markov Decision Processes (MDPs). We emphasize a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. Affordance-like perception should enable systems to react to environment stimuli both more efficiently and autonomously, and provide a potential to plan on the basis of relevant responses to more complex perceptual configurations. We verify the concept with a concrete implementation of learning visual cues by reinforcement, applying state-of-the-art visual descriptors and regions of interest that were extracted from a simulated robot scenario and prove that these features were successfully selected for their relevance in predicting opportunities of robot interaction.
9 schema:editor N3434067746404dedaba4cb08108cac6c
10 schema:genre chapter
11 schema:inLanguage en
12 schema:isAccessibleForFree false
13 schema:isPartOf N3281205a0c0246a79300d52a5f8e95d1
14 schema:keywords Affordance-like perception
15 Gibson’s concept
16 Markov decision process
17 affordance perception
18 affordance-like visual entities
19 affordances
20 agents
21 anticipation
22 anticipation of opportunities
23 arbitrary visual feature representations
24 architecture
25 art visual descriptors
26 aspects
27 basis
28 causal relationship
29 complex perceptual configurations
30 concept
31 concrete implementation
32 configuration
33 context
34 control architecture
35 cueing
36 cues
37 decision process
38 descriptors
39 determined 3D features
40 entities
41 environment stimuli
42 extension
43 feature representation
44 features
45 flow
46 framework
47 functional view
48 future robot control architectures
49 implementation
50 importance
51 important role
52 information
53 interaction
54 interest
55 learning
56 new framework
57 opportunities
58 optical flow
59 perception
60 perception of affordances
61 perceptual configurations
62 perceptual cueing
63 perceptual entities
64 potential
65 predictive features
66 process
67 recognition
68 region
69 region of interest
70 reinforcement
71 relationship
72 relevance
73 relevant responses
74 representation
75 representational concepts
76 response
77 robot control architecture
78 robot interaction
79 robot scenario
80 robotic agents
81 role
82 scenarios
83 simulated robot scenario
84 state
85 stimuli
86 system
87 terms
88 view
89 visual cues
90 visual descriptors
91 visual entities
92 visual feature representations
93 visual information
94 visual perception
95 way
96 schema:name Reinforcement Learning of Predictive Features in Affordance Perception
97 schema:pagination 77-90
98 schema:productId N17b43cc599ee4015b7beeba90f08dc77
99 Ndacdaf3da3c444e08553368914c24dc9
100 schema:publisher N6666323876d042bc98dcc026b5f5c8a9
101 schema:sameAs https://app.dimensions.ai/details/publication/pub.1007502705
102 https://doi.org/10.1007/978-3-540-77915-5_6
103 schema:sdDatePublished 2022-01-01T19:25
104 schema:sdLicense https://scigraph.springernature.com/explorer/license/
105 schema:sdPublisher N12424dfb917b4db8b4382c0f02b26baf
106 schema:url https://doi.org/10.1007/978-3-540-77915-5_6
107 sgo:license sg:explorer/license/
108 sgo:sdDataset chapters
109 rdf:type schema:Chapter
110 N12424dfb917b4db8b4382c0f02b26baf schema:name Springer Nature - SN SciGraph project
111 rdf:type schema:Organization
112 N17b43cc599ee4015b7beeba90f08dc77 schema:name dimensions_id
113 schema:value pub.1007502705
114 rdf:type schema:PropertyValue
115 N3281205a0c0246a79300d52a5f8e95d1 schema:isbn 978-3-540-77914-8
116 978-3-540-77915-5
117 schema:name Towards Affordance-Based Robot Control
118 rdf:type schema:Book
119 N3434067746404dedaba4cb08108cac6c rdf:first N448b148cdf174fe1bc4e71acf22e6a66
120 rdf:rest N7d99dcb7563d443c980db25b6bc0723e
121 N448b148cdf174fe1bc4e71acf22e6a66 schema:familyName Rome
122 schema:givenName Erich
123 rdf:type schema:Person
124 N6666323876d042bc98dcc026b5f5c8a9 schema:name Springer Nature
125 rdf:type schema:Organisation
126 N7d99dcb7563d443c980db25b6bc0723e rdf:first N9a869b71463c47b1bfe2da86fb7c46b6
127 rdf:rest Ne49532bc8cc2440db503b077706277c5
128 N852c6e9fab67480bb799677c133a5a68 rdf:first sg:person.011015636117.31
129 rdf:rest rdf:nil
130 N8a43bd1c06dc41079474ae2ee28b21d3 rdf:first sg:person.010060055125.29
131 rdf:rest N852c6e9fab67480bb799677c133a5a68
132 N9a869b71463c47b1bfe2da86fb7c46b6 schema:familyName Hertzberg
133 schema:givenName Joachim
134 rdf:type schema:Person
135 Nbb8e7385db3b4815be066bb81062d42e schema:familyName Dorffner
136 schema:givenName Georg
137 rdf:type schema:Person
138 Ndacdaf3da3c444e08553368914c24dc9 schema:name doi
139 schema:value 10.1007/978-3-540-77915-5_6
140 rdf:type schema:PropertyValue
141 Ne49532bc8cc2440db503b077706277c5 rdf:first Nbb8e7385db3b4815be066bb81062d42e
142 rdf:rest rdf:nil
143 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
144 schema:name Information and Computing Sciences
145 rdf:type schema:DefinedTerm
146 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
147 schema:name Artificial Intelligence and Image Processing
148 rdf:type schema:DefinedTerm
149 anzsrc-for:17 schema:inDefinedTermSet anzsrc-for:
150 schema:name Psychology and Cognitive Sciences
151 rdf:type schema:DefinedTerm
152 anzsrc-for:1701 schema:inDefinedTermSet anzsrc-for:
153 schema:name Psychology
154 rdf:type schema:DefinedTerm
155 sg:person.010060055125.29 schema:affiliation grid-institutes:grid.8684.2
156 schema:familyName Paletta
157 schema:givenName Lucas
158 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010060055125.29
159 rdf:type schema:Person
160 sg:person.011015636117.31 schema:affiliation grid-institutes:grid.8684.2
161 schema:familyName Fritz
162 schema:givenName Gerald
163 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011015636117.31
164 rdf:type schema:Person
165 grid-institutes:grid.8684.2 schema:alternateName Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria
166 schema:name Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria
167 rdf:type schema:Organization
 



