Navigation and Planning in an Unknown Environment Using Vision and a Cognitive Map


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2006

AUTHORS

Nicolas Cuperlier , Mathias Quoy , Philippe Gaussier

ABSTRACT

We present a framework for simultaneous localization and map building in an unknown environment based on vision and dead reckoning. An omnidirectional camera provides a panoramic image from which landmarks are extracted without being defined a priori. The set of landmarks and their azimuths relative to north, given by a compass, defines a particular location without any need for an external map of the environment. Transitions between two locations are explicitly coded and used simultaneously in two layers of our architecture: first, to construct during exploration (latent learning) a graph of the environment (our cognitive map), whose links are reinforced each time a path is used; and second, to be associated, on another layer, with the integrated movement performed when going from one place to the other. During the planning phase, the activity of the transition coding for the required goal spreads along the arcs of the cognitive map, giving transitions (nodes) closer to the goal a higher value. We show that, when planning to reach a goal in this environment, the interaction of these two levels can lead to the selection of multiple transitions, namely the most activated ones from the current place. These candidate transitions are finally merged by a dynamical system (a neural field), whose stable solution gives a unique movement vector to apply. Experimental results underline the advantage of such a soft competition among transition information over a strict one, yielding more accurate generalization in movement selection.
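The planning step described in the abstract (goal activity spreading along the arcs of the cognitive map, so that transitions closer to the goal receive higher values) can be sketched as a simple value diffusion over a graph. The sketch below is an illustrative reconstruction, not the authors' implementation; the toy graph, the decay factor, and the hard argmax selection (which the paper replaces by a neural-field soft competition) are all assumptions made for the example.

```python
# Hypothetical sketch of goal-activity diffusion on a cognitive-map graph.
# Each node stands for a transition; activity propagates backwards from the
# goal, attenuated at each hop, so nodes nearer the goal end up more active.

def diffuse_goal_activity(graph, goal, decay=0.8, iterations=20):
    """graph: dict mapping node -> list of successor nodes.
    Returns dict mapping node -> activity in [0, 1]."""
    activity = {node: 0.0 for node in graph}
    activity[goal] = 1.0
    for _ in range(iterations):
        updated = {}
        for node, successors in graph.items():
            if node == goal:
                updated[node] = 1.0
                continue
            # A node's activity is the decayed maximum of its successors'.
            best = max((activity[s] for s in successors), default=0.0)
            updated[node] = decay * best
        activity = updated
    return activity

def select_transition(graph, current, activity):
    """The paper merges candidate transitions in a neural field; as a
    crude stand-in, pick the most active successor of the current place."""
    return max(graph[current], key=lambda s: activity[s])

# Toy cognitive map (invented): two routes from A to the goal D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
act = diffuse_goal_activity(graph, "D")
print(act)  # activity falls off with graph distance from the goal
print(select_transition(graph, "A", act))
```

In the paper, the argmax above is replaced by a dynamical neural field that merges all sufficiently activated transitions, which is what gives the reported gain in generalization over a strict winner-take-all choice.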

PAGES

129-142

Book

TITLE

European Robotics Symposium 2006

ISBN

3-540-32689-8

Author Affiliations

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/11681120_11

DOI

http://dx.doi.org/10.1007/11681120_11

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1035288339



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service: JSON-LD Playground Google SDTT

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Cergy-Pontoise University", 
          "id": "https://www.grid.ac/institutes/grid.7901.f", 
          "name": [
            "ETIS-UMR 8051 Université de Cergy-Pontoise - ENSEA 6, Avenue du Ponceau 95014 Cergy-Pontoise, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Cuperlier", 
        "givenName": "Nicolas", 
        "id": "sg:person.0625043376.34", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0625043376.34"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Cergy-Pontoise University", 
          "id": "https://www.grid.ac/institutes/grid.7901.f", 
          "name": [
            "ETIS-UMR 8051 Université de Cergy-Pontoise - ENSEA 6, Avenue du Ponceau 95014 Cergy-Pontoise, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Quoy", 
        "givenName": "Mathias", 
        "id": "sg:person.010556353111.37", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010556353111.37"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Cergy-Pontoise University", 
          "id": "https://www.grid.ac/institutes/grid.7901.f", 
          "name": [
            "ETIS-UMR 8051 Université de Cergy-Pontoise - ENSEA 6, Avenue du Ponceau 95014 Cergy-Pontoise, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Gaussier", 
        "givenName": "Philippe", 
        "id": "sg:person.01041272554.05", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01041272554.05"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2006", 
    "datePublishedReg": "2006-01-01", 
    "description": "We present a framework for Simultaneous Localization and Map building of an unknown environment based on vision and dead-reckoning systems. An omnidirectional camera gives a panoramic image from which no a priori defined landmarks are extracted. The set of landmarks and their azimuth relative to the north given by a compass defines a particular location without any need of an external environment map. Transitions between two locations are explicitly coded. They are simultaneously used in two layers of our architecture. First to construct, during exploration (latent learning), a graph (our cognitive map) of the environment where the links are reinforced when the path is used. And second, to be associated, on an another layer, with the integrated movement used for going from one place to the other. During the planning phase, the activity of transition coding for the required goal in the cognitive map spreads along the arcs of this graph giving transitions (nodes) an higher value to the ones closer from this goal. We will show that, when planning to reach a goal in this environment is needed, the interactions of these two levels can lead to the selection of multiple transitions corresponding to the most activated ones according to the current place. Those proposed transitions are finally exploited by a dynamical system (neural field) merging these informations. Stable solution of this system gives a unique movement vector to apply. Experimental results underline the interest of such a soft competition of transition information over a strict one to get a more accurate generalization on the movement selection.", 
    "editor": [
      {
        "familyName": "Christensen", 
        "givenName": "Henrik I.", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/11681120_11", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "3-540-32689-8"
      ], 
      "name": "European Robotics Symposium 2006", 
      "type": "Book"
    }, 
    "name": "Navigation and Planning in an Unknown Environment Using Vision and a Cognitive Map", 
    "pagination": "129-142", 
    "productId": [
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/11681120_11"
        ]
      }, 
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "a8190f039d44cbafdc0958d8f6f244d2a8a59761c38acd0033e39d9d286d0c4b"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1035288339"
        ]
      }
    ], 
    "publisher": {
      "location": "Berlin/Heidelberg", 
      "name": "Springer-Verlag", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/11681120_11", 
      "https://app.dimensions.ai/details/publication/pub.1035288339"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2019-04-15T23:40", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8697_00000060.jsonl", 
    "type": "Chapter", 
    "url": "http://link.springer.com/10.1007/11681120_11"
  }
]
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/11681120_11'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/11681120_11'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/11681120_11'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/11681120_11'
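The same content negotiation shown in the curl commands can be done from Python with only the standard library. The endpoint URL comes from the record above; the `summarize` helper is an illustrative assumption that relies on the JSON-LD shape shown earlier (an array containing one record with `name`, `datePublished`, and `author` fields).

```python
import json
import urllib.request

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/11681120_11"

def fetch_jsonld(url):
    """Request the JSON-LD representation via an Accept header,
    mirroring: curl -H 'Accept: application/ld+json' <url>."""
    req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def summarize(records):
    """Pull a few fields out of a SciGraph JSON-LD array
    (shape as in the record shown above)."""
    rec = records[0]
    return {
        "title": rec.get("name"),
        "year": rec.get("datePublished"),
        "authors": [f"{a['givenName']} {a['familyName']}"
                    for a in rec.get("author", [])],
    }

# Offline demonstration on a fragment of the record above
# (call fetch_jsonld(SCIGRAPH_URL) for the live data):
sample = [{
    "name": "Navigation and Planning in an Unknown Environment "
            "Using Vision and a Cognitive Map",
    "datePublished": "2006",
    "author": [{"givenName": "Nicolas", "familyName": "Cuperlier"}],
}]
print(summarize(sample))
```

Swapping the Accept header for `application/n-triples`, `text/turtle`, or `application/rdf+xml` selects the other serializations, exactly as in the curl examples; only the JSON-LD variant parses directly with the `json` module.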


 

This table displays all metadata directly associated to this object as RDF triples.

78 TRIPLES      22 PREDICATES      27 URIs      20 LITERALS      8 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/11681120_11 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N63f1ca19736e4efe9704b26f57aed4c7
4 schema:datePublished 2006
5 schema:datePublishedReg 2006-01-01
6 schema:description We present a framework for Simultaneous Localization and Map building of an unknown environment based on vision and dead-reckoning systems. An omnidirectional camera gives a panoramic image from which no a priori defined landmarks are extracted. The set of landmarks and their azimuth relative to the north given by a compass defines a particular location without any need of an external environment map. Transitions between two locations are explicitly coded. They are simultaneously used in two layers of our architecture. First to construct, during exploration (latent learning), a graph (our cognitive map) of the environment where the links are reinforced when the path is used. And second, to be associated, on an another layer, with the integrated movement used for going from one place to the other. During the planning phase, the activity of transition coding for the required goal in the cognitive map spreads along the arcs of this graph giving transitions (nodes) an higher value to the ones closer from this goal. We will show that, when planning to reach a goal in this environment is needed, the interactions of these two levels can lead to the selection of multiple transitions corresponding to the most activated ones according to the current place. Those proposed transitions are finally exploited by a dynamical system (neural field) merging these informations. Stable solution of this system gives a unique movement vector to apply. Experimental results underline the interest of such a soft competition of transition information over a strict one to get a more accurate generalization on the movement selection.
7 schema:editor N07a5436ef4d44138be2427393e430682
8 schema:genre chapter
9 schema:inLanguage en
10 schema:isAccessibleForFree true
11 schema:isPartOf Ndea8a70cfa1f433490c296a60aea1a85
12 schema:name Navigation and Planning in an Unknown Environment Using Vision and a Cognitive Map
13 schema:pagination 129-142
14 schema:productId N0a76283ac213420282a72b33f0da430e
15 N43a0e3fa731a477c96ccb80b00ad280f
16 N53e3fd58c6c94172956671bba4e3038e
17 schema:publisher Nc7fb3500e71b44e482c95ba246bb5f74
18 schema:sameAs https://app.dimensions.ai/details/publication/pub.1035288339
19 https://doi.org/10.1007/11681120_11
20 schema:sdDatePublished 2019-04-15T23:40
21 schema:sdLicense https://scigraph.springernature.com/explorer/license/
22 schema:sdPublisher N4578335d9b464c1782b97176efc9997b
23 schema:url http://link.springer.com/10.1007/11681120_11
24 sgo:license sg:explorer/license/
25 sgo:sdDataset chapters
26 rdf:type schema:Chapter
27 N07a5436ef4d44138be2427393e430682 rdf:first N9bc9da7482554192a6c2914c97c9098d
28 rdf:rest rdf:nil
29 N0a76283ac213420282a72b33f0da430e schema:name dimensions_id
30 schema:value pub.1035288339
31 rdf:type schema:PropertyValue
32 N43a0e3fa731a477c96ccb80b00ad280f schema:name doi
33 schema:value 10.1007/11681120_11
34 rdf:type schema:PropertyValue
35 N4578335d9b464c1782b97176efc9997b schema:name Springer Nature - SN SciGraph project
36 rdf:type schema:Organization
37 N53e3fd58c6c94172956671bba4e3038e schema:name readcube_id
38 schema:value a8190f039d44cbafdc0958d8f6f244d2a8a59761c38acd0033e39d9d286d0c4b
39 rdf:type schema:PropertyValue
40 N5faee7a28c68456f919e805fe2c91664 rdf:first sg:person.01041272554.05
41 rdf:rest rdf:nil
42 N63f1ca19736e4efe9704b26f57aed4c7 rdf:first sg:person.0625043376.34
43 rdf:rest Nbba47ca260e34686a498be5e445f2bb0
44 N9bc9da7482554192a6c2914c97c9098d schema:familyName Christensen
45 schema:givenName Henrik I.
46 rdf:type schema:Person
47 Nbba47ca260e34686a498be5e445f2bb0 rdf:first sg:person.010556353111.37
48 rdf:rest N5faee7a28c68456f919e805fe2c91664
49 Nc7fb3500e71b44e482c95ba246bb5f74 schema:location Berlin/Heidelberg
50 schema:name Springer-Verlag
51 rdf:type schema:Organisation
52 Ndea8a70cfa1f433490c296a60aea1a85 schema:isbn 3-540-32689-8
53 schema:name European Robotics Symposium 2006
54 rdf:type schema:Book
55 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
56 schema:name Information and Computing Sciences
57 rdf:type schema:DefinedTerm
58 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
59 schema:name Artificial Intelligence and Image Processing
60 rdf:type schema:DefinedTerm
61 sg:person.01041272554.05 schema:affiliation https://www.grid.ac/institutes/grid.7901.f
62 schema:familyName Gaussier
63 schema:givenName Philippe
64 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01041272554.05
65 rdf:type schema:Person
66 sg:person.010556353111.37 schema:affiliation https://www.grid.ac/institutes/grid.7901.f
67 schema:familyName Quoy
68 schema:givenName Mathias
69 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010556353111.37
70 rdf:type schema:Person
71 sg:person.0625043376.34 schema:affiliation https://www.grid.ac/institutes/grid.7901.f
72 schema:familyName Cuperlier
73 schema:givenName Nicolas
74 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0625043376.34
75 rdf:type schema:Person
76 https://www.grid.ac/institutes/grid.7901.f schema:alternateName Cergy-Pontoise University
77 schema:name ETIS-UMR 8051 Université de Cergy-Pontoise - ENSEA 6, Avenue du Ponceau 95014 Cergy-Pontoise, France
78 rdf:type schema:Organization
 



