Object Detection Using Model-based Prediction and Motion Parallax


Ontology type: schema:Chapter     


Chapter Info

DATE

1992

AUTHORS

Stefan Carlsson , Jan-Olof Eklundh

ABSTRACT

When a visual observer moves forward, the projections of the objects in the scene will move over the visual image. If an object extends vertically from the ground, its image will move differently from the immediate background. This difference is called motion parallax [1, 2]. Much work in automatic visual navigation and obstacle detection has been concerned with computing motion fields or more or less complete 3-D information about the scene [3–5]. These approaches, in general, assume a very unconstrained environment and motion. If the environment is constrained, for example, motion occurs on a planar road, then this information can be exploited to give more direct solutions to, for example, obstacle detection [6]. Figure 6.1 shows superposed the images from two successive times for an observer translating relative to a planar road. The arrows show the displacement field, that is, the transformation of the image points between the successive time points.

PAGES

148-161

Book

TITLE

Vision-based Vehicle Guidance

ISBN

978-1-4612-7665-4
978-1-4612-2778-6

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-1-4612-2778-6_6

DOI

http://dx.doi.org/10.1007/978-1-4612-2778-6_6

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1002670094


Indexing Status: Check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: Browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service, such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "familyName": "Carlsson", 
        "givenName": "Stefan", 
        "id": "sg:person.015432652223.40", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015432652223.40"
        ], 
        "type": "Person"
      }, 
      {
        "familyName": "Eklundh", 
        "givenName": "Jan-Olof", 
        "id": "sg:person.014400652155.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "1992", 
    "datePublishedReg": "1992-01-01", 
    "description": "When a visual observer moves forward, the projections of the objects in the scene will move over the visual image. If an object extends vertically from the ground, its image will move differently from the immediate background. This difference is called motion parallax [1, 2]. Much work in automatic visual navigation and obstacle detection has been concerned with computing motion fields or more or less complete 3-D information about the scene [3\u20135]. These approaches, in general, assume a very unconstrained environment and motion. If the environment is constrained, for example, motion occurs on a planar road, then this information can be exploited to give more direct solutions to, for example, obstacle detection [6]. Figure 6.1 shows superposed the images from two successive times for an observer translating relative to a planar road. The arrows show the displacement field, that is, the transformation of the image points between the successive time points.", 
    "editor": [
      {
        "familyName": "Masaki", 
        "givenName": "Ichiro", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-1-4612-2778-6_6", 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-1-4612-7665-4", 
        "978-1-4612-2778-6"
      ], 
      "name": "Vision-based Vehicle Guidance", 
      "type": "Book"
    }, 
    "keywords": [
      "obstacle detection", 
      "planar road", 
      "motion parallax", 
      "visual navigation", 
      "unconstrained environments", 
      "object detection", 
      "image points", 
      "motion field", 
      "scene", 
      "images", 
      "visual images", 
      "objects", 
      "information", 
      "navigation", 
      "detection", 
      "parallax", 
      "environment", 
      "visual observers", 
      "direct solution", 
      "road", 
      "example", 
      "point", 
      "motion", 
      "solution", 
      "field", 
      "work", 
      "prediction", 
      "model", 
      "immediate background", 
      "show", 
      "projections", 
      "successive times", 
      "successive time points", 
      "time", 
      "observer", 
      "transformation", 
      "arrow", 
      "background", 
      "ground", 
      "displacement field", 
      "time points", 
      "differences", 
      "approach"
    ], 
    "name": "Object Detection Using Model-based Prediction and Motion Parallax", 
    "pagination": "148-161", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1002670094"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-1-4612-2778-6_6"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-1-4612-2778-6_6", 
      "https://app.dimensions.ai/details/publication/pub.1002670094"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-12-01T06:50", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/chapter/chapter_30.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-1-4612-2778-6_6"
  }
]
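Since the JSON-LD record above is plain JSON, it can be inspected with standard JSON tooling. A minimal sketch, using a trimmed copy of the record (field names taken verbatim from the record above; the full record carries more fields), that pulls out the title and the chapter's identifiers:

```python
import json

# A trimmed copy of the SciGraph JSON-LD record shown above.
record_jsonld = """
[
  {
    "id": "sg:pub.10.1007/978-1-4612-2778-6_6",
    "name": "Object Detection Using Model-based Prediction and Motion Parallax",
    "datePublished": "1992",
    "isPartOf": {"name": "Vision-based Vehicle Guidance", "type": "Book"},
    "productId": [
      {"name": "doi", "type": "PropertyValue",
       "value": ["10.1007/978-1-4612-2778-6_6"]},
      {"name": "dimensions_id", "type": "PropertyValue",
       "value": ["pub.1002670094"]}
    ]
  }
]
"""

# The top level is a JSON array; this record has a single entry.
record = json.loads(record_jsonld)[0]

# productId is a list of PropertyValue objects; index them by name.
ids = {p["name"]: p["value"][0] for p in record["productId"]}

print(record["name"])  # chapter title
print(ids["doi"])      # 10.1007/978-1-4612-2778-6_6
```

The same pattern applies to the other list-valued fields (`author`, `keywords`, `sameAs`): each is an ordinary JSON array once the record is parsed.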
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML (see license info).

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-1-4612-2778-6_6'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-1-4612-2778-6_6'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-1-4612-2778-6_6'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-1-4612-2778-6_6'
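All four curl commands above hit the same URL and select the format purely via HTTP content negotiation, so the same `Accept`-header trick works from any HTTP client. A minimal Python sketch using only the standard library (the request object is built but deliberately not sent here, so no network access is assumed):

```python
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1007/978-1-4612-2778-6_6"

# Equivalent of: curl -H 'Accept: application/ld+json' <URL>
# Swap the Accept value for 'application/n-triples', 'text/turtle',
# or 'application/rdf+xml' to get the other serializations.
req = urllib.request.Request(URL, headers={"Accept": "application/ld+json"})

# To actually fetch the record, uncomment:
# with urllib.request.urlopen(req) as resp:
#     body = resp.read().decode("utf-8")
```

The server inspects the `Accept` header and returns the matching RDF serialization; no query parameters are needed.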


 

This table displays all metadata directly associated with this object as RDF triples.

104 TRIPLES      22 PREDICATES      68 URIs      61 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-1-4612-2778-6_6 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N73ca79a40e684d31b27aeb00f19b3800
4 schema:datePublished 1992
5 schema:datePublishedReg 1992-01-01
6 schema:description When a visual observer moves forward, the projections of the objects in the scene will move over the visual image. If an object extends vertically from the ground, its image will move differently from the immediate background. This difference is called motion parallax [1, 2]. Much work in automatic visual navigation and obstacle detection has been concerned with computing motion fields or more or less complete 3-D information about the scene [3–5]. These approaches, in general, assume a very unconstrained environment and motion. If the environment is constrained, for example, motion occurs on a planar road, then this information can be exploited to give more direct solutions to, for example, obstacle detection [6]. Figure 6.1 shows superposed the images from two successive times for an observer translating relative to a planar road. The arrows show the displacement field, that is, the transformation of the image points between the successive time points.
7 schema:editor N20e931ce2aec41738e12c27f63a3bc6b
8 schema:genre chapter
9 schema:isAccessibleForFree false
10 schema:isPartOf N22fe22ca843b4c14a9500dcdb7afeee8
11 schema:keywords approach
12 arrow
13 background
14 detection
15 differences
16 direct solution
17 displacement field
18 environment
19 example
20 field
21 ground
22 image points
23 images
24 immediate background
25 information
26 model
27 motion
28 motion field
29 motion parallax
30 navigation
31 object detection
32 objects
33 observer
34 obstacle detection
35 parallax
36 planar road
37 point
38 prediction
39 projections
40 road
41 scene
42 show
43 solution
44 successive time points
45 successive times
46 time
47 time points
48 transformation
49 unconstrained environments
50 visual images
51 visual navigation
52 visual observers
53 work
54 schema:name Object Detection Using Model-based Prediction and Motion Parallax
55 schema:pagination 148-161
56 schema:productId N80c19404777b483f96b05799ca857160
57 Na8988d0c851d44dd974ce1aa8b4450bf
58 schema:publisher N74143d002d7f4d9b9830748a851b800d
59 schema:sameAs https://app.dimensions.ai/details/publication/pub.1002670094
60 https://doi.org/10.1007/978-1-4612-2778-6_6
61 schema:sdDatePublished 2022-12-01T06:50
62 schema:sdLicense https://scigraph.springernature.com/explorer/license/
63 schema:sdPublisher N6f1b2886f62744fe9475570e5048e6a3
64 schema:url https://doi.org/10.1007/978-1-4612-2778-6_6
65 sgo:license sg:explorer/license/
66 sgo:sdDataset chapters
67 rdf:type schema:Chapter
68 N0f1d4d43b658464186f69b0e69419456 rdf:first sg:person.014400652155.17
69 rdf:rest rdf:nil
70 N1c1da0cadb7c442e9a49d32d9de9c408 schema:familyName Masaki
71 schema:givenName Ichiro
72 rdf:type schema:Person
73 N20e931ce2aec41738e12c27f63a3bc6b rdf:first N1c1da0cadb7c442e9a49d32d9de9c408
74 rdf:rest rdf:nil
75 N22fe22ca843b4c14a9500dcdb7afeee8 schema:isbn 978-1-4612-2778-6
76 978-1-4612-7665-4
77 schema:name Vision-based Vehicle Guidance
78 rdf:type schema:Book
79 N6f1b2886f62744fe9475570e5048e6a3 schema:name Springer Nature - SN SciGraph project
80 rdf:type schema:Organization
81 N73ca79a40e684d31b27aeb00f19b3800 rdf:first sg:person.015432652223.40
82 rdf:rest N0f1d4d43b658464186f69b0e69419456
83 N74143d002d7f4d9b9830748a851b800d schema:name Springer Nature
84 rdf:type schema:Organisation
85 N80c19404777b483f96b05799ca857160 schema:name doi
86 schema:value 10.1007/978-1-4612-2778-6_6
87 rdf:type schema:PropertyValue
88 Na8988d0c851d44dd974ce1aa8b4450bf schema:name dimensions_id
89 schema:value pub.1002670094
90 rdf:type schema:PropertyValue
91 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
92 schema:name Information and Computing Sciences
93 rdf:type schema:DefinedTerm
94 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
95 schema:name Artificial Intelligence and Image Processing
96 rdf:type schema:DefinedTerm
97 sg:person.014400652155.17 schema:familyName Eklundh
98 schema:givenName Jan-Olof
99 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17
100 rdf:type schema:Person
101 sg:person.015432652223.40 schema:familyName Carlsson
102 schema:givenName Stefan
103 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015432652223.40
104 rdf:type schema:Person
 



