A model-free voting approach for integrating multiple cues


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

1998

AUTHORS

Carsten G. Bräutigam, Jan-Olof Eklundh, Henrik I. Christensen

ABSTRACT

Computer vision systems, such as “seeing” robots, aimed at functioning robustly in natural environments rich in information, benefit from relying on multiple cues. The problem of integrating these cues then becomes central. Existing approaches to cue integration have typically been based on physical and mathematical models for each cue, using estimation and optimization methods to fuse the parameterizations of these models. In this paper we consider an approach to fusion that does not rely on an underlying model for each cue. It is based on a simple binary voting scheme. A particular feature of such a scheme is that even incommensurable cues, such as intensity and surface orientation, can be fused in a direct way. Another feature is that uncertainties, and the need to normalize them, are avoided. Instead, consensus among several cues is considered non-accidental and is used as support for hypotheses about whatever structure is sought. It is shown that only a small set of cues needs to agree to obtain a reliable output. We apply the proposed technique to finding instances of planar surfaces in binocular images, without resorting to scene reconstruction or segmentation. The results are, of course, not comparable to the best results that can be obtained by complete scene reconstruction. However, they provide the most obvious instances of planes even under rather crude assumptions and with coarse algorithms. Even though the precise extent of the planar patches is not derived, good overall hypotheses are obtained. Our work applies voting schemes beyond earlier attempts, and also approaches the cue integration problem in a novel manner. Although further research is needed to establish the full applicability of our technique, our results so far seem quite useful.
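The binary voting idea described in the abstract can be illustrated with a toy sketch. This is a hypothetical illustration, not the authors' actual algorithm: each cue is reduced to a binary predicate that either supports a hypothesis or does not, and a hypothesis is accepted when enough cues agree, with no uncertainty modelling or normalization involved.

```python
# Illustrative sketch of binary cue voting (hypothetical, not the paper's
# implementation). Each cue votes 0/1 on each hypothesis; consensus above a
# threshold is treated as non-accidental support.

def vote(hypotheses, cues, threshold):
    """Return the hypotheses supported by at least `threshold` cues."""
    accepted = []
    for h in hypotheses:
        votes = sum(1 for cue in cues if cue(h))  # each cue is a binary predicate
        if votes >= threshold:
            accepted.append(h)
    return accepted

# Toy example: hypotheses are candidate plane orientations (in degrees), and
# three stand-in "cues" vote independently. The cues are incommensurable in
# the sense that each tests a different kind of evidence.
hypotheses = [0, 30, 60, 90]
cues = [
    lambda h: h <= 45,        # e.g. an intensity-based cue
    lambda h: h in (0, 30),   # e.g. a disparity-based cue
    lambda h: h % 30 == 0,    # e.g. a surface-orientation cue
]
print(vote(hypotheses, cues, threshold=2))  # → [0, 30]
```

Note that the cues never exchange numeric scores, only votes, which is what makes it possible to combine quantities measured in entirely different units.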

PAGES

734-750

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/bfb0055701

DOI

http://dx.doi.org/10.1007/bfb0055701

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1013349832



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception, Kungliga Tekniska H\u00f6gskolan, S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception, Kungliga Tekniska H\u00f6gskolan, S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Br\u00e4utigam", 
        "givenName": "Carsten G.", 
        "id": "sg:person.012441711672.93", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012441711672.93"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception, Kungliga Tekniska H\u00f6gskolan, S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception, Kungliga Tekniska H\u00f6gskolan, S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Eklundh", 
        "givenName": "Jan -Olof", 
        "id": "sg:person.014400652155.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computational Vision and Active Perception, Kungliga Tekniska H\u00f6gskolan, S-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Computational Vision and Active Perception, Kungliga Tekniska H\u00f6gskolan, S-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Christensen", 
        "givenName": "Henrik I.", 
        "id": "sg:person.01365426624.74", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01365426624.74"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "1998", 
    "datePublishedReg": "1998-01-01", 
    "description": "Computer vision systems, such as \u201cseeing\u201d robots, aimed at functioning robustly in a natural environment rich on information benefit from relying on multiple cues. Then the problem of integrating these become central. Existing approaches to cue integration have typically been based on physical and mathematical models for each cue and used estimation and optimization methods to fuse the parameterizations of these models. In this paper we consider an approach for fusion that does not rely on the underlying models for each cue. It is based on a simple binary voting scheme. A particular feature of such a scheme is that also incommensurable cues, such as intensity and surface orientation, can be fused in a direct way. Other features are that uncertainties and the normalization of them is avoided. Instead, consensus of several cues is considered as non-accidental and used as support for hypotheses of whatever structure is sought for. It is shown that only a small set of cues need to agree to obtain a reliable output. We apply the proposed technique to finding instances of planar surfaces in binocular images, without resorting to scene reconstruction or segmentation. The results are of course not comparable to the best results that can be obtained by complete scene reconstruction. However, they provide the most obvious instances of planes also with rather crude assumptions and coarse algorithms. Even though the precise extent of the planar patches is not derived good overall hypotheses are obtained. Our work applies voting schemes beyond earlier attempts, and also approaches the cue integration problem in a novel manner. Although further research is needed to establish the full applicability of our technique our results so far seem quite useful.", 
    "editor": [
      {
        "familyName": "Burkhardt", 
        "givenName": "Hans", 
        "type": "Person"
      }, 
      {
        "familyName": "Neumann", 
        "givenName": "Bernd", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/bfb0055701", 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-540-64569-6", 
        "978-3-540-69354-3"
      ], 
      "name": "Computer Vision \u2014 ECCV'98", 
      "type": "Book"
    }, 
    "keywords": [
      "voting scheme", 
      "computer vision system", 
      "complete scene reconstruction", 
      "multiple cues", 
      "scene reconstruction", 
      "vision system", 
      "coarse algorithm", 
      "voting approach", 
      "binocular images", 
      "planar patches", 
      "integration problems", 
      "mathematical model", 
      "optimization method", 
      "small set", 
      "reliable output", 
      "information benefits", 
      "full applicability", 
      "cue integration", 
      "scheme", 
      "novel manner", 
      "better results", 
      "robot", 
      "segmentation", 
      "instances", 
      "crude assumptions", 
      "direct way", 
      "algorithm", 
      "planar surface", 
      "problem", 
      "features", 
      "images", 
      "particular features", 
      "model", 
      "technique", 
      "reconstruction", 
      "set", 
      "obvious instances", 
      "integration", 
      "environment", 
      "parameterization", 
      "estimation", 
      "approach", 
      "fusion", 
      "uncertainty", 
      "assumption", 
      "system", 
      "early attempts", 
      "cues", 
      "surface orientation", 
      "applicability", 
      "plane", 
      "output", 
      "way", 
      "results", 
      "work", 
      "support", 
      "method", 
      "normalization", 
      "research", 
      "benefits", 
      "manner", 
      "structure", 
      "further research", 
      "natural environment", 
      "orientation", 
      "patches", 
      "consensus", 
      "surface", 
      "intensity", 
      "attempt", 
      "hypothesis", 
      "precise extent", 
      "course", 
      "overall hypothesis", 
      "extent", 
      "paper"
    ], 
    "name": "A model-free voting approach for integrating multiple cues", 
    "pagination": "734-750", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1013349832"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/bfb0055701"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/bfb0055701", 
      "https://app.dimensions.ai/details/publication/pub.1013349832"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-11-24T21:16", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221124/entities/gbq_results/chapter/chapter_324.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/bfb0055701"
  }
]
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML. License info

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/bfb0055701'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/bfb0055701'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/bfb0055701'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/bfb0055701'
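The same content negotiation works from any HTTP client, since the format is selected purely by the `Accept` header. As a sketch, the Python below builds the equivalent request with the standard library's `urllib`; it only prints the header it would send (so it runs offline), with the actual fetch left commented out.

```python
import urllib.request

# Sketch of the curl calls above using urllib: content negotiation is just an
# Accept header, so swap in any of the MIME types listed above.
FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf/xml": "application/rdf+xml",
}

def build_request(url, fmt):
    """Build a GET request asking for the given RDF serialization."""
    return urllib.request.Request(url, headers={"Accept": FORMATS[fmt]})

req = build_request("https://scigraph.springernature.com/pub.10.1007/bfb0055701",
                    "turtle")
print(req.get_header("Accept"))  # → text/turtle

# To actually fetch the record (requires network access):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```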


 

This table displays all metadata directly associated with this object as RDF triples.

154 TRIPLES      22 PREDICATES      101 URIs      94 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/bfb0055701 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Ne5cd64d984a4425594e96868b79e7b6e
4 schema:datePublished 1998
5 schema:datePublishedReg 1998-01-01
6 schema:description Computer vision systems, such as “seeing” robots, aimed at functioning robustly in a natural environment rich on information benefit from relying on multiple cues. Then the problem of integrating these become central. Existing approaches to cue integration have typically been based on physical and mathematical models for each cue and used estimation and optimization methods to fuse the parameterizations of these models. In this paper we consider an approach for fusion that does not rely on the underlying models for each cue. It is based on a simple binary voting scheme. A particular feature of such a scheme is that also incommensurable cues, such as intensity and surface orientation, can be fused in a direct way. Other features are that uncertainties and the normalization of them is avoided. Instead, consensus of several cues is considered as non-accidental and used as support for hypotheses of whatever structure is sought for. It is shown that only a small set of cues need to agree to obtain a reliable output. We apply the proposed technique to finding instances of planar surfaces in binocular images, without resorting to scene reconstruction or segmentation. The results are of course not comparable to the best results that can be obtained by complete scene reconstruction. However, they provide the most obvious instances of planes also with rather crude assumptions and coarse algorithms. Even though the precise extent of the planar patches is not derived good overall hypotheses are obtained. Our work applies voting schemes beyond earlier attempts, and also approaches the cue integration problem in a novel manner. Although further research is needed to establish the full applicability of our technique our results so far seem quite useful.
7 schema:editor N5882963362d64e0cb6ac6e827560f094
8 schema:genre chapter
9 schema:isAccessibleForFree true
10 schema:isPartOf N5adbb457a0d841c0a74184a9511e0455
11 schema:keywords algorithm
12 applicability
13 approach
14 assumption
15 attempt
16 benefits
17 better results
18 binocular images
19 coarse algorithm
20 complete scene reconstruction
21 computer vision system
22 consensus
23 course
24 crude assumptions
25 cue integration
26 cues
27 direct way
28 early attempts
29 environment
30 estimation
31 extent
32 features
33 full applicability
34 further research
35 fusion
36 hypothesis
37 images
38 information benefits
39 instances
40 integration
41 integration problems
42 intensity
43 manner
44 mathematical model
45 method
46 model
47 multiple cues
48 natural environment
49 normalization
50 novel manner
51 obvious instances
52 optimization method
53 orientation
54 output
55 overall hypothesis
56 paper
57 parameterization
58 particular features
59 patches
60 planar patches
61 planar surface
62 plane
63 precise extent
64 problem
65 reconstruction
66 reliable output
67 research
68 results
69 robot
70 scene reconstruction
71 scheme
72 segmentation
73 set
74 small set
75 structure
76 support
77 surface
78 surface orientation
79 system
80 technique
81 uncertainty
82 vision system
83 voting approach
84 voting scheme
85 way
86 work
87 schema:name A model-free voting approach for integrating multiple cues
88 schema:pagination 734-750
89 schema:productId N908e136b06cb4f62a8f80da6a6dd8693
90 Nc1a579bd5b97423399852fc884d7b982
91 schema:publisher N5332b44a05f44010b991ac7dab2ea15e
92 schema:sameAs https://app.dimensions.ai/details/publication/pub.1013349832
93 https://doi.org/10.1007/bfb0055701
94 schema:sdDatePublished 2022-11-24T21:16
95 schema:sdLicense https://scigraph.springernature.com/explorer/license/
96 schema:sdPublisher N522338e008a347048d7016abbae59ced
97 schema:url https://doi.org/10.1007/bfb0055701
98 sgo:license sg:explorer/license/
99 sgo:sdDataset chapters
100 rdf:type schema:Chapter
101 N10402b2893f24cc4aae1b88b03bcbffe rdf:first sg:person.01365426624.74
102 rdf:rest rdf:nil
103 N522338e008a347048d7016abbae59ced schema:name Springer Nature - SN SciGraph project
104 rdf:type schema:Organization
105 N5332b44a05f44010b991ac7dab2ea15e schema:name Springer Nature
106 rdf:type schema:Organisation
107 N557d5964da9b4912856176fcf2c66ef8 rdf:first Nbe8f2de8b31a4ee280a785cfc50403ed
108 rdf:rest rdf:nil
109 N5882963362d64e0cb6ac6e827560f094 rdf:first N744f3e5b4e3a461b8a3bc8f4491447a4
110 rdf:rest N557d5964da9b4912856176fcf2c66ef8
111 N5adbb457a0d841c0a74184a9511e0455 schema:isbn 978-3-540-64569-6
112 978-3-540-69354-3
113 schema:name Computer Vision — ECCV'98
114 rdf:type schema:Book
115 N6a788723a1eb4b959e3fc8bf9b244ae6 rdf:first sg:person.014400652155.17
116 rdf:rest N10402b2893f24cc4aae1b88b03bcbffe
117 N744f3e5b4e3a461b8a3bc8f4491447a4 schema:familyName Burkhardt
118 schema:givenName Hans
119 rdf:type schema:Person
120 N908e136b06cb4f62a8f80da6a6dd8693 schema:name doi
121 schema:value 10.1007/bfb0055701
122 rdf:type schema:PropertyValue
123 Nbe8f2de8b31a4ee280a785cfc50403ed schema:familyName Neumann
124 schema:givenName Bernd
125 rdf:type schema:Person
126 Nc1a579bd5b97423399852fc884d7b982 schema:name dimensions_id
127 schema:value pub.1013349832
128 rdf:type schema:PropertyValue
129 Ne5cd64d984a4425594e96868b79e7b6e rdf:first sg:person.012441711672.93
130 rdf:rest N6a788723a1eb4b959e3fc8bf9b244ae6
131 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
132 schema:name Information and Computing Sciences
133 rdf:type schema:DefinedTerm
134 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
135 schema:name Artificial Intelligence and Image Processing
136 rdf:type schema:DefinedTerm
137 sg:person.012441711672.93 schema:affiliation grid-institutes:grid.5037.1
138 schema:familyName Bräutigam
139 schema:givenName Carsten G.
140 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012441711672.93
141 rdf:type schema:Person
142 sg:person.01365426624.74 schema:affiliation grid-institutes:grid.5037.1
143 schema:familyName Christensen
144 schema:givenName Henrik I.
145 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01365426624.74
146 rdf:type schema:Person
147 sg:person.014400652155.17 schema:affiliation grid-institutes:grid.5037.1
148 schema:familyName Eklundh
149 schema:givenName Jan -Olof
150 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17
151 rdf:type schema:Person
152 grid-institutes:grid.5037.1 schema:alternateName Computational Vision and Active Perception, Kungliga Tekniska Högskolan, S-100 44, Stockholm, Sweden
153 schema:name Computational Vision and Active Perception, Kungliga Tekniska Högskolan, S-100 44, Stockholm, Sweden
154 rdf:type schema:Organization
 



