Probabilistic and Voting Approaches to Cue Integration for Figure-Ground Segmentation


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2002-04-29

AUTHORS

Eric Hayman , Jan-Olof Eklundh

ABSTRACT

This paper describes techniques for fusing the output of multiple cues to robustly and accurately segment foreground objects from the background in image sequences. Two different methods for cue integration are presented and tested. The first is a probabilistic approach which at each pixel computes the likelihood of observations over all cues before assigning pixels to foreground or background layers using Bayes Rule. The second method allows each cue to make a decision independent of the other cues before fusing their outputs with a weighted sum. A further important contribution of our work concerns demonstrating how models for some cues can be learnt and subsequently adapted online. In particular, regions of coherent motion are used to train distributions for colour and for a simple texture descriptor. An additional aspect of our framework is in providing mechanisms for suppressing cues when they are believed to be unreliable, for instance during training or when they disagree with the general consensus. Results on extended video sequences are presented.
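As a rough illustration of the two fusion schemes the abstract describes (a minimal sketch, not the authors' implementation; the cue names and numbers below are hypothetical), the per-pixel combination rules can be written as:

```python
def bayes_fuse(lik_fg, lik_bg, prior_fg=0.5):
    """Probabilistic fusion: multiply per-cue likelihoods (assuming the
    cues are conditionally independent given the layer), then apply
    Bayes' rule to obtain the posterior foreground probability."""
    p_fg, p_bg = prior_fg, 1.0 - prior_fg
    for lf, lb in zip(lik_fg, lik_bg):
        p_fg *= lf
        p_bg *= lb
    return p_fg / (p_fg + p_bg)


def vote_fuse(decisions, weights):
    """Voting fusion: each cue first makes its own soft decision in
    [0, 1]; the decisions are then combined with a normalised
    weighted sum."""
    return sum(w * d for w, d in zip(weights, decisions)) / sum(weights)


# Three hypothetical cues (e.g. motion, colour, texture) at one pixel.
posterior = bayes_fuse(lik_fg=[0.9, 0.8, 0.6], lik_bg=[0.1, 0.3, 0.5])
vote = vote_fuse(decisions=[1.0, 1.0, 0.0], weights=[0.5, 0.3, 0.2])
```

In the probabilistic scheme a single unreliable cue can still be outvoted by the product of the others, whereas the weighted sum lets each cue's influence be tuned (or suppressed) independently, matching the cue-suppression mechanism the abstract mentions.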

PAGES

469-486

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/3-540-47977-5_31

DOI

http://dx.doi.org/10.1007/3-540-47977-5_31

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1050997831


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Dept. of Numerical Analysis and Computer Science KTH, Computational Vision and Active Perception Laboratory (CVAP), SE-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Dept. of Numerical Analysis and Computer Science KTH, Computational Vision and Active Perception Laboratory (CVAP), SE-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Hayman", 
        "givenName": "Eric", 
        "id": "sg:person.010203264647.00", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010203264647.00"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Dept. of Numerical Analysis and Computer Science KTH, Computational Vision and Active Perception Laboratory (CVAP), SE-100 44, Stockholm, Sweden", 
          "id": "http://www.grid.ac/institutes/grid.5037.1", 
          "name": [
            "Dept. of Numerical Analysis and Computer Science KTH, Computational Vision and Active Perception Laboratory (CVAP), SE-100 44, Stockholm, Sweden"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Eklundh", 
        "givenName": "Jan-Olof", 
        "id": "sg:person.014400652155.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2002-04-29", 
    "datePublishedReg": "2002-04-29", 
    "description": "This paper describes techniques for fusing the output of multiple cues to robustly and accurately segment foreground objects from the background in image sequences. Two different methods for cue integration are presented and tested. The first is a probabilistic approach which at each pixel computes the likelihood of observations over all cues before assigning pixels to foreground or background layers using Bayes Rule. The second method allows each cue to make a decision independent of the other cues before fusing their outputs with a weighted sum. A further important contribution of our work concerns demonstrating how models for some cues can be learnt and subsequently adapted online. In particular, regions of coherent motion are used to train distributions for colour and for a simple texture descriptor. An additional aspect of our framework is in providing mechanisms for suppressing cues when they are believed to be unreliable, for instance during training or when they disagree with the general consensus. Results on extended video sequences are presented.", 
    "editor": [
      {
        "familyName": "Heyden", 
        "givenName": "Anders", 
        "type": "Person"
      }, 
      {
        "familyName": "Sparr", 
        "givenName": "Gunnar", 
        "type": "Person"
      }, 
      {
        "familyName": "Nielsen", 
        "givenName": "Mads", 
        "type": "Person"
      }, 
      {
        "familyName": "Johansen", 
        "givenName": "Peter", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/3-540-47977-5_31", 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-540-43746-8", 
        "978-3-540-47977-2"
      ], 
      "name": "Computer Vision \u2014 ECCV 2002", 
      "type": "Book"
    }, 
    "keywords": [
      "extended video sequences", 
      "segment foreground objects", 
      "simple texture descriptor", 
      "figure-ground segmentation", 
      "video sequences", 
      "foreground objects", 
      "voting approach", 
      "image sequences", 
      "texture descriptors", 
      "cue integration", 
      "background layer", 
      "multiple cues", 
      "probabilistic approach", 
      "pixels", 
      "further important contribution", 
      "Bayes rule", 
      "likelihood of observations", 
      "weighted sum", 
      "segmentation", 
      "work concerns", 
      "integration", 
      "second method", 
      "descriptors", 
      "objects", 
      "framework", 
      "additional aspects", 
      "output", 
      "instances", 
      "rules", 
      "different methods", 
      "method", 
      "decisions", 
      "cues", 
      "training", 
      "technique", 
      "sequence", 
      "model", 
      "aspects", 
      "color", 
      "motion", 
      "important contribution", 
      "concern", 
      "results", 
      "sum", 
      "contribution", 
      "consensus", 
      "background", 
      "layer", 
      "coherent motion", 
      "general consensus", 
      "mechanism", 
      "likelihood", 
      "distribution", 
      "observations", 
      "region", 
      "approach", 
      "paper"
    ], 
    "name": "Probabilistic and Voting Approaches to Cue Integration for Figure-Ground Segmentation", 
    "pagination": "469-486", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1050997831"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/3-540-47977-5_31"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/3-540-47977-5_31", 
      "https://app.dimensions.ai/details/publication/pub.1050997831"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-12-01T06:54", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/chapter/chapter_445.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/3-540-47977-5_31"
  }
]
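Since JSON-LD is plain JSON, the record above can be inspected with any JSON parser. A small Python sketch (the excerpt below reproduces only the fields the sketch reads, copied from the record shown here):

```python
import json

# Excerpt of the SciGraph record above, limited to the fields used below.
record_text = """[{
  "name": "Probabilistic and Voting Approaches to Cue Integration for Figure-Ground Segmentation",
  "datePublished": "2002-04-29",
  "productId": [
    {"name": "dimensions_id", "type": "PropertyValue", "value": ["pub.1050997831"]},
    {"name": "doi", "type": "PropertyValue", "value": ["10.1007/3-540-47977-5_31"]}
  ]
}]"""


def product_id(record, id_name):
    """Look up a productId entry (e.g. 'doi', 'dimensions_id') by name."""
    for pid in record.get("productId", []):
        if pid.get("name") == id_name:
            return pid["value"][0]
    return None


record = json.loads(record_text)[0]
doi = product_id(record, "doi")
```

The same lookup works on the full record as returned by the API, since `productId` keeps the name/value structure shown in the JSON-LD above.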
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML (see the license info).

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/3-540-47977-5_31'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/3-540-47977-5_31'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/3-540-47977-5_31'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/3-540-47977-5_31'
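The same content negotiation works from any HTTP client. A Python sketch using only the standard library (the Accept values mirror the curl examples above; the actual download in the `__main__` block requires network access):

```python
import urllib.request

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/3-540-47977-5_31"

# MIME types accepted by the endpoint, as listed above.
FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}


def make_request(fmt):
    """Build a request negotiating the desired RDF serialisation."""
    req = urllib.request.Request(SCIGRAPH_URL)
    req.add_header("Accept", FORMATS[fmt])
    return req


if __name__ == "__main__":
    with urllib.request.urlopen(make_request("json-ld")) as resp:
        print(resp.read().decode("utf-8"))
```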


 

This table displays all metadata directly associated with this object as RDF triples.

138 TRIPLES      22 PREDICATES      81 URIs      74 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/3-540-47977-5_31 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N9da04f9a1b2542dc81382dfd2e502fe6
4 schema:datePublished 2002-04-29
5 schema:datePublishedReg 2002-04-29
6 schema:description This paper describes techniques for fusing the output of multiple cues to robustly and accurately segment foreground objects from the background in image sequences. Two different methods for cue integration are presented and tested. The first is a probabilistic approach which at each pixel computes the likelihood of observations over all cues before assigning pixels to foreground or background layers using Bayes Rule. The second method allows each cue to make a decision independent of the other cues before fusing their outputs with a weighted sum. A further important contribution of our work concerns demonstrating how models for some cues can be learnt and subsequently adapted online. In particular, regions of coherent motion are used to train distributions for colour and for a simple texture descriptor. An additional aspect of our framework is in providing mechanisms for suppressing cues when they are believed to be unreliable, for instance during training or when they disagree with the general consensus. Results on extended video sequences are presented.
7 schema:editor N73245c98346949789e7efaeed3abf260
8 schema:genre chapter
9 schema:isAccessibleForFree true
10 schema:isPartOf N9e66adc6e9cb4340b88d0db9188c3667
11 schema:keywords Bayes rule
12 additional aspects
13 approach
14 aspects
15 background
16 background layer
17 coherent motion
18 color
19 concern
20 consensus
21 contribution
22 cue integration
23 cues
24 decisions
25 descriptors
26 different methods
27 distribution
28 extended video sequences
29 figure-ground segmentation
30 foreground objects
31 framework
32 further important contribution
33 general consensus
34 image sequences
35 important contribution
36 instances
37 integration
38 layer
39 likelihood
40 likelihood of observations
41 mechanism
42 method
43 model
44 motion
45 multiple cues
46 objects
47 observations
48 output
49 paper
50 pixels
51 probabilistic approach
52 region
53 results
54 rules
55 second method
56 segment foreground objects
57 segmentation
58 sequence
59 simple texture descriptor
60 sum
61 technique
62 texture descriptors
63 training
64 video sequences
65 voting approach
66 weighted sum
67 work concerns
68 schema:name Probabilistic and Voting Approaches to Cue Integration for Figure-Ground Segmentation
69 schema:pagination 469-486
70 schema:productId N5e0715a801454f2281bf4259405e36cd
71 N91b11f6e2d804da787ec7b49d561d1cb
72 schema:publisher Nb26b8c6ff1034d21bddb38be3ad4368b
73 schema:sameAs https://app.dimensions.ai/details/publication/pub.1050997831
74 https://doi.org/10.1007/3-540-47977-5_31
75 schema:sdDatePublished 2022-12-01T06:54
76 schema:sdLicense https://scigraph.springernature.com/explorer/license/
77 schema:sdPublisher N69a4730a31254bcb8ee60a3fe8b68535
78 schema:url https://doi.org/10.1007/3-540-47977-5_31
79 sgo:license sg:explorer/license/
80 sgo:sdDataset chapters
81 rdf:type schema:Chapter
82 N30c3b4d2721f4d2c8a9b7495d74dfa9b schema:familyName Johansen
83 schema:givenName Peter
84 rdf:type schema:Person
85 N3a2a2a7d653f47df8306128db9b9da5c schema:familyName Heyden
86 schema:givenName Anders
87 rdf:type schema:Person
88 N4797c75d80a74a3ab617caeb1a7cffe6 rdf:first N733c09d2473e4648a7bb0f1eb1c4c5ea
89 rdf:rest Nd4fac3148ed14f238abdbb477e41ef69
90 N5e0715a801454f2281bf4259405e36cd schema:name doi
91 schema:value 10.1007/3-540-47977-5_31
92 rdf:type schema:PropertyValue
93 N69a4730a31254bcb8ee60a3fe8b68535 schema:name Springer Nature - SN SciGraph project
94 rdf:type schema:Organization
95 N73245c98346949789e7efaeed3abf260 rdf:first N3a2a2a7d653f47df8306128db9b9da5c
96 rdf:rest N78295fb61d6f4817ba0865218981ee9a
97 N733c09d2473e4648a7bb0f1eb1c4c5ea schema:familyName Nielsen
98 schema:givenName Mads
99 rdf:type schema:Person
100 N78295fb61d6f4817ba0865218981ee9a rdf:first Nd4f0074e2b984fc6b776e657f4282eb1
101 rdf:rest N4797c75d80a74a3ab617caeb1a7cffe6
102 N91b11f6e2d804da787ec7b49d561d1cb schema:name dimensions_id
103 schema:value pub.1050997831
104 rdf:type schema:PropertyValue
105 N93c908d03a6c4c2e8958eae7ce01eaf2 rdf:first sg:person.014400652155.17
106 rdf:rest rdf:nil
107 N9da04f9a1b2542dc81382dfd2e502fe6 rdf:first sg:person.010203264647.00
108 rdf:rest N93c908d03a6c4c2e8958eae7ce01eaf2
109 N9e66adc6e9cb4340b88d0db9188c3667 schema:isbn 978-3-540-43746-8
110 978-3-540-47977-2
111 schema:name Computer Vision — ECCV 2002
112 rdf:type schema:Book
113 Nb26b8c6ff1034d21bddb38be3ad4368b schema:name Springer Nature
114 rdf:type schema:Organisation
115 Nd4f0074e2b984fc6b776e657f4282eb1 schema:familyName Sparr
116 schema:givenName Gunnar
117 rdf:type schema:Person
118 Nd4fac3148ed14f238abdbb477e41ef69 rdf:first N30c3b4d2721f4d2c8a9b7495d74dfa9b
119 rdf:rest rdf:nil
120 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
121 schema:name Information and Computing Sciences
122 rdf:type schema:DefinedTerm
123 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
124 schema:name Artificial Intelligence and Image Processing
125 rdf:type schema:DefinedTerm
126 sg:person.010203264647.00 schema:affiliation grid-institutes:grid.5037.1
127 schema:familyName Hayman
128 schema:givenName Eric
129 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010203264647.00
130 rdf:type schema:Person
131 sg:person.014400652155.17 schema:affiliation grid-institutes:grid.5037.1
132 schema:familyName Eklundh
133 schema:givenName Jan-Olof
134 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014400652155.17
135 rdf:type schema:Person
136 grid-institutes:grid.5037.1 schema:alternateName Dept. of Numerical Analysis and Computer Science KTH, Computational Vision and Active Perception Laboratory (CVAP), SE-100 44, Stockholm, Sweden
137 schema:name Dept. of Numerical Analysis and Computer Science KTH, Computational Vision and Active Perception Laboratory (CVAP), SE-100 44, Stockholm, Sweden
138 rdf:type schema:Organization
 



