A Comparison of Probabilistic, Possibilistic and Evidence Theoretic Fusion Schemes for Active Object Recognition


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

1999-06

AUTHORS

H. Borotschnig, L. Paletta, A. Pinz

ABSTRACT

One major goal of active object recognition systems is to extract useful information from multiple measurements. We compare three frameworks for information fusion and view planning using different uncertainty calculi: probability theory, possibility theory and Dempster-Shafer theory of evidence. The system dynamically repositions the camera to capture additional views in order to improve the classification result obtained from a single view. The active recognition problem can be tackled successfully by all the considered approaches, with sometimes only slight differences in performance. Extensive experiments confirm that recognition rates can be improved considerably by performing active steps. Random selection of the next action is much less efficient than planning, both in recognition rate and in the average number of steps required for recognition. As long as the rate of wrong object-pose classifications stays low, the probabilistic implementation always outperforms the other approaches. If the outlier rate increases, averaging fusion schemes outperform conjunctive approaches for information integration. We use an appearance-based object representation, namely the parametric eigenspace, but the planning algorithm is actually independent of the details of the specific object recognition environment.

PAGES

293-319

References to SciGraph publications

  • 1996. EigenTracking: Robust matching and tracking of articulated objects using a view-based representation in COMPUTER VISION — ECCV '96
  • 1998-09. Global feature space neural network for active computer vision in NEURAL COMPUTING AND APPLICATIONS
  • 1994-04. Planning multiple observations for object recognition in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 1995-01. Visual learning and recognition of 3-d objects from appearance in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 1998-04. Coarse-to-fine adaptive masks for appearance matching of occluded scenes in MACHINE VISION AND APPLICATIONS

Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s006070050026

    DOI

    http://dx.doi.org/10.1007/s006070050026

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1046512378


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT", 
              "id": "http://www.grid.ac/institutes/grid.410413.3", 
              "name": [
                "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Borotschnig", 
            "givenName": "H.", 
            "id": "sg:person.014253175607.32", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014253175607.32"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT", 
              "id": "http://www.grid.ac/institutes/grid.410413.3", 
              "name": [
                "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Paletta", 
            "givenName": "L.", 
            "id": "sg:person.010060055125.29", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010060055125.29"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT", 
              "id": "http://www.grid.ac/institutes/grid.410413.3", 
              "name": [
                "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Pinz", 
            "givenName": "A.", 
            "id": "sg:person.012033065653.49", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012033065653.49"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/s001380050075", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1010141098", 
              "https://doi.org/10.1007/s001380050075"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf01421201", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1042882432", 
              "https://doi.org/10.1007/bf01421201"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bfb0015548", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1040495258", 
              "https://doi.org/10.1007/bfb0015548"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf01421486", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1020275860", 
              "https://doi.org/10.1007/bf01421486"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf01414882", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1039330411", 
              "https://doi.org/10.1007/bf01414882"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "1999-06", 
        "datePublishedReg": "1999-06-01", 
        "description": "Abstract.One major goal of active object recognition systems is to extract useful information from multiple measurements. We compare three frameworks for information fusion and view-planning using different uncertainty calculi: probability theory, possibility theory and Dempster-Shafer theory of evidence. The system dynamically repositions the camera to capture additional views in order to improve the classification result obtained from a single view. The active recognition problem can be tackled successfully by all the considered approaches with sometimes only slight differences in performance. Extensive experiments confirm that recognition rates can be improved considerably by performing active steps. Random selection of the next action is much less efficient than planning, both in recognition rate and in the average number of steps required for recognition. As long as the rate of wrong object-pose classifications stays low the probabilistic implementation always outperforms the other approaches. If the outlier rate increases averaging fusion schemes outperform conjunctive approaches for information integration. We use an appearance based object representation, namely the parametric eigenspace, but the planning algorithm is actually independent of the details of the specific object recognition environment.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s006070050026", 
        "inLanguage": "en", 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1297324", 
            "issn": [
              "0010-485X", 
              "1436-5057"
            ], 
            "name": "Computing", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "4", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "62"
          }
        ], 
        "keywords": [
          "recognition rate", 
          "active object recognition system", 
          "object recognition system", 
          "active object recognition", 
          "Dempster-Shafer theory", 
          "Comparison of Probabilistic", 
          "planning algorithm", 
          "Extensive experiments", 
          "information fusion", 
          "recognition system", 
          "recognition problem", 
          "fusion scheme", 
          "information integration", 
          "recognition environment", 
          "single view", 
          "object recognition", 
          "next action", 
          "different uncertainty calculi", 
          "uncertainty calculi", 
          "classification results", 
          "parametric eigenspace", 
          "object representations", 
          "probabilistic implementation", 
          "possibility theory", 
          "additional views", 
          "useful information", 
          "random selection", 
          "recognition", 
          "algorithm", 
          "camera", 
          "probability theory", 
          "probabilistic", 
          "implementation", 
          "system", 
          "scheme", 
          "eigenspace", 
          "framework", 
          "classification", 
          "representation", 
          "multiple measurements", 
          "information", 
          "environment", 
          "integration", 
          "major goal", 
          "step", 
          "planning", 
          "performance", 
          "view", 
          "fusion", 
          "calculus", 
          "goal", 
          "selection", 
          "order", 
          "experiments", 
          "detail", 
          "average number", 
          "number", 
          "theory", 
          "results", 
          "conjunctive approach", 
          "active steps", 
          "comparison", 
          "rate increases", 
          "rate", 
          "action", 
          "appearance", 
          "measurements", 
          "slight differences", 
          "increase", 
          "differences", 
          "evidence", 
          "approach", 
          "problem", 
          "active recognition problem", 
          "wrong object-pose classifications", 
          "object-pose classifications", 
          "outlier rate increases", 
          "fusion schemes outperform conjunctive approaches", 
          "schemes outperform conjunctive approaches", 
          "outperform conjunctive approaches", 
          "specific object recognition environment", 
          "object recognition environment", 
          "Evidence Theoretic Fusion Schemes", 
          "Theoretic Fusion Schemes"
        ], 
        "name": "A Comparison of Probabilistic, Possibilistic and Evidence Theoretic Fusion Schemes for Active Object Recognition", 
        "pagination": "293-319", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1046512378"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s006070050026"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s006070050026", 
          "https://app.dimensions.ai/details/publication/pub.1046512378"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2021-12-01T19:13", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20211201/entities/gbq_results/article/article_347.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s006070050026"
      }
    ]
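
    Since the record above is plain JSON, it can be inspected with any JSON library. A minimal sketch (Python standard library only, with the record abbreviated by hand to the fields being read) that extracts the title, authors, and DOI:

    ```python
    import json

    # The JSON-LD record shown above, abbreviated to the fields we inspect.
    record_json = """
    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
        "id": "sg:pub.10.1007/s006070050026",
        "name": "A Comparison of Probabilistic, Possibilistic and Evidence Theoretic Fusion Schemes for Active Object Recognition",
        "datePublished": "1999-06",
        "pagination": "293-319",
        "author": [
          {"familyName": "Borotschnig", "givenName": "H.", "type": "Person"},
          {"familyName": "Paletta", "givenName": "L.", "type": "Person"},
          {"familyName": "Pinz", "givenName": "A.", "type": "Person"}
        ],
        "productId": [
          {"name": "doi", "type": "PropertyValue", "value": ["10.1007/s006070050026"]}
        ]
      }
    ]
    """

    records = json.loads(record_json)
    pub = records[0]  # SciGraph wraps each record in a one-element list

    authors = ", ".join(f"{a['givenName']} {a['familyName']}" for a in pub["author"])
    doi = next(p["value"][0] for p in pub["productId"] if p["name"] == "doi")

    print(pub["name"])  # A Comparison of Probabilistic, ...
    print(authors)      # H. Borotschnig, L. Paletta, A. Pinz
    print(doi)          # 10.1007/s006070050026
    ```

    The same access pattern works on the full record fetched from the endpoint below; only the set of keys read here is a deliberate subset.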
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s006070050026'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s006070050026'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s006070050026'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s006070050026'
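
    The curl commands above all rely on HTTP content negotiation: the same URL returns a different serialization depending on the `Accept` header. A minimal sketch of the same idea from Python (standard library only; the requests are only constructed here, not sent):

    ```python
    from urllib.request import Request

    SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/s006070050026"

    # Format name -> MIME type, mirroring the four curl examples above.
    FORMATS = {
        "json-ld": "application/ld+json",
        "nt": "application/n-triples",
        "turtle": "text/turtle",
        "rdf-xml": "application/rdf+xml",
    }

    def build_request(fmt: str) -> Request:
        """Build a GET request asking SciGraph for the given RDF serialization."""
        return Request(SCIGRAPH_URL, headers={"Accept": FORMATS[fmt]})

    req = build_request("turtle")
    print(req.get_header("Accept"))  # text/turtle
    ```

    To actually perform the fetch, pass the request to `urllib.request.urlopen(req)` (network access required).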


     

    This table displays all metadata directly associated with this object as RDF triples.

    176 TRIPLES      22 PREDICATES      115 URIs      102 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s006070050026 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N5b090d0aa6cd40afb145f7014469e1c7
    4 schema:citation sg:pub.10.1007/bf01414882
    5 sg:pub.10.1007/bf01421201
    6 sg:pub.10.1007/bf01421486
    7 sg:pub.10.1007/bfb0015548
    8 sg:pub.10.1007/s001380050075
    9 schema:datePublished 1999-06
    10 schema:datePublishedReg 1999-06-01
    11 schema:description Abstract.One major goal of active object recognition systems is to extract useful information from multiple measurements. We compare three frameworks for information fusion and view-planning using different uncertainty calculi: probability theory, possibility theory and Dempster-Shafer theory of evidence. The system dynamically repositions the camera to capture additional views in order to improve the classification result obtained from a single view. The active recognition problem can be tackled successfully by all the considered approaches with sometimes only slight differences in performance. Extensive experiments confirm that recognition rates can be improved considerably by performing active steps. Random selection of the next action is much less efficient than planning, both in recognition rate and in the average number of steps required for recognition. As long as the rate of wrong object-pose classifications stays low the probabilistic implementation always outperforms the other approaches. If the outlier rate increases averaging fusion schemes outperform conjunctive approaches for information integration. We use an appearance based object representation, namely the parametric eigenspace, but the planning algorithm is actually independent of the details of the specific object recognition environment.
    12 schema:genre article
    13 schema:inLanguage en
    14 schema:isAccessibleForFree false
    15 schema:isPartOf Na2375b6aa6d247e78fdfaba2e0219016
    16 Ncbab539e5f8b464791dc5fe9e3acdedc
    17 sg:journal.1297324
    18 schema:keywords Comparison of Probabilistic
    19 Dempster-Shafer theory
    20 Evidence Theoretic Fusion Schemes
    21 Extensive experiments
    22 Theoretic Fusion Schemes
    23 action
    24 active object recognition
    25 active object recognition system
    26 active recognition problem
    27 active steps
    28 additional views
    29 algorithm
    30 appearance
    31 approach
    32 average number
    33 calculus
    34 camera
    35 classification
    36 classification results
    37 comparison
    38 conjunctive approach
    39 detail
    40 differences
    41 different uncertainty calculi
    42 eigenspace
    43 environment
    44 evidence
    45 experiments
    46 framework
    47 fusion
    48 fusion scheme
    49 fusion schemes outperform conjunctive approaches
    50 goal
    51 implementation
    52 increase
    53 information
    54 information fusion
    55 information integration
    56 integration
    57 major goal
    58 measurements
    59 multiple measurements
    60 next action
    61 number
    62 object recognition
    63 object recognition environment
    64 object recognition system
    65 object representations
    66 object-pose classifications
    67 order
    68 outlier rate increases
    69 outperform conjunctive approaches
    70 parametric eigenspace
    71 performance
    72 planning
    73 planning algorithm
    74 possibility theory
    75 probabilistic
    76 probabilistic implementation
    77 probability theory
    78 problem
    79 random selection
    80 rate
    81 rate increases
    82 recognition
    83 recognition environment
    84 recognition problem
    85 recognition rate
    86 recognition system
    87 representation
    88 results
    89 scheme
    90 schemes outperform conjunctive approaches
    91 selection
    92 single view
    93 slight differences
    94 specific object recognition environment
    95 step
    96 system
    97 theory
    98 uncertainty calculi
    99 useful information
    100 view
    101 wrong object-pose classifications
    102 schema:name A Comparison of Probabilistic, Possibilistic and Evidence Theoretic Fusion Schemes for Active Object Recognition
    103 schema:pagination 293-319
    104 schema:productId N3fafd8ac7dce4b1387c9e78ccb5d43e1
    105 N563783282c12435cae06e8d811a8d3f0
    106 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046512378
    107 https://doi.org/10.1007/s006070050026
    108 schema:sdDatePublished 2021-12-01T19:13
    109 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    110 schema:sdPublisher N66d2bcf611f54b428a2325fdcbc51404
    111 schema:url https://doi.org/10.1007/s006070050026
    112 sgo:license sg:explorer/license/
    113 sgo:sdDataset articles
    114 rdf:type schema:ScholarlyArticle
    115 N3277d4d6ea9244d5ac68afddabe9d473 rdf:first sg:person.010060055125.29
    116 rdf:rest N5bc61f09243c460f983ab52f9a5b4bc3
    117 N3fafd8ac7dce4b1387c9e78ccb5d43e1 schema:name dimensions_id
    118 schema:value pub.1046512378
    119 rdf:type schema:PropertyValue
    120 N563783282c12435cae06e8d811a8d3f0 schema:name doi
    121 schema:value 10.1007/s006070050026
    122 rdf:type schema:PropertyValue
    123 N5b090d0aa6cd40afb145f7014469e1c7 rdf:first sg:person.014253175607.32
    124 rdf:rest N3277d4d6ea9244d5ac68afddabe9d473
    125 N5bc61f09243c460f983ab52f9a5b4bc3 rdf:first sg:person.012033065653.49
    126 rdf:rest rdf:nil
    127 N66d2bcf611f54b428a2325fdcbc51404 schema:name Springer Nature - SN SciGraph project
    128 rdf:type schema:Organization
    129 Na2375b6aa6d247e78fdfaba2e0219016 schema:volumeNumber 62
    130 rdf:type schema:PublicationVolume
    131 Ncbab539e5f8b464791dc5fe9e3acdedc schema:issueNumber 4
    132 rdf:type schema:PublicationIssue
    133 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    134 schema:name Information and Computing Sciences
    135 rdf:type schema:DefinedTerm
    136 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    137 schema:name Artificial Intelligence and Image Processing
    138 rdf:type schema:DefinedTerm
    139 sg:journal.1297324 schema:issn 0010-485X
    140 1436-5057
    141 schema:name Computing
    142 schema:publisher Springer Nature
    143 rdf:type schema:Periodical
    144 sg:person.010060055125.29 schema:affiliation grid-institutes:grid.410413.3
    145 schema:familyName Paletta
    146 schema:givenName L.
    147 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010060055125.29
    148 rdf:type schema:Person
    149 sg:person.012033065653.49 schema:affiliation grid-institutes:grid.410413.3
    150 schema:familyName Pinz
    151 schema:givenName A.
    152 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012033065653.49
    153 rdf:type schema:Person
    154 sg:person.014253175607.32 schema:affiliation grid-institutes:grid.410413.3
    155 schema:familyName Borotschnig
    156 schema:givenName H.
    157 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014253175607.32
    158 rdf:type schema:Person
    159 sg:pub.10.1007/bf01414882 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039330411
    160 https://doi.org/10.1007/bf01414882
    161 rdf:type schema:CreativeWork
    162 sg:pub.10.1007/bf01421201 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042882432
    163 https://doi.org/10.1007/bf01421201
    164 rdf:type schema:CreativeWork
    165 sg:pub.10.1007/bf01421486 schema:sameAs https://app.dimensions.ai/details/publication/pub.1020275860
    166 https://doi.org/10.1007/bf01421486
    167 rdf:type schema:CreativeWork
    168 sg:pub.10.1007/bfb0015548 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040495258
    169 https://doi.org/10.1007/bfb0015548
    170 rdf:type schema:CreativeWork
    171 sg:pub.10.1007/s001380050075 schema:sameAs https://app.dimensions.ai/details/publication/pub.1010141098
    172 https://doi.org/10.1007/s001380050075
    173 rdf:type schema:CreativeWork
    174 grid-institutes:grid.410413.3 schema:alternateName Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT
    175 schema:name Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT
    176 rdf:type schema:Organization
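
    Each row of the table above is one RDF triple, and in the N-Triples serialization each triple becomes a single line of absolute URIs. A minimal sketch (the expansion of the `sg:` prefix is an assumption; `schema:` expands to http://schema.org/) serializing one of the triples above:

    ```python
    def to_ntriple(subject: str, predicate: str, obj: str) -> str:
        """Serialize one triple of URIs as a single N-Triples line."""
        return f"<{subject}> <{predicate}> <{obj}> ."

    # The record's schema:url triple, with prefixed names expanded by hand.
    # NOTE: the sg: namespace expansion below is an illustrative assumption.
    line = to_ntriple(
        "https://scigraph.springernature.com/pub.10.1007/s006070050026",
        "http://schema.org/url",
        "https://doi.org/10.1007/s006070050026",
    )
    print(line)
    ```

    This only covers triples whose object is a URI; literal objects are quoted strings in N-Triples and would need separate handling.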
     



