A Comparison of Probabilistic, Possibilistic and Evidence Theoretic Fusion Schemes for Active Object Recognition


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

1999-06

AUTHORS

H. Borotschnig, L. Paletta, A. Pinz

ABSTRACT

One major goal of active object recognition systems is to extract useful information from multiple measurements. We compare three frameworks for information fusion and view planning that use different uncertainty calculi: probability theory, possibility theory, and the Dempster-Shafer theory of evidence. The system dynamically repositions the camera to capture additional views in order to improve the classification result obtained from a single view. All of the considered approaches tackle the active recognition problem successfully, sometimes with only slight differences in performance. Extensive experiments confirm that recognition rates can be improved considerably by performing active steps. Random selection of the next action is much less efficient than planning, both in recognition rate and in the average number of steps required for recognition. As long as the rate of wrong object-pose classifications stays low, the probabilistic implementation consistently outperforms the other approaches. If the outlier rate increases, averaging fusion schemes outperform conjunctive approaches to information integration. We use an appearance-based object representation, namely the parametric eigenspace, but the planning algorithm is independent of the details of the specific object recognition environment.
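
The fusion step the abstract describes can be illustrated compactly. The following is a minimal sketch (Python, not the authors' implementation) of the three calculi applied to per-view scores over N object hypotheses; all numbers are made-up illustrative values. Probabilistic fusion multiplies per-view likelihoods and renormalizes, conjunctive possibilistic fusion takes a pointwise minimum, and Dempster's rule is shown for the simple frame of singletons plus full ignorance.

# Sketch of the three fusion calculi compared in the paper, for a frame
# of N object hypotheses. Illustrative only; not the authors' code.
import numpy as np

def fuse_probabilistic(p1, p2):
    # Bayesian fusion: multiply per-view likelihoods, renormalize.
    p = p1 * p2
    return p / p.sum()

def fuse_possibilistic(pi1, pi2):
    # Conjunctive possibilistic fusion: pointwise minimum.
    return np.minimum(pi1, pi2)

def fuse_dempster(m1, m2):
    # Dempster's rule for masses on singletons plus the full frame Theta
    # (last entry), which models ignorance. Conflicting mass falls on
    # pairs of distinct singletons and is normalized away.
    s1, t1 = m1[:-1], m1[-1]
    s2, t2 = m2[:-1], m2[-1]
    conflict = s1.sum() * s2.sum() - (s1 * s2).sum()
    singletons = s1 * s2 + s1 * t2 + t1 * s2
    return np.append(singletons, t1 * t2) / (1.0 - conflict)

# Two hypothetical views scoring three object hypotheses:
view1 = np.array([0.6, 0.3, 0.1])
view2 = np.array([0.5, 0.4, 0.1])
print(fuse_probabilistic(view1, view2))   # sharpens toward hypothesis 0
print(fuse_possibilistic(view1, view2))
print(fuse_dempster(np.array([0.50, 0.25, 0.05, 0.20]),
                    np.array([0.45, 0.30, 0.05, 0.20])))

The multiplicative (conjunctive) schemes sharpen quickly across views, which matches the abstract's observation that the probabilistic implementation wins as long as wrong object-pose classifications stay rare, while averaging schemes are more forgiving once outliers appear.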

PAGES

293-319

References to SciGraph publications

  • 2005-06-09. EigenTracking: Robust matching and tracking of articulated objects using a view-based representation in COMPUTER VISION — ECCV '96
  • 1998-09. Global feature space neural network for active computer vision in NEURAL COMPUTING AND APPLICATIONS
  • 1994-04. Planning multiple observations for object recognition in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 1995-01. Visual learning and recognition of 3-d objects from appearance in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 1998-04. Coarse-to-fine adaptive masks for appearance matching of occluded scenes in MACHINE VISION AND APPLICATIONS
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s006070050026

    DOI

    http://dx.doi.org/10.1007/s006070050026

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1046512378


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Graz University of Technology", 
              "id": "https://www.grid.ac/institutes/grid.410413.3", 
              "name": [
                "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Borotschnig", 
            "givenName": "H.", 
            "id": "sg:person.014253175607.32", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014253175607.32"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Graz University of Technology", 
              "id": "https://www.grid.ac/institutes/grid.410413.3", 
              "name": [
                "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Paletta", 
            "givenName": "L.", 
            "id": "sg:person.010060055125.29", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010060055125.29"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Graz University of Technology", 
              "id": "https://www.grid.ac/institutes/grid.410413.3", 
              "name": [
                "Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Pinz", 
            "givenName": "A.", 
            "id": "sg:person.012033065653.49", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012033065653.49"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "https://doi.org/10.1117/12.256290", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1002508424"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/0165-0114(78)90019-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1005739859"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s001380050075", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1010141098", 
              "https://doi.org/10.1007/s001380050075"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/0893-6080(94)90099-x", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017057242"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf01421486", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1020275860", 
              "https://doi.org/10.1007/bf01421486"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1037/0033-295x.94.2.115", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1034091054"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf01414882", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1039330411", 
              "https://doi.org/10.1007/bf01414882"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bfb0015548", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1040495258", 
              "https://doi.org/10.1007/bfb0015548"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bf01421201", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1042882432", 
              "https://doi.org/10.1007/bf01421201"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1162/jocn.1991.3.1.71", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1043225769"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/s0167-8655(96)00092-x", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1053674868"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/34.667881", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061156743"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/3468.477860", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061157416"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/70.345940", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061216136"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.1996.517149", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094085162"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "1999-06", 
        "datePublishedReg": "1999-06-01", 
        "description": "One major goal of active object recognition systems is to extract useful information from multiple measurements. We compare three frameworks for information fusion and view-planning using different uncertainty calculi: probability theory, possibility theory and Dempster-Shafer theory of evidence. The system dynamically repositions the camera to capture additional views in order to improve the classification result obtained from a single view. The active recognition problem can be tackled successfully by all the considered approaches with sometimes only slight differences in performance. Extensive experiments confirm that recognition rates can be improved considerably by performing active steps. Random selection of the next action is much less efficient than planning, both in recognition rate and in the average number of steps required for recognition. As long as the rate of wrong object-pose classifications stays low the probabilistic implementation always outperforms the other approaches. If the outlier rate increases averaging fusion schemes outperform conjunctive approaches for information integration. We use an appearance based object representation, namely the parametric eigenspace, but the planning algorithm is actually independent of the details of the specific object recognition environment.", 
        "genre": "research_article", 
        "id": "sg:pub.10.1007/s006070050026", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1356894", 
            "issn": [
              "1521-9615", 
              "1436-5057"
            ], 
            "name": "Computing", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "4", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "62"
          }
        ], 
        "name": "A Comparison of Probabilistic, Possibilistic and Evidence Theoretic Fusion Schemes for Active Object Recognition", 
        "pagination": "293-319", 
        "productId": [
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "dd911bf02c891eab2022451f6cc4bd861b7a225c22169ea208879855c74903b5"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s006070050026"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1046512378"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s006070050026", 
          "https://app.dimensions.ai/details/publication/pub.1046512378"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2019-04-10T18:21", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8675_00000515.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "http://link.springer.com/10.1007%2Fs006070050026"
      }
    ]
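
    Working directly with the record above needs no JSON-LD processor for simple lookups; plain JSON access suffices. A hypothetical sketch, assuming the record has been saved to record.json, using only field names visible in the record:

    # Extract a few fields from the SciGraph JSON-LD record (saved as
    # record.json). Plain json is enough for flat field access.
    import json

    with open("record.json") as f:
        record = json.load(f)[0]   # the record is a one-element array

    title = record["name"]
    doi = next(p["value"][0] for p in record["productId"]
               if p["name"] == "doi")
    authors = ["%s %s" % (a["givenName"], a["familyName"])
               for a in record["author"]]

    print(title)    # A Comparison of Probabilistic, Possibilistic and ...
    print(doi)      # 10.1007/s006070050026
    print(authors)  # ['H. Borotschnig', 'L. Paletta', 'A. Pinz']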
     

    Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s006070050026'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s006070050026'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s006070050026'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s006070050026'
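
    The same requests can be issued from Python; content negotiation via the Accept header selects the serialization. A minimal sketch using the requests library:

    # Fetch each RDF serialization of this record via content negotiation.
    import requests

    URL = "https://scigraph.springernature.com/pub.10.1007/s006070050026"
    FORMATS = ("application/ld+json", "application/n-triples",
               "text/turtle", "application/rdf+xml")

    for accept in FORMATS:
        resp = requests.get(URL, headers={"Accept": accept})
        resp.raise_for_status()
        print(accept, "->", len(resp.content), "bytes")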


     

    This table displays all metadata directly associated to this object as RDF triples.

    125 TRIPLES      21 PREDICATES      42 URIs      19 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s006070050026 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N89326e1124fa4cc8bdd23aaf4cea1d2b
    4 schema:citation sg:pub.10.1007/bf01414882
    5 sg:pub.10.1007/bf01421201
    6 sg:pub.10.1007/bf01421486
    7 sg:pub.10.1007/bfb0015548
    8 sg:pub.10.1007/s001380050075
    9 https://doi.org/10.1016/0165-0114(78)90019-2
    10 https://doi.org/10.1016/0893-6080(94)90099-x
    11 https://doi.org/10.1016/s0167-8655(96)00092-x
    12 https://doi.org/10.1037/0033-295x.94.2.115
    13 https://doi.org/10.1109/34.667881
    14 https://doi.org/10.1109/3468.477860
    15 https://doi.org/10.1109/70.345940
    16 https://doi.org/10.1109/cvpr.1996.517149
    17 https://doi.org/10.1117/12.256290
    18 https://doi.org/10.1162/jocn.1991.3.1.71
    19 schema:datePublished 1999-06
    20 schema:datePublishedReg 1999-06-01
    21 schema:description One major goal of active object recognition systems is to extract useful information from multiple measurements. We compare three frameworks for information fusion and view planning that use different uncertainty calculi: probability theory, possibility theory, and the Dempster-Shafer theory of evidence. The system dynamically repositions the camera to capture additional views in order to improve the classification result obtained from a single view. All of the considered approaches tackle the active recognition problem successfully, sometimes with only slight differences in performance. Extensive experiments confirm that recognition rates can be improved considerably by performing active steps. Random selection of the next action is much less efficient than planning, both in recognition rate and in the average number of steps required for recognition. As long as the rate of wrong object-pose classifications stays low, the probabilistic implementation consistently outperforms the other approaches. If the outlier rate increases, averaging fusion schemes outperform conjunctive approaches to information integration. We use an appearance-based object representation, namely the parametric eigenspace, but the planning algorithm is independent of the details of the specific object recognition environment.
    22 schema:genre research_article
    23 schema:inLanguage en
    24 schema:isAccessibleForFree false
    25 schema:isPartOf N88527a13e2ba4cc5ac02f87b198e3aef
    26 N9c109baba3f040f291c441deaff2875a
    27 sg:journal.1356894
    28 schema:name A Comparison of Probabilistic, Possibilistic and Evidence Theoretic Fusion Schemes for Active Object Recognition
    29 schema:pagination 293-319
    30 schema:productId N266d1c8a819b4c2da9748e61caa283c1
    31 N6c4977ad3dbf4b358a5f6212c256f172
    32 N752baec2a08c4479a09ab9cbbe6c498e
    33 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046512378
    34 https://doi.org/10.1007/s006070050026
    35 schema:sdDatePublished 2019-04-10T18:21
    36 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    37 schema:sdPublisher Ndcaad660533144a89cb4232017fd4d0b
    38 schema:url http://link.springer.com/10.1007%2Fs006070050026
    39 sgo:license sg:explorer/license/
    40 sgo:sdDataset articles
    41 rdf:type schema:ScholarlyArticle
    42 N266d1c8a819b4c2da9748e61caa283c1 schema:name readcube_id
    43 schema:value dd911bf02c891eab2022451f6cc4bd861b7a225c22169ea208879855c74903b5
    44 rdf:type schema:PropertyValue
    45 N6c4977ad3dbf4b358a5f6212c256f172 schema:name dimensions_id
    46 schema:value pub.1046512378
    47 rdf:type schema:PropertyValue
    48 N752baec2a08c4479a09ab9cbbe6c498e schema:name doi
    49 schema:value 10.1007/s006070050026
    50 rdf:type schema:PropertyValue
    51 N7e450dc5acbd4db3aa97fd128fef7fe9 rdf:first sg:person.010060055125.29
    52 rdf:rest N8781b267923042faaf4f699929292677
    53 N8781b267923042faaf4f699929292677 rdf:first sg:person.012033065653.49
    54 rdf:rest rdf:nil
    55 N88527a13e2ba4cc5ac02f87b198e3aef schema:volumeNumber 62
    56 rdf:type schema:PublicationVolume
    57 N89326e1124fa4cc8bdd23aaf4cea1d2b rdf:first sg:person.014253175607.32
    58 rdf:rest N7e450dc5acbd4db3aa97fd128fef7fe9
    59 N9c109baba3f040f291c441deaff2875a schema:issueNumber 4
    60 rdf:type schema:PublicationIssue
    61 Ndcaad660533144a89cb4232017fd4d0b schema:name Springer Nature - SN SciGraph project
    62 rdf:type schema:Organization
    63 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    64 schema:name Information and Computing Sciences
    65 rdf:type schema:DefinedTerm
    66 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    67 schema:name Artificial Intelligence and Image Processing
    68 rdf:type schema:DefinedTerm
    69 sg:journal.1356894 schema:issn 1436-5057
    70 1521-9615
    71 schema:name Computing
    72 rdf:type schema:Periodical
    73 sg:person.010060055125.29 schema:affiliation https://www.grid.ac/institutes/grid.410413.3
    74 schema:familyName Paletta
    75 schema:givenName L.
    76 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010060055125.29
    77 rdf:type schema:Person
    78 sg:person.012033065653.49 schema:affiliation https://www.grid.ac/institutes/grid.410413.3
    79 schema:familyName Pinz
    80 schema:givenName A.
    81 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012033065653.49
    82 rdf:type schema:Person
    83 sg:person.014253175607.32 schema:affiliation https://www.grid.ac/institutes/grid.410413.3
    84 schema:familyName Borotschnig
    85 schema:givenName H.
    86 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014253175607.32
    87 rdf:type schema:Person
    88 sg:pub.10.1007/bf01414882 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039330411
    89 https://doi.org/10.1007/bf01414882
    90 rdf:type schema:CreativeWork
    91 sg:pub.10.1007/bf01421201 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042882432
    92 https://doi.org/10.1007/bf01421201
    93 rdf:type schema:CreativeWork
    94 sg:pub.10.1007/bf01421486 schema:sameAs https://app.dimensions.ai/details/publication/pub.1020275860
    95 https://doi.org/10.1007/bf01421486
    96 rdf:type schema:CreativeWork
    97 sg:pub.10.1007/bfb0015548 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040495258
    98 https://doi.org/10.1007/bfb0015548
    99 rdf:type schema:CreativeWork
    100 sg:pub.10.1007/s001380050075 schema:sameAs https://app.dimensions.ai/details/publication/pub.1010141098
    101 https://doi.org/10.1007/s001380050075
    102 rdf:type schema:CreativeWork
    103 https://doi.org/10.1016/0165-0114(78)90019-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1005739859
    104 rdf:type schema:CreativeWork
    105 https://doi.org/10.1016/0893-6080(94)90099-x schema:sameAs https://app.dimensions.ai/details/publication/pub.1017057242
    106 rdf:type schema:CreativeWork
    107 https://doi.org/10.1016/s0167-8655(96)00092-x schema:sameAs https://app.dimensions.ai/details/publication/pub.1053674868
    108 rdf:type schema:CreativeWork
    109 https://doi.org/10.1037/0033-295x.94.2.115 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034091054
    110 rdf:type schema:CreativeWork
    111 https://doi.org/10.1109/34.667881 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156743
    112 rdf:type schema:CreativeWork
    113 https://doi.org/10.1109/3468.477860 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061157416
    114 rdf:type schema:CreativeWork
    115 https://doi.org/10.1109/70.345940 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061216136
    116 rdf:type schema:CreativeWork
    117 https://doi.org/10.1109/cvpr.1996.517149 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094085162
    118 rdf:type schema:CreativeWork
    119 https://doi.org/10.1117/12.256290 schema:sameAs https://app.dimensions.ai/details/publication/pub.1002508424
    120 rdf:type schema:CreativeWork
    121 https://doi.org/10.1162/jocn.1991.3.1.71 schema:sameAs https://app.dimensions.ai/details/publication/pub.1043225769
    122 rdf:type schema:CreativeWork
    123 https://www.grid.ac/institutes/grid.410413.3 schema:alternateName Graz University of Technology
    124 schema:name Institute for Computer Graphics and Vision, Technical University Graz, Inffeldgasse 16/E/2, A-8010 Graz, Austria, e-mail: borotschnig@icg.tu-graz.ac.at, AT
    125 rdf:type schema:Organization
     



