Weak Hypotheses and Boosting for Generic Object Detection and Recognition


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2004

AUTHORS

A. Opelt , M. Fussenegger , A. Pinz , P. Auer

ABSTRACT

In this paper we describe the first stage of a new learning system for object detection and recognition. For our system we propose Boosting [5] as the underlying learning technique. This allows the use of very diverse sets of visual features in the learning process within a common framework: Boosting — together with a weak hypotheses finder — may choose very inhomogeneous features as most relevant for combination into a final hypothesis. As another advantage the weak hypotheses finder may search the weak hypotheses space without explicit calculation of all available hypotheses, reducing computation time. This contrasts the related work of Agarwal and Roth [1] where Winnow was used as learning algorithm and all weak hypotheses were calculated explicitly. In our first empirical evaluation we use four types of local descriptors: two basic ones consisting of a set of grayvalues and intensity moments and two high level descriptors: moment invariants [8] and SIFTs [12]. The descriptors are calculated from local patches detected by an interest point operator. The weak hypotheses finder selects one of the local patches and one type of local descriptor and efficiently searches for the most discriminative similarity threshold. This differs from other work on Boosting for object recognition where simple rectangular hypotheses [22] or complex classifiers [20] have been used. In relatively simple images, where the objects are prominent, our approach yields results comparable to the state-of-the-art [3]. But we also obtain very good results on more complex images, where the objects are located in arbitrary positions, poses, and scales in the images. These results indicate that our flexible approach, which also allows the inclusion of features from segmented regions and even spatial relationships, leads us a significant step towards generic object recognition.
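
The weak-hypothesis search described in the abstract can be illustrated with a small sketch. The following Python fragment is not the authors' implementation; the similarity measure, data layout, and all names are assumptions chosen only to show how a (reference patch descriptor, similarity threshold) pair could serve as a weak hypothesis inside an AdaBoost-style round.

# Hypothetical sketch: one boosting round whose weak hypotheses are
# (reference patch descriptor, similarity threshold) pairs.
# Names and the similarity measure are illustrative assumptions, not the paper's code.
import numpy as np

def similarity(a, b):
    # Stand-in similarity: negative Euclidean distance between descriptor vectors.
    return -np.linalg.norm(a - b)

def image_scores(ref, image_descriptors):
    # Score each image by its best-matching local patch descriptor.
    return np.array([max(similarity(ref, d) for d in descs)
                     for descs in image_descriptors])

def best_weak_hypothesis(candidates, image_descriptors, labels, weights):
    # labels: numpy array of +1 (object) / -1 (background); weights: boosting weights.
    best = None
    for ref in candidates:
        scores = image_scores(ref, image_descriptors)
        for thr in scores:                       # candidate thresholds
            pred = np.where(scores >= thr, 1, -1)
            err = weights[pred != labels].sum()  # weighted training error
            if best is None or err < best[0]:
                best = (err, ref, thr)
    return best

def boosting_round(candidates, image_descriptors, labels, weights):
    err, ref, thr = best_weak_hypothesis(candidates, image_descriptors, labels, weights)
    err = np.clip(err, 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)        # weight of this weak hypothesis
    pred = np.where(image_scores(ref, image_descriptors) >= thr, 1, -1)
    weights = weights * np.exp(-alpha * labels * pred)
    return (ref, thr, alpha), weights / weights.sum()

Repeating such rounds and summing the alpha-weighted weak predictions gives a final hypothesis; the paper's actual finder additionally chooses among descriptor types and avoids enumerating all hypotheses explicitly.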

PAGES

71-84

References to SciGraph publications

  • 2000. Unsupervised Learning of Models for Recognition in COMPUTER VISION - ECCV 2000
  • 2004-02. Object Detection Using the Statistics of Parts in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 1996. Affine / photometric invariants for planar intensity patterns in COMPUTER VISION — ECCV '96
  • 2002. Learning a Sparse Representation for Object Detection in COMPUTER VISION — ECCV 2002
  • 2002. An Affine Invariant Interest Point Detector in COMPUTER VISION — ECCV 2002
  • 2000-06. Evaluation of Interest Point Detectors in INTERNATIONAL JOURNAL OF COMPUTER VISION
  Book

    TITLE

    Computer Vision - ECCV 2004

    ISBN

    978-3-540-21983-5
    978-3-540-24671-8

    Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/978-3-540-24671-8_6

    DOI

    http://dx.doi.org/10.1007/978-3-540-24671-8_6

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1046580260



    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "name": [
                "Institute of Computer Science, 8700, Leoben, Austria", 
                "Institute of Electrical Measurement and Measurement Signal Processing, 8010, Graz, Austria"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Opelt", 
            "givenName": "A.", 
            "id": "sg:person.013624034621.75", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013624034621.75"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "name": [
                "Institute of Computer Science, 8700, Leoben, Austria", 
                "Institute of Electrical Measurement and Measurement Signal Processing, 8010, Graz, Austria"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Fussenegger", 
            "givenName": "M.", 
            "id": "sg:person.01330551375.09", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01330551375.09"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "name": [
                "Institute of Electrical Measurement and Measurement Signal Processing, 8010, Graz, Austria"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Pinz", 
            "givenName": "A.", 
            "id": "sg:person.012033065653.49", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012033065653.49"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "name": [
                "Institute of Computer Science, 8700, Leoben, Austria"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Auer", 
            "givenName": "P.", 
            "id": "sg:person.010211007377.90", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010211007377.90"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/3-540-45054-8_2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014228158", 
              "https://doi.org/10.1007/3-540-45054-8_2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/3-540-47979-1_8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1015337224", 
              "https://doi.org/10.1007/3-540-47979-1_8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/b:visi.0000011202.85607.00", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1029490995", 
              "https://doi.org/10.1023/b:visi.0000011202.85607.00"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1006/inco.1997.2686", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1031152048"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bfb0015574", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1039472253", 
              "https://doi.org/10.1007/bfb0015574"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/3-540-47969-4_9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1046596731", 
              "https://doi.org/10.1007/3-540-47969-4_9"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/s0031-3203(98)00017-x", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1047916124"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/a:1008199403446", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1048406451", 
              "https://doi.org/10.1023/a:1008199403446"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/34.589215", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061156611"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/34.93808", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061157293"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2001.990517", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093187020"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2003.1238407", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093466890"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2003.1211479", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093624919"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2003.1211478", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093789386"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/icpr.2002.1048077", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093861869"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.2001.990522", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094348361"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.2001.937561", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095654754"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/cvpr.1997.609446", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095750094"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iccv.1999.790410", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095766209"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.5244/c.2.23", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1099320318"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2004", 
        "datePublishedReg": "2004-01-01", 
        "description": "In this paper we describe the first stage of a new learning system for object detection and recognition. For our system we propose Boosting [5] as the underlying learning technique. This allows the use of very diverse sets of visual features in the learning process within a common framework: Boosting \u2014 together with a weak hypotheses finder \u2014 may choose very inhomogeneous features as most relevant for combination into a final hypothesis. As another advantage the weak hypotheses finder may search the weak hypotheses space without explicit calculation of all available hypotheses, reducing computation time. This contrasts the related work of Agarwal and Roth [1] where Winnow was used as learning algorithm and all weak hypotheses were calculated explicitly. In our first empirical evaluation we use four types of local descriptors: two basic ones consisting of a set of grayvalues and intensity moments and two high level descriptors: moment invariants [8] and SIFTs [12]. The descriptors are calculated from local patches detected by an interest point operator. The weak hypotheses finder selects one of the local patches and one type of local descriptor and efficiently searches for the most discriminative similarity threshold. This differs from other work on Boosting for object recognition where simple rectangular hypotheses [22] or complex classifiers [20] have been used. In relatively simple images, where the objects are prominent, our approach yields results comparable to the state-of-the-art [3]. But we also obtain very good results on more complex images, where the objects are located in arbitrary positions, poses, and scales in the images. These results indicate that our flexible approach, which also allows the inclusion of features from segmented regions and even spatial relationships, leads us a significant step towards generic object recognition.", 
        "editor": [
          {
            "familyName": "Pajdla", 
            "givenName": "Tom\u00e1s", 
            "type": "Person"
          }, 
          {
            "familyName": "Matas", 
            "givenName": "Ji\u0159\u00ed", 
            "type": "Person"
          }
        ], 
        "genre": "chapter", 
        "id": "sg:pub.10.1007/978-3-540-24671-8_6", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": true, 
        "isPartOf": {
          "isbn": [
            "978-3-540-21983-5", 
            "978-3-540-24671-8"
          ], 
          "name": "Computer Vision - ECCV 2004", 
          "type": "Book"
        }, 
        "name": "Weak Hypotheses and Boosting for Generic Object Detection and Recognition", 
        "pagination": "71-84", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1046580260"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/978-3-540-24671-8_6"
            ]
          }, 
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "dde3efcad855a051bffdfa1f23df17bd0e83ca3d07fa13dcd9a51692223e49b9"
            ]
          }
        ], 
        "publisher": {
          "location": "Berlin, Heidelberg", 
          "name": "Springer Berlin Heidelberg", 
          "type": "Organisation"
        }, 
        "sameAs": [
          "https://doi.org/10.1007/978-3-540-24671-8_6", 
          "https://app.dimensions.ai/details/publication/pub.1046580260"
        ], 
        "sdDataset": "chapters", 
        "sdDatePublished": "2019-04-16T08:08", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000360_0000000360/records_118318_00000001.jsonl", 
        "type": "Chapter", 
        "url": "https://link.springer.com/10.1007%2F978-3-540-24671-8_6"
      }
    ]
     

    The RDF metadata can be downloaded as JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-24671-8_6'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-24671-8_6'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-24671-8_6'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-540-24671-8_6'
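
    The same content negotiation can be scripted. A minimal Python sketch, assuming the requests package is installed and that the response matches the JSON-LD array shown above:

    # Fetch this SciGraph record via HTTP content negotiation (sketch; assumes `requests`).
    import requests

    URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-24671-8_6"

    # Request the JSON-LD representation; the other Accept values above work the same way.
    resp = requests.get(URL, headers={"Accept": "application/ld+json"}, timeout=30)
    resp.raise_for_status()

    record = resp.json()[0]          # the single chapter object from the JSON-LD array
    print(record["name"])            # chapter title
    print(record["datePublished"])   # "2004"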


     

    This table displays all metadata directly associated with this object as RDF triples; a short parsing sketch follows the table.

    164 TRIPLES      23 PREDICATES      47 URIs      20 LITERALS      8 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/978-3-540-24671-8_6 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nd8f4ca12c3b84429a4e47919468a5242
    4 schema:citation sg:pub.10.1007/3-540-45054-8_2
    5 sg:pub.10.1007/3-540-47969-4_9
    6 sg:pub.10.1007/3-540-47979-1_8
    7 sg:pub.10.1007/bfb0015574
    8 sg:pub.10.1023/a:1008199403446
    9 sg:pub.10.1023/b:visi.0000011202.85607.00
    10 https://doi.org/10.1006/inco.1997.2686
    11 https://doi.org/10.1016/s0031-3203(98)00017-x
    12 https://doi.org/10.1109/34.589215
    13 https://doi.org/10.1109/34.93808
    14 https://doi.org/10.1109/cvpr.1997.609446
    15 https://doi.org/10.1109/cvpr.2001.990517
    16 https://doi.org/10.1109/cvpr.2001.990522
    17 https://doi.org/10.1109/cvpr.2003.1211478
    18 https://doi.org/10.1109/cvpr.2003.1211479
    19 https://doi.org/10.1109/iccv.1999.790410
    20 https://doi.org/10.1109/iccv.2001.937561
    21 https://doi.org/10.1109/iccv.2003.1238407
    22 https://doi.org/10.1109/icpr.2002.1048077
    23 https://doi.org/10.5244/c.2.23
    24 schema:datePublished 2004
    25 schema:datePublishedReg 2004-01-01
    26 schema:description In this paper we describe the first stage of a new learning system for object detection and recognition. For our system we propose Boosting [5] as the underlying learning technique. This allows the use of very diverse sets of visual features in the learning process within a common framework: Boosting — together with a weak hypotheses finder — may choose very inhomogeneous features as most relevant for combination into a final hypothesis. As another advantage the weak hypotheses finder may search the weak hypotheses space without explicit calculation of all available hypotheses, reducing computation time. This contrasts the related work of Agarwal and Roth [1] where Winnow was used as learning algorithm and all weak hypotheses were calculated explicitly. In our first empirical evaluation we use four types of local descriptors: two basic ones consisting of a set of grayvalues and intensity moments and two high level descriptors: moment invariants [8] and SIFTs [12]. The descriptors are calculated from local patches detected by an interest point operator. The weak hypotheses finder selects one of the local patches and one type of local descriptor and efficiently searches for the most discriminative similarity threshold. This differs from other work on Boosting for object recognition where simple rectangular hypotheses [22] or complex classifiers [20] have been used. In relatively simple images, where the objects are prominent, our approach yields results comparable to the state-of-the-art [3]. But we also obtain very good results on more complex images, where the objects are located in arbitrary positions, poses, and scales in the images. These results indicate that our flexible approach, which also allows the inclusion of features from segmented regions and even spatial relationships, leads us a significant step towards generic object recognition.
    27 schema:editor N8ea70d54d9bc4a7ba07b4a7f8df2d838
    28 schema:genre chapter
    29 schema:inLanguage en
    30 schema:isAccessibleForFree true
    31 schema:isPartOf N86e88e103f4c426ebb7611ebba49b910
    32 schema:name Weak Hypotheses and Boosting for Generic Object Detection and Recognition
    33 schema:pagination 71-84
    34 schema:productId N293161e751e7403cb96038a64846274e
    35 N5eb2ab2a8cda42e2b61310c7fbcb0cdf
    36 Nb62e4575ff224f45b561f008086d3f26
    37 schema:publisher N58e12a0b742d44ab8feab39d56977d3e
    38 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046580260
    39 https://doi.org/10.1007/978-3-540-24671-8_6
    40 schema:sdDatePublished 2019-04-16T08:08
    41 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    42 schema:sdPublisher N9553454f206c486c82b0510a0da83738
    43 schema:url https://link.springer.com/10.1007%2F978-3-540-24671-8_6
    44 sgo:license sg:explorer/license/
    45 sgo:sdDataset chapters
    46 rdf:type schema:Chapter
    47 N0368011167e941c09bcd84673db3acaa rdf:first sg:person.012033065653.49
    48 rdf:rest Naa8db14336494ad38cf6ff14b0cf0013
    49 N1a03377906844df786c7991be5c4fc97 schema:familyName Pajdla
    50 schema:givenName Tomás
    51 rdf:type schema:Person
    52 N1d57e29b82ed47878c8a246c84d5582d schema:name Institute of Electrical Measurement and Measurement Signal Processing, 8010, Graz, Austria
    53 rdf:type schema:Organization
    54 N22f33388cf504b459efd1566bc1bb33a schema:name Institute of Computer Science, 8700, Leoben, Austria
    55 rdf:type schema:Organization
    56 N293161e751e7403cb96038a64846274e schema:name doi
    57 schema:value 10.1007/978-3-540-24671-8_6
    58 rdf:type schema:PropertyValue
    59 N58e12a0b742d44ab8feab39d56977d3e schema:location Berlin, Heidelberg
    60 schema:name Springer Berlin Heidelberg
    61 rdf:type schema:Organisation
    62 N5eb2ab2a8cda42e2b61310c7fbcb0cdf schema:name readcube_id
    63 schema:value dde3efcad855a051bffdfa1f23df17bd0e83ca3d07fa13dcd9a51692223e49b9
    64 rdf:type schema:PropertyValue
    65 N76c1bddaf77b43ca97459bfbfc7e192e schema:familyName Matas
    66 schema:givenName Jiří
    67 rdf:type schema:Person
    68 N86e88e103f4c426ebb7611ebba49b910 schema:isbn 978-3-540-21983-5
    69 978-3-540-24671-8
    70 schema:name Computer Vision - ECCV 2004
    71 rdf:type schema:Book
    72 N8ea70d54d9bc4a7ba07b4a7f8df2d838 rdf:first N1a03377906844df786c7991be5c4fc97
    73 rdf:rest N99b69e80108f49ff86941c06c91175d8
    74 N9553454f206c486c82b0510a0da83738 schema:name Springer Nature - SN SciGraph project
    75 rdf:type schema:Organization
    76 N99b69e80108f49ff86941c06c91175d8 rdf:first N76c1bddaf77b43ca97459bfbfc7e192e
    77 rdf:rest rdf:nil
    78 Naa8db14336494ad38cf6ff14b0cf0013 rdf:first sg:person.010211007377.90
    79 rdf:rest rdf:nil
    80 Nb62e4575ff224f45b561f008086d3f26 schema:name dimensions_id
    81 schema:value pub.1046580260
    82 rdf:type schema:PropertyValue
    83 Nb71edd7ac28a4cc29ce992646a3ee802 schema:name Institute of Computer Science, 8700, Leoben, Austria
    84 Institute of Electrical Measurement and Measurement Signal Processing, 8010, Graz, Austria
    85 rdf:type schema:Organization
    86 Nd8f4ca12c3b84429a4e47919468a5242 rdf:first sg:person.013624034621.75
    87 rdf:rest Ne502af26de2a403d8670e9a4e693bc69
    88 Ne502af26de2a403d8670e9a4e693bc69 rdf:first sg:person.01330551375.09
    89 rdf:rest N0368011167e941c09bcd84673db3acaa
    90 Ne9823dc4be7e465c934d51488c5481d7 schema:name Institute of Computer Science, 8700, Leoben, Austria
    91 Institute of Electrical Measurement and Measurement Signal Processing, 8010, Graz, Austria
    92 rdf:type schema:Organization
    93 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    94 schema:name Information and Computing Sciences
    95 rdf:type schema:DefinedTerm
    96 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    97 schema:name Artificial Intelligence and Image Processing
    98 rdf:type schema:DefinedTerm
    99 sg:person.010211007377.90 schema:affiliation N22f33388cf504b459efd1566bc1bb33a
    100 schema:familyName Auer
    101 schema:givenName P.
    102 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010211007377.90
    103 rdf:type schema:Person
    104 sg:person.012033065653.49 schema:affiliation N1d57e29b82ed47878c8a246c84d5582d
    105 schema:familyName Pinz
    106 schema:givenName A.
    107 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012033065653.49
    108 rdf:type schema:Person
    109 sg:person.01330551375.09 schema:affiliation Ne9823dc4be7e465c934d51488c5481d7
    110 schema:familyName Fussenegger
    111 schema:givenName M.
    112 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01330551375.09
    113 rdf:type schema:Person
    114 sg:person.013624034621.75 schema:affiliation Nb71edd7ac28a4cc29ce992646a3ee802
    115 schema:familyName Opelt
    116 schema:givenName A.
    117 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013624034621.75
    118 rdf:type schema:Person
    119 sg:pub.10.1007/3-540-45054-8_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014228158
    120 https://doi.org/10.1007/3-540-45054-8_2
    121 rdf:type schema:CreativeWork
    122 sg:pub.10.1007/3-540-47969-4_9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046596731
    123 https://doi.org/10.1007/3-540-47969-4_9
    124 rdf:type schema:CreativeWork
    125 sg:pub.10.1007/3-540-47979-1_8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1015337224
    126 https://doi.org/10.1007/3-540-47979-1_8
    127 rdf:type schema:CreativeWork
    128 sg:pub.10.1007/bfb0015574 schema:sameAs https://app.dimensions.ai/details/publication/pub.1039472253
    129 https://doi.org/10.1007/bfb0015574
    130 rdf:type schema:CreativeWork
    131 sg:pub.10.1023/a:1008199403446 schema:sameAs https://app.dimensions.ai/details/publication/pub.1048406451
    132 https://doi.org/10.1023/a:1008199403446
    133 rdf:type schema:CreativeWork
    134 sg:pub.10.1023/b:visi.0000011202.85607.00 schema:sameAs https://app.dimensions.ai/details/publication/pub.1029490995
    135 https://doi.org/10.1023/b:visi.0000011202.85607.00
    136 rdf:type schema:CreativeWork
    137 https://doi.org/10.1006/inco.1997.2686 schema:sameAs https://app.dimensions.ai/details/publication/pub.1031152048
    138 rdf:type schema:CreativeWork
    139 https://doi.org/10.1016/s0031-3203(98)00017-x schema:sameAs https://app.dimensions.ai/details/publication/pub.1047916124
    140 rdf:type schema:CreativeWork
    141 https://doi.org/10.1109/34.589215 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156611
    142 rdf:type schema:CreativeWork
    143 https://doi.org/10.1109/34.93808 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061157293
    144 rdf:type schema:CreativeWork
    145 https://doi.org/10.1109/cvpr.1997.609446 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095750094
    146 rdf:type schema:CreativeWork
    147 https://doi.org/10.1109/cvpr.2001.990517 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093187020
    148 rdf:type schema:CreativeWork
    149 https://doi.org/10.1109/cvpr.2001.990522 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094348361
    150 rdf:type schema:CreativeWork
    151 https://doi.org/10.1109/cvpr.2003.1211478 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093789386
    152 rdf:type schema:CreativeWork
    153 https://doi.org/10.1109/cvpr.2003.1211479 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093624919
    154 rdf:type schema:CreativeWork
    155 https://doi.org/10.1109/iccv.1999.790410 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095766209
    156 rdf:type schema:CreativeWork
    157 https://doi.org/10.1109/iccv.2001.937561 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095654754
    158 rdf:type schema:CreativeWork
    159 https://doi.org/10.1109/iccv.2003.1238407 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093466890
    160 rdf:type schema:CreativeWork
    161 https://doi.org/10.1109/icpr.2002.1048077 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093861869
    162 rdf:type schema:CreativeWork
    163 https://doi.org/10.5244/c.2.23 schema:sameAs https://app.dimensions.ai/details/publication/pub.1099320318
    164 rdf:type schema:CreativeWork
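
    To work with these triples in code, one option is to fetch the N-Triples representation and load it into a graph. A minimal sketch, assuming the requests and rdflib packages are available; the predicate tally simply mirrors the counts listed above:

    # Load the record's triples and tally predicates (sketch; assumes `requests` and `rdflib`).
    from collections import Counter
    import requests
    from rdflib import Graph

    URL = "https://scigraph.springernature.com/pub.10.1007/978-3-540-24671-8_6"
    nt = requests.get(URL, headers={"Accept": "application/n-triples"}, timeout=30).text

    g = Graph()
    g.parse(data=nt, format="nt")                  # N-Triples is the simplest format to parse
    print(len(g), "triples")                       # should roughly match the count above

    predicates = Counter(str(p) for _, p, _ in g)  # e.g. schema:citation, schema:author
    for uri, n in predicates.most_common(5):
        print(n, uri)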
     



