A Boundary-Fragment-Model for Object Detection


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2006

AUTHORS

Andreas Opelt , Axel Pinz , Andrew Zisserman

ABSTRACT

The objective of this work is the detection of object classes, such as airplanes or horses. Instead of using a model based on salient image fragments, we show that object class detection is also possible using only the object’s boundary. To this end, we develop a novel learning technique to extract class-discriminative boundary fragments. In addition to their shape, these “codebook” entries also determine the object’s centroid (in the manner of Leibe et al. [19]). Boosting is used to select discriminative combinations of boundary fragments (weak detectors) to form a strong “Boundary-Fragment-Model” (BFM) detector. The generative aspect of the model is used to determine an approximate segmentation. We demonstrate the following results: (i) the BFM detector is able to represent and detect object classes principally defined by their shape, rather than their appearance; and (ii) in comparison with other published results on several object classes (airplanes, cars-rear, cows) the BFM detector is able to exceed previous performances, and to achieve this with less supervision (such as the number of training images).
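The abstract's core mechanism (codebook boundary fragments that also store an offset to the object's centroid, combined by boosting into a strong detector) follows the centroid-voting idea of Leibe et al. The Python snippet below is a toy sketch of that voting step only, not the authors' implementation; the match positions, offsets, and accumulator size are hypothetical.

import numpy as np

def vote_for_centroids(matches, vote_map_shape):
    """Accumulate centroid votes.

    matches: list of ((x, y) fragment match position,
                      (dx, dy) centroid offset stored with that codebook entry).
    """
    votes = np.zeros(vote_map_shape, dtype=float)
    for (mx, my), (dx, dy) in matches:
        cx, cy = int(mx + dx), int(my + dy)
        if 0 <= cx < vote_map_shape[0] and 0 <= cy < vote_map_shape[1]:
            votes[cx, cy] += 1.0  # each matched fragment casts one vote
    return votes

# Hypothetical fragment matches; in the BFM these would come from matching
# learned boundary fragments against edges in the test image.
matches = [((40, 50), (10, 5)), ((60, 48), (-10, 7)), ((52, 70), (-2, -15))]
vote_map = vote_for_centroids(matches, (128, 128))
peak = np.unravel_index(np.argmax(vote_map), vote_map.shape)
print("strongest centroid hypothesis at", peak)

A peak in the vote map marks a candidate object centroid; the fragments that voted for it trace an approximate object boundary, which is what lets the model produce the rough segmentation mentioned in the abstract.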

PAGES

575-588

References to SciGraph publications

Book

TITLE

Computer Vision – ECCV 2006

ISBN

978-3-540-33834-5
978-3-540-33835-2

Author Affiliations

Andreas Opelt, Axel Pinz: Vision-based Measurement Group, Inst. of El. Measurement and Meas. Sign. Proc., University of Technology, Graz, Austria

Andrew Zisserman: Visual Geometry Group, Department of Engineering Science, University of Oxford, UK

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/11744047_44

DOI

http://dx.doi.org/10.1007/11744047_44

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1021318481


Indexing Status: Check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: Browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "name": [
            "Vision-based Measurement Group, Inst. of El. Measurement and Meas. Sign. Proc., University of Technology, Graz, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Opelt", 
        "givenName": "Andreas", 
        "id": "sg:person.013624034621.75", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013624034621.75"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "name": [
            "Vision-based Measurement Group, Inst. of El. Measurement and Meas. Sign. Proc., University of Technology, Graz, Austria"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Pinz", 
        "givenName": "Axel", 
        "id": "sg:person.012033065653.49", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012033065653.49"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of Oxford", 
          "id": "https://www.grid.ac/institutes/grid.4991.5", 
          "name": [
            "Visual Geometry Group, Department of Engineering Science, University of Oxford, UK"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Zisserman", 
        "givenName": "Andrew", 
        "id": "sg:person.012270111307.09", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012270111307.09"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/978-3-540-24670-1_19", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1003570178", 
          "https://doi.org/10.1007/978-3-540-24670-1_19"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-540-24670-1_19", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1003570178", 
          "https://doi.org/10.1007/978-3-540-24670-1_19"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1006/jcss.1997.1504", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1004338842"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1023/b:visi.0000042934.15159.49", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1022591804", 
          "https://doi.org/10.1023/b:visi.0000042934.15159.49"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/s0262-8856(02)00047-1", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1030223325"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-540-28649-3_18", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1040710754", 
          "https://doi.org/10.1007/978-3-540-28649-3_18"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-540-28649-3_18", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1040710754", 
          "https://doi.org/10.1007/978-3-540-28649-3_18"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-540-24671-8_6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1046580260", 
          "https://doi.org/10.1007/978-3-540-24671-8_6"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-540-24671-8_6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1046580260", 
          "https://doi.org/10.1007/978-3-540-24671-8_6"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-540-24671-8_41", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1047961023", 
          "https://doi.org/10.1007/978-3-540-24671-8_41"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-540-24671-8_41", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1047961023", 
          "https://doi.org/10.1007/978-3-540-24671-8_41"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.1000236", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061155588"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.391389", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061156201"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.9107", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061157228"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2004.108", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061742623"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.251", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093426548"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2003.1211479", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093624919"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2003.1238356", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093732183"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.329", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093867537"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.329", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093867537"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2005.63", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093988741"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2005.77", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094132829"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/icpr.2004.1334079", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094584594"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.156", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094962249"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2004.1315149", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094991845"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.47", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095529215"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.270", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095574812"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.270", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095574812"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2005.250", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095759017"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.5244/c.13.21", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1099368192"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.5244/c.18.81", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1099382665"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.5244/c.17.78", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1099383029"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2006", 
    "datePublishedReg": "2006-01-01", 
    "description": "The objective of this work is the detection of object classes, such as airplanes or horses. Instead of using a model based on salient image fragments, we show that object class detection is also possible using only the object\u2019s boundary. To this end, we develop a novel learning technique to extract class-discriminative boundary fragments. In addition to their shape, these \u201ccodebook\u201d entries also determine the object\u2019s centroid (in the manner of Leibe et al. [19]). Boosting is used to select discriminative combinations of boundary fragments (weak detectors) to form a strong \u201cBoundary-Fragment-Model\u201d (BFM) detector. The generative aspect of the model is used to determine an approximate segmentation. We demonstrate the following results: (i) the BFM detector is able to represent and detect object classes principally defined by their shape, rather than their appearance; and (ii) in comparison with other published results on several object classes (airplanes, cars-rear, cows) the BFM detector is able to exceed previous performances, and to achieve this with less supervision (such as the number of training images).", 
    "editor": [
      {
        "familyName": "Leonardis", 
        "givenName": "Ale\u0161", 
        "type": "Person"
      }, 
      {
        "familyName": "Bischof", 
        "givenName": "Horst", 
        "type": "Person"
      }, 
      {
        "familyName": "Pinz", 
        "givenName": "Axel", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/11744047_44", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-540-33834-5", 
        "978-3-540-33835-2"
      ], 
      "name": "Computer Vision \u2013 ECCV 2006", 
      "type": "Book"
    }, 
    "name": "A Boundary-Fragment-Model for Object Detection", 
    "pagination": "575-588", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1021318481"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/11744047_44"
        ]
      }, 
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "659255586cea429db09ae301ce090aef38ddccfa5da9f4ee6e2af9d0f74ef6dd"
        ]
      }
    ], 
    "publisher": {
      "location": "Berlin, Heidelberg", 
      "name": "Springer Berlin Heidelberg", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/11744047_44", 
      "https://app.dimensions.ai/details/publication/pub.1021318481"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2019-04-16T07:31", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000356_0000000356/records_57886_00000000.jsonl", 
    "type": "Chapter", 
    "url": "https://link.springer.com/10.1007%2F11744047_44"
  }
]
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/11744047_44'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/11744047_44'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/11744047_44'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/11744047_44'
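Any HTTP client can use the same content negotiation as the curl commands above. The following Python sketch (standard library only) requests this record as JSON-LD and reads a few fields; the endpoint is the one used in the curl examples, and the field names mirror the JSON-LD record shown earlier on this page.

import json
import urllib.request

# Fetch the record as JSON-LD via content negotiation, as in the curl
# example above, then pull out the title, authors, and DOI.
url = "https://scigraph.springernature.com/pub.10.1007/11744047_44"
req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})
with urllib.request.urlopen(req) as resp:
    records = json.loads(resp.read().decode("utf-8"))

record = records[0]  # the payload is a JSON array containing one record
print("Title:  ", record["name"])
print("Authors:", ", ".join(f'{a["givenName"]} {a["familyName"]}'
                            for a in record["author"]))
print("DOI:    ", next(p["value"][0] for p in record["productId"]
                       if p["name"] == "doi"))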


 

This table displays all metadata directly associated with this object as RDF triples.

176 TRIPLES      23 PREDICATES      53 URIs      20 LITERALS      8 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/11744047_44 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N8ab20684fcad4911b35296037152eabc
4 schema:citation sg:pub.10.1007/978-3-540-24670-1_19
5 sg:pub.10.1007/978-3-540-24671-8_41
6 sg:pub.10.1007/978-3-540-24671-8_6
7 sg:pub.10.1007/978-3-540-28649-3_18
8 sg:pub.10.1023/b:visi.0000042934.15159.49
9 https://doi.org/10.1006/jcss.1997.1504
10 https://doi.org/10.1016/s0262-8856(02)00047-1
11 https://doi.org/10.1109/34.1000236
12 https://doi.org/10.1109/34.391389
13 https://doi.org/10.1109/34.9107
14 https://doi.org/10.1109/cvpr.2003.1211479
15 https://doi.org/10.1109/cvpr.2004.1315149
16 https://doi.org/10.1109/cvpr.2005.156
17 https://doi.org/10.1109/cvpr.2005.250
18 https://doi.org/10.1109/cvpr.2005.251
19 https://doi.org/10.1109/cvpr.2005.270
20 https://doi.org/10.1109/cvpr.2005.329
21 https://doi.org/10.1109/cvpr.2005.47
22 https://doi.org/10.1109/iccv.2003.1238356
23 https://doi.org/10.1109/iccv.2005.63
24 https://doi.org/10.1109/iccv.2005.77
25 https://doi.org/10.1109/icpr.2004.1334079
26 https://doi.org/10.1109/tpami.2004.108
27 https://doi.org/10.5244/c.13.21
28 https://doi.org/10.5244/c.17.78
29 https://doi.org/10.5244/c.18.81
30 schema:datePublished 2006
31 schema:datePublishedReg 2006-01-01
32 schema:description The objective of this work is the detection of object classes, such as airplanes or horses. Instead of using a model based on salient image fragments, we show that object class detection is also possible using only the object’s boundary. To this end, we develop a novel learning technique to extract class-discriminative boundary fragments. In addition to their shape, these “codebook” entries also determine the object’s centroid (in the manner of Leibe et al. [19]). Boosting is used to select discriminative combinations of boundary fragments (weak detectors) to form a strong “Boundary-Fragment-Model” (BFM) detector. The generative aspect of the model is used to determine an approximate segmentation. We demonstrate the following results: (i) the BFM detector is able to represent and detect object classes principally defined by their shape, rather than their appearance; and (ii) in comparison with other published results on several object classes (airplanes, cars-rear, cows) the BFM detector is able to exceed previous performances, and to achieve this with less supervision (such as the number of training images).
33 schema:editor Nf9893b135c9e4be18179407b30121168
34 schema:genre chapter
35 schema:inLanguage en
36 schema:isAccessibleForFree true
37 schema:isPartOf Ne717be6a847f4e5090b652d0b04b3b05
38 schema:name A Boundary-Fragment-Model for Object Detection
39 schema:pagination 575-588
40 schema:productId N5e5567b4d08a463390626688ef88b385
41 N62b46731a2f34274ab2eba736c28333c
42 N6c05f37280604310963fa6b59c823dd2
43 schema:publisher N1a1e5c5d67bc46caa6278645fb2db488
44 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021318481
45 https://doi.org/10.1007/11744047_44
46 schema:sdDatePublished 2019-04-16T07:31
47 schema:sdLicense https://scigraph.springernature.com/explorer/license/
48 schema:sdPublisher Ndcf9fe6532b44871954cc4407cad08d9
49 schema:url https://link.springer.com/10.1007%2F11744047_44
50 sgo:license sg:explorer/license/
51 sgo:sdDataset chapters
52 rdf:type schema:Chapter
53 N1a1e5c5d67bc46caa6278645fb2db488 schema:location Berlin, Heidelberg
54 schema:name Springer Berlin Heidelberg
55 rdf:type schema:Organisation
56 N1ee6c8f09f974801a0444516cf6e7fd3 schema:name Vision-based Measurement Group, Inst. of El. Measurement and Meas. Sign. Proc., University of Technology, Graz, Austria
57 rdf:type schema:Organization
58 N4c08009738044b09a860d4268ca2568d rdf:first sg:person.012033065653.49
59 rdf:rest Na766973393d44bfcb4d31cc1beca2e2c
60 N5e5567b4d08a463390626688ef88b385 schema:name readcube_id
61 schema:value 659255586cea429db09ae301ce090aef38ddccfa5da9f4ee6e2af9d0f74ef6dd
62 rdf:type schema:PropertyValue
63 N62b46731a2f34274ab2eba736c28333c schema:name doi
64 schema:value 10.1007/11744047_44
65 rdf:type schema:PropertyValue
66 N6c05f37280604310963fa6b59c823dd2 schema:name dimensions_id
67 schema:value pub.1021318481
68 rdf:type schema:PropertyValue
69 N79b22b98c16a44eebd03e72e6acc9c2e rdf:first Nd007231f950d46c3a0932239271e55a8
70 rdf:rest rdf:nil
71 N7fb41bfb1d9f4260a5c1e54ef35c14a9 schema:familyName Leonardis
72 schema:givenName Aleš
73 rdf:type schema:Person
74 N8ab20684fcad4911b35296037152eabc rdf:first sg:person.013624034621.75
75 rdf:rest N4c08009738044b09a860d4268ca2568d
76 N94b599f512094872bbe97667480e42b4 schema:familyName Bischof
77 schema:givenName Horst
78 rdf:type schema:Person
79 Na766973393d44bfcb4d31cc1beca2e2c rdf:first sg:person.012270111307.09
80 rdf:rest rdf:nil
81 Nabe5b4937dd748409dd74f445ab3a18e rdf:first N94b599f512094872bbe97667480e42b4
82 rdf:rest N79b22b98c16a44eebd03e72e6acc9c2e
83 Nd007231f950d46c3a0932239271e55a8 schema:familyName Pinz
84 schema:givenName Axel
85 rdf:type schema:Person
86 Ndcf9fe6532b44871954cc4407cad08d9 schema:name Springer Nature - SN SciGraph project
87 rdf:type schema:Organization
88 Ne717be6a847f4e5090b652d0b04b3b05 schema:isbn 978-3-540-33834-5
89 978-3-540-33835-2
90 schema:name Computer Vision – ECCV 2006
91 rdf:type schema:Book
92 Ne71a3887fd9548cc803eec7847332d75 schema:name Vision-based Measurement Group, Inst. of El. Measurement and Meas. Sign. Proc., University of Technology, Graz, Austria
93 rdf:type schema:Organization
94 Nf9893b135c9e4be18179407b30121168 rdf:first N7fb41bfb1d9f4260a5c1e54ef35c14a9
95 rdf:rest Nabe5b4937dd748409dd74f445ab3a18e
96 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
97 schema:name Information and Computing Sciences
98 rdf:type schema:DefinedTerm
99 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
100 schema:name Artificial Intelligence and Image Processing
101 rdf:type schema:DefinedTerm
102 sg:person.012033065653.49 schema:affiliation Ne71a3887fd9548cc803eec7847332d75
103 schema:familyName Pinz
104 schema:givenName Axel
105 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012033065653.49
106 rdf:type schema:Person
107 sg:person.012270111307.09 schema:affiliation https://www.grid.ac/institutes/grid.4991.5
108 schema:familyName Zisserman
109 schema:givenName Andrew
110 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012270111307.09
111 rdf:type schema:Person
112 sg:person.013624034621.75 schema:affiliation N1ee6c8f09f974801a0444516cf6e7fd3
113 schema:familyName Opelt
114 schema:givenName Andreas
115 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013624034621.75
116 rdf:type schema:Person
117 sg:pub.10.1007/978-3-540-24670-1_19 schema:sameAs https://app.dimensions.ai/details/publication/pub.1003570178
118 https://doi.org/10.1007/978-3-540-24670-1_19
119 rdf:type schema:CreativeWork
120 sg:pub.10.1007/978-3-540-24671-8_41 schema:sameAs https://app.dimensions.ai/details/publication/pub.1047961023
121 https://doi.org/10.1007/978-3-540-24671-8_41
122 rdf:type schema:CreativeWork
123 sg:pub.10.1007/978-3-540-24671-8_6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1046580260
124 https://doi.org/10.1007/978-3-540-24671-8_6
125 rdf:type schema:CreativeWork
126 sg:pub.10.1007/978-3-540-28649-3_18 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040710754
127 https://doi.org/10.1007/978-3-540-28649-3_18
128 rdf:type schema:CreativeWork
129 sg:pub.10.1023/b:visi.0000042934.15159.49 schema:sameAs https://app.dimensions.ai/details/publication/pub.1022591804
130 https://doi.org/10.1023/b:visi.0000042934.15159.49
131 rdf:type schema:CreativeWork
132 https://doi.org/10.1006/jcss.1997.1504 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004338842
133 rdf:type schema:CreativeWork
134 https://doi.org/10.1016/s0262-8856(02)00047-1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030223325
135 rdf:type schema:CreativeWork
136 https://doi.org/10.1109/34.1000236 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061155588
137 rdf:type schema:CreativeWork
138 https://doi.org/10.1109/34.391389 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156201
139 rdf:type schema:CreativeWork
140 https://doi.org/10.1109/34.9107 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061157228
141 rdf:type schema:CreativeWork
142 https://doi.org/10.1109/cvpr.2003.1211479 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093624919
143 rdf:type schema:CreativeWork
144 https://doi.org/10.1109/cvpr.2004.1315149 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094991845
145 rdf:type schema:CreativeWork
146 https://doi.org/10.1109/cvpr.2005.156 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094962249
147 rdf:type schema:CreativeWork
148 https://doi.org/10.1109/cvpr.2005.250 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095759017
149 rdf:type schema:CreativeWork
150 https://doi.org/10.1109/cvpr.2005.251 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093426548
151 rdf:type schema:CreativeWork
152 https://doi.org/10.1109/cvpr.2005.270 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095574812
153 rdf:type schema:CreativeWork
154 https://doi.org/10.1109/cvpr.2005.329 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093867537
155 rdf:type schema:CreativeWork
156 https://doi.org/10.1109/cvpr.2005.47 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095529215
157 rdf:type schema:CreativeWork
158 https://doi.org/10.1109/iccv.2003.1238356 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093732183
159 rdf:type schema:CreativeWork
160 https://doi.org/10.1109/iccv.2005.63 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093988741
161 rdf:type schema:CreativeWork
162 https://doi.org/10.1109/iccv.2005.77 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094132829
163 rdf:type schema:CreativeWork
164 https://doi.org/10.1109/icpr.2004.1334079 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094584594
165 rdf:type schema:CreativeWork
166 https://doi.org/10.1109/tpami.2004.108 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061742623
167 rdf:type schema:CreativeWork
168 https://doi.org/10.5244/c.13.21 schema:sameAs https://app.dimensions.ai/details/publication/pub.1099368192
169 rdf:type schema:CreativeWork
170 https://doi.org/10.5244/c.17.78 schema:sameAs https://app.dimensions.ai/details/publication/pub.1099383029
171 rdf:type schema:CreativeWork
172 https://doi.org/10.5244/c.18.81 schema:sameAs https://app.dimensions.ai/details/publication/pub.1099382665
173 rdf:type schema:CreativeWork
174 https://www.grid.ac/institutes/grid.4991.5 schema:alternateName University of Oxford
175 schema:name Visual Geometry Group, Department of Engineering Science, University of Oxford, UK
176 rdf:type schema:Organization
 



