The Pascal Visual Object Classes (VOC) Challenge


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2009-09-09

AUTHORS

Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, Andrew Zisserman

ABSTRACT

The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset have become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three-year history of the challenge, and proposes directions for future improvement and extension.
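The standard evaluation procedure referred to here scores detections by bounding-box overlap: a predicted box counts as correct when its intersection-over-union (IoU) with a ground-truth box exceeds 0.5, and per-class results are summarised by average precision. Below is a minimal, illustrative Python sketch of that overlap test (a hypothetical helper, not the organisers' evaluation code):

    def iou(box_a, box_b):
        """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
                 + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # VOC counts a detection as a true positive when IoU exceeds 0.5:
    print(iou((0, 0, 10, 10), (2, 0, 12, 10)) > 0.5)  # 80/120 = 0.67 -> True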

PAGES

303-338

References to SciGraph publications

  • 2004-11. Distinctive Image Features from Scale-Invariant Keypoints in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2006. The 2005 PASCAL Visual Object Classes Challenge in MACHINE LEARNING CHALLENGES. EVALUATING PREDICTIVE UNCERTAINTY, VISUAL OBJECT CLASSIFICATION, AND RECOGNISING TECTUAL ENTAILMENT
  • 2006-09-25. Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2007-10-31. LabelMe: A Database and Web-Based Tool for Image Annotation in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2008. Some Objects Are More Equal Than Others: Measuring and Predicting Importance in COMPUTER VISION – ECCV 2008
  • 2006. Relay Boost Fusion for Learning Rare Concepts in Multimedia in IMAGE AND VIDEO RETRIEVAL
  • 2006. Learning of Graphical Models and Efficient Inference for Object Class Recognition in PATTERN RECOGNITION
  • 2006. TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-class Object Recognition and Segmentation in COMPUTER VISION – ECCV 2006
  • 2002-04. A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2003-07. Contextual Priming for Object Detection in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2006-07-01. Weakly Supervised Scale-Invariant Learning of Models for Visual Recognition in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2004-05. Robust Real-Time Face Detection in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2006. Coloring Local Feature Extraction in COMPUTER VISION – ECCV 2006
  • 2007-08-09. Describing Visual Scenes Using Transformed Objects and Parts in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2008-01-01. Techniques for Image Classification, Object Detection and Object Segmentation in VISUAL INFORMATION SYSTEMS. WEB-BASED VISUAL INFORMATION SEARCH AND MANAGEMENT
  • 2007-01-01. Introduction to a Large-Scale General Purpose Ground Truth Database: Methodology, Annotation Tool and Benchmarks in ENERGY MINIMIZATION METHODS IN COMPUTER VISION AND PATTERN RECOGNITION
  • 2002-04-29. Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary in COMPUTER VISION — ECCV 2002

    Related Patents

  • Systems And Methods For End-To-End Object Detection
  • Room Layout Estimation Methods And Techniques
  • Semantically-Relevant Discovery Of Solutions
  • Detector Evolution With Multi-Order Contextual Co-Occurrence
  • Systems And Methods For Performing Three-Dimensional Semantic Parsing Of Indoor Spaces
  • Image Segmenting Apparatus And Method
  • A Computer-Implemented Method And System For Detecting Small Objects On An Image Using Convolutional Neural Networks
  • Self-Learning Object Detectors For Unlabeled Videos Using Multi-Task Learning
  • Image Segmentation Method, Apparatus, And Fully Convolutional Network System
  • Personalized Neural Network For Eye Tracking
  • Detailed Eye Shape Model For Robust Biometric Applications
  • Synthesizing Training Samples For Object Recognition
  • Optimal Multi-Class Classifier Threshold-Offset Estimation With Particle Swarm Optimization For Visual Object Recognition
  • Object Detection Method Using Cnn Model And Object Detection Apparatus Using The Same
  • Efficient Data Layouts For Convolutional Neural Networks
  • Utilizing Deep Learning For Automatic Digital Image Segmentation And Stylization
  • Augmented Reality Display Device With Deep Learning Sensors
  • Training A Neural Network To Detect Objects In Images
  • Methods And Systems For Diagnosing And Treating Higher Order Refractive Aberrations Of An Eye
  • Methods And Systems For Diagnosing And Treating Presbyopia
  • Systems And Methods For The Distributed Categorization Of Source Data
  • Systems For Performing Semantic Segmentation And Methods Thereof
  • Augmented Reality Display Device With Deep Learning Sensors
  • Object Detection Using Deep Neural Networks
  • Training A Neural Network To Detect Objects In Images
  • Weighting Scheme For Pooling Image Descriptors
  • Augmented Reality Systems And Methods With Variable Focus Lens Elements
  • Discovery Of Semantic Similarities Between Images And Text
  • Resource-Aware Computer Vision
  • Gesture Recognition Using Multi-Sensory Data
  • Deep Learning System For Cuboid Detection
  • Object Location Determination In Frames Of A Video Stream
  • Online Domain Adaptation For Multi-Object Tracking
  • Multi-Object Tracking With Generic Object Proposals
  • Deep Neural Network For Iris Identification
  • Neural Network For Eye Image Segmentation And Image Quality Estimation
  • Video Redaction Method And System
  • Training A Neural Network With Representations Of User Interface Devices
  • Structure Defect Detection Using Machine Learning Algorithms
  • Iris Boundary Estimation Using Cornea Curvature
  • Object Location Determination In Frames Of A Video Stream
  • Systems And Methods For Matching Visual Object Components
  • System And Method For Designing Efficient Super Resolution Deep Convolutional Neural Networks By Cascade Network Training, Cascade Network Trimming, And Dilated Convolutions
  • Utilizing Deep Learning For Automatic Digital Image Segmentation And Stylization
  • Automated Classification And Taxonomy Of 3d Teeth Data Using Deep Learning Methods
  • Methods And Systems For Diagnosing And Treating Presbyopia
  • Methods And Systems For Providing Wavefront Corrections For Treating Conditions Including Myopia, Hyperopia, And/Or Astigmatism
  • Motion Deblurring Using Neural Network Architectures
    Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11263-009-0275-4

    DOI

    http://dx.doi.org/10.1007/s11263-009-0275-4

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1014796149


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "University of Leeds, Leeds, UK", 
              "id": "http://www.grid.ac/institutes/grid.9909.9", 
              "name": [
                "University of Leeds, Leeds, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Everingham", 
            "givenName": "Mark", 
            "id": "sg:person.012363753235.37", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012363753235.37"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "KU Leuven, Leuven, Belgium", 
              "id": "http://www.grid.ac/institutes/grid.5596.f", 
              "name": [
                "KU Leuven, Leuven, Belgium"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Van Gool", 
            "givenName": "Luc", 
            "id": "sg:person.0616213477.88", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0616213477.88"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Edinburgh, Edinburgh, UK", 
              "id": "http://www.grid.ac/institutes/grid.4305.2", 
              "name": [
                "University of Edinburgh, Edinburgh, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Williams", 
            "givenName": "Christopher K. I.", 
            "id": "sg:person.07643001615.73", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07643001615.73"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Microsoft Research, Cambridge, UK", 
              "id": "http://www.grid.ac/institutes/grid.24488.32", 
              "name": [
                "Microsoft Research, Cambridge, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Winn", 
            "givenName": "John", 
            "id": "sg:person.01221574626.63", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01221574626.63"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Oxford, Oxford, UK", 
              "id": "http://www.grid.ac/institutes/grid.4991.5", 
              "name": [
                "University of Oxford, Oxford, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Zisserman", 
            "givenName": "Andrew", 
            "id": "sg:person.012270111307.09", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012270111307.09"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/11744023_1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017544873", 
              "https://doi.org/10.1007/11744023_1"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/11861898_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1019052814", 
              "https://doi.org/10.1007/11861898_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-007-0069-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1002609657", 
              "https://doi.org/10.1007/s11263-007-0069-5"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-85891-1_26", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1044481442", 
              "https://doi.org/10.1007/978-3-540-85891-1_26"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/11788034_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1000100324", 
              "https://doi.org/10.1007/11788034_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-006-8707-x", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1029040038", 
              "https://doi.org/10.1007/s11263-006-8707-x"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-88682-2_40", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1025274003", 
              "https://doi.org/10.1007/978-3-540-88682-2_40"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/3-540-47979-1_7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1040055518", 
              "https://doi.org/10.1007/3-540-47979-1_7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-540-74198-5_14", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1038699157", 
              "https://doi.org/10.1007/978-3-540-74198-5_14"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-007-0090-8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1027534025", 
              "https://doi.org/10.1007/s11263-007-0090-8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/b:visi.0000029664.99615.94", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1052687286", 
              "https://doi.org/10.1023/b:visi.0000029664.99615.94"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/b:visi.0000013087.49260.fb", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1001944608", 
              "https://doi.org/10.1023/b:visi.0000013087.49260.fb"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-006-9794-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1008205152", 
              "https://doi.org/10.1007/s11263-006-9794-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/11744047_26", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1032724317", 
              "https://doi.org/10.1007/11744047_26"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/11736790_8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021631860", 
              "https://doi.org/10.1007/11736790_8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/a:1014573219977", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1004426816", 
              "https://doi.org/10.1023/a:1014573219977"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1023/a:1023052124951", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1015899908", 
              "https://doi.org/10.1023/a:1023052124951"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2009-09-09", 
        "datePublishedReg": "2009-09-09", 
        "description": "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11263-009-0275-4", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1032807", 
            "issn": [
              "0920-5691", 
              "1573-1405"
            ], 
            "name": "International Journal of Computer Vision", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "2", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "88"
          }
        ], 
        "keywords": [
          "PASCAL Visual Object Classes Challenge", 
          "object category recognition", 
          "object detection", 
          "standard datasets", 
          "category recognition", 
          "datasets", 
          "benchmarks", 
          "images", 
          "evaluation procedure", 
          "challenges", 
          "detection", 
          "annotation", 
          "machine", 
          "future improvements", 
          "standard evaluation procedure", 
          "vision", 
          "classification", 
          "recognition", 
          "method", 
          "art", 
          "extension", 
          "improvement", 
          "lessons", 
          "confuse", 
          "procedure", 
          "community", 
          "direction", 
          "state", 
          "year history", 
          "history", 
          "paper", 
          "Visual Object Classes (VOC) challenge", 
          "Object Classes (VOC) challenge", 
          "Classes (VOC) challenge", 
          "visual object category recognition"
        ], 
        "name": "The Pascal Visual Object Classes (VOC) Challenge", 
        "pagination": "303-338", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1014796149"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11263-009-0275-4"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11263-009-0275-4", 
          "https://app.dimensions.ai/details/publication/pub.1014796149"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2021-12-01T19:22", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20211201/entities/gbq_results/article/article_498.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11263-009-0275-4"
      }
    ]
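    Because the record above is plain JSON-LD, it can be consumed with an ordinary JSON parser. A minimal sketch, assuming the record has been saved locally as record.json (a hypothetical filename):

    import json

    # Load the JSON-LD record shown above (saved locally as record.json).
    with open("record.json") as f:
        record = json.load(f)[0]  # the payload is a one-element array

    print(record["name"])           # article title
    print(record["datePublished"])  # 2009-09-09
    for author in record["author"]:
        print(author["givenName"], author["familyName"])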
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-009-0275-4'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-009-0275-4'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-009-0275-4'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-009-0275-4'
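
    The same content negotiation works from any HTTP client. For example, a minimal Python sketch (assuming the requests library is installed) that fetches the JSON-LD representation:

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1007/s11263-009-0275-4"

    # Request JSON-LD via the Accept header, exactly as in the curl example above.
    resp = requests.get(URL, headers={"Accept": "application/ld+json"})
    resp.raise_for_status()
    record = resp.json()[0]  # the payload is a one-element JSON-LD array
    print(record["name"])    # The Pascal Visual Object Classes (VOC) Challenge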


     

    This table displays all metadata directly associated with this object as RDF triples.

    201 TRIPLES      22 PREDICATES      77 URIs      52 LITERALS      6 BLANK NODES
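
    The summary above can be reproduced programmatically. A minimal sketch, assuming the rdflib library and using the N-Triples endpoint shown earlier:

    import urllib.request
    from rdflib import Graph

    URL = "https://scigraph.springernature.com/pub.10.1007/s11263-009-0275-4"

    # Fetch the N-Triples serialisation via content negotiation.
    req = urllib.request.Request(URL, headers={"Accept": "application/n-triples"})
    nt_data = urllib.request.urlopen(req).read().decode("utf-8")

    g = Graph()
    g.parse(data=nt_data, format="nt")

    # Recompute the counts shown in the summary line above.
    print(len(g), "triples")
    print(len({p for _, p, _ in g}), "distinct predicates")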

    Subject Predicate Object
    1 sg:pub.10.1007/s11263-009-0275-4 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nb0413794a3fb492aa08419e89360f331
    4 schema:citation sg:pub.10.1007/11736790_8
    5 sg:pub.10.1007/11744023_1
    6 sg:pub.10.1007/11744047_26
    7 sg:pub.10.1007/11788034_28
    8 sg:pub.10.1007/11861898_28
    9 sg:pub.10.1007/3-540-47979-1_7
    10 sg:pub.10.1007/978-3-540-74198-5_14
    11 sg:pub.10.1007/978-3-540-85891-1_26
    12 sg:pub.10.1007/978-3-540-88682-2_40
    13 sg:pub.10.1007/s11263-006-8707-x
    14 sg:pub.10.1007/s11263-006-9794-4
    15 sg:pub.10.1007/s11263-007-0069-5
    16 sg:pub.10.1007/s11263-007-0090-8
    17 sg:pub.10.1023/a:1014573219977
    18 sg:pub.10.1023/a:1023052124951
    19 sg:pub.10.1023/b:visi.0000013087.49260.fb
    20 sg:pub.10.1023/b:visi.0000029664.99615.94
    21 schema:datePublished 2009-09-09
    22 schema:datePublishedReg 2009-09-09
    23 schema:description The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.
    24 schema:genre article
    25 schema:inLanguage en
    26 schema:isAccessibleForFree true
    27 schema:isPartOf Naaded00dc6bd42018f9f324ed894a8ef
    28 Nf4ebf6b279b34f79b52f2bdd263ce3db
    29 sg:journal.1032807
    30 schema:keywords Classes (VOC) challenge
    31 Object Classes (VOC) challenge
    32 PASCAL Visual Object Classes Challenge
    33 Visual Object Classes (VOC) challenge
    34 annotation
    35 art
    36 benchmarks
    37 category recognition
    38 challenges
    39 classification
    40 community
    41 confuse
    42 datasets
    43 detection
    44 direction
    45 evaluation procedure
    46 extension
    47 future improvements
    48 history
    49 images
    50 improvement
    51 lessons
    52 machine
    53 method
    54 object category recognition
    55 object detection
    56 paper
    57 procedure
    58 recognition
    59 standard datasets
    60 standard evaluation procedure
    61 state
    62 vision
    63 visual object category recognition
    64 year history
    65 schema:name The Pascal Visual Object Classes (VOC) Challenge
    66 schema:pagination 303-338
    67 schema:productId N8ef27badeaca4cc2bbf1b9c9d13592c1
    68 Nc66892dacc8140c2a5c912087d021a9b
    69 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014796149
    70 https://doi.org/10.1007/s11263-009-0275-4
    71 schema:sdDatePublished 2021-12-01T19:22
    72 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    73 schema:sdPublisher N2d161122eed247e9a5af60cb95850503
    74 schema:url https://doi.org/10.1007/s11263-009-0275-4
    75 sgo:license sg:explorer/license/
    76 sgo:sdDataset articles
    77 rdf:type schema:ScholarlyArticle
    78 N0c415f8305884a5bb6f5b31954d95f5d rdf:first sg:person.0616213477.88
    79 rdf:rest N82a87a0c410e42649fff7d0b84e86aa1
    80 N2d161122eed247e9a5af60cb95850503 schema:name Springer Nature - SN SciGraph project
    81 rdf:type schema:Organization
    82 N43248a37e144474a91dad673e66e19ca rdf:first sg:person.01221574626.63
    83 rdf:rest N538bbbeb027743fd86c2f88f88c8e969
    84 N538bbbeb027743fd86c2f88f88c8e969 rdf:first sg:person.012270111307.09
    85 rdf:rest rdf:nil
    86 N82a87a0c410e42649fff7d0b84e86aa1 rdf:first sg:person.07643001615.73
    87 rdf:rest N43248a37e144474a91dad673e66e19ca
    88 N8ef27badeaca4cc2bbf1b9c9d13592c1 schema:name doi
    89 schema:value 10.1007/s11263-009-0275-4
    90 rdf:type schema:PropertyValue
    91 Naaded00dc6bd42018f9f324ed894a8ef schema:issueNumber 2
    92 rdf:type schema:PublicationIssue
    93 Nb0413794a3fb492aa08419e89360f331 rdf:first sg:person.012363753235.37
    94 rdf:rest N0c415f8305884a5bb6f5b31954d95f5d
    95 Nc66892dacc8140c2a5c912087d021a9b schema:name dimensions_id
    96 schema:value pub.1014796149
    97 rdf:type schema:PropertyValue
    98 Nf4ebf6b279b34f79b52f2bdd263ce3db schema:volumeNumber 88
    99 rdf:type schema:PublicationVolume
    100 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    101 schema:name Information and Computing Sciences
    102 rdf:type schema:DefinedTerm
    103 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    104 schema:name Artificial Intelligence and Image Processing
    105 rdf:type schema:DefinedTerm
    106 sg:journal.1032807 schema:issn 0920-5691
    107 1573-1405
    108 schema:name International Journal of Computer Vision
    109 schema:publisher Springer Nature
    110 rdf:type schema:Periodical
    111 sg:person.01221574626.63 schema:affiliation grid-institutes:grid.24488.32
    112 schema:familyName Winn
    113 schema:givenName John
    114 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01221574626.63
    115 rdf:type schema:Person
    116 sg:person.012270111307.09 schema:affiliation grid-institutes:grid.4991.5
    117 schema:familyName Zisserman
    118 schema:givenName Andrew
    119 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012270111307.09
    120 rdf:type schema:Person
    121 sg:person.012363753235.37 schema:affiliation grid-institutes:grid.9909.9
    122 schema:familyName Everingham
    123 schema:givenName Mark
    124 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012363753235.37
    125 rdf:type schema:Person
    126 sg:person.0616213477.88 schema:affiliation grid-institutes:grid.5596.f
    127 schema:familyName Van Gool
    128 schema:givenName Luc
    129 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0616213477.88
    130 rdf:type schema:Person
    131 sg:person.07643001615.73 schema:affiliation grid-institutes:grid.4305.2
    132 schema:familyName Williams
    133 schema:givenName Christopher K. I.
    134 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.07643001615.73
    135 rdf:type schema:Person
    136 sg:pub.10.1007/11736790_8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021631860
    137 https://doi.org/10.1007/11736790_8
    138 rdf:type schema:CreativeWork
    139 sg:pub.10.1007/11744023_1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017544873
    140 https://doi.org/10.1007/11744023_1
    141 rdf:type schema:CreativeWork
    142 sg:pub.10.1007/11744047_26 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032724317
    143 https://doi.org/10.1007/11744047_26
    144 rdf:type schema:CreativeWork
    145 sg:pub.10.1007/11788034_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1000100324
    146 https://doi.org/10.1007/11788034_28
    147 rdf:type schema:CreativeWork
    148 sg:pub.10.1007/11861898_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1019052814
    149 https://doi.org/10.1007/11861898_28
    150 rdf:type schema:CreativeWork
    151 sg:pub.10.1007/3-540-47979-1_7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040055518
    152 https://doi.org/10.1007/3-540-47979-1_7
    153 rdf:type schema:CreativeWork
    154 sg:pub.10.1007/978-3-540-74198-5_14 schema:sameAs https://app.dimensions.ai/details/publication/pub.1038699157
    155 https://doi.org/10.1007/978-3-540-74198-5_14
    156 rdf:type schema:CreativeWork
    157 sg:pub.10.1007/978-3-540-85891-1_26 schema:sameAs https://app.dimensions.ai/details/publication/pub.1044481442
    158 https://doi.org/10.1007/978-3-540-85891-1_26
    159 rdf:type schema:CreativeWork
    160 sg:pub.10.1007/978-3-540-88682-2_40 schema:sameAs https://app.dimensions.ai/details/publication/pub.1025274003
    161 https://doi.org/10.1007/978-3-540-88682-2_40
    162 rdf:type schema:CreativeWork
    163 sg:pub.10.1007/s11263-006-8707-x schema:sameAs https://app.dimensions.ai/details/publication/pub.1029040038
    164 https://doi.org/10.1007/s11263-006-8707-x
    165 rdf:type schema:CreativeWork
    166 sg:pub.10.1007/s11263-006-9794-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008205152
    167 https://doi.org/10.1007/s11263-006-9794-4
    168 rdf:type schema:CreativeWork
    169 sg:pub.10.1007/s11263-007-0069-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1002609657
    170 https://doi.org/10.1007/s11263-007-0069-5
    171 rdf:type schema:CreativeWork
    172 sg:pub.10.1007/s11263-007-0090-8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1027534025
    173 https://doi.org/10.1007/s11263-007-0090-8
    174 rdf:type schema:CreativeWork
    175 sg:pub.10.1023/a:1014573219977 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004426816
    176 https://doi.org/10.1023/a:1014573219977
    177 rdf:type schema:CreativeWork
    178 sg:pub.10.1023/a:1023052124951 schema:sameAs https://app.dimensions.ai/details/publication/pub.1015899908
    179 https://doi.org/10.1023/a:1023052124951
    180 rdf:type schema:CreativeWork
    181 sg:pub.10.1023/b:visi.0000013087.49260.fb schema:sameAs https://app.dimensions.ai/details/publication/pub.1001944608
    182 https://doi.org/10.1023/b:visi.0000013087.49260.fb
    183 rdf:type schema:CreativeWork
    184 sg:pub.10.1023/b:visi.0000029664.99615.94 schema:sameAs https://app.dimensions.ai/details/publication/pub.1052687286
    185 https://doi.org/10.1023/b:visi.0000029664.99615.94
    186 rdf:type schema:CreativeWork
    187 grid-institutes:grid.24488.32 schema:alternateName Microsoft Research, Cambridge, UK
    188 schema:name Microsoft Research, Cambridge, UK
    189 rdf:type schema:Organization
    190 grid-institutes:grid.4305.2 schema:alternateName University of Edinburgh, Edinburgh, UK
    191 schema:name University of Edinburgh, Edinburgh, UK
    192 rdf:type schema:Organization
    193 grid-institutes:grid.4991.5 schema:alternateName University of Oxford, Oxford, UK
    194 schema:name University of Oxford, Oxford, UK
    195 rdf:type schema:Organization
    196 grid-institutes:grid.5596.f schema:alternateName KU Leuven, Leuven, Belgium
    197 schema:name KU Leuven, Leuven, Belgium
    198 rdf:type schema:Organization
    199 grid-institutes:grid.9909.9 schema:alternateName University of Leeds, Leeds, UK
    200 schema:name University of Leeds, Leeds, UK
    201 rdf:type schema:Organization
     



