Recognizing Materials from Virtual Examples


Ontology type: schema:Chapter     


Chapter Info

DATE

2012

AUTHORS

Wenbin Li , Mario Fritz

ABSTRACT

Due to the strong impact of machine learning methods on visual recognition, performance on many perception tasks is driven by the availability of sufficient training data. A promising direction that has gained new relevance in recent years is the generation of virtual training examples by means of computer graphics methods, in order to provide richer training sets for recognition and detection on real data. Success stories of this paradigm have mostly been reported for the synthesis of shape features and 3D depth maps. We therefore investigate in this paper if and how appearance descriptors can be transferred from the virtual world to real examples. We study two popular appearance descriptors on the task of material categorization, as it is a purely appearance-driven task. Beyond this initial study, we also investigate different approaches to combining and adapting virtual and real data in order to bridge the gap between rendered and real data. Our study is carried out using a new database of virtual materials, VIPS, which complements the existing KTH-TIPS material database.

PAGES

345-358

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-642-33765-9_25

DOI

http://dx.doi.org/10.1007/978-3-642-33765-9_25

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1050815237


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google SDTT (a local alternative is sketched after the record below).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Max Planck Institute for Informatics, Saarbrucken, Germany", 
          "id": "http://www.grid.ac/institutes/grid.419528.3", 
          "name": [
            "Max Planck Institute for Informatics, Saarbrucken, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Li", 
        "givenName": "Wenbin", 
        "id": "sg:person.011262202211.32", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011262202211.32"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Max Planck Institute for Informatics, Saarbrucken, Germany", 
          "id": "http://www.grid.ac/institutes/grid.419528.3", 
          "name": [
            "Max Planck Institute for Informatics, Saarbrucken, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Fritz", 
        "givenName": "Mario", 
        "id": "sg:person.013361072755.17", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013361072755.17"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2012", 
    "datePublishedReg": "2012-01-01", 
    "description": "Due to the strong impact of machine learning methods on visual recognition, performance on many perception task is driven by the availability of sufficient training data. A promising direction which has gained new relevance in recent years is the generation of virtual training examples by means of computer graphics methods in order to provide richer training sets for recognition and detection on real data. Success stories of this paradigm have been mostly reported for the synthesis of shape features and 3D depth maps. Therefore we investigate in this paper if and how appearance descriptors can be transferred from the virtual world to real examples. We study two popular appearance descriptors on the task of material categorization as it is a pure appearance-driven task. Beyond this initial study, we also investigate different approach of combining and adapting virtual and real data in order to bridge the gap between rendered and real-data. Our study is carried out using a new database of virtual materials VIPS that complements the existing KTH-TIPS material database.", 
    "editor": [
      {
        "familyName": "Fitzgibbon", 
        "givenName": "Andrew", 
        "type": "Person"
      }, 
      {
        "familyName": "Lazebnik", 
        "givenName": "Svetlana", 
        "type": "Person"
      }, 
      {
        "familyName": "Perona", 
        "givenName": "Pietro", 
        "type": "Person"
      }, 
      {
        "familyName": "Sato", 
        "givenName": "Yoichi", 
        "type": "Person"
      }, 
      {
        "familyName": "Schmid", 
        "givenName": "Cordelia", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-642-33765-9_25", 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-642-33764-2", 
        "978-3-642-33765-9"
      ], 
      "name": "Computer Vision \u2013 ECCV 2012", 
      "type": "Book"
    }, 
    "keywords": [
      "appearance descriptors", 
      "sufficient training data", 
      "real data", 
      "computer graphics methods", 
      "rich training set", 
      "virtual examples", 
      "training examples", 
      "virtual world", 
      "depth map", 
      "training data", 
      "shape features", 
      "visual recognition", 
      "training set", 
      "material categorization", 
      "new database", 
      "material database", 
      "task", 
      "promising direction", 
      "real example", 
      "descriptors", 
      "perception task", 
      "recognition", 
      "different approaches", 
      "database", 
      "recent years", 
      "machine", 
      "graphic method", 
      "example", 
      "paradigm", 
      "success stories", 
      "data", 
      "set", 
      "categorization", 
      "order", 
      "method", 
      "performance", 
      "detection", 
      "maps", 
      "features", 
      "availability", 
      "world", 
      "new relevance", 
      "generation", 
      "initial study", 
      "strong impact", 
      "direction", 
      "means", 
      "gap", 
      "relevance", 
      "impact", 
      "story", 
      "study", 
      "VIP", 
      "years", 
      "synthesis", 
      "materials", 
      "paper", 
      "approach"
    ], 
    "name": "Recognizing Materials from Virtual Examples", 
    "pagination": "345-358", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1050815237"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-642-33765-9_25"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-642-33765-9_25", 
      "https://app.dimensions.ai/details/publication/pub.1050815237"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-12-01T06:54", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/chapter/chapter_56.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-642-33765-9_25"
  }
]
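
As a local alternative to the external JSON-LD services mentioned above, the record can also be expanded programmatically. A minimal sketch using the pyld library (assuming `pip install pyld`, network access for the remote @context, and a hypothetical local copy of the record saved as record.jsonld):

import json
from pyld import jsonld

# Load a local copy of the JSON-LD record shown above
# (record.jsonld is a hypothetical filename).
with open("record.jsonld") as f:
    doc = json.load(f)

# Expansion resolves the @context, turning short keys such as "name"
# into full IRIs such as http://schema.org/name.
expanded = jsonld.expand(doc)
print(json.dumps(expanded, indent=2))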
 

Download the RDF metadata as: json-ld, nt, turtle, or xml.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33765-9_25'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33765-9_25'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33765-9_25'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33765-9_25'
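
Any HTTP client can use the same content negotiation. A minimal sketch with Python's requests library (assuming `pip install requests`), mirroring the JSON-LD curl call above:

import requests

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-642-33765-9_25"

# Request the JSON-LD serialization via the Accept header,
# exactly as in the curl example above.
resp = requests.get(URL, headers={"Accept": "application/ld+json"})
resp.raise_for_status()

# The payload is a one-element list; pull out the chapter record.
chapter = resp.json()[0]
print(chapter["name"])           # Recognizing Materials from Virtual Examples
print(chapter["datePublished"])  # 2012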


 

This table displays all metadata directly associated with this object as RDF triples; a sketch for reproducing the summary counts follows the table.

144 TRIPLES      22 PREDICATES      83 URIs      76 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-642-33765-9_25 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Nbe442383e34d4a41b430a2bbc662fd55
4 schema:datePublished 2012
5 schema:datePublishedReg 2012-01-01
6 schema:description Due to the strong impact of machine learning methods on visual recognition, performance on many perception task is driven by the availability of sufficient training data. A promising direction which has gained new relevance in recent years is the generation of virtual training examples by means of computer graphics methods in order to provide richer training sets for recognition and detection on real data. Success stories of this paradigm have been mostly reported for the synthesis of shape features and 3D depth maps. Therefore we investigate in this paper if and how appearance descriptors can be transferred from the virtual world to real examples. We study two popular appearance descriptors on the task of material categorization as it is a pure appearance-driven task. Beyond this initial study, we also investigate different approach of combining and adapting virtual and real data in order to bridge the gap between rendered and real-data. Our study is carried out using a new database of virtual materials VIPS that complements the existing KTH-TIPS material database.
7 schema:editor N2de61fabed374ec5899d1d0f4d7ca5c4
8 schema:genre chapter
9 schema:isAccessibleForFree false
10 schema:isPartOf N528f46795f6c4512998ab6966bac277d
11 schema:keywords VIP
12 appearance descriptors
13 approach
14 availability
15 categorization
16 computer graphics methods
17 data
18 database
19 depth map
20 descriptors
21 detection
22 different approaches
23 direction
24 example
25 features
26 gap
27 generation
28 graphic method
29 impact
30 initial study
31 machine
32 maps
33 material categorization
34 material database
35 materials
36 means
37 method
38 new database
39 new relevance
40 order
41 paper
42 paradigm
43 perception task
44 performance
45 promising direction
46 real data
47 real example
48 recent years
49 recognition
50 relevance
51 rich training set
52 set
53 shape features
54 story
55 strong impact
56 study
57 success stories
58 sufficient training data
59 synthesis
60 task
61 training data
62 training examples
63 training set
64 virtual examples
65 virtual world
66 visual recognition
67 world
68 years
69 schema:name Recognizing Materials from Virtual Examples
70 schema:pagination 345-358
71 schema:productId N0a855abd8205436b800dfa808c74afab
72 Nb39ba473cba94c9fa7c8a87700bf5753
73 schema:publisher Nd0506dfc87ab4873962443d85b4e3bba
74 schema:sameAs https://app.dimensions.ai/details/publication/pub.1050815237
75 https://doi.org/10.1007/978-3-642-33765-9_25
76 schema:sdDatePublished 2022-12-01T06:54
77 schema:sdLicense https://scigraph.springernature.com/explorer/license/
78 schema:sdPublisher Nfedd211c110547228934ffa88582e588
79 schema:url https://doi.org/10.1007/978-3-642-33765-9_25
80 sgo:license sg:explorer/license/
81 sgo:sdDataset chapters
82 rdf:type schema:Chapter
83 N0a855abd8205436b800dfa808c74afab schema:name doi
84 schema:value 10.1007/978-3-642-33765-9_25
85 rdf:type schema:PropertyValue
86 N0ab5147f450542c59bfc0ce30bf781b9 schema:familyName Perona
87 schema:givenName Pietro
88 rdf:type schema:Person
89 N146d151bda7b48f5a88e60ccc908a7f9 rdf:first N0ab5147f450542c59bfc0ce30bf781b9
90 rdf:rest Nab367baf0cc84bf68e238aa897be948d
91 N18024c191c17412dab5e91be864e2002 schema:familyName Fitzgibbon
92 schema:givenName Andrew
93 rdf:type schema:Person
94 N1c6006b54d1c4edfa767bef8a2da487f rdf:first Nb963cb3382744296aaf69c5949467e99
95 rdf:rest rdf:nil
96 N2de61fabed374ec5899d1d0f4d7ca5c4 rdf:first N18024c191c17412dab5e91be864e2002
97 rdf:rest N7dad0339fffa4a45be88ab4a6a4d8e54
98 N528f46795f6c4512998ab6966bac277d schema:isbn 978-3-642-33764-2
99 978-3-642-33765-9
100 schema:name Computer Vision – ECCV 2012
101 rdf:type schema:Book
102 N7dad0339fffa4a45be88ab4a6a4d8e54 rdf:first Ndc17bd87a08645d884bfb4111d2897a4
103 rdf:rest N146d151bda7b48f5a88e60ccc908a7f9
104 Naabb81a56b454b3483295786d3d8b36c schema:familyName Sato
105 schema:givenName Yoichi
106 rdf:type schema:Person
107 Nab367baf0cc84bf68e238aa897be948d rdf:first Naabb81a56b454b3483295786d3d8b36c
108 rdf:rest N1c6006b54d1c4edfa767bef8a2da487f
109 Nb39ba473cba94c9fa7c8a87700bf5753 schema:name dimensions_id
110 schema:value pub.1050815237
111 rdf:type schema:PropertyValue
112 Nb963cb3382744296aaf69c5949467e99 schema:familyName Schmid
113 schema:givenName Cordelia
114 rdf:type schema:Person
115 Nbe442383e34d4a41b430a2bbc662fd55 rdf:first sg:person.011262202211.32
116 rdf:rest Nf2012ac370a4400dbf4549f8c7b40eb8
117 Nd0506dfc87ab4873962443d85b4e3bba schema:name Springer Nature
118 rdf:type schema:Organisation
119 Ndc17bd87a08645d884bfb4111d2897a4 schema:familyName Lazebnik
120 schema:givenName Svetlana
121 rdf:type schema:Person
122 Nf2012ac370a4400dbf4549f8c7b40eb8 rdf:first sg:person.013361072755.17
123 rdf:rest rdf:nil
124 Nfedd211c110547228934ffa88582e588 schema:name Springer Nature - SN SciGraph project
125 rdf:type schema:Organization
126 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
127 schema:name Information and Computing Sciences
128 rdf:type schema:DefinedTerm
129 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
130 schema:name Artificial Intelligence and Image Processing
131 rdf:type schema:DefinedTerm
132 sg:person.011262202211.32 schema:affiliation grid-institutes:grid.419528.3
133 schema:familyName Li
134 schema:givenName Wenbin
135 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011262202211.32
136 rdf:type schema:Person
137 sg:person.013361072755.17 schema:affiliation grid-institutes:grid.419528.3
138 schema:familyName Fritz
139 schema:givenName Mario
140 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013361072755.17
141 rdf:type schema:Person
142 grid-institutes:grid.419528.3 schema:alternateName Max Planck Institute for Informatics, Saarbrucken, Germany
143 schema:name Max Planck Institute for Informatics, Saarbrucken, Germany
144 rdf:type schema:Organization
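
The summary counts above (triples, predicates, URIs, literals, blank nodes) can be reproduced locally. A minimal sketch using the rdflib library (assuming `pip install rdflib requests`; the exact counts depend on the live record):

import requests
from rdflib import Graph, URIRef, Literal, BNode

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-642-33765-9_25"

# Fetch the N-Triples serialization (see the curl examples above) and parse it.
nt = requests.get(URL, headers={"Accept": "application/n-triples"}).text
g = Graph()
g.parse(data=nt, format="nt")

# Collect every subject, predicate, and object term in the graph.
terms = [t for triple in g for t in triple]
print(len(g), "triples")
print(len(set(g.predicates())), "predicates")
print(len({t for t in terms if isinstance(t, URIRef)}), "URIs")
print(len({t for t in terms if isinstance(t, Literal)}), "literals")
print(len({t for t in terms if isinstance(t, BNode)}), "blank nodes")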
 



