Saliency Filtering of SIFT Detectors: Application to CBIR


Ontology type: schema:Chapter     


Chapter Info

DATE

2012

AUTHORS

Dounia Awad , Vincent Courboulay , Arnaud Revel

ABSTRACT

The recognition of object categories is one of the most challenging problems in the field of computer vision. It remains an open problem, especially in content-based image retrieval (CBIR). When using an analysis algorithm, a trade-off must be found between the quality of the expected results and the amount of computer resources allocated to manage the huge amount of generated data. In humans, the mechanisms of evolution have produced the visual attention system, which selects the most important information in order to reduce both cognitive load and scene-understanding ambiguity. In computer science, the most powerful algorithms use local approaches such as bag-of-features or sparse local features. In this article, we propose to evaluate the integration of one of the most recent visual attention models into one of the most efficient CBIR methods. First, we present these two algorithms and the database used to test the results. Then, we present our approach, which consists in pruning interest points in order to retain a certain percentage of them (40% down to 10%). This filtering is guided by a saliency map provided by a visual attention system. Finally, we present our results, which clearly demonstrate that the interest points used in classical CBIR methods can be drastically pruned without seriously impacting results. We also demonstrate that the learning and training data sets must be filtered intelligently to obtain such results.
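The pruning step the abstract describes — keeping only the most salient fraction of interest points, scored against a saliency map — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a precomputed 2-D saliency map and integer keypoint coordinates, and the function and variable names are hypothetical.

```python
import numpy as np

def prune_keypoints_by_saliency(keypoints, saliency_map, keep_ratio=0.4):
    """Keep the keep_ratio fraction of keypoints with the highest saliency,
    where each keypoint's score is read from the map at its (x, y) location.

    keypoints    -- array of shape (N, 2) with integer (x, y) coordinates
    saliency_map -- 2-D array; higher values = more salient
    keep_ratio   -- fraction of keypoints to retain (e.g. 0.4 down to 0.1)
    """
    keypoints = np.asarray(keypoints)
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    scores = saliency_map[ys, xs]                  # saliency at each keypoint
    n_keep = max(1, int(round(keep_ratio * len(keypoints))))
    top = np.argsort(scores)[::-1][:n_keep]        # n_keep most salient
    return keypoints[top]

# Toy example: 10 keypoints on an 8x8 random saliency map.
rng = np.random.default_rng(0)
saliency = rng.random((8, 8))
kps = np.stack([rng.integers(0, 8, 10), rng.integers(0, 8, 10)], axis=1)
kept = prune_keypoints_by_saliency(kps, saliency, keep_ratio=0.4)
print(len(kept))  # 4
```

In a real pipeline the keypoints would come from a SIFT detector and the saliency map from a visual attention model; the filtering itself reduces to the ranking shown here.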

PAGES

290-300

References to SciGraph publications

Book

TITLE

Advanced Concepts for Intelligent Vision Systems

ISBN

978-3-642-33139-8
978-3-642-33140-4

Author Affiliations

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-642-33140-4_26

DOI

http://dx.doi.org/10.1007/978-3-642-33140-4_26

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1035706474



JSON-LD is the canonical representation for SciGraph data.


[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "University of La Rochelle", 
          "id": "https://www.grid.ac/institutes/grid.11698.37", 
          "name": [
            "L3I-University of La Rochelle, Av Michel Crepeau, 17000\u00a0La Rochelle, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Awad", 
        "givenName": "Dounia", 
        "id": "sg:person.014642765277.89", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014642765277.89"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of La Rochelle", 
          "id": "https://www.grid.ac/institutes/grid.11698.37", 
          "name": [
            "L3I-University of La Rochelle, Av Michel Crepeau, 17000\u00a0La Rochelle, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Courboulay", 
        "givenName": "Vincent", 
        "id": "sg:person.010373263721.95", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010373263721.95"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of La Rochelle", 
          "id": "https://www.grid.ac/institutes/grid.11698.37", 
          "name": [
            "L3I-University of La Rochelle, Av Michel Crepeau, 17000\u00a0La Rochelle, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Revel", 
        "givenName": "Arnaud", 
        "id": "sg:person.010634335021.86", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010634335021.86"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1016/j.patrec.2005.10.010", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1013701558"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11263-009-0275-4", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1014796149", 
          "https://doi.org/10.1007/s11263-009-0275-4"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1023/b:visi.0000027790.02288.f2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1024638466", 
          "https://doi.org/10.1023/b:visi.0000027790.02288.f2"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1145/1658349.1658355", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1026829088"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/s0167-8655(00)00082-9", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1034889060"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.730558", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061156881"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/69.929893", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061213929"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1561/0600000017", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1068000465"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2005.77", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094132829"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2006.288", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094648993"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2012", 
    "datePublishedReg": "2012-01-01", 
    "description": "The recognition of object categories is one of the most challenging problems in computer vision field.It is still an open problem , especially in content based image retrieval (CBIR).When using analysis algorithm, a trade-off must be found between the quality of the results expected, and the amount of computer resources allocated to manage huge amount of generated data. In human, the mechanisms of evolution have generated the visual attention system which selects the most important information in order to reduce both cognitive load and scene understanding ambiguity. In computer science, most powerful algorithms use local approaches as bag-of-features or sparse local features. In this article, we propose to evaluate the integration of one of the most recent visual attention model in one of the most efficient CBIR method. First, we present these two algorithms and the database used to test results. Then, we present our approach which consists in pruning interest points in order to select a certain percentage of them (40% to 10% ). This filtering is guided by a saliency map provided by a visual attention system. Finally, we present our results which clearly demonstrate that interest points used in classical CBIR methods can be drastically pruned without seriously impacting results. We also demonstrate that we have to smartly filter learning and training data set to obtain such results.", 
    "editor": [
      {
        "familyName": "Blanc-Talon", 
        "givenName": "Jacques", 
        "type": "Person"
      }, 
      {
        "familyName": "Philips", 
        "givenName": "Wilfried", 
        "type": "Person"
      }, 
      {
        "familyName": "Popescu", 
        "givenName": "Dan", 
        "type": "Person"
      }, 
      {
        "familyName": "Scheunders", 
        "givenName": "Paul", 
        "type": "Person"
      }, 
      {
        "familyName": "Zem\u010d\u00edk", 
        "givenName": "Pavel", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-642-33140-4_26", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-642-33139-8", 
        "978-3-642-33140-4"
      ], 
      "name": "Advanced Concepts for Intelligent Vision Systems", 
      "type": "Book"
    }, 
    "name": "Saliency Filtering of SIFT Detectors: Application to CBIR", 
    "pagination": "290-300", 
    "productId": [
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-642-33140-4_26"
        ]
      }, 
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "aa17cedf7b43f706626b3a442707c84c0bc56606b88e600d181546cd8a415eee"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1035706474"
        ]
      }
    ], 
    "publisher": {
      "location": "Berlin, Heidelberg", 
      "name": "Springer Berlin Heidelberg", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-642-33140-4_26", 
      "https://app.dimensions.ai/details/publication/pub.1035706474"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2019-04-15T13:29", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8664_00000265.jsonl", 
    "type": "Chapter", 
    "url": "http://link.springer.com/10.1007/978-3-642-33140-4_26"
  }
]
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33140-4_26'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33140-4_26'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33140-4_26'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33140-4_26'
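The same content negotiation shown in the curl commands can be done from Python's standard library. The sketch below builds the JSON-LD request and shows how a returned chapter record might be parsed; the `summarize` helper is hypothetical (its field names follow the record reproduced above), and the demonstration runs on a trimmed inline copy of that record rather than a live fetch.

```python
import json
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-642-33140-4_26"

def fetch_jsonld(url):
    """Request the JSON-LD representation via content negotiation,
    mirroring: curl -H 'Accept: application/ld+json' <url>."""
    req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def summarize(record):
    """Pull a few common fields out of a SciGraph chapter record."""
    return {
        "title": record.get("name"),
        "year": record.get("datePublished"),
        "pages": record.get("pagination"),
        "authors": [f"{a['givenName']} {a['familyName']}"
                    for a in record.get("author", [])],
    }

# Offline demonstration on a trimmed copy of the record shown above;
# on live data you would call summarize(fetch_jsonld(URL)[0]) instead.
sample = {
    "name": "Saliency Filtering of SIFT Detectors: Application to CBIR",
    "datePublished": "2012",
    "pagination": "290-300",
    "author": [
        {"givenName": "Dounia", "familyName": "Awad"},
        {"givenName": "Vincent", "familyName": "Courboulay"},
        {"givenName": "Arnaud", "familyName": "Revel"},
    ],
}
info = summarize(sample)
print(info["title"])
```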


 

This table displays all metadata directly associated to this object as RDF triples.

131 TRIPLES      23 PREDICATES      37 URIs      20 LITERALS      8 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-642-33140-4_26 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Ndcb512be559a489abfa2f5eb323ee63e
4 schema:citation sg:pub.10.1007/s11263-009-0275-4
5 sg:pub.10.1023/b:visi.0000027790.02288.f2
6 https://doi.org/10.1016/j.patrec.2005.10.010
7 https://doi.org/10.1016/s0167-8655(00)00082-9
8 https://doi.org/10.1109/34.730558
9 https://doi.org/10.1109/69.929893
10 https://doi.org/10.1109/cvpr.2006.288
11 https://doi.org/10.1109/iccv.2005.77
12 https://doi.org/10.1145/1658349.1658355
13 https://doi.org/10.1561/0600000017
14 schema:datePublished 2012
15 schema:datePublishedReg 2012-01-01
16 schema:description The recognition of object categories is one of the most challenging problems in computer vision field.It is still an open problem , especially in content based image retrieval (CBIR).When using analysis algorithm, a trade-off must be found between the quality of the results expected, and the amount of computer resources allocated to manage huge amount of generated data. In human, the mechanisms of evolution have generated the visual attention system which selects the most important information in order to reduce both cognitive load and scene understanding ambiguity. In computer science, most powerful algorithms use local approaches as bag-of-features or sparse local features. In this article, we propose to evaluate the integration of one of the most recent visual attention model in one of the most efficient CBIR method. First, we present these two algorithms and the database used to test results. Then, we present our approach which consists in pruning interest points in order to select a certain percentage of them (40% to 10% ). This filtering is guided by a saliency map provided by a visual attention system. Finally, we present our results which clearly demonstrate that interest points used in classical CBIR methods can be drastically pruned without seriously impacting results. We also demonstrate that we have to smartly filter learning and training data set to obtain such results.
17 schema:editor Nd8350fc22cf44813be19fb3ee3c9e766
18 schema:genre chapter
19 schema:inLanguage en
20 schema:isAccessibleForFree false
21 schema:isPartOf N13b548dae9b149259c6ed2be099540a2
22 schema:name Saliency Filtering of SIFT Detectors: Application to CBIR
23 schema:pagination 290-300
24 schema:productId N410fc08e46bb41e88cc7e4ef3028fc09
25 Nc6df1ad338024dff89ff96dfa87cf260
26 Nd10fa0fbc4934e81ab944165f64407c8
27 schema:publisher N71927599e2764f488994eaf4c5a95860
28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1035706474
29 https://doi.org/10.1007/978-3-642-33140-4_26
30 schema:sdDatePublished 2019-04-15T13:29
31 schema:sdLicense https://scigraph.springernature.com/explorer/license/
32 schema:sdPublisher N628b263c72a047abbbee5957ba5a9032
33 schema:url http://link.springer.com/10.1007/978-3-642-33140-4_26
34 sgo:license sg:explorer/license/
35 sgo:sdDataset chapters
36 rdf:type schema:Chapter
37 N0a6209adad894d6b9b3e69756c194280 schema:familyName Scheunders
38 schema:givenName Paul
39 rdf:type schema:Person
40 N13b548dae9b149259c6ed2be099540a2 schema:isbn 978-3-642-33139-8
41 978-3-642-33140-4
42 schema:name Advanced Concepts for Intelligent Vision Systems
43 rdf:type schema:Book
44 N143628a59b904a7a9f22c42d044a39f5 rdf:first sg:person.010634335021.86
45 rdf:rest rdf:nil
46 N2892a32faaf84ebe8ee26bfc097804db rdf:first Nae9d14204a5f4984bafc8785d8edb013
47 rdf:rest Nb5b2ef289fb94cb094c7fb2737a0609f
48 N3169da1358834e9a92a3000ecce11d0c schema:familyName Blanc-Talon
49 schema:givenName Jacques
50 rdf:type schema:Person
51 N410fc08e46bb41e88cc7e4ef3028fc09 schema:name readcube_id
52 schema:value aa17cedf7b43f706626b3a442707c84c0bc56606b88e600d181546cd8a415eee
53 rdf:type schema:PropertyValue
54 N593f6141c7574b0c9763412b9de1501f rdf:first Na0b8faba4dc944c79b81d332ee9c6718
55 rdf:rest rdf:nil
56 N628b263c72a047abbbee5957ba5a9032 schema:name Springer Nature - SN SciGraph project
57 rdf:type schema:Organization
58 N71927599e2764f488994eaf4c5a95860 schema:location Berlin, Heidelberg
59 schema:name Springer Berlin Heidelberg
60 rdf:type schema:Organisation
61 N7a5c9dbefc1a4ad093e3a11ba6a7e044 schema:familyName Popescu
62 schema:givenName Dan
63 rdf:type schema:Person
64 Na0b8faba4dc944c79b81d332ee9c6718 schema:familyName Zemčík
65 schema:givenName Pavel
66 rdf:type schema:Person
67 Nae9d14204a5f4984bafc8785d8edb013 schema:familyName Philips
68 schema:givenName Wilfried
69 rdf:type schema:Person
70 Nb5b2ef289fb94cb094c7fb2737a0609f rdf:first N7a5c9dbefc1a4ad093e3a11ba6a7e044
71 rdf:rest Nb7ba9ab229754605bb0f8dc754295e22
72 Nb7ba9ab229754605bb0f8dc754295e22 rdf:first N0a6209adad894d6b9b3e69756c194280
73 rdf:rest N593f6141c7574b0c9763412b9de1501f
74 Nc6df1ad338024dff89ff96dfa87cf260 schema:name dimensions_id
75 schema:value pub.1035706474
76 rdf:type schema:PropertyValue
77 Nd10fa0fbc4934e81ab944165f64407c8 schema:name doi
78 schema:value 10.1007/978-3-642-33140-4_26
79 rdf:type schema:PropertyValue
80 Nd559737144f248b2a8176489d8810e02 rdf:first sg:person.010373263721.95
81 rdf:rest N143628a59b904a7a9f22c42d044a39f5
82 Nd8350fc22cf44813be19fb3ee3c9e766 rdf:first N3169da1358834e9a92a3000ecce11d0c
83 rdf:rest N2892a32faaf84ebe8ee26bfc097804db
84 Ndcb512be559a489abfa2f5eb323ee63e rdf:first sg:person.014642765277.89
85 rdf:rest Nd559737144f248b2a8176489d8810e02
86 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
87 schema:name Information and Computing Sciences
88 rdf:type schema:DefinedTerm
89 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
90 schema:name Artificial Intelligence and Image Processing
91 rdf:type schema:DefinedTerm
92 sg:person.010373263721.95 schema:affiliation https://www.grid.ac/institutes/grid.11698.37
93 schema:familyName Courboulay
94 schema:givenName Vincent
95 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010373263721.95
96 rdf:type schema:Person
97 sg:person.010634335021.86 schema:affiliation https://www.grid.ac/institutes/grid.11698.37
98 schema:familyName Revel
99 schema:givenName Arnaud
100 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010634335021.86
101 rdf:type schema:Person
102 sg:person.014642765277.89 schema:affiliation https://www.grid.ac/institutes/grid.11698.37
103 schema:familyName Awad
104 schema:givenName Dounia
105 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014642765277.89
106 rdf:type schema:Person
107 sg:pub.10.1007/s11263-009-0275-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014796149
108 https://doi.org/10.1007/s11263-009-0275-4
109 rdf:type schema:CreativeWork
110 sg:pub.10.1023/b:visi.0000027790.02288.f2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1024638466
111 https://doi.org/10.1023/b:visi.0000027790.02288.f2
112 rdf:type schema:CreativeWork
113 https://doi.org/10.1016/j.patrec.2005.10.010 schema:sameAs https://app.dimensions.ai/details/publication/pub.1013701558
114 rdf:type schema:CreativeWork
115 https://doi.org/10.1016/s0167-8655(00)00082-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034889060
116 rdf:type schema:CreativeWork
117 https://doi.org/10.1109/34.730558 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061156881
118 rdf:type schema:CreativeWork
119 https://doi.org/10.1109/69.929893 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061213929
120 rdf:type schema:CreativeWork
121 https://doi.org/10.1109/cvpr.2006.288 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094648993
122 rdf:type schema:CreativeWork
123 https://doi.org/10.1109/iccv.2005.77 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094132829
124 rdf:type schema:CreativeWork
125 https://doi.org/10.1145/1658349.1658355 schema:sameAs https://app.dimensions.ai/details/publication/pub.1026829088
126 rdf:type schema:CreativeWork
127 https://doi.org/10.1561/0600000017 schema:sameAs https://app.dimensions.ai/details/publication/pub.1068000465
128 rdf:type schema:CreativeWork
129 https://www.grid.ac/institutes/grid.11698.37 schema:alternateName University of La Rochelle
130 schema:name L3I-University of La Rochelle, Av Michel Crepeau, 17000 La Rochelle, France
131 rdf:type schema:Organization
 



