Enhancement of Textual Images Classification Using Segmented Visual Contents for Image Search Engine


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2005-03

AUTHORS

Sabrina Tollari, Hervé Glotin, Jacques Le Maitre

ABSTRACT

This paper deals with the use of the dependencies between the textual indexation of an image (a set of keywords) and its visual indexation (colour and shape features). Experiments are realized on a corpus of photographs of a press agency (EDITING) and on another corpus of animals and landscape photographs (COREL). Both are manually indexed by keywords. Keywords of the news photos are extracted from a hierarchically structured thesaurus. Keywords of Corel corpus are semantically linked using WordNet database. A semantic clustering of the photos is constructed from their textual indexation. We use two different visual segmentation schemes. One is based on areas of interest, the other one on blobs of homogenous colour. Both segmentation schemes are used to evaluate the performance of a content-based image retrieval system combining textual and visual descriptions. Results of visuo-textual classifications show an improvement of 50% against classification using only textual information. Finally, we show how to apply this system in order to enhance a web image search engine. To this purpose, we illustrate a method allowing selecting only accurate images resulting from a textual query.

PAGES

405-417

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s11042-005-6543-6

DOI

http://dx.doi.org/10.1007/s11042-005-6543-6

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1013027935



JSON-LD is the canonical representation for SciGraph data.


[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "name": [
            "Laboratoire SIS - Equipe informatique, Universit\u00e9 du Sud Toulon-Var, B\u00e2timent R, BP 20132, F-83957, LA GARDE cedex, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Tollari", 
        "givenName": "Sabrina", 
        "id": "sg:person.014463616133.55", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014463616133.55"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "name": [
            "Laboratoire SIS - Equipe informatique, Universit\u00e9 du Sud Toulon-Var, B\u00e2timent R, BP 20132, F-83957, LA GARDE cedex, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Glotin", 
        "givenName": "Herv\u00e9", 
        "id": "sg:person.016622300103.82", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016622300103.82"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "name": [
            "Laboratoire SIS - Equipe informatique, Universit\u00e9 du Sud Toulon-Var, B\u00e2timent R, BP 20132, F-83957, LA GARDE cedex, France"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Maitre", 
        "givenName": "Jacques Le", 
        "id": "sg:person.015627601511.39", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015627601511.39"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1093/comjnl/9.4.373", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1011783359"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/0306-4573(88)90021-0", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1032478827"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1145/321439.321441", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1036029230"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/34.868688", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061157130"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.3166/isi.7.5-6.169-186", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1071066128"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.3166/isi.7.5-6.65-90", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1071066130"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2005-03", 
    "datePublishedReg": "2005-03-01", 
    "description": "This paper deals with the use of the dependencies between the textual indexation of an image (a set of keywords) and its visual indexation (colour and shape features). Experiments are realized on a corpus of photographs of a press agency (EDITING) and on another corpus of animals and landscape photographs (COREL). Both are manually indexed by keywords. Keywords of the news photos are extracted from a hierarchically structured thesaurus. Keywords of Corel corpus are semantically linked using WordNet database. A semantic clustering of the photos is constructed from their textual indexation. We use two different visual segmentation schemes. One is based on areas of interest, the other one on blobs of homogenous colour. Both segmentation schemes are used to evaluate the performance of a content-based image retrieval system combining textual and visual descriptions. Results of visuo-textual classifications show an improvement of 50% against classification using only textual information. Finally, we show how to apply this system in order to enhance a web image search engine. To this purpose, we illustrate a method allowing selecting only accurate images resulting from a textual query.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1007/s11042-005-6543-6", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1044869", 
        "issn": [
          "1380-7501", 
          "1573-7721"
        ], 
        "name": "Multimedia Tools and Applications", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "3", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "25"
      }
    ], 
    "name": "Enhancement of Textual Images Classification Using Segmented Visual Contents for Image Search Engine", 
    "pagination": "405-417", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "3e107211151fd7058b68d4dca594dce290edb1b4888a2c28fa95cd28fb226945"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s11042-005-6543-6"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1013027935"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s11042-005-6543-6", 
      "https://app.dimensions.ai/details/publication/pub.1013027935"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-10T19:57", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8681_00000511.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "http://link.springer.com/10.1007%2Fs11042-005-6543-6"
  }
]
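As a sketch of how a record like the one above might be consumed, the following Python snippet extracts the title, DOI, and author names from a SciGraph-style JSON-LD record. The field names (`name`, `author`, `productId`) are taken from the record itself; the trimmed-down sample dict and the `summarize` helper are illustrative, not part of the SciGraph API.

```python
import json

# Minimal sample shaped like the SciGraph JSON-LD record above
# (only the fields this sketch reads; values copied from the record).
record_json = """
[
  {
    "name": "Enhancement of Textual Images Classification Using Segmented Visual Contents for Image Search Engine",
    "author": [
      {"familyName": "Tollari", "givenName": "Sabrina", "type": "Person"},
      {"familyName": "Glotin", "givenName": "Herv\\u00e9", "type": "Person"},
      {"familyName": "Maitre", "givenName": "Jacques Le", "type": "Person"}
    ],
    "productId": [
      {"name": "doi", "type": "PropertyValue", "value": ["10.1007/s11042-005-6543-6"]}
    ]
  }
]
"""

def summarize(records):
    """Pull title, DOI, and author names out of a SciGraph-style record list."""
    rec = records[0]
    # productId is a list of PropertyValue objects; find the one named "doi".
    doi = next(
        (pid["value"][0] for pid in rec.get("productId", []) if pid["name"] == "doi"),
        None,
    )
    authors = [f'{a["givenName"]} {a["familyName"]}' for a in rec.get("author", [])]
    return {"title": rec["name"], "doi": doi, "authors": authors}

summary = summarize(json.loads(record_json))
print(summary["doi"])      # 10.1007/s11042-005-6543-6
print(summary["authors"])  # ['Sabrina Tollari', 'Hervé Glotin', 'Jacques Le Maitre']
```

Because JSON-LD is plain JSON, no RDF tooling is needed for simple extraction like this; a full linked-data workflow would instead expand the `@context` with a JSON-LD processor.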
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular linked-data format that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11042-005-6543-6'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11042-005-6543-6'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11042-005-6543-6'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11042-005-6543-6'
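Since N-Triples puts exactly one triple per line, batch processing largely reduces to line splitting. The sketch below parses one triple from this record with a deliberately rough regex; the sample line and the pattern are illustrative and do not cover the full N-Triples grammar (blank nodes, language tags, datatypes, escapes).

```python
import re

# One triple from this record, written out in N-Triples form (illustrative sample).
line = ('<http://scigraph.springernature.com/pub.10.1007/s11042-005-6543-6> '
        '<http://schema.org/pagination> "405-417" .')

# Rough pattern: subject and predicate are IRIs in angle brackets,
# the object is whatever sits before the terminating dot.
TRIPLE = re.compile(r'^<([^>]+)>\s+<([^>]+)>\s+(.+?)\s*\.\s*$')

def parse_triple(nt_line):
    """Split one N-Triples line into (subject, predicate, object)."""
    m = TRIPLE.match(nt_line)
    if not m:
        raise ValueError(f"not a triple: {nt_line!r}")
    subj, pred, obj = m.groups()
    # Unwrap IRI brackets or plain-literal quotes on the object.
    if obj.startswith('<') and obj.endswith('>'):
        obj = obj[1:-1]
    elif obj.startswith('"') and obj.endswith('"'):
        obj = obj[1:-1]
    return subj, pred, obj

s, p, o = parse_triple(line)
print(p, o)  # http://schema.org/pagination 405-417
```

For production use an RDF library (e.g. rdflib) is the safer choice; the point here is only that the line-based layout makes N-Triples easy to stream and filter with ordinary text tools.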


 

This table displays all metadata directly associated with this object as RDF triples.

96 TRIPLES · 21 PREDICATES · 33 URIs · 19 LITERALS · 7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s11042-005-6543-6 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Nfd6e8b47c2b242319209954bdd9977fd
4 schema:citation https://doi.org/10.1016/0306-4573(88)90021-0
5 https://doi.org/10.1093/comjnl/9.4.373
6 https://doi.org/10.1109/34.868688
7 https://doi.org/10.1145/321439.321441
8 https://doi.org/10.3166/isi.7.5-6.169-186
9 https://doi.org/10.3166/isi.7.5-6.65-90
10 schema:datePublished 2005-03
11 schema:datePublishedReg 2005-03-01
12 schema:description This paper deals with the use of the dependencies between the textual indexation of an image (a set of keywords) and its visual indexation (colour and shape features). Experiments are realized on a corpus of photographs of a press agency (EDITING) and on another corpus of animals and landscape photographs (COREL). Both are manually indexed by keywords. Keywords of the news photos are extracted from a hierarchically structured thesaurus. Keywords of Corel corpus are semantically linked using WordNet database. A semantic clustering of the photos is constructed from their textual indexation. We use two different visual segmentation schemes. One is based on areas of interest, the other one on blobs of homogenous colour. Both segmentation schemes are used to evaluate the performance of a content-based image retrieval system combining textual and visual descriptions. Results of visuo-textual classifications show an improvement of 50% against classification using only textual information. Finally, we show how to apply this system in order to enhance a web image search engine. To this purpose, we illustrate a method allowing selecting only accurate images resulting from a textual query.
13 schema:genre research_article
14 schema:inLanguage en
15 schema:isAccessibleForFree false
16 schema:isPartOf N903a9e4860774f5f9f33c5819ef79889
17 Nc8867c9301e34f65a589778eba1c18f5
18 sg:journal.1044869
19 schema:name Enhancement of Textual Images Classification Using Segmented Visual Contents for Image Search Engine
20 schema:pagination 405-417
21 schema:productId N7bb92d577c5f401f962f3b97d6b997be
22 Nbde5f73744df44e385094ca844a80071
23 Neae0cd8278e34487896ff9a0a58da5ca
24 schema:sameAs https://app.dimensions.ai/details/publication/pub.1013027935
25 https://doi.org/10.1007/s11042-005-6543-6
26 schema:sdDatePublished 2019-04-10T19:57
27 schema:sdLicense https://scigraph.springernature.com/explorer/license/
28 schema:sdPublisher Nb18ec77df2094da2843ed1a4a916970e
29 schema:url http://link.springer.com/10.1007%2Fs11042-005-6543-6
30 sgo:license sg:explorer/license/
31 sgo:sdDataset articles
32 rdf:type schema:ScholarlyArticle
33 N044acfee654b42468dfe64f78c6fd73f rdf:first sg:person.016622300103.82
34 rdf:rest N59bb1a7cffb84c86914ce31b968ca9c6
35 N475b313d533d48ea85c1b4df08ba9d1f schema:name Laboratoire SIS - Equipe informatique, Université du Sud Toulon-Var, Bâtiment R, BP 20132, F-83957, LA GARDE cedex, France
36 rdf:type schema:Organization
37 N522fe0549fe445b287dcdca52322f8c8 schema:name Laboratoire SIS - Equipe informatique, Université du Sud Toulon-Var, Bâtiment R, BP 20132, F-83957, LA GARDE cedex, France
38 rdf:type schema:Organization
39 N59bb1a7cffb84c86914ce31b968ca9c6 rdf:first sg:person.015627601511.39
40 rdf:rest rdf:nil
41 N7bb92d577c5f401f962f3b97d6b997be schema:name doi
42 schema:value 10.1007/s11042-005-6543-6
43 rdf:type schema:PropertyValue
44 N903a9e4860774f5f9f33c5819ef79889 schema:issueNumber 3
45 rdf:type schema:PublicationIssue
46 Nb18ec77df2094da2843ed1a4a916970e schema:name Springer Nature - SN SciGraph project
47 rdf:type schema:Organization
48 Nbde5f73744df44e385094ca844a80071 schema:name readcube_id
49 schema:value 3e107211151fd7058b68d4dca594dce290edb1b4888a2c28fa95cd28fb226945
50 rdf:type schema:PropertyValue
51 Nc8867c9301e34f65a589778eba1c18f5 schema:volumeNumber 25
52 rdf:type schema:PublicationVolume
53 Ncbf254b6c6d6419fbc4d42150c581b4d schema:name Laboratoire SIS - Equipe informatique, Université du Sud Toulon-Var, Bâtiment R, BP 20132, F-83957, LA GARDE cedex, France
54 rdf:type schema:Organization
55 Neae0cd8278e34487896ff9a0a58da5ca schema:name dimensions_id
56 schema:value pub.1013027935
57 rdf:type schema:PropertyValue
58 Nfd6e8b47c2b242319209954bdd9977fd rdf:first sg:person.014463616133.55
59 rdf:rest N044acfee654b42468dfe64f78c6fd73f
60 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
61 schema:name Information and Computing Sciences
62 rdf:type schema:DefinedTerm
63 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
64 schema:name Artificial Intelligence and Image Processing
65 rdf:type schema:DefinedTerm
66 sg:journal.1044869 schema:issn 1380-7501
67 1573-7721
68 schema:name Multimedia Tools and Applications
69 rdf:type schema:Periodical
70 sg:person.014463616133.55 schema:affiliation Ncbf254b6c6d6419fbc4d42150c581b4d
71 schema:familyName Tollari
72 schema:givenName Sabrina
73 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014463616133.55
74 rdf:type schema:Person
75 sg:person.015627601511.39 schema:affiliation N475b313d533d48ea85c1b4df08ba9d1f
76 schema:familyName Maitre
77 schema:givenName Jacques Le
78 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015627601511.39
79 rdf:type schema:Person
80 sg:person.016622300103.82 schema:affiliation N522fe0549fe445b287dcdca52322f8c8
81 schema:familyName Glotin
82 schema:givenName Hervé
83 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016622300103.82
84 rdf:type schema:Person
85 https://doi.org/10.1016/0306-4573(88)90021-0 schema:sameAs https://app.dimensions.ai/details/publication/pub.1032478827
86 rdf:type schema:CreativeWork
87 https://doi.org/10.1093/comjnl/9.4.373 schema:sameAs https://app.dimensions.ai/details/publication/pub.1011783359
88 rdf:type schema:CreativeWork
89 https://doi.org/10.1109/34.868688 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061157130
90 rdf:type schema:CreativeWork
91 https://doi.org/10.1145/321439.321441 schema:sameAs https://app.dimensions.ai/details/publication/pub.1036029230
92 rdf:type schema:CreativeWork
93 https://doi.org/10.3166/isi.7.5-6.169-186 schema:sameAs https://app.dimensions.ai/details/publication/pub.1071066128
94 rdf:type schema:CreativeWork
95 https://doi.org/10.3166/isi.7.5-6.65-90 schema:sameAs https://app.dimensions.ai/details/publication/pub.1071066130
96 rdf:type schema:CreativeWork
 



