Ontology type: schema:Chapter
2002-07-02
ABSTRACT
In this paper, we propose a three-layer visual information processing architecture for extracting concise non-textual descriptions from visual contents. These coded descriptions capture both local saliencies and spatial configurations present in visual contents via prototypical visual tokens called visual “keywords”. Categorization of images and video shots represented by keyframes can be performed by comparing their coded descriptions. We demonstrate our proposed architecture in natural scene image categorization that outperforms methods which use aggregate measures of low-level features.
PAGES: 367-374
Visual Information and Information Systems
ISBN: 978-3-540-66079-8, 978-3-540-48762-3
http://scigraph.springernature.com/pub.10.1007/3-540-48762-x_46
DOI: http://dx.doi.org/10.1007/3-540-48762-x_46
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1042663072
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool.
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"name": [
"Information-Base Functions KRDL Lab, RWCP, 21 Heng Mui Kent Terrace, S(119613), Singapore"
],
"type": "Organization"
},
"familyName": "Lim",
"givenName": "Joo-Hwee",
"id": "sg:person.0607463760.21",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0607463760.21"
],
"type": "Person"
}
],
"citation": [
{
"id": "https://doi.org/10.1117/12.171772",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1001640412"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1002/(sici)1097-4571(199009)41:6<391::aid-asi1>3.0.co;2-9",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1012153938"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1117/12.143648",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1015213336"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/bf00123143",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1024066299",
"https://doi.org/10.1007/bf00123143"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1117/12.234785",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1042501265"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/243199.243276",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1044163989"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/218380.218454",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1046858645"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1147/rd.422.0233",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1063182324"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/iccv.1998.710772",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094088849"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.1997.609453",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094877086"
],
"type": "CreativeWork"
}
],
"datePublished": "2002-07-02",
"datePublishedReg": "2002-07-02",
"description": "In this paper, we propose a three-layer visual information processing architecture for extracting concise non-textual descriptions from visual contents. These coded descriptions capture both local saliencies and spatial configurations present in visual contents via prototypical visual tokens called visual \u201ckeywords\u201d. Categorization of images and video shots represented by keyframes can be performed by comparing their coded descriptions. We demonstrate our proposed architecture in natural scene image categorization that outperforms methods which use aggregate measures of low-level features.",
"editor": [
{
"familyName": "Huijsmans",
"givenName": "Dionysius P.",
"type": "Person"
},
{
"familyName": "Smeulders",
"givenName": "Arnold W. M.",
"type": "Person"
}
],
"genre": "chapter",
"id": "sg:pub.10.1007/3-540-48762-x_46",
"inLanguage": [
"en"
],
"isAccessibleForFree": false,
"isPartOf": {
"isbn": [
"978-3-540-66079-8",
"978-3-540-48762-3"
],
"name": "Visual Information and Information Systems",
"type": "Book"
},
"name": "Categorizing Visual Contents by Matching Visual \u201cKeywords\u201d",
"pagination": "367-374",
"productId": [
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/3-540-48762-x_46"
]
},
{
"name": "readcube_id",
"type": "PropertyValue",
"value": [
"cb38d23d429f506690050bc5287afa4327a18e61a54ef9251de608eb564dc66b"
]
},
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1042663072"
]
}
],
"publisher": {
"location": "Berlin, Heidelberg",
"name": "Springer Berlin Heidelberg",
"type": "Organisation"
},
"sameAs": [
"https://doi.org/10.1007/3-540-48762-x_46",
"https://app.dimensions.ai/details/publication/pub.1042663072"
],
"sdDataset": "chapters",
"sdDatePublished": "2019-04-16T05:47",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000347_0000000347/records_89814_00000001.jsonl",
"type": "Chapter",
"url": "https://link.springer.com/10.1007%2F3-540-48762-X_46"
}
]
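As a minimal sketch using only the standard library, the JSON-LD record above can be loaded with Python's `json` module to pull out commonly needed fields. The snippet hardcodes a trimmed copy of the record purely for illustration; in practice you would fetch the full record from the endpoint shown below.

```python
import json

# A trimmed copy of the SciGraph JSON-LD record above (illustration only).
record_jsonld = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "Categorizing Visual Contents by Matching Visual \\u201cKeywords\\u201d",
    "datePublished": "2002-07-02",
    "pagination": "367-374",
    "author": [
      {"familyName": "Lim", "givenName": "Joo-Hwee", "type": "Person"}
    ],
    "isPartOf": {
      "isbn": ["978-3-540-66079-8", "978-3-540-48762-3"],
      "name": "Visual Information and Information Systems",
      "type": "Book"
    },
    "type": "Chapter"
  }
]
"""

# The record is a JSON array of node objects; this one holds a single Chapter.
records = json.loads(record_jsonld)
chapter = records[0]

title = chapter["name"]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in chapter["author"]]
isbns = chapter["isPartOf"]["isbn"]

print(title)    # Categorizing Visual Contents by Matching Visual “Keywords”
print(authors)  # ['Joo-Hwee Lim']
print(isbns)    # ['978-3-540-66079-8', '978-3-540-48762-3']
```

Note that `json.loads` resolves the `\u201c`/`\u201d` escapes into the curly quotes seen in the rendered title.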
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular linked-data format that is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/3-540-48762-x_46'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/3-540-48762-x_46'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/3-540-48762-x_46'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/3-540-48762-x_46'
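The four curl commands above differ only in their Accept header, so the same content negotiation can be driven from code. Below is a sketch using Python's standard `urllib.request`; the format-to-MIME-type mapping mirrors the commands above, and no request is actually sent here.

```python
from urllib.request import Request

# MIME types accepted by the SciGraph endpoint, mirroring the curl examples above.
ACCEPT_HEADERS = {
    "json-ld":   "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle":    "text/turtle",
    "rdf-xml":   "application/rdf+xml",
}

RECORD_URL = "https://scigraph.springernature.com/pub.10.1007/3-540-48762-x_46"

def build_request(fmt: str) -> Request:
    """Build a content-negotiated request for one RDF serialization."""
    if fmt not in ACCEPT_HEADERS:
        raise ValueError(f"unknown format: {fmt!r}")
    return Request(RECORD_URL, headers={"Accept": ACCEPT_HEADERS[fmt]})

req = build_request("turtle")
print(req.get_header("Accept"))  # text/turtle
```

Passing a prepared request to `urllib.request.urlopen(req)` would then fetch the chosen serialization, exactly as the curl commands do.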
The summary below lists all metadata directly associated with this object as RDF triples.
100 TRIPLES
23 PREDICATES
36 URIs
19 LITERALS
8 BLANK NODES