Ontology type: schema:ScholarlyArticle
Published: 2007-10
Authors: Zheng Liu, Robert Laganière
Abstract: In night vision applications, visual and infrared images are often fused for improved awareness of the situation or environment. Fusion algorithms can generate a composite image that retains the most important information from the source images for human perception. The state of the art includes manipulation in color spaces and pixel-level fusion with multiresolution algorithms. In this paper, a modified scheme based on multiresolution fusion is proposed to process monochrome visual and infrared images. The visual image is first enhanced based on the corresponding infrared image. The final result is obtained by fusing the enhanced image with the visual image. The process highlights the features from the visual image, which is most suitable for human perception.
Pages: 293-301
http://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4
DOI: http://dx.doi.org/10.1007/s11760-007-0025-4
Dimensions: https://app.dimensions.ai/details/publication/pub.1026704159
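The abstract above outlines a two-step scheme: the visual image is first enhanced using the corresponding infrared image, and the enhanced image is then combined with the original visual image by pixel-level multiresolution fusion. The Python sketch below illustrates that kind of pipeline under simple assumptions (OpenCV Laplacian pyramids, a common "choose-max" detail-selection rule, and a weighted-sum IR enhancement); the file names, parameters, and fusion rule are illustrative and are not taken from the paper itself.

import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    # Build a Laplacian pyramid: band-pass detail at each level plus the coarsest residual.
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    lp.append(gp[-1])
    return lp

def reconstruct(lp):
    # Collapse a Laplacian pyramid back to a full-resolution image.
    img = lp[-1]
    for band in reversed(lp[:-1]):
        img = cv2.pyrUp(img, dstsize=(band.shape[1], band.shape[0])) + band
    return img

def fuse(visual, enhanced, levels=4):
    # Pixel-level multiresolution fusion with a "choose-max" detail rule
    # (a common choice, not necessarily the rule used in the paper).
    lp_v = laplacian_pyramid(visual, levels)
    lp_e = laplacian_pyramid(enhanced, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lp_v[:-1], lp_e[:-1])]
    fused.append(0.5 * (lp_v[-1] + lp_e[-1]))  # average the coarsest approximation
    return np.clip(reconstruct(fused), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Hypothetical file names; both images are assumed registered and monochrome.
    visual = cv2.imread("visual.png", cv2.IMREAD_GRAYSCALE)
    infrared = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
    # Illustrative IR-guided enhancement: brighten visual pixels where the IR response is strong.
    enhanced = cv2.addWeighted(visual, 1.0, infrared, 0.5, 0.0)
    cv2.imwrite("fused.png", fuse(visual, enhanced))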
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool.
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "University of Ottawa",
"id": "https://www.grid.ac/institutes/grid.28046.38",
"name": [
"VIVA Laboratory, STE 5023 School of Information Technology and Engineering, University of Ottawa, 800 King Edward Avenue, K1N 6N5, Ottawa, ON, Canada"
],
"type": "Organization"
},
"familyName": "Liu",
"givenName": "Zheng",
"id": "sg:person.010045203007.52",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010045203007.52"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "University of Ottawa",
"id": "https://www.grid.ac/institutes/grid.28046.38",
"name": [
"VIVA Laboratory, STE 5023 School of Information Technology and Engineering, University of Ottawa, 800 King Edward Avenue, K1N 6N5, Ottawa, ON, Canada"
],
"type": "Organization"
},
"familyName": "Lagani\u00e8re",
"givenName": "Robert",
"id": "sg:person.01144533722.06",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01144533722.06"
],
"type": "Person"
}
],
"citation": [
{
"id": "sg:pub.10.1007/s10044-005-0020-8",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1011591987",
"https://doi.org/10.1007/s10044-005-0020-8"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s10044-005-0020-8",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1011591987",
"https://doi.org/10.1007/s10044-005-0020-8"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s1566-2535(03)00046-0",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1024362114"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s1566-2535(03)00046-0",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1024362114"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1117/12.639711",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1038000595"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1201/9781420026986.ch1",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1038074034"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1117/1.2136903",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1042856958"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1117/1.2136903",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1042856958"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/s0167-8655(01)00047-2",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1049634412"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/18.119725",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061098596"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/aipr.2005.9",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1093226265"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/icip.1995.537667",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094903731"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/icif.2003.177504",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1095039863"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/aipr.2005.14",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1095812438"
],
"type": "CreativeWork"
}
],
"datePublished": "2007-10",
"datePublishedReg": "2007-10-01",
"description": "In the night vision applications, visual and infrared images are often fused for an improved awareness of situation or environment. The fusion algorithms can generate a composite image that retains most important information from source images for human perception. The state of the art includes manipulating in the color spaces and implementing pixel-level fusion with multiresolution algorithms. In this paper, a modified scheme based on multiresolution fusion is proposed to process monochrome visual and infrared images. The visual image is first enhanced based on corresponding infrared image. The final result is obtained by fusing the enhanced image with the visual image. The process highlights the features from visual image, which is most suitable for human perception.",
"genre": "research_article",
"id": "sg:pub.10.1007/s11760-007-0025-4",
"inLanguage": [
"en"
],
"isAccessibleForFree": false,
"isPartOf": [
{
"id": "sg:journal.1050964",
"issn": [
"1863-1703",
"1863-1711"
],
"name": "Signal, Image and Video Processing",
"type": "Periodical"
},
{
"issueNumber": "4",
"type": "PublicationIssue"
},
{
"type": "PublicationVolume",
"volumeNumber": "1"
}
],
"name": "Context enhancement through infrared vision: a modified fusion scheme",
"pagination": "293-301",
"productId": [
{
"name": "readcube_id",
"type": "PropertyValue",
"value": [
"7f3778216fe0ca7093936d6bfedbf8db4b292934094210010273556229412332"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/s11760-007-0025-4"
]
},
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1026704159"
]
}
],
"sameAs": [
"https://doi.org/10.1007/s11760-007-0025-4",
"https://app.dimensions.ai/details/publication/pub.1026704159"
],
"sdDataset": "articles",
"sdDatePublished": "2019-04-10T14:12",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8660_00000522.jsonl",
"type": "ScholarlyArticle",
"url": "http://link.springer.com/10.1007%2Fs11760-007-0025-4"
}
]
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4'
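For programmatic access, the same content negotiation shown in the curl examples can be used from a script. The short Python sketch below fetches this record as JSON-LD and reads a few fields; it assumes the service returns the one-element JSON array shown above and uses the third-party requests library.

import requests

# Fetch this record as JSON-LD with the same Accept header as the curl example above.
url = "https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4"
resp = requests.get(url, headers={"Accept": "application/ld+json"}, timeout=30)
resp.raise_for_status()

record = resp.json()[0]  # the payload is a one-element JSON array, as shown above
print(record["name"])                               # article title
print(record["datePublished"])                      # "2007-10"
print([a["familyName"] for a in record["author"]])  # ["Liu", "Laganière"]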
This table displays all metadata directly associated with this object as RDF triples.
102 TRIPLES    21 PREDICATES    38 URIs    19 LITERALS    7 BLANK NODES