Context enhancement through infrared vision: a modified fusion scheme


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2007-10

AUTHORS

Zheng Liu, Robert Laganière

ABSTRACT

In night vision applications, visual and infrared images are often fused for improved awareness of the situation or environment. Fusion algorithms can generate a composite image that retains the most important information from the source images for human perception. The state of the art includes manipulation in color spaces and pixel-level fusion with multiresolution algorithms. In this paper, a modified scheme based on multiresolution fusion is proposed to process monochrome visual and infrared images. The visual image is first enhanced based on the corresponding infrared image. The final result is obtained by fusing the enhanced image with the visual image. The process highlights features from the visual image, which are the most suitable for human perception.
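The abstract describes a two-step pipeline: enhance the visual image using the corresponding infrared image, then fuse the enhanced image with the original visual image through a multiresolution transform. The sketch below is only an illustrative approximation of that kind of scheme, not the algorithm from the paper: the additive infrared weighting used for enhancement and the maximum-absolute-coefficient fusion rule are assumptions, and the inputs are synthetic arrays standing in for registered monochrome frames (Python, OpenCV, NumPy).

import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into band-pass detail layers plus a coarse approximation."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # detail layer at this resolution
        cur = down
    pyr.append(cur)            # coarsest approximation
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample and add back each detail layer."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return cur

def fuse(a, b, levels=4):
    """Pixel-level multiresolution fusion: keep the stronger detail coefficient,
    average the coarsest approximations (an assumed, generic fusion rule)."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(pa[:-1], pb[:-1])]
    details.append(0.5 * (pa[-1] + pb[-1]))
    return np.clip(reconstruct(details), 0, 255).astype(np.uint8)

# Synthetic, registered monochrome "visual" and "infrared" frames (placeholders).
visual = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
infrared = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

# Step 1 (hypothetical enhancement): strengthen visual pixels where the IR response is high.
enhanced = np.clip(visual.astype(np.float32) + 0.3 * infrared, 0, 255).astype(np.uint8)

# Step 2: fuse the enhanced image with the original visual image.
result = fuse(enhanced, visual)
print(result.shape, result.dtype)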

PAGES

293-301

References to SciGraph publications

  • 2006-02. Concealed weapon detection and visualization in a synthesized image in PATTERN ANALYSIS AND APPLICATIONS
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4

    DOI

    http://dx.doi.org/10.1007/s11760-007-0025-4

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1026704159



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "University of Ottawa", 
              "id": "https://www.grid.ac/institutes/grid.28046.38", 
              "name": [
                "VIVA Laboratory, STE 5023 School of Information Technology and Engineering, University of Ottawa, 800 King Edward Avenue, K1N 6N5, Ottawa, ON, Canada"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Liu", 
            "givenName": "Zheng", 
            "id": "sg:person.010045203007.52", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010045203007.52"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Ottawa", 
              "id": "https://www.grid.ac/institutes/grid.28046.38", 
              "name": [
                "VIVA Laboratory, STE 5023 School of Information Technology and Engineering, University of Ottawa, 800 King Edward Avenue, K1N 6N5, Ottawa, ON, Canada"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Lagani\u00e8re", 
            "givenName": "Robert", 
            "id": "sg:person.01144533722.06", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01144533722.06"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/s10044-005-0020-8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1011591987", 
              "https://doi.org/10.1007/s10044-005-0020-8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10044-005-0020-8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1011591987", 
              "https://doi.org/10.1007/s10044-005-0020-8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/s1566-2535(03)00046-0", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1024362114"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/s1566-2535(03)00046-0", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1024362114"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1117/12.639711", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1038000595"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1201/9781420026986.ch1", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1038074034"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1117/1.2136903", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1042856958"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1117/1.2136903", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1042856958"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/s0167-8655(01)00047-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1049634412"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/18.119725", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061098596"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/aipr.2005.9", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093226265"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/icip.1995.537667", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094903731"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/icif.2003.177504", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095039863"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/aipr.2005.14", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095812438"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2007-10", 
        "datePublishedReg": "2007-10-01", 
        "description": "In the night vision applications, visual and infrared images are often fused for an improved awareness of situation or environment. The fusion algorithms can generate a composite image that retains most important information from source images for human perception. The state of the art includes manipulating in the color spaces and implementing pixel-level fusion with multiresolution algorithms. In this paper, a modified scheme based on multiresolution fusion is proposed to process monochrome visual and infrared images. The visual image is first enhanced based on corresponding infrared image. The final result is obtained by fusing the enhanced image with the visual image. The process highlights the features from visual image, which is most suitable for human perception.", 
        "genre": "research_article", 
        "id": "sg:pub.10.1007/s11760-007-0025-4", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1050964", 
            "issn": [
              "1863-1703", 
              "1863-1711"
            ], 
            "name": "Signal, Image and Video Processing", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "4", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "1"
          }
        ], 
        "name": "Context enhancement through infrared vision: a modified fusion scheme", 
        "pagination": "293-301", 
        "productId": [
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "7f3778216fe0ca7093936d6bfedbf8db4b292934094210010273556229412332"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11760-007-0025-4"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1026704159"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11760-007-0025-4", 
          "https://app.dimensions.ai/details/publication/pub.1026704159"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2019-04-10T14:12", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8660_00000522.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "http://link.springer.com/10.1007%2Fs11760-007-0025-4"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4'
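
    The same content negotiation shown in the curl commands can be scripted. The sketch below uses only the Python standard library; it assumes the SciGraph endpoint above still responds and relies on the field names visible in the JSON-LD record shown earlier.

    import json
    import urllib.request

    RECORD_URL = "https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4"

    # Ask for the JSON-LD serialization, as in the first curl example.
    req = urllib.request.Request(RECORD_URL, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(req) as resp:
        record = json.loads(resp.read().decode("utf-8"))

    # The payload mirrors the JSON-LD above: a list containing one article object.
    article = record[0] if isinstance(record, list) else record
    print(article.get("name"))                                     # title
    print(article.get("datePublished"), article.get("pagination"))
    print([a.get("familyName") for a in article.get("author", [])])
    print(sorted({c["id"] for c in article.get("citation", [])}))  # de-duplicated citation ids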


     

    This table displays all metadata directly associated with this object as RDF triples.

    102 TRIPLES      21 PREDICATES      38 URIs      19 LITERALS      7 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11760-007-0025-4 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N61d234e84b5743f6bf20b29fcdeca59d
    4 schema:citation sg:pub.10.1007/s10044-005-0020-8
    5 https://doi.org/10.1016/s0167-8655(01)00047-2
    6 https://doi.org/10.1016/s1566-2535(03)00046-0
    7 https://doi.org/10.1109/18.119725
    8 https://doi.org/10.1109/aipr.2005.14
    9 https://doi.org/10.1109/aipr.2005.9
    10 https://doi.org/10.1109/icif.2003.177504
    11 https://doi.org/10.1109/icip.1995.537667
    12 https://doi.org/10.1117/1.2136903
    13 https://doi.org/10.1117/12.639711
    14 https://doi.org/10.1201/9781420026986.ch1
    15 schema:datePublished 2007-10
    16 schema:datePublishedReg 2007-10-01
    17 schema:description In the night vision applications, visual and infrared images are often fused for an improved awareness of situation or environment. The fusion algorithms can generate a composite image that retains most important information from source images for human perception. The state of the art includes manipulating in the color spaces and implementing pixel-level fusion with multiresolution algorithms. In this paper, a modified scheme based on multiresolution fusion is proposed to process monochrome visual and infrared images. The visual image is first enhanced based on corresponding infrared image. The final result is obtained by fusing the enhanced image with the visual image. The process highlights the features from visual image, which is most suitable for human perception.
    18 schema:genre research_article
    19 schema:inLanguage en
    20 schema:isAccessibleForFree false
    21 schema:isPartOf N25897fd558d1408d98a39bc21cf24a03
    22 N8602240605854e2c8a15b0fbbf6701ea
    23 sg:journal.1050964
    24 schema:name Context enhancement through infrared vision: a modified fusion scheme
    25 schema:pagination 293-301
    26 schema:productId N764dafbdf5af4493b89384956f4c5f6b
    27 Nda8387bbe3c04004895ae4e063852953
    28 Ne25f1dafb86d4d1cab40d5190c4db1d1
    29 schema:sameAs https://app.dimensions.ai/details/publication/pub.1026704159
    30 https://doi.org/10.1007/s11760-007-0025-4
    31 schema:sdDatePublished 2019-04-10T14:12
    32 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    33 schema:sdPublisher Nd312a3f345134db88bbfe203fcba953a
    34 schema:url http://link.springer.com/10.1007%2Fs11760-007-0025-4
    35 sgo:license sg:explorer/license/
    36 sgo:sdDataset articles
    37 rdf:type schema:ScholarlyArticle
    38 N25897fd558d1408d98a39bc21cf24a03 schema:volumeNumber 1
    39 rdf:type schema:PublicationVolume
    40 N3fedcbe75ef14f5d8d541e951cea46cb rdf:first sg:person.01144533722.06
    41 rdf:rest rdf:nil
    42 N61d234e84b5743f6bf20b29fcdeca59d rdf:first sg:person.010045203007.52
    43 rdf:rest N3fedcbe75ef14f5d8d541e951cea46cb
    44 N764dafbdf5af4493b89384956f4c5f6b schema:name dimensions_id
    45 schema:value pub.1026704159
    46 rdf:type schema:PropertyValue
    47 N8602240605854e2c8a15b0fbbf6701ea schema:issueNumber 4
    48 rdf:type schema:PublicationIssue
    49 Nd312a3f345134db88bbfe203fcba953a schema:name Springer Nature - SN SciGraph project
    50 rdf:type schema:Organization
    51 Nda8387bbe3c04004895ae4e063852953 schema:name doi
    52 schema:value 10.1007/s11760-007-0025-4
    53 rdf:type schema:PropertyValue
    54 Ne25f1dafb86d4d1cab40d5190c4db1d1 schema:name readcube_id
    55 schema:value 7f3778216fe0ca7093936d6bfedbf8db4b292934094210010273556229412332
    56 rdf:type schema:PropertyValue
    57 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    58 schema:name Information and Computing Sciences
    59 rdf:type schema:DefinedTerm
    60 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    61 schema:name Artificial Intelligence and Image Processing
    62 rdf:type schema:DefinedTerm
    63 sg:journal.1050964 schema:issn 1863-1703
    64 1863-1711
    65 schema:name Signal, Image and Video Processing
    66 rdf:type schema:Periodical
    67 sg:person.010045203007.52 schema:affiliation https://www.grid.ac/institutes/grid.28046.38
    68 schema:familyName Liu
    69 schema:givenName Zheng
    70 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010045203007.52
    71 rdf:type schema:Person
    72 sg:person.01144533722.06 schema:affiliation https://www.grid.ac/institutes/grid.28046.38
    73 schema:familyName Laganière
    74 schema:givenName Robert
    75 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01144533722.06
    76 rdf:type schema:Person
    77 sg:pub.10.1007/s10044-005-0020-8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1011591987
    78 https://doi.org/10.1007/s10044-005-0020-8
    79 rdf:type schema:CreativeWork
    80 https://doi.org/10.1016/s0167-8655(01)00047-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1049634412
    81 rdf:type schema:CreativeWork
    82 https://doi.org/10.1016/s1566-2535(03)00046-0 schema:sameAs https://app.dimensions.ai/details/publication/pub.1024362114
    83 rdf:type schema:CreativeWork
    84 https://doi.org/10.1109/18.119725 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061098596
    85 rdf:type schema:CreativeWork
    86 https://doi.org/10.1109/aipr.2005.14 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095812438
    87 rdf:type schema:CreativeWork
    88 https://doi.org/10.1109/aipr.2005.9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093226265
    89 rdf:type schema:CreativeWork
    90 https://doi.org/10.1109/icif.2003.177504 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095039863
    91 rdf:type schema:CreativeWork
    92 https://doi.org/10.1109/icip.1995.537667 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094903731
    93 rdf:type schema:CreativeWork
    94 https://doi.org/10.1117/1.2136903 schema:sameAs https://app.dimensions.ai/details/publication/pub.1042856958
    95 rdf:type schema:CreativeWork
    96 https://doi.org/10.1117/12.639711 schema:sameAs https://app.dimensions.ai/details/publication/pub.1038000595
    97 rdf:type schema:CreativeWork
    98 https://doi.org/10.1201/9781420026986.ch1 schema:sameAs https://app.dimensions.ai/details/publication/pub.1038074034
    99 rdf:type schema:CreativeWork
    100 https://www.grid.ac/institutes/grid.28046.38 schema:alternateName University of Ottawa
    101 schema:name VIVA Laboratory, STE 5023 School of Information Technology and Engineering, University of Ottawa, 800 King Edward Avenue, K1N 6N5, Ottawa, ON, Canada
    102 rdf:type schema:Organization
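
    The summary counts above (triples, predicates, literals) can be checked programmatically by loading the N-Triples serialization into an RDF library. A minimal sketch, assuming rdflib is installed and the endpoint above still serves the data:

    import urllib.request
    from rdflib import Graph, Literal

    RECORD_URL = "https://scigraph.springernature.com/pub.10.1007/s11760-007-0025-4"

    # Same content negotiation as the N-Triples curl example above.
    req = urllib.request.Request(RECORD_URL, headers={"Accept": "application/n-triples"})
    with urllib.request.urlopen(req) as resp:
        nt_data = resp.read().decode("utf-8")

    g = Graph()
    g.parse(data=nt_data, format="nt")

    # Reproduce the table's summary line (counts may differ if the record has changed).
    print(len(g), "triples")
    print(len(set(g.predicates())), "distinct predicates")
    print(len({o for o in g.objects() if isinstance(o, Literal)}), "distinct literals")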
     



