Microscopic image super resolution using deep convolutional neural networks


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2019-03-09

AUTHORS

Selen Ayas, Murat Ekinci

ABSTRACT

Recently, deep convolutional neural networks (CNNs) have achieved excellent results in single image super resolution (SISR). Owing to the strength of deep CNNs, they give promising results compared to state-of-the-art learning-based models on natural images. Therefore, deep CNN techniques have also been successfully applied to medical images to obtain better quality images. In this study, we present the first multi-scale deep CNN capable of SISR for low resolution (LR) microscopic images. To overcome the difficulty of training deep CNNs, a residual learning scheme is adopted in which the residuals are explicitly supervised by the difference between the high resolution (HR) and the LR images, and the HR image is reconstructed by adding the lost details back into the LR image. Furthermore, gradient clipping is used to avoid gradient explosions at high learning rates. Unlike deep CNN based SISR on natural images, where the corresponding LR images are obtained by blurring and subsampling HR images, the proposed deep CNN approach is tested using thin smear blood samples imaged at lower objective lenses, and its performance is compared with HR images taken at higher objective lenses. Extensive evaluations show that the proposed approach achieves superior SISR performance on microscopic images.
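The residual learning and gradient clipping described in the abstract can be made concrete with a short sketch. The following PyTorch code is only an illustration under assumed settings: the layer count, filter width, single-channel input, clipping threshold, and training step are hypothetical choices in the spirit of a VDSR-style residual SISR network, not the authors' exact architecture.

# Minimal sketch (not the paper's exact model): a deep CNN that predicts the
# residual between an interpolated LR image and its HR target, trained with
# gradient clipping so a high learning rate can be used safely.
import torch
import torch.nn as nn

class ResidualSR(nn.Module):
    def __init__(self, depth=20, channels=64):            # depth/width are assumptions
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]   # output = predicted residual
        self.body = nn.Sequential(*layers)

    def forward(self, lr_upscaled):
        # HR estimate = LR input + learned "lost details" (residual learning)
        return lr_upscaled + self.body(lr_upscaled)

def train_step(model, optimizer, lr_batch, hr_batch, clip=0.4):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(lr_batch), hr_batch)
    loss.backward()
    # clip gradients to avoid explosions when training with a high learning rate
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
    optimizer.step()
    return loss.item()

Because the network only has to learn the residual (the lost high-frequency details), the supervision signal stays small and well-behaved, which is what makes very deep models of this kind trainable.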

PAGES

15397-15415

References to SciGraph publications

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s11042-019-7397-7

DOI

http://dx.doi.org/10.1007/s11042-019-7397-7

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1112672304


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Department of Computer Engineering, Karadeniz Technical University, 61080, Trabzon, Turkey", 
          "id": "http://www.grid.ac/institutes/grid.31564.35", 
          "name": [
            "Department of Computer Engineering, Karadeniz Technical University, 61080, Trabzon, Turkey"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Ayas", 
        "givenName": "Selen", 
        "id": "sg:person.016357602171.07", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016357602171.07"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Department of Computer Engineering, Karadeniz Technical University, 61080, Trabzon, Turkey", 
          "id": "http://www.grid.ac/institutes/grid.31564.35", 
          "name": [
            "Department of Computer Engineering, Karadeniz Technical University, 61080, Trabzon, Turkey"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Ekinci", 
        "givenName": "Murat", 
        "id": "sg:person.010406724457.47", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010406724457.47"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/s11760-014-0708-6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1031985196", 
          "https://doi.org/10.1007/s11760-014-0708-6"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11042-017-4495-2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1084028919", 
          "https://doi.org/10.1007/s11042-017-4495-2"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-16817-3_8", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1040555336", 
          "https://doi.org/10.1007/978-3-319-16817-3_8"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-46475-6_25", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1011486685", 
          "https://doi.org/10.1007/978-3-319-46475-6_25"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2019-03-09", 
    "datePublishedReg": "2019-03-09", 
    "description": "Recently, deep convolutional neural networks (CNNs) have achieved excellent results in single image super resolution (SISR). Owing to the strength of deep CNNs, it gives promising results compared to state-of-the-art learning based models on natural images. Therefore, deep CNNs techniques have also been successfully applied to medical images to obtain better quality images. In this study, we present the first multi-scale deep CNNs capable of SISR for low resolution (LR) microscopic images. To achieve the difficulty of training deep CNNs, residual learning scheme is adopted where the residuals are explicitly supervised by the difference between the high resolution (HR) and the LR images and HR image is reconstructed by adding the lost details into the LR image. Furthermore, gradient clipping is used to avoid gradient explosions with high learning rates. Unlike the deep CNNs based SISR on natural images where the corresponding LR images are obtained by blurring and subsampling HR images, the proposed deep CNNs approach is tested using thin smear blood samples that are imaged at lower objective lenses and the performance is compared with the HR images taken at higher objective lenses. Extensive evaluations show that the superior performance on SISR for microscopic images is obtained using the proposed approach.", 
    "genre": "article", 
    "id": "sg:pub.10.1007/s11042-019-7397-7", 
    "inLanguage": "en", 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1044869", 
        "issn": [
          "1380-7501", 
          "1573-7721"
        ], 
        "name": "Multimedia Tools and Applications", 
        "publisher": "Springer Nature", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "21-22", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "79"
      }
    ], 
    "keywords": [
      "deep convolutional neural network", 
      "Single Image Super Resolution", 
      "convolutional neural network", 
      "image super resolution", 
      "LR images", 
      "HR image", 
      "neural network", 
      "natural images", 
      "multi-scale deep convolutional neural network", 
      "super resolution", 
      "deep CNN approach", 
      "Deep CNN techniques", 
      "gradient clipping", 
      "good quality images", 
      "CNN approach", 
      "CNN techniques", 
      "lost details", 
      "medical images", 
      "gradient explosion", 
      "learning scheme", 
      "art learning", 
      "learning rate", 
      "quality images", 
      "extensive evaluation", 
      "higher learning rate", 
      "microscopic images", 
      "superior performance", 
      "images", 
      "network", 
      "promising results", 
      "high resolution", 
      "performance", 
      "scheme", 
      "learning", 
      "technique", 
      "excellent results", 
      "explosion", 
      "resolution", 
      "model", 
      "results", 
      "detail", 
      "difficulties", 
      "evaluation", 
      "residuals", 
      "objective lenses", 
      "state", 
      "clipping", 
      "rate", 
      "lenses", 
      "strength", 
      "study", 
      "samples", 
      "differences", 
      "approach", 
      "blood samples", 
      "first multi-scale deep CNNs", 
      "low resolution (LR) microscopic images", 
      "resolution (LR) microscopic images", 
      "residual learning scheme", 
      "corresponding LR images", 
      "thin smear blood samples", 
      "smear blood samples", 
      "lower objective lenses", 
      "higher objective lenses", 
      "Microscopic image super resolution"
    ], 
    "name": "Microscopic image super resolution using deep convolutional neural networks", 
    "pagination": "15397-15415", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1112672304"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s11042-019-7397-7"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s11042-019-7397-7", 
      "https://app.dimensions.ai/details/publication/pub.1112672304"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2021-11-01T18:37", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20211101/entities/gbq_results/article/article_831.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://doi.org/10.1007/s11042-019-7397-7"
  }
]
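
As a sketch of how the JSON-LD record above can be consumed, the snippet below pulls a few fields out of it with plain Python. The file name record.json is hypothetical; it is assumed to hold the array shown above.

import json

# Assumes the JSON-LD array shown above has been saved to record.json (hypothetical name)
with open("record.json") as f:
    record = json.load(f)[0]           # the array contains a single article object

title = record["name"]
doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")
authors = [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]]

print(title)                           # Microscopic image super resolution using ...
print(doi)                             # 10.1007/s11042-019-7397-7
print(", ".join(authors))              # Selen Ayas, Murat Ekinci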
 

Download the RDF metadata as: JSON-LD, N-Triples (nt), Turtle, or RDF/XML. License info: https://scigraph.springernature.com/explorer/license/

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11042-019-7397-7'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11042-019-7397-7'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11042-019-7397-7'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11042-019-7397-7'
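
The same content negotiation shown in the curl commands can be scripted. Here is a small Python sketch using the requests library; the output file names are hypothetical, and it assumes the library is installed and the endpoint is reachable.

# Sketch of the curl-based content negotiation above, using Python requests.
import requests

URL = "https://scigraph.springernature.com/pub.10.1007/s11042-019-7397-7"
FORMATS = {
    "application/ld+json": "record.jsonld",   # JSON-LD
    "application/n-triples": "record.nt",     # N-Triples
    "text/turtle": "record.ttl",              # Turtle
    "application/rdf+xml": "record.rdf",      # RDF/XML
}

for accept, filename in FORMATS.items():
    resp = requests.get(URL, headers={"Accept": accept}, timeout=30)
    resp.raise_for_status()
    with open(filename, "w", encoding="utf-8") as f:
        f.write(resp.text)
    print(f"saved {filename} ({len(resp.text)} characters)")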


 

The RDF view of this record contains 146 triples, 22 predicates, 94 URIs, 82 literals, and 6 blank nodes; these statements express the same metadata as the JSON-LD record above.




