Medical image processing with contextual style transfer


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2020-11-10

AUTHORS

Yin Xu, Yan Li, Byeong-Seok Shin

ABSTRACT

With recent advances in deep learning research, generative models have made great strides and play an increasingly important role in industrial applications. At the same time, techniques derived from generative methods, such as style transfer and image synthesis, are widely discussed among researchers. In this work, we treat generative methods as a possible solution to medical image augmentation. We propose a context-aware generative framework that can change the gray scale of CT scans with almost no semantic loss. By producing target images with a specific style/distribution and adding the generated images to the training set, we greatly increase the robustness of the segmentation model. We also improve pixel segmentation accuracy by 2–4% over the original U-Net for spine segmentation. Lastly, we compare the images generated with different feature extractors (VGG, ResNet, and DenseNet) and provide a detailed analysis of their style-transfer performance.

PAGES

46

References to SciGraph publications

  • 2017-09-26. Deep MR to CT Synthesis Using Unpaired Data in SIMULATION AND SYNTHESIS IN MEDICAL IMAGING
  • 2016-09-17. Colorful Image Colorization in COMPUTER VISION – ECCV 2016
  • 2018-10-07. Multimodal Unsupervised Image-to-Image Translation in COMPUTER VISION – ECCV 2018
  • 2018-10-09. The Contextual Loss for Image Transformation with Non-aligned Data in COMPUTER VISION – ECCV 2018
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
  • Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1186/s13673-020-00251-9

    DOI

    http://dx.doi.org/10.1186/s13673-020-00251-9

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1132499198


    Indexing Status Check whether this publication has been indexed by Scopus and Web Of Science using the SN Indexing Status Tool
    Incoming Citations Browse incoming citations for this publication using opencitations.net

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea", 
              "id": "http://www.grid.ac/institutes/grid.202119.9", 
              "name": [
                "Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Xu", 
            "givenName": "Yin", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea", 
              "id": "http://www.grid.ac/institutes/grid.202119.9", 
              "name": [
                "Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Li", 
            "givenName": "Yan", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea", 
              "id": "http://www.grid.ac/institutes/grid.202119.9", 
              "name": [
                "Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Shin", 
            "givenName": "Byeong-Seok", 
            "id": "sg:person.01061057524.02", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01061057524.02"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01264-9_47", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107502749", 
              "https://doi.org/10.1007/978-3-030-01264-9_47"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-68127-6_2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1091962600", 
              "https://doi.org/10.1007/978-3-319-68127-6_2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01219-9_11", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463195", 
              "https://doi.org/10.1007/978-3-030-01219-9_11"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46487-9_40", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016085920", 
              "https://doi.org/10.1007/978-3-319-46487-9_40"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2020-11-10", 
        "datePublishedReg": "2020-11-10", 
        "description": "With recent advances in deep learning research, generative models have achieved great achievements and play an increasingly important role in current industrial applications. At the same time, technologies derived from generative methods are also under a wide discussion with researches, such as style transfer, image synthesis and so on. In this work, we treat generative methods as a possible solution to medical image augmentation. We proposed a context-aware generative framework, which can successfully change the gray scale of CT scans but almost without any semantic loss. By producing target images that with specific style / distribution, we greatly increased the robustness of segmentation model after adding generations into training set. Besides, we improved 2\u2013\u20094% pixel segmentation accuracy over original U-NET in terms of spine segmentation. Lastly, we compared generations produced by networks when using different feature extractors (Vgg, ResNet and DenseNet) and made a detailed analysis on their performances over style transfer.", 
        "genre": "article", 
        "id": "sg:pub.10.1186/s13673-020-00251-9", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1136381", 
            "issn": [
              "2192-1962"
            ], 
            "name": "Human-centric Computing and Information Sciences", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "10"
          }
        ], 
        "keywords": [
          "style transfer", 
          "generative methods", 
          "deep learning research", 
          "medical image processing", 
          "different feature extractors", 
          "original U-Net", 
          "image augmentation", 
          "feature extractor", 
          "U-Net", 
          "segmentation model", 
          "segmentation accuracy", 
          "image synthesis", 
          "image processing", 
          "spine segmentation", 
          "semantic loss", 
          "target image", 
          "generative model", 
          "generative framework", 
          "learning research", 
          "training set", 
          "current industrial applications", 
          "possible solutions", 
          "gray scale", 
          "segmentation", 
          "extractor", 
          "same time", 
          "network", 
          "industrial applications", 
          "great achievements", 
          "images", 
          "robustness", 
          "technology", 
          "framework", 
          "accuracy", 
          "processing", 
          "set", 
          "recent advances", 
          "model", 
          "applications", 
          "method", 
          "performance", 
          "generation", 
          "research", 
          "solution", 
          "work", 
          "augmentation", 
          "advances", 
          "detailed analysis", 
          "terms", 
          "wider discussion", 
          "time", 
          "important role", 
          "transfer", 
          "discussion", 
          "analysis", 
          "CT scan", 
          "scans", 
          "achievement", 
          "scale", 
          "distribution", 
          "loss", 
          "role", 
          "synthesis"
        ], 
        "name": "Medical image processing with contextual style transfer", 
        "pagination": "46", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1132499198"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1186/s13673-020-00251-9"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1186/s13673-020-00251-9", 
          "https://app.dimensions.ai/details/publication/pub.1132499198"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-12-01T06:42", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_870.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1186/s13673-020-00251-9"
      }
    ]
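Because JSON-LD is plain JSON, the record above can be consumed with any JSON parser. A minimal Python sketch (the embedded record is abbreviated from the full JSON-LD shown above, keeping only a few of its fields for illustration):

```python
import json

# Abbreviated copy of the SciGraph JSON-LD record shown above.
record_jsonld = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "author": [
      {"familyName": "Xu", "givenName": "Yin", "type": "Person"},
      {"familyName": "Li", "givenName": "Yan", "type": "Person"},
      {"familyName": "Shin", "givenName": "Byeong-Seok", "type": "Person"}
    ],
    "datePublished": "2020-11-10",
    "name": "Medical image processing with contextual style transfer",
    "sameAs": ["https://doi.org/10.1186/s13673-020-00251-9"],
    "type": "ScholarlyArticle"
  }
]
"""

# The top-level JSON-LD document is a one-element array; take the record itself.
record = json.loads(record_jsonld)[0]
authors = [f"{a['givenName']} {a['familyName']}" for a in record["author"]]
print(record["name"])            # article title
print(", ".join(authors))        # Yin Xu, Yan Li, Byeong-Seok Shin
print(record["datePublished"])   # 2020-11-10
```

The same field names (`author`, `datePublished`, `name`, `sameAs`) apply to the full record; for full @context-aware processing (term expansion, compaction) a dedicated JSON-LD library would be needed rather than plain `json`.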
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1186/s13673-020-00251-9'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1186/s13673-020-00251-9'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1186/s13673-020-00251-9'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1186/s13673-020-00251-9'
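The same content negotiation can be done from Python's standard library. The Accept headers below are the four listed above; the `fetch` helper is illustrative and requires network access to actually run against SciGraph, so the sketch only builds and inspects a request:

```python
from urllib.request import Request, urlopen

# MIME types accepted by the SciGraph endpoint, as listed above.
ACCEPT = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

RECORD_URL = "https://scigraph.springernature.com/pub.10.1186/s13673-020-00251-9"


def build_request(url: str, fmt: str) -> Request:
    """Build a content-negotiated request for one of the supported RDF formats."""
    return Request(url, headers={"Accept": ACCEPT[fmt]})


def fetch(url: str, fmt: str = "json-ld") -> str:
    """Fetch the record in the requested serialization (needs network access)."""
    with urlopen(build_request(url, fmt)) as resp:
        return resp.read().decode("utf-8")


req = build_request(RECORD_URL, "turtle")
print(req.get_header("Accept"))  # text/turtle
```

Calling `fetch(RECORD_URL, "nt")` would mirror the second curl command above; each format name simply selects a different Accept header against the same URL.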


     

    This table displays all metadata directly associated to this object as RDF triples.

    151 TRIPLES      21 PREDICATES      92 URIs      79 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1186/s13673-020-00251-9 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Ndc547e28440d4be1a9157e2a76d6b089
    4 schema:citation sg:pub.10.1007/978-3-030-01219-9_11
    5 sg:pub.10.1007/978-3-030-01264-9_47
    6 sg:pub.10.1007/978-3-319-24574-4_28
    7 sg:pub.10.1007/978-3-319-46487-9_40
    8 sg:pub.10.1007/978-3-319-68127-6_2
    9 schema:datePublished 2020-11-10
    10 schema:datePublishedReg 2020-11-10
    11 schema:description With recent advances in deep learning research, generative models have achieved great achievements and play an increasingly important role in current industrial applications. At the same time, technologies derived from generative methods are also under a wide discussion with researches, such as style transfer, image synthesis and so on. In this work, we treat generative methods as a possible solution to medical image augmentation. We proposed a context-aware generative framework, which can successfully change the gray scale of CT scans but almost without any semantic loss. By producing target images that with specific style / distribution, we greatly increased the robustness of segmentation model after adding generations into training set. Besides, we improved 2– 4% pixel segmentation accuracy over original U-NET in terms of spine segmentation. Lastly, we compared generations produced by networks when using different feature extractors (Vgg, ResNet and DenseNet) and made a detailed analysis on their performances over style transfer.
    12 schema:genre article
    13 schema:isAccessibleForFree true
    14 schema:isPartOf N30bb1d66f97e4f56bb232b8893fcbc15
    15 N5879614deda04854a1923609763f3eb1
    16 sg:journal.1136381
    17 schema:keywords CT scan
    18 U-Net
    19 accuracy
    20 achievement
    21 advances
    22 analysis
    23 applications
    24 augmentation
    25 current industrial applications
    26 deep learning research
    27 detailed analysis
    28 different feature extractors
    29 discussion
    30 distribution
    31 extractor
    32 feature extractor
    33 framework
    34 generation
    35 generative framework
    36 generative methods
    37 generative model
    38 gray scale
    39 great achievements
    40 image augmentation
    41 image processing
    42 image synthesis
    43 images
    44 important role
    45 industrial applications
    46 learning research
    47 loss
    48 medical image processing
    49 method
    50 model
    51 network
    52 original U-Net
    53 performance
    54 possible solutions
    55 processing
    56 recent advances
    57 research
    58 robustness
    59 role
    60 same time
    61 scale
    62 scans
    63 segmentation
    64 segmentation accuracy
    65 segmentation model
    66 semantic loss
    67 set
    68 solution
    69 spine segmentation
    70 style transfer
    71 synthesis
    72 target image
    73 technology
    74 terms
    75 time
    76 training set
    77 transfer
    78 wider discussion
    79 work
    80 schema:name Medical image processing with contextual style transfer
    81 schema:pagination 46
    82 schema:productId N581507899097471d9c9b118a2f9d6b0c
    83 Nf43ecac96c134fea84f53f886ee7bf8b
    84 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132499198
    85 https://doi.org/10.1186/s13673-020-00251-9
    86 schema:sdDatePublished 2022-12-01T06:42
    87 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    88 schema:sdPublisher N7d12d2dfb8c144c499dc5fa82389caf2
    89 schema:url https://doi.org/10.1186/s13673-020-00251-9
    90 sgo:license sg:explorer/license/
    91 sgo:sdDataset articles
    92 rdf:type schema:ScholarlyArticle
    93 N1a7a45445b1b4cc1bd9b50dfade053ba schema:affiliation grid-institutes:grid.202119.9
    94 schema:familyName Li
    95 schema:givenName Yan
    96 rdf:type schema:Person
    97 N30bb1d66f97e4f56bb232b8893fcbc15 schema:issueNumber 1
    98 rdf:type schema:PublicationIssue
    99 N581507899097471d9c9b118a2f9d6b0c schema:name doi
    100 schema:value 10.1186/s13673-020-00251-9
    101 rdf:type schema:PropertyValue
    102 N5879614deda04854a1923609763f3eb1 schema:volumeNumber 10
    103 rdf:type schema:PublicationVolume
    104 N664df10e32004859a13594776477e795 rdf:first sg:person.01061057524.02
    105 rdf:rest rdf:nil
    106 N7d12d2dfb8c144c499dc5fa82389caf2 schema:name Springer Nature - SN SciGraph project
    107 rdf:type schema:Organization
    108 Nd663bd3c7ce64d109f5c2e43ff49c215 schema:affiliation grid-institutes:grid.202119.9
    109 schema:familyName Xu
    110 schema:givenName Yin
    111 rdf:type schema:Person
    112 Ndc547e28440d4be1a9157e2a76d6b089 rdf:first Nd663bd3c7ce64d109f5c2e43ff49c215
    113 rdf:rest Nfda9fe1b07784487bbdc211f93ffdcf0
    114 Nf43ecac96c134fea84f53f886ee7bf8b schema:name dimensions_id
    115 schema:value pub.1132499198
    116 rdf:type schema:PropertyValue
    117 Nfda9fe1b07784487bbdc211f93ffdcf0 rdf:first N1a7a45445b1b4cc1bd9b50dfade053ba
    118 rdf:rest N664df10e32004859a13594776477e795
    119 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    120 schema:name Information and Computing Sciences
    121 rdf:type schema:DefinedTerm
    122 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    123 schema:name Artificial Intelligence and Image Processing
    124 rdf:type schema:DefinedTerm
    125 sg:journal.1136381 schema:issn 2192-1962
    126 schema:name Human-centric Computing and Information Sciences
    127 schema:publisher Springer Nature
    128 rdf:type schema:Periodical
    129 sg:person.01061057524.02 schema:affiliation grid-institutes:grid.202119.9
    130 schema:familyName Shin
    131 schema:givenName Byeong-Seok
    132 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01061057524.02
    133 rdf:type schema:Person
    134 sg:pub.10.1007/978-3-030-01219-9_11 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107463195
    135 https://doi.org/10.1007/978-3-030-01219-9_11
    136 rdf:type schema:CreativeWork
    137 sg:pub.10.1007/978-3-030-01264-9_47 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107502749
    138 https://doi.org/10.1007/978-3-030-01264-9_47
    139 rdf:type schema:CreativeWork
    140 sg:pub.10.1007/978-3-319-24574-4_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017774818
    141 https://doi.org/10.1007/978-3-319-24574-4_28
    142 rdf:type schema:CreativeWork
    143 sg:pub.10.1007/978-3-319-46487-9_40 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016085920
    144 https://doi.org/10.1007/978-3-319-46487-9_40
    145 rdf:type schema:CreativeWork
    146 sg:pub.10.1007/978-3-319-68127-6_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091962600
    147 https://doi.org/10.1007/978-3-319-68127-6_2
    148 rdf:type schema:CreativeWork
    149 grid-institutes:grid.202119.9 schema:alternateName Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea
    150 schema:name Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea
    151 rdf:type schema:Organization
     



