End-to-End Boundary Aware Networks for Medical Image Segmentation


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2019-10-10

AUTHORS

Ali Hatamizadeh , Demetri Terzopoulos , Andriy Myronenko

ABSTRACT

Fully convolutional neural networks (CNNs) have proven to be effective at representing and classifying textural information, thus transforming image intensity into output class masks that achieve semantic image segmentation. In medical image analysis, however, expert manual segmentation often relies on the boundaries of anatomical structures of interest. We propose boundary aware CNNs for medical image segmentation. Our networks are designed to account for organ boundary information, both by providing a special network edge branch and edge-aware loss terms, and they are trainable end-to-end. We validate their effectiveness on the task of brain tumor segmentation using the BraTS 2018 dataset. Our experiments reveal that our approach yields more accurate segmentation results, which makes it promising for more extensive application to medical image segmentation.
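The abstract describes a segmentation network augmented with an auxiliary edge branch and edge-aware loss terms, trained end-to-end. As an illustration only (not the authors' implementation), a boundary-aware training objective of this general form can be sketched in PyTorch; the Dice/BCE loss choices, the tensor shapes, and the edge_weight balancing factor below are assumptions.

import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1e-6):
    # Region-overlap (soft Dice) loss over binary masks of shape (N, 1, H, W).
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def boundary_aware_loss(seg_logits, edge_logits, seg_target, edge_target, edge_weight=0.5):
    # Segmentation loss on the main output plus an edge-aware term on the
    # auxiliary edge branch; edge_weight is a hypothetical balancing factor.
    seg_loss = soft_dice_loss(seg_logits, seg_target)
    edge_loss = F.binary_cross_entropy_with_logits(edge_logits, edge_target)
    return seg_loss + edge_weight * edge_loss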

PAGES

187-194

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-030-32692-0_22

DOI

http://dx.doi.org/10.1007/978-3-030-32692-0_22

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1121612413


Indexing Status: Check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: Browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "NVIDIA, Santa Clara, CA, USA", 
          "id": "http://www.grid.ac/institutes/grid.451133.1", 
          "name": [
            "Computer Science Department, University of California, Los Angeles, CA, USA", 
            "NVIDIA, Santa Clara, CA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Hatamizadeh", 
        "givenName": "Ali", 
        "id": "sg:person.015555505250.21", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015555505250.21"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computer Science Department, University of California, Los Angeles, CA, USA", 
          "id": "http://www.grid.ac/institutes/grid.19006.3e", 
          "name": [
            "Computer Science Department, University of California, Los Angeles, CA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Terzopoulos", 
        "givenName": "Demetri", 
        "id": "sg:person.016347323445.35", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016347323445.35"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "NVIDIA, Santa Clara, CA, USA", 
          "id": "http://www.grid.ac/institutes/grid.451133.1", 
          "name": [
            "NVIDIA, Santa Clara, CA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Myronenko", 
        "givenName": "Andriy", 
        "id": "sg:person.01100761007.14", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01100761007.14"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2019-10-10", 
    "datePublishedReg": "2019-10-10", 
    "description": "Fully convolutional neural networks (CNNs) have proven to be effective at representing and classifying textural information, thus transforming image intensity into output class masks that achieve semantic image segmentation. In medical image analysis, however, expert manual segmentation often relies on the boundaries of anatomical structures of interest. We propose boundary aware CNNs for medical image segmentation. Our networks are designed to account for organ boundary information, both by providing a special network edge branch and edge-aware loss terms, and they are trainable end-to-end. We validate their effectiveness on the task of brain tumor segmentation using the BraTS 2018 dataset. Our experiments reveal that our approach yields more accurate segmentation results, which makes it promising for more extensive application to medical image segmentation.", 
    "editor": [
      {
        "familyName": "Suk", 
        "givenName": "Heung-Il", 
        "type": "Person"
      }, 
      {
        "familyName": "Liu", 
        "givenName": "Mingxia", 
        "type": "Person"
      }, 
      {
        "familyName": "Yan", 
        "givenName": "Pingkun", 
        "type": "Person"
      }, 
      {
        "familyName": "Lian", 
        "givenName": "Chunfeng", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-030-32692-0_22", 
    "inLanguage": "en", 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-030-32691-3", 
        "978-3-030-32692-0"
      ], 
      "name": "Machine Learning in Medical Imaging", 
      "type": "Book"
    }, 
    "keywords": [
      "convolutional neural network", 
      "medical image segmentation", 
      "image segmentation", 
      "semantic image segmentation", 
      "medical image analysis", 
      "brain tumor segmentation", 
      "accurate segmentation results", 
      "expert manual segmentation", 
      "aware network", 
      "trainable end", 
      "tumor segmentation", 
      "neural network", 
      "segmentation results", 
      "boundary information", 
      "textural information", 
      "manual segmentation", 
      "segmentation", 
      "image intensity", 
      "network", 
      "image analysis", 
      "edge branch", 
      "loss term", 
      "extensive application", 
      "information", 
      "anatomical structures", 
      "dataset", 
      "task", 
      "applications", 
      "effectiveness", 
      "end", 
      "mask", 
      "experiments", 
      "interest", 
      "terms", 
      "results", 
      "branches", 
      "boundaries", 
      "analysis", 
      "structure", 
      "intensity", 
      "approach", 
      "output class masks", 
      "class masks", 
      "boundary aware CNNs", 
      "aware CNNs", 
      "organ boundary information", 
      "special network edge branch", 
      "network edge branch", 
      "edge-aware loss terms", 
      "End Boundary Aware Networks", 
      "Boundary Aware Networks"
    ], 
    "name": "End-to-End Boundary Aware Networks for Medical Image Segmentation", 
    "pagination": "187-194", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1121612413"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-030-32692-0_22"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-030-32692-0_22", 
      "https://app.dimensions.ai/details/publication/pub.1121612413"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-01-01T19:23", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/chapter/chapter_404.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-030-32692-0_22"
  }
]
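Since the record above is plain JSON-LD, it can be consumed with any standard JSON library. A minimal Python sketch, assuming the record has been saved locally as record.jsonld (a hypothetical filename), extracts the title, DOI, and author names:

import json

# The SciGraph export is a JSON array containing a single record object.
with open("record.jsonld") as f:
    record = json.load(f)[0]

title = record["name"]
doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")
authors = ["{} {}".format(a["givenName"], a["familyName"]) for a in record["author"]]

print(title)              # End-to-End Boundary Aware Networks for Medical Image Segmentation
print(doi)                # 10.1007/978-3-030-32692-0_22
print(", ".join(authors)) # Ali Hatamizadeh, Demetri Terzopoulos, Andriy Myronenko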
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-32692-0_22'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-32692-0_22'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-32692-0_22'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-32692-0_22'
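The four curl commands above all request the same URL and differ only in the Accept header. A minimal Python sketch of the same content negotiation using the requests library, under the assumption that the endpoint behaves as the commands above show:

import requests

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/978-3-030-32692-0_22"

# Each RDF serialization is selected purely via HTTP content negotiation,
# mirroring the curl commands above.
ACCEPT = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def fetch_record(fmt="json-ld"):
    # Return this publication's metadata in the requested serialization.
    resp = requests.get(SCIGRAPH_URL, headers={"Accept": ACCEPT[fmt]})
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(fetch_record("turtle")[:500])  # preview the Turtle serialization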


 

This table displays all metadata directly associated with this object as RDF triples.

144 TRIPLES      23 PREDICATES      76 URIs      69 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-030-32692-0_22 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N8d96fec811684d91b500467a32faecc7
4 schema:datePublished 2019-10-10
5 schema:datePublishedReg 2019-10-10
6 schema:description Fully convolutional neural networks (CNNs) have proven to be effective at representing and classifying textural information, thus transforming image intensity into output class masks that achieve semantic image segmentation. In medical image analysis, however, expert manual segmentation often relies on the boundaries of anatomical structures of interest. We propose boundary aware CNNs for medical image segmentation. Our networks are designed to account for organ boundary information, both by providing a special network edge branch and edge-aware loss terms, and they are trainable end-to-end. We validate their effectiveness on the task of brain tumor segmentation using the BraTS 2018 dataset. Our experiments reveal that our approach yields more accurate segmentation results, which makes it promising for more extensive application to medical image segmentation.
7 schema:editor N228d4a8360564c47ba17e646d9f1049c
8 schema:genre chapter
9 schema:inLanguage en
10 schema:isAccessibleForFree true
11 schema:isPartOf Nebf7ae1f86134166b7747f71422f76e1
12 schema:keywords Boundary Aware Networks
13 End Boundary Aware Networks
14 accurate segmentation results
15 analysis
16 anatomical structures
17 applications
18 approach
19 aware CNNs
20 aware network
21 boundaries
22 boundary aware CNNs
23 boundary information
24 brain tumor segmentation
25 branches
26 class masks
27 convolutional neural network
28 dataset
29 edge branch
30 edge-aware loss terms
31 effectiveness
32 end
33 experiments
34 expert manual segmentation
35 extensive application
36 image analysis
37 image intensity
38 image segmentation
39 information
40 intensity
41 interest
42 loss term
43 manual segmentation
44 mask
45 medical image analysis
46 medical image segmentation
47 network
48 network edge branch
49 neural network
50 organ boundary information
51 output class masks
52 results
53 segmentation
54 segmentation results
55 semantic image segmentation
56 special network edge branch
57 structure
58 task
59 terms
60 textural information
61 trainable end
62 tumor segmentation
63 schema:name End-to-End Boundary Aware Networks for Medical Image Segmentation
64 schema:pagination 187-194
65 schema:productId N48555ac877d44efea7ec1a4c27ecb05a
66 N86c7e9f5088740888b8077cf751c55c5
67 schema:publisher N27eeaf19d60f49e8bbd67a01a4aea9d9
68 schema:sameAs https://app.dimensions.ai/details/publication/pub.1121612413
69 https://doi.org/10.1007/978-3-030-32692-0_22
70 schema:sdDatePublished 2022-01-01T19:23
71 schema:sdLicense https://scigraph.springernature.com/explorer/license/
72 schema:sdPublisher Nbf9830ad2651489ebe8cfc3329b16d4b
73 schema:url https://doi.org/10.1007/978-3-030-32692-0_22
74 sgo:license sg:explorer/license/
75 sgo:sdDataset chapters
76 rdf:type schema:Chapter
77 N0881347c1c4444b89cbb51d54b2ab644 schema:familyName Liu
78 schema:givenName Mingxia
79 rdf:type schema:Person
80 N228d4a8360564c47ba17e646d9f1049c rdf:first N9c5307e5b41c46f8902f683e7345abe1
81 rdf:rest Nce4c29ce3e4a49bf8607744455cc8741
82 N27eeaf19d60f49e8bbd67a01a4aea9d9 schema:name Springer Nature
83 rdf:type schema:Organisation
84 N48555ac877d44efea7ec1a4c27ecb05a schema:name dimensions_id
85 schema:value pub.1121612413
86 rdf:type schema:PropertyValue
87 N5a8251df77bd463191c889bc9db6d42e rdf:first sg:person.016347323445.35
88 rdf:rest Nfb353b2dabd842edaaada027a7609f7b
89 N6cbb0e14773e4235ad7a6038c10dfc45 rdf:first Nb595f3d7aad4421ca29ae3636c7891de
90 rdf:rest rdf:nil
91 N7069a4c816684d2aaa9af9f781cea9e8 schema:familyName Yan
92 schema:givenName Pingkun
93 rdf:type schema:Person
94 N86c7e9f5088740888b8077cf751c55c5 schema:name doi
95 schema:value 10.1007/978-3-030-32692-0_22
96 rdf:type schema:PropertyValue
97 N8d96fec811684d91b500467a32faecc7 rdf:first sg:person.015555505250.21
98 rdf:rest N5a8251df77bd463191c889bc9db6d42e
99 N9c5307e5b41c46f8902f683e7345abe1 schema:familyName Suk
100 schema:givenName Heung-Il
101 rdf:type schema:Person
102 Nb595f3d7aad4421ca29ae3636c7891de schema:familyName Lian
103 schema:givenName Chunfeng
104 rdf:type schema:Person
105 Nbf9830ad2651489ebe8cfc3329b16d4b schema:name Springer Nature - SN SciGraph project
106 rdf:type schema:Organization
107 Nc3b41c8c21754fde92f08e5f1c3e383a rdf:first N7069a4c816684d2aaa9af9f781cea9e8
108 rdf:rest N6cbb0e14773e4235ad7a6038c10dfc45
109 Nce4c29ce3e4a49bf8607744455cc8741 rdf:first N0881347c1c4444b89cbb51d54b2ab644
110 rdf:rest Nc3b41c8c21754fde92f08e5f1c3e383a
111 Nebf7ae1f86134166b7747f71422f76e1 schema:isbn 978-3-030-32691-3
112 978-3-030-32692-0
113 schema:name Machine Learning in Medical Imaging
114 rdf:type schema:Book
115 Nfb353b2dabd842edaaada027a7609f7b rdf:first sg:person.01100761007.14
116 rdf:rest rdf:nil
117 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
118 schema:name Information and Computing Sciences
119 rdf:type schema:DefinedTerm
120 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
121 schema:name Artificial Intelligence and Image Processing
122 rdf:type schema:DefinedTerm
123 sg:person.01100761007.14 schema:affiliation grid-institutes:grid.451133.1
124 schema:familyName Myronenko
125 schema:givenName Andriy
126 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01100761007.14
127 rdf:type schema:Person
128 sg:person.015555505250.21 schema:affiliation grid-institutes:grid.451133.1
129 schema:familyName Hatamizadeh
130 schema:givenName Ali
131 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015555505250.21
132 rdf:type schema:Person
133 sg:person.016347323445.35 schema:affiliation grid-institutes:grid.19006.3e
134 schema:familyName Terzopoulos
135 schema:givenName Demetri
136 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016347323445.35
137 rdf:type schema:Person
138 grid-institutes:grid.19006.3e schema:alternateName Computer Science Department, University of California, Los Angeles, CA, USA
139 schema:name Computer Science Department, University of California, Los Angeles, CA, USA
140 rdf:type schema:Organization
141 grid-institutes:grid.451133.1 schema:alternateName NVIDIA, Santa Clara, CA, USA
142 schema:name Computer Science Department, University of California, Los Angeles, CA, USA
143 NVIDIA, Santa Clara, CA, USA
144 rdf:type schema:Organization
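The triple listing above can be reproduced programmatically by parsing one of the RDF serializations, for example with rdflib in Python. A minimal sketch, assuming the live record still matches this snapshot:

import requests
from rdflib import Graph

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-030-32692-0_22"

# Fetch the Turtle serialization (see the curl commands above) and parse it.
ttl = requests.get(URL, headers={"Accept": "text/turtle"}).text
g = Graph()
g.parse(data=ttl, format="turtle")

# The summary line above reports 144 triples and 23 predicates for this
# snapshot; the live record may have changed since.
print(len(g), "triples")
print(len(set(g.predicates())), "distinct predicates")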
 



