Efficient Active Learning for Image Classification and Segmentation Using a Sample Selection and Conditional Generative Adversarial Network
Ontology type: schema:Chapter
Open Access: True
2018
AUTHORS: Dwarikanath Mahapatra, Behzad Bozorgtabar, Jean-Philippe Thiran, Mauricio Reyes
ABSTRACT: Training robust deep learning (DL) systems for medical image classification or segmentation is challenging due to the limited number of images covering different disease types and severities. We propose an active learning (AL) framework to select the most informative samples and add them to the training data. We use conditional generative adversarial networks (cGANs) to generate realistic chest X-ray images with different disease characteristics by conditioning the generation on a real image sample. Informative samples to add to the training set are identified using a Bayesian neural network. Experiments show that our proposed AL framework achieves state-of-the-art performance using about 35% of the full dataset, saving significant time and effort over conventional methods.
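As an informal illustration of the sample-selection step described in the abstract, the sketch below scores an unlabeled pool with Monte Carlo dropout (one common way to approximate a Bayesian neural network) and keeps the most uncertain images for annotation. It is a minimal sketch under our own assumptions, not the authors' implementation: SmallCNN, mc_dropout_entropy and select_informative are hypothetical names, the data is random, and the cGAN-based image generation described in the abstract is omitted.

# Illustrative sketch (not the authors' code): pick the most informative
# unlabeled samples via predictive entropy under Monte Carlo dropout.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):                      # hypothetical classifier
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Dropout(0.5), nn.Linear(8 * 16, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

def mc_dropout_entropy(model, x, passes: int = 20):
    """Predictive entropy under MC dropout (higher = more informative)."""
    model.train()                               # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(passes)]
        ).mean(dim=0)                           # average over stochastic passes
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def select_informative(model, pool, k: int = 8):
    """Indices of the k most uncertain pool images to send for labeling."""
    scores = mc_dropout_entropy(model, pool)
    return torch.topk(scores, k).indices

if __name__ == "__main__":
    model = SmallCNN()
    pool = torch.randn(64, 1, 32, 32)           # stand-in for unlabeled chest X-rays
    print(select_informative(model, pool, k=8))

In the full AL loop described in the abstract, the selected (and cGAN-augmented) samples would then be labeled, added to the training set, and the classifier retrained, repeating until the annotation budget is exhausted.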
PAGES: 580-588
Medical Image Computing and Computer Assisted Intervention – MICCAI 2018
ISBN: 978-3-030-00933-5, 978-3-030-00934-2
http://scigraph.springernature.com/pub.10.1007/978-3-030-00934-2_65
DOI: http://dx.doi.org/10.1007/978-3-030-00934-2_65
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1107024340
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "IBM Research - Australia",
"id": "https://www.grid.ac/institutes/grid.481553.e",
"name": [
"IBM Research Australia"
],
"type": "Organization"
},
"familyName": "Mahapatra",
"givenName": "Dwarikanath",
"id": "sg:person.01100662063.91",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01100662063.91"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne",
"id": "https://www.grid.ac/institutes/grid.5333.6",
"name": [
"Ecole Polytechnique Federale de Lausanne"
],
"type": "Organization"
},
"familyName": "Bozorgtabar",
"givenName": "Behzad",
"id": "sg:person.013213672445.40",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013213672445.40"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne",
"id": "https://www.grid.ac/institutes/grid.5333.6",
"name": [
"Ecole Polytechnique Federale de Lausanne"
],
"type": "Organization"
},
"familyName": "Thiran",
"givenName": "Jean-Philippe",
"id": "sg:person.01056563742.51",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01056563742.51"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "University of Bern",
"id": "https://www.grid.ac/institutes/grid.5734.5",
"name": [
"University of Bern"
],
"type": "Organization"
},
"familyName": "Reyes",
"givenName": "Mauricio",
"type": "Person"
}
],
"citation": [
{
"id": "https://doi.org/10.1016/j.media.2005.02.002",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1002136321"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-319-24574-4_28",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1017774818",
"https://doi.org/10.1007/978-3-319-24574-4_28"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-642-40763-5_27",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1027682231",
"https://doi.org/10.1007/978-3-642-40763-5_27"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/tcsvt.2016.2589879",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061576837"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/tmi.2016.2535302",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061696712"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-319-66179-7_46",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1091428982",
"https://doi.org/10.1007/978-3-319-66179-7_46"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-319-66179-7_44",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1091432173",
"https://doi.org/10.1007/978-3-319-66179-7_44"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.2016.90",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1093359587"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.2013.116",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1093551533"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.2016.278",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1095706293"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.2017.369",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1095848734"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.2017.632",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1095850445"
],
"type": "CreativeWork"
}
],
"datePublished": "2018",
"datePublishedReg": "2018-01-01",
"description": "Training robust deep learning (DL) systems for medical image classification or segmentation is challenging due to limited images covering different disease types and severity. We propose an active learning (AL) framework to select most informative samples and add to the training data. We use conditional generative adversarial networks (cGANs) to generate realistic chest xray images with different disease characteristics by conditioning its generation on a real image sample. Informative samples to add to the training set are identified using a Bayesian neural network. Experiments show our proposed AL framework is able to achieve state of the art performance by using about \\(35\\%\\) of the full dataset, thus saving significant time and effort over conventional methods.",
"editor": [
{
"familyName": "Frangi",
"givenName": "Alejandro F.",
"type": "Person"
},
{
"familyName": "Schnabel",
"givenName": "Julia A.",
"type": "Person"
},
{
"familyName": "Davatzikos",
"givenName": "Christos",
"type": "Person"
},
{
"familyName": "Alberola-L\u00f3pez",
"givenName": "Carlos",
"type": "Person"
},
{
"familyName": "Fichtinger",
"givenName": "Gabor",
"type": "Person"
}
],
"genre": "chapter",
"id": "sg:pub.10.1007/978-3-030-00934-2_65",
"inLanguage": [
"en"
],
"isAccessibleForFree": true,
"isPartOf": {
"isbn": [
"978-3-030-00933-5",
"978-3-030-00934-2"
],
"name": "Medical Image Computing and Computer Assisted Intervention \u2013 MICCAI 2018",
"type": "Book"
},
"name": "Efficient Active Learning for Image Classification and Segmentation Using a Sample Selection and Conditional Generative Adversarial Network",
"pagination": "580-588",
"productId": [
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/978-3-030-00934-2_65"
]
},
{
"name": "readcube_id",
"type": "PropertyValue",
"value": [
"7190f676f90930ed5925e86fd94f8f1bbfc1d9d3127a235f6b6825f03177446c"
]
},
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1107024340"
]
}
],
"publisher": {
"location": "Cham",
"name": "Springer International Publishing",
"type": "Organisation"
},
"sameAs": [
"https://doi.org/10.1007/978-3-030-00934-2_65",
"https://app.dimensions.ai/details/publication/pub.1107024340"
],
"sdDataset": "chapters",
"sdDatePublished": "2019-04-15T19:49",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8684_00000605.jsonl",
"type": "Chapter",
"url": "http://link.springer.com/10.1007/978-3-030-00934-2_65"
}
]
Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML (license: https://scigraph.springernature.com/explorer/license/).
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-00934-2_65'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-00934-2_65'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-00934-2_65'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-00934-2_65'
All metadata directly associated with this object is available as RDF triples. Record statistics: 151 triples, 23 predicates, 39 URIs, 20 literals, 8 blank nodes.
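For readers who want to work with the record programmatically, the following minimal sketch (our own example, not part of the SciGraph tooling) fetches the JSON-LD via the same content negotiation as the curl examples above and summarises it with rdflib. It assumes requests and rdflib version 6 or later (which bundles JSON-LD parsing) are installed; exact counts may differ slightly from the figures above depending on how the parser expands the context and blank nodes.

# Sketch only: fetch the record and summarise it as an RDF graph.
import requests
from rdflib import Graph, URIRef, Literal, BNode

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-030-00934-2_65"

resp = requests.get(URL, headers={"Accept": "application/ld+json"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="json-ld")       # load the JSON-LD payload

predicates = {p for _, p, _ in g}
uris = {t for triple in g for t in triple if isinstance(t, URIRef)}
literals = {o for _, _, o in g if isinstance(o, Literal)}
bnodes = {t for triple in g for t in triple if isinstance(t, BNode)}

print(f"{len(g)} triples, {len(predicates)} predicates, "
      f"{len(uris)} URIs, {len(literals)} literals, {len(bnodes)} blank nodes")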